If you've been watching the AI agent space, you know the dirty little secret nobody talks about at conferences: most agents are running on developers' laptops, fighting over noisy development environments and risking catastrophic "oops" moments when an agent decides to apt install half a Linux distro. Containarium wants to fix that with a self-hostable, MCP-native sandbox platform built around Incus (the modern LXC manager) and released under Apache 2.0 with no CLA required.
What Is Containarium Actually Building?
At its core, Containarium runs full Linux system containers—real OS instances with systemd, SSH access, persistent storage via ZFS snapshots, and the ability to host services on the public internet through a clever "sentinel" architecture. The sentinel is typically an e2-micro VM that fronts multiple backend hosts: it terminates SSH (via sshpiper) and HTTPS (via Caddy), routing traffic by username or hostname respectively. This means your backend VMs can be spot instances or bare-metal GPU nodes with no public IPs—they reach out via Cloud NAT while the sentinel holds the static IP so DNS never changes when backends rotate. Each agent gets their own isolated container they can shell into, install packages in, and expose services from, all controlled through MCP tools rather than hoping commands don't scroll off a TTY.
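The routing rule is simple enough to sketch: sshpiper keys on the SSH username, Caddy keys on the hostname, and backends never need public IPs. Here's a minimal illustration of those two lookup tables — the names (route_ssh, route_https, SSH_USERS, HTTPS_HOSTS) and addresses are hypothetical, not Containarium's actual code:

```python
# Illustrative sketch of the sentinel's two routing tables. All names
# and addresses here are assumptions for the example, not real config.

SSH_USERS = {
    # sshpiper picks the backend from the SSH username
    "alice": "backend-1:2222",
    "bob": "backend-2:2222",
}

HTTPS_HOSTS = {
    # Caddy reverse-proxies by the requested hostname
    "alice.example.com": "backend-1:8080",
    "bob.example.com": "backend-2:8080",
}

def route_ssh(username: str) -> str:
    """Backend address for an inbound SSH session, keyed by username."""
    return SSH_USERS[username]

def route_https(host: str) -> str:
    """Backend address for an inbound HTTPS request, keyed by hostname."""
    return HTTPS_HOSTS[host]
```

Because only the sentinel's static IP appears in DNS, the values in these tables can point at spot instances that come and go without any client-visible change.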
The MCP Surface Is What Makes This Different
Unlike SaaS sandbox providers such as E2B, Modal, or Replit that bolt MCP onto existing infrastructure, Containarium was built MCP-native from the start. There are two MCP servers: mcp-server runs on the host and exposes platform operations (create_container, delete_container, expose_port), while agent-box runs inside every container over stdio and gives agents direct access to shell_exec, read_file, write_file, list_directory, move_file, and delete_file—all typed and bounded. For untrusted agents, you can set AGENTBOX_ROOT at runtime to constrain file operations to a specific project directory. The CLI and MCP surfaces are identical: every platform action is implemented as a containarium CLI command first, then wrapped as an MCP tool (see CLAUDE.md for the convention).
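The AGENTBOX_ROOT constraint is, at heart, path confinement. Here's a minimal sketch of how such a check could work — the safe_path helper and its logic are assumptions for illustration; the real agent-box may implement this differently:

```python
import os
from pathlib import Path

# Hypothetical sketch of AGENTBOX_ROOT-style path confinement.
# resolve() + relative_to() is one standard way to reject
# path-traversal escapes like "../../etc/passwd".

AGENTBOX_ROOT = Path(os.environ.get("AGENTBOX_ROOT", "/")).resolve()

def safe_path(user_path: str, root: Path = AGENTBOX_ROOT) -> Path:
    """Resolve user_path under root; refuse anything that escapes it."""
    # Absolute paths are re-rooted under root rather than trusted as-is.
    candidate = (root / user_path.lstrip("/")).resolve()
    try:
        candidate.relative_to(root)
    except ValueError:
        raise PermissionError(f"{user_path} escapes {root}")
    return candidate

def read_file(user_path: str) -> str:
    """A read_file-style tool that only sees files under the root."""
    return safe_path(user_path).read_text()
```

Every file tool funnels through the same check, so an untrusted agent's writes stay inside its project directory no matter how the path is spelled.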
Security Primitives Worth Noting
Containarium doesn't skimp on isolation. LXC containers run unprivileged (container root ≠ host root), each user gets their own proxy account with /usr/sbin/nologin so they can only route through to their container, and fail2ban operates per-user so an attack on Alice's account won't ban Bob. There are per-container AppArmor profiles, ClamAV + Trivy malware scanning across all backends, and ZFS-backed storage with daily snapshots and 30-day retention by default. The sentinel detects GCP spot preemption in roughly 10 seconds, serves a maintenance page during recovery (about 85 seconds total), and holds the static IP so your DNS doesn't flip out when backends restart.
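The daily-snapshot, 30-day-retention policy amounts to a simple pruning rule. Here's a sketch assuming snapshots are named daily-YYYY-MM-DD — both the naming convention and the function are illustrative, not Containarium's actual retention job:

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # the default retention window from the article

def snapshots_to_prune(snapshots, today, retention_days=RETENTION_DAYS):
    """Return snapshot names older than the retention window."""
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in snapshots:
        # assumed naming convention: "daily-YYYY-MM-DD"
        snap_date = date.fromisoformat(name.removeprefix("daily-"))
        if snap_date < cutoff:
            stale.append(name)
    return stale
```

In practice the pruned names would be handed to a zfs destroy step; the point is that retention is a pure function of snapshot age, easy to audit and test.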
GPU Passthrough for ML Workflows
For agents doing machine learning work, Containarium supports PCI-level NVIDIA GPU passthrough—tested with RTX 3090 and RTX 4090 cards on bare-metal GPU nodes. The container sees the GPU directly, with no virtualization layer stealing performance. That's a meaningful differentiator from Docker-based setups, where GPU access typically means the NVIDIA Container Toolkit (formerly nvidia-docker) or similar workarounds that don't give you the full system container experience.
Roadmap: Q4 2026 OSS v1.0
The team (FootprintAI) has laid out an aggressive roadmap. Q2 2026 is nearly complete, with agent-box MCP, ssh-config CLI, expose-port CLI, and demo recordings shipped. Q3 2026 focuses on tier-2 agent-box features like MCP Roots support and background process management, plus demo-driven documentation and examples. The big milestone is Q4 2026: OSS v1.0 with a stable API surface (protobuf-defined with gRPC-gateway) and a contribution guide. APIs are already marked stable in the current release, so v1.0 is more about formalizing the contract than about introducing breaking changes.
Key Takeaways
- Full Linux system containers via Incus—systemd, SSH, real networking, Docker-in-LXC works out of the box
- MCP-native from day one with two servers: platform admin (mcp-server) and in-box tools (agent-box)
- Self-host on your own infrastructure—no per-hour billing, Apache 2.0 license, no CLA to sign
- Sentinel architecture enables ephemeral backends with static IP/DNS fronted by e2-micro VMs
- ZFS snapshots (daily, 30-day retention), GPU passthrough for RTX 3090/4090, multi-region GCP deployment tested
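To make the two-server split above concrete, here is roughly what a platform-level tool call looks like on the wire — MCP speaks JSON-RPC 2.0 with a tools/call method. The create_container tool name comes from the article; the argument names are assumptions:

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Hypothetical arguments; only the tool name appears in the article.
msg = tool_call(1, "create_container", {"user": "alice"})
```

The same envelope carries agent-box calls like shell_exec inside the container — the transport (stdio vs. host) changes, the protocol doesn't.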
The Bottom Line
Containarium is the infrastructure layer the AI agent ecosystem desperately needs but nobody's been building: a self-hosted alternative to vendor-locked SaaS sandboxes that treats MCP as a first-class citizen rather than an afterthought. If you're running Cursor, Claude Code, or any MCP-speaking agent in production and you're tired of either polluting your laptop or trusting third-party compute with your code and data, this is worth spinning up. The Apache 2.0 license means you can fork it, sell derivatives, or just run it on a $5 VM forever—no vendor dependency required.