CongaLine just landed on Hacker News with a straightforward proposition: stop cramming every AI agent into one shared instance. The project from cruxdigital-llc gives each agent its own Docker container with isolated networks, secrets, and configuration — think of it as the lobster migration strategy for AI infrastructure. Each agent is a spiny lobster in the conga line, moving independently but protected by the collective.

Pluggable Runtimes and Deployment Targets

The architecture splits concerns cleanly: providers decide where agents run (local Docker, any SSH-accessible host, or AWS), while runtimes decide what runs inside each container. Two runtimes ship today: OpenClaw (Node.js, JSON config) and Hermes Agent (Python, YAML config). Adding a third means implementing a 22-method interface in a single Go package under pkg/runtime/ — no changes to providers or core logic. That's the kind of extensibility that makes infrastructure tooling actually usable long-term.
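The shape of that abstraction can be sketched in Go. This is an illustration only, not the project's actual API: the real interface has 22 methods, and every name below (Runtime, hermesRuntime, the three methods) is hypothetical.

```go
package main

import "fmt"

// Runtime is a hypothetical sketch of the kind of interface a package
// under pkg/runtime/ might define. The real CongaLine interface has 22
// methods; these three are invented for illustration.
type Runtime interface {
	Name() string         // runtime identifier, e.g. "hermes"
	ConfigFormat() string // "json" or "yaml"
	DefaultImage() string // base container image for the agent
}

// hermesRuntime is an illustrative implementation for a Python/YAML
// runtime; details are guesses, not the project's actual values.
type hermesRuntime struct{}

func (hermesRuntime) Name() string         { return "hermes" }
func (hermesRuntime) ConfigFormat() string { return "yaml" }
func (hermesRuntime) DefaultImage() string { return "python:3.12-slim" }

func main() {
	// Providers and core logic would only see the interface, which is
	// why a new runtime needs no changes outside its own package.
	var rt Runtime = hermesRuntime{}
	fmt.Printf("%s runtime uses %s config\n", rt.Name(), rt.ConfigFormat())
}
```

The point of the pattern: providers depend on the interface, never on a concrete runtime, so new runtimes slot in without touching existing code.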

Multi-Channel Support and Policy Enforcement

Slack and Telegram integrations come baked in, each with a dedicated router container that fans events out to per-agent containers. A crash on one platform doesn't take down the others. The conga-policy.yaml file declaratively defines egress rules, model routing, and security posture; each provider enforces what it can and reports gaps transparently. Local providers run egress in validate-only mode, while remote and AWS providers always enforce when allowed_domains are defined.
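A policy file along these lines is plausible. To be clear, only allowed_domains is named by the project; every other key below is a guess at what a declarative egress/routing/posture policy could look like, not the actual schema.

```yaml
# Hypothetical conga-policy.yaml sketch. Only allowed_domains is
# documented; all other field names are illustrative guesses.
egress:
  allowed_domains:        # validate-only on local, enforced on remote/AWS
    - api.anthropic.com
    - slack.com
model_routing:            # hypothetical section
  default: claude-sonnet
security:                 # hypothetical section
  posture: hardened
```

The validate-versus-enforce split matters: the same file can be checked locally for correctness before the stricter providers actually apply it.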

Deployment Options for Every Budget

Getting started is as simple as conga admin setup --provider local with Docker Desktop and an Anthropic API key. Ready for production? Deploy to a $5 VPS via SSH, or spin up hardened AWS infrastructure with zero ingress using SSM tunnels. The promotion pipeline concept is smart: develop locally, validate on remote, enforce in production, with the same config everywhere. The bootstrap-from-manifest approach (conga bootstrap demo.yaml --env demo.env) provisions an entire environment in one shot, idempotently and additively.
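A manifest like demo.yaml might plausibly look like the sketch below. The filename comes from the project; everything inside it is invented here, since the actual manifest schema isn't shown.

```yaml
# Hypothetical demo.yaml sketch — agent names, runtime identifiers, and
# the overall structure are guesses, not CongaLine's documented format.
provider: local
agents:
  - name: support-bot
    runtime: hermes      # Python/YAML runtime
  - name: triage-bot
    runtime: openclaw    # Node.js/JSON runtime
channels:
  - slack
  - telegram
```

Because bootstrap is idempotent and additive, re-running it with an extra agent in the list would provision only the new container and leave the rest untouched.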

MCP Integration for AI-Assisted Management

The CLI includes an MCP server that exposes agent management as tools for AI coding assistants like Claude Code. This is the kind of meta-tooling that gets overlooked: letting AI manage your AI infrastructure conversationally. Copy .mcp.json.example, point it at your provider, restart Claude Code, and you can list agents, check status, set secrets, and refresh containers through natural language.
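The wiring likely follows Claude Code's standard .mcp.json format, which registers MCP servers under an mcpServers key. The outer structure below is Claude Code's real convention; the command and arguments are guesses, since the project's actual .mcp.json.example isn't reproduced here.

```json
{
  "mcpServers": {
    "congaline": {
      "command": "conga",
      "args": ["mcp", "--provider", "local"]
    }
  }
}
```

Once Claude Code restarts and picks this up, the CLI's management operations appear to the assistant as callable tools.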

The Bottom Line

CongaLine nails the isolation problem that most AI agent platforms ignore. Running everything in one container is fine for experiments, but teams need per-agent secrets, network isolation, and independent lifecycles. The pluggable runtime architecture means this isn't a locked-in solution — it's infrastructure that adapts as the AI agent space evolves. Worth a serious look if you're running anything beyond hobby projects.