CongaLine just landed on Hacker News, and it's solving a problem that's been nagging at the AI agent space for a while: how do you run a fleet of AI assistants without them stepping all over each other's secrets, configs, and network boundaries? The answer, apparently, is isolation at the container level — one agent per Docker container, period.
The Architecture
The system pulls this off with a pluggable design that keeps providers (where agents run) completely orthogonal to runtimes (what runs inside them). You can deploy locally via Docker, remotely over SSH to any Linux host, or go full production on AWS. Meanwhile, agents themselves can be OpenClaw (Node.js) or Hermes Agent (Python), with a claimed 22-method interface that makes adding a third runtime a single Go package. That's clean engineering.
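The post doesn't publish the runtime interface itself, but the provider/runtime split it describes is easy to sketch in Go, the language CongaLine reportedly uses. Everything below (type names, method set, image tags) is an illustrative assumption, trimmed far below the claimed 22 methods, just to show how a registry keeps "add a third runtime" a one-package change:

```go
package main

import (
	"errors"
	"fmt"
)

// AgentSpec and Runtime are illustrative sketches, not CongaLine's
// real API. The post claims a ~22-method runtime interface; two
// methods are enough to show the shape.
type AgentSpec struct {
	Name string // agent identity, used for container naming
}

type Runtime interface {
	// Validate checks a spec before any container is created.
	Validate(spec AgentSpec) error
	// Image returns the container image this runtime launches.
	Image(spec AgentSpec) string
}

// Stand-ins for the Node.js and Python runtimes; in the real design
// each would live in its own Go package.
type openClaw struct{}

func (openClaw) Validate(s AgentSpec) error {
	if s.Name == "" {
		return errors.New("agent name required")
	}
	return nil
}
func (openClaw) Image(s AgentSpec) string { return "openclaw-node:latest" }

type hermes struct{}

func (hermes) Validate(s AgentSpec) error { return nil }
func (hermes) Image(s AgentSpec) string   { return "hermes-python:latest" }

// registry is what makes a new runtime a single-package addition:
// implement Runtime, register it here, and every provider can use it.
var registry = map[string]Runtime{
	"openclaw": openClaw{},
	"hermes":   hermes{},
}

func main() {
	spec := AgentSpec{Name: "billing-bot"}
	rt := registry["openclaw"]
	if err := rt.Validate(spec); err != nil {
		panic(err)
	}
	fmt.Println(rt.Image(spec)) // openclaw-node:latest
}
```

The point of the registry is that providers never see concrete runtime types, only the interface, which is what keeps the two axes orthogonal.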
Security First
Let's be real — most agent frameworks treat security as an afterthought. CongaLine makes it the primary design constraint. Each agent gets its own Docker network, isolated secrets storage (mode 0400 files), and the conga-policy.yaml lets you define egress rules, model routing, and security posture in one portable file. The local provider validates what it can; AWS enforces everything via per-agent Envoy proxies with domain-based CONNECT filtering. The policy system transparently reports gaps between what you want and what each provider can actually enforce.
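The post names conga-policy.yaml but doesn't show its schema, so this is a hypothetical sketch of a policy covering the three areas it mentions (egress rules, model routing, security posture); every key name here is an assumption:

```yaml
# conga-policy.yaml -- illustrative sketch; the real schema isn't
# published, only that it covers egress, model routing, and posture.
version: 1
agents:
  billing-bot:
    egress:
      default: deny              # enforced on AWS via per-agent Envoy CONNECT filtering
      allow:
        - api.openai.com:443
        - hooks.slack.com:443
    model_routing:
      default: gpt-4o
      fallback: local-llama
    secrets:
      mode: "0400"               # per-agent, read-only secret files
posture:
  report_enforcement_gaps: true  # surface rules a provider can't fully enforce
```

A single file like this is what lets the same policy travel from a local Docker deploy (where egress is validated, not enforced) to AWS (where it is enforced), with the gaps reported rather than silently dropped.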
Channels and MCP
Slack and Telegram support comes through dedicated router containers — one per platform — that fan events out to individual agent containers. A crash in the Slack router doesn't take down Telegram. For AI coding assistants, there's an MCP server exposing agent management as tools, so Claude Code or similar can manage your deployment conversationally. That's a nice touch for the infrastructure-as-code crowd.
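The fan-out pattern the routers implement can be sketched in Go. This is not CongaLine's code: the event type, agent names, and delivery callback are stand-ins, and in the real system each delivery would be an HTTP call into an isolated agent container on its own Docker network. The sketch shows the key property, per-agent concurrency, so one slow or failing agent doesn't block its siblings:

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a platform message after a router normalizes it.
// Field names are illustrative; the real wire format isn't published.
type Event struct {
	Channel string // "slack" or "telegram"
	Text    string
}

// fanOut delivers one event to every subscribed agent concurrently
// and collects per-agent results, so one failure stays one failure.
func fanOut(ev Event, agents []string, deliver func(agent string, ev Event) error) map[string]error {
	var mu sync.Mutex
	results := make(map[string]error, len(agents))
	var wg sync.WaitGroup
	for _, a := range agents {
		wg.Add(1)
		go func(agent string) {
			defer wg.Done()
			err := deliver(agent, ev) // in reality: HTTP into the agent container
			mu.Lock()
			results[agent] = err
			mu.Unlock()
		}(a)
	}
	wg.Wait()
	return results
}

func main() {
	ev := Event{Channel: "slack", Text: "deploy status?"}
	agents := []string{"ops-bot", "billing-bot"}
	res := fanOut(ev, agents, func(agent string, ev Event) error {
		fmt.Printf("delivered %q to %s\n", ev.Text, agent)
		return nil
	})
	fmt.Println("delivered to", len(res), "agents")
}
```

Running one router process per platform, each doing this fan-out independently, is what buys the "one Slack crash doesn't take down Telegram" property.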
Key Takeaways
- Runtime isolation via per-agent Docker containers, not shared instances
- Three deployment targets: local Docker, remote SSH hosts (VPS, Raspberry Pi, etc.), and AWS
- Two supported agent runtimes: OpenClaw (Node.js) and Hermes Agent (Python), with pluggable interface for more
- Slack/Telegram integration via dedicated router containers, not platform plugins baked into agents
- Portable security policy via conga-policy.yaml that travels through dev → staging → production
The Bottom Line
CongaLine is what happens when you apply zero-trust thinking to AI agent infrastructure. It's not trying to be the flashiest framework — it's trying to be the one you can actually trust in production. The multi-runtime support and provider abstraction show this was architected for longevity, not a single use case. Worth watching.