CongaLine just landed on Hacker News, and it's solving a problem that's been haunting AI agent deployments: how do you run a fleet of autonomous AI assistants without them stepping on each other's secrets, network access, and configuration? The answer, apparently, is per-agent Docker containers with full isolation — no shared instances, no cross-contamination of credentials.

How It Works

The architecture is clean: a CLI called conga manages everything, and it separates the concept of a "provider" (where your agents run) from a "runtime" (what agent software executes). You can deploy to local Docker, any SSH-accessible Linux host, or AWS. Meanwhile, you choose between OpenClaw (Node.js) and Hermes Agent (Python) for each individual bot. The two dimensions are orthogonal: any provider works with any runtime.
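A minimal manifest fragment makes the orthogonality concrete. The post doesn't document the actual schema, so the field names below are purely illustrative:

```yaml
# Hypothetical manifest fragment: provider and runtime are chosen
# independently per agent (field names are assumptions, not the
# real CongaLine schema).
agents:
  - name: support-bot
    provider: aws          # where it runs
    runtime: openclaw      # what software executes
  - name: research-bot
    provider: remote       # any SSH-accessible host
    runtime: hermes-agent
```

Any provider/runtime pairing in that matrix should be valid, which is the whole point of keeping the two concepts separate.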

Runtimes: Pick Your Poison

OpenClaw gets the Node.js treatment with JSON config, shipped as ghcr.io/openclaw/openclaw:2026.3.11 (pinned to that version after a Slack socket mode regression in 2026.3.12). Hermes Agent runs Python with YAML config using the nousresearch/hermes-agent:latest image. Here's the kicker: adding a third runtime is literally one Go package implementing a 22-method interface. No core changes needed. That's proper plugin architecture.
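To see why that's cheap to extend, here's a sketch of what such a plugin interface and registry could look like. The method names and the registry are my guesses, not CongaLine's actual API, and only a small subset of the 22 methods is shown:

```go
package main

import "fmt"

// Runtime is a hypothetical subset of the plugin interface; the real
// one reportedly has 22 methods covering lifecycle, health checks,
// secret mounting, channel wiring, and so on.
type Runtime interface {
	Name() string         // runtime identifier, e.g. "openclaw"
	Image() string        // container image to deploy
	ConfigFormat() string // "json" or "yaml"
}

// hermes is a toy runtime satisfying the subset above.
type hermes struct{}

func (hermes) Name() string         { return "hermes-agent" }
func (hermes) Image() string        { return "nousresearch/hermes-agent:latest" }
func (hermes) ConfigFormat() string { return "yaml" }

// A registry keyed by name is all the core would need; a new runtime
// is just another package calling Register, with no core changes.
var registry = map[string]Runtime{}

func Register(r Runtime) { registry[r.Name()] = r }

func main() {
	Register(hermes{})
	r := registry["hermes-agent"]
	fmt.Println(r.Image(), r.ConfigFormat())
}
```

Because the core only ever talks to the interface, runtimes stay decoupled from providers, which is what makes the provider × runtime matrix work.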

Deploy Anywhere

Local Docker is the obvious starting point — spin up a laptop dev environment in minutes. But the remote provider is where it gets interesting: SSH into any VPS, Raspberry Pi, Mac Mini, or colocated server and CongaLine auto-installs Docker if needed. AWS gets the full treatment with EC2, SSM Parameter Store, Secrets Manager, and zero-ingress architecture via Session Manager. One manifest bootstraps the entire environment: agents, secrets, Slack/Telegram channels, egress policy — all idempotent.
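A single-manifest bootstrap might look something like this. Again, the keys are assumptions based on the features described (agents, secrets, channels, egress), not the documented format:

```yaml
# Hypothetical bootstrap manifest (schema is illustrative).
# Re-applying it when nothing changed should be a no-op: idempotent.
provider:
  type: aws
  region: us-east-1        # example region
agents:
  - name: support-bot
    runtime: openclaw
    channels:
      slack: "#support"
    secrets:
      - SLACK_BOT_TOKEN    # resolved from Secrets Manager on AWS
egress:
  allow:
    - api.slack.com
```

On AWS the same manifest would presumably resolve secrets through SSM Parameter Store or Secrets Manager rather than local files, with Session Manager keeping ingress closed.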

Security Architecture

This is the whole point. Each agent gets its own Docker container, bridge network, and secret files (mode 0400). The conga-policy.yaml defines egress allowlists, model routing, and security posture. Local provider runs in validate-only mode for egress (warnings only), but remote and AWS providers actually enforce domain-based CONNECT filtering. The policy format is portable across all three tiers — develop locally, validate on remote, enforce in production. Same config everywhere.
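The post names conga-policy.yaml but not its schema, so this is a guess at what the three described sections (egress allowlists, model routing, security posture) might look like:

```yaml
# Sketch of conga-policy.yaml; all keys and values are assumptions.
egress:
  mode: enforce            # local provider downgrades this to warn-only
  allow:
    - api.anthropic.com
    - api.telegram.org
model_routing:
  default: claude-sonnet   # illustrative model name
security:
  posture: strict          # e.g. 0400 secrets, no host mounts
```

The portability claim then falls out naturally: the file's contents never change across tiers, only how strictly the provider applies the egress section.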

Key Takeaways

  • Per-agent Docker isolation prevents credential cross-contamination in multi-agent fleets
  • Pluggable runtime architecture supports OpenClaw, Hermes Agent, and future runtimes via a 22-method Go interface
  • Unified policy format works across local, remote SSH, and AWS providers

The Bottom Line

CongaLine is what happens when you take AI agent deployment seriously as a security problem rather than an afterthought. The isolation model is sound, the pluggable runtimes are legit, and having both Slack and Telegram support out of the box covers real use cases. Worth watching — this is the kind of infrastructure that mature AI agent deployments are going to need.