CongaLine is the real deal for teams that need self-hosted AI assistants without the security headache of shared instances. Built by cruxdigital-llc and posted to Hacker News on April 8, 2026 (score: 20), this system runs every agent in its own Docker container with isolated networks, secrets, and configuration. The primary design constraint? Security first, everything else second.

Architecture: Orthogonal Providers and Runtimes

The clever bit here is the separation between providers (where agents run) and runtimes (what runs inside them). Providers include local Docker, remote SSH hosts (VPS, Raspberry Pi, bare metal), and AWS. Runtimes are OpenClaw (Node.js) and Hermes Agent (Python). Any provider works with any runtime — they're orthogonal. Adding a third runtime means writing one Go package implementing the 22-method Runtime interface, with no changes to providers or core logic.
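To make the orthogonality concrete, here's a minimal sketch of how a provider/runtime split like this composes in Go. The real Runtime interface has 22 methods; the three below, and all type and method names, are illustrative guesses, not CongaLine's actual API.

```go
package main

import "fmt"

// Runtime is a hypothetical three-method slice of CongaLine's
// 22-method interface. Method names are invented for illustration.
type Runtime interface {
	Name() string       // runtime identifier
	Image() string      // container image the runtime runs
	ConfigFile() string // per-agent config file the runtime expects
}

// Provider stands in for local Docker, SSH, or AWS. It depends only
// on the Runtime interface, which is why any provider can launch any
// runtime without provider-side changes.
type Provider struct{ Host string }

func (p Provider) Launch(r Runtime, agent string) string {
	return fmt.Sprintf("run %s with %s on %s for agent %q",
		r.Image(), r.ConfigFile(), p.Host, agent)
}

// Two illustrative runtimes, using the images and config files the
// article names.
type OpenClaw struct{}

func (OpenClaw) Name() string       { return "openclaw" }
func (OpenClaw) Image() string      { return "ghcr.io/openclaw/openclaw:2026.3.11" }
func (OpenClaw) ConfigFile() string { return "openclaw.json" }

type Hermes struct{}

func (Hermes) Name() string       { return "hermes" }
func (Hermes) Image() string      { return "nousresearch/hermes-agent:latest" }
func (Hermes) ConfigFile() string { return "config.yaml" }

func main() {
	local := Provider{Host: "localhost"}
	fmt.Println(local.Launch(OpenClaw{}, "myagent"))
	fmt.Println(local.Launch(Hermes{}, "myagent"))
}
```

A third runtime would be one more struct satisfying the interface; the Provider code above never changes.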

Runtimes: OpenClaw vs Hermes

OpenClaw gets pinned to v2026.3.11 (the last stable release before a Slack socket mode regression in v2026.3.12). It uses JSON config (openclaw.json) and defaults to the ghcr.io/openclaw/openclaw:2026.3.11 image. Hermes uses YAML config (config.yaml) with the nousresearch/hermes-agent:latest image. That image is linux/amd64 only, but on Apple Silicon CongaLine automatically retries failed pulls with --platform linux/amd64 so the container runs under Rosetta translation. Both runtimes support Slack and Telegram via dedicated router containers that fan events out to the per-agent containers.
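The Apple Silicon fallback is easy to model. This sketch captures the retry rule described above as a pure function; it illustrates the behavior, not CongaLine's actual code, and the function name is invented.

```go
package main

import "fmt"

// pullArgs returns the docker command for pulling a runtime image.
// On a darwin/arm64 host (Apple Silicon), a failed first pull of an
// amd64-only image like hermes-agent is retried with an explicit
// platform flag so the container runs under Rosetta translation.
// Hypothetical helper; CongaLine's real retry logic may differ.
func pullArgs(image, goos, goarch string, firstPullFailed bool) []string {
	if firstPullFailed && goos == "darwin" && goarch == "arm64" {
		return []string{"docker", "pull", "--platform", "linux/amd64", image}
	}
	return []string{"docker", "pull", image}
}

func main() {
	fmt.Println(pullArgs("nousresearch/hermes-agent:latest", "darwin", "arm64", true))
}
```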

Channel Integration

Slack uses Socket Mode with HTTP fan-out; Telegram uses long-polling (or webhooks in production). Each runs as a separate router container — one Slack crash doesn't take down Telegram. The conga channels add command walks you through @BotFather for Telegram or creating a Slack app with the right scopes. Bind agents to channels with conga channels bind myagent slack:U0123456789.
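A router container's job is essentially a lookup-and-deliver loop. The sketch below mirrors what a binding created by `conga channels bind myagent slack:U0123456789` would drive; the map shape and per-agent endpoint URLs are assumptions for illustration, not CongaLine's real data model.

```go
package main

import "fmt"

// bindings is a hypothetical routing table: channel key -> endpoints
// of the per-agent containers bound to that channel.
var bindings = map[string][]string{
	"slack:U0123456789":    {"http://agent-myagent:8080/events"},
	"telegram:@myagentbot": {"http://agent-myagent:8080/events"},
}

// fanOut is the core of a router container: look up the channel
// binding and deliver the event to each bound agent. Because each
// channel runs its own router process, a Slack router crash never
// touches the Telegram delivery path.
func fanOut(channelKey, event string) []string {
	var delivered []string
	for _, endpoint := range bindings[channelKey] {
		// A real router would POST the event JSON to the endpoint.
		delivered = append(delivered, fmt.Sprintf("POST %s <- %s", endpoint, event))
	}
	return delivered
}

func main() {
	for _, d := range fanOut("slack:U0123456789", "app_mention") {
		fmt.Println(d)
	}
}
```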

Deployment Pipeline

The system forms a promotion pipeline: develop locally, validate on a remote host, enforce in production. The conga-policy.yaml file is the portable policy artifact, defining egress rules (which domains agents can reach), routing (model selection, fallback chains), and posture (isolation level, secrets backend). The local provider only validates, surfacing egress violations as warnings; on AWS the same rules are enforced at runtime by per-agent Envoy proxies with domain-based CONNECT filtering.
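The article names the three policy areas but not the schema. As a rough sketch, a conga-policy.yaml covering egress, routing, and posture might look like this; every key and value below is a guess, not the real format.

```yaml
# Hypothetical conga-policy.yaml -- keys and values are illustrative,
# not CongaLine's actual schema.
egress:
  allow:                     # domains agents may reach
    - api.anthropic.com
    - slack.com
routing:
  model: primary-model       # placeholder model identifiers
  fallback:
    - backup-model
posture:
  isolation: per-agent       # one container, network, and secret scope per agent
  secrets_backend: aws-secrets-manager
```

The same file travels with the agent: warnings-only under the local provider, hard enforcement under AWS.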

MCP Server Integration

The CLI includes an MCP server exposing agent management as tools for AI coding assistants like Claude Code. Copy .mcp.json.example, configure your provider and credentials, restart Claude Code — the conga tools appear automatically. This is huge for AI-driven infrastructure management.
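.mcp.json is Claude Code's standard manifest for MCP servers, so the setup step amounts to registering the conga binary as one. The entry below is a sketch of what the copied file might contain; the command arguments and environment variable are assumptions, not copied from the real .mcp.json.example.

```json
{
  "mcpServers": {
    "conga": {
      "command": "conga",
      "args": ["mcp"],
      "env": {
        "CONGA_PROVIDER": "docker"
      }
    }
  }
}
```

Once Claude Code restarts and picks this up, agent lifecycle operations become tools the assistant can call directly.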

Key Takeaways

  • Per-agent Docker isolation means one compromised agent doesn't access another's secrets or network
  • Pluggable runtimes let teams choose their agent technology without rewriting infrastructure
  • Portable policy across local/remote/AWS enables real promotion pipelines from dev to prod
  • MCP server integration is the killer feature for AI-first infrastructure management

The Bottom Line

CongaLine actually gets isolation right — not as an afterthought but as the foundation. For teams tired of cramming multiple agents into shared instances, this is the build-your-own alternative that's been missing. The MCP integration alone makes it worth watching as AI agent infrastructure matures.