Spacebot dropped on Hacker News this week, and it's clear someone actually understood what makes AI agents hard. Unlike every "agent" that's just a fancy chat wrapper, Spacebot treats LLM processes as infrastructure with dedicated roles. Channel, Branch, Worker: three distinct process types that handle user interaction, reasoning, and execution respectively. This isn't a chatbot. It's an orchestration layer.

The architecture is where it gets interesting. The Channel is your user-facing ambassador, one per conversation, with actual personality and identity. It delegates everything else. The Branch forks the Channel's context to think privately, with full conversation history, and returns only a conclusion. The Worker is the grunt: it gets a task and tools, no personality, no chitchat, just focused execution. Three process types, zero blocking. That's how you build for large teams.

Then there's Cortex, the memory system that makes this actually useful. Every 60 minutes it queries the memory graph across 8 dimensions and synthesizes a briefing, which every conversation reads on every turn, lock-free and zero-copy. The Association Loop continuously scans memories for embedding similarity and builds graph edges between related knowledge. Facts link to decisions. Events link to goals. The graph gets smarter on its own. No more starting cold.

Built in Rust, because of course it is. Single binary, no runtime dependencies, no garbage-collector pauses, predictable resource usage. No Docker required for production (though you can run it in Docker). The tech stack reads like a who's who of serious infrastructure: Tokio, SQLite, LanceDB, redb, FastEmbed, Serenity, Chromiumoxide. Ten LLM providers with automatic routing and fallbacks. This is serious, built-to-last infrastructure for AI agents.

Pricing is straightforward: $29/mo gets you 3 agents, $59/mo for 6 agents (most popular), and $129/mo for 12. Self-hosted starts at $59/mo, or $299/mo for priority support.
Enterprise gets SLAs and procurement workflows. All plans require your own API keys (a BYOK model; no bundled credits yet).

Deploy with one Docker command:

```shell
docker run -d -v spacebot-data:/data -p 19898:19898 ghcr.io/spacedriveapp/spacebot:latest
```

Migration from OpenClaw is built in. Drop your MEMORY.md and daily logs into the ingest folder, and Spacebot extracts structured memories and wires them into the graph. Skills go in the skills folder and are compatible out of the box. File ingestion supports .md, .txt, .json, .yaml, .csv, .log, .toml, .xml, .html, and .rst. The LLM reads each chunk, classifies it, recalls related memories to avoid duplicates, and saves each memory with an importance score.
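That ingest loop (read, classify, dedupe, score) is only described at a high level, but its shape is easy to sketch. Everything below is hypothetical: the types, the toy keyword classifier standing in for the LLM call, and the importance scores are all invented for illustration.

```rust
// Sketch of an ingest pass: classify a chunk, skip duplicates,
// save with an importance score. All names and scores are invented.
#[derive(Debug, Clone, PartialEq)]
enum MemoryKind {
    Fact,
    Decision,
    Event,
    Goal,
}

#[derive(Debug)]
struct Memory {
    kind: MemoryKind,
    text: String,
    importance: f32, // 0.0..=1.0, assigned at save time
}

// Toy keyword classifier standing in for the LLM classification call.
fn classify(chunk: &str) -> MemoryKind {
    if chunk.contains("decided") {
        MemoryKind::Decision
    } else if chunk.contains("goal") {
        MemoryKind::Goal
    } else if chunk.contains("happened") {
        MemoryKind::Event
    } else {
        MemoryKind::Fact
    }
}

// Naive exact-match check standing in for "recall related memories".
fn is_duplicate(store: &[Memory], chunk: &str) -> bool {
    store.iter().any(|m| m.text == chunk)
}

// Returns true if the chunk was saved, false if skipped as a duplicate.
fn ingest(store: &mut Vec<Memory>, chunk: &str) -> bool {
    if is_duplicate(store, chunk) {
        return false;
    }
    let kind = classify(chunk);
    // Illustrative scoring: decisions and goals outrank raw facts.
    let importance = match kind {
        MemoryKind::Decision | MemoryKind::Goal => 0.9,
        MemoryKind::Event => 0.6,
        MemoryKind::Fact => 0.4,
    };
    store.push(Memory { kind, text: chunk.to_string(), importance });
    true
}

fn main() {
    let mut store = Vec::new();
    ingest(&mut store, "we decided to ship on Friday");
    ingest(&mut store, "we decided to ship on Friday"); // duplicate, skipped
    ingest(&mut store, "the deploy happened at noon");
    for m in &store {
        println!("{:?} ({:.1}): {}", m.kind, m.importance, m.text);
    }
}
```

The dedupe-before-save ordering matters: re-ingesting the same MEMORY.md twice should be idempotent, not double the graph.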
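The Association Loop described earlier is similarly sketchable: compare memory embeddings pairwise and add a graph edge wherever similarity clears a threshold. The hand-rolled cosine function, the brute-force scan, and the 0.8 threshold are all illustrative; the real system presumably leans on LanceDB's vector index instead.

```rust
// Sketch of an association pass: link any two memories whose embedding
// cosine similarity clears a threshold. Values are illustrative only.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// Returns the index pairs that should get a graph edge.
fn associate(embeddings: &[Vec<f32>], threshold: f32) -> Vec<(usize, usize)> {
    let mut edges = Vec::new();
    for i in 0..embeddings.len() {
        for j in (i + 1)..embeddings.len() {
            if cosine(&embeddings[i], &embeddings[j]) >= threshold {
                edges.push((i, j));
            }
        }
    }
    edges
}

fn main() {
    let embeddings = vec![
        vec![1.0, 0.0, 0.0],
        vec![0.9, 0.1, 0.0], // near-duplicate of the first
        vec![0.0, 1.0, 0.0], // unrelated
    ];
    let edges = associate(&embeddings, 0.8);
    println!("{:?}", edges); // prints [(0, 1)]: only the related pair links
}
```

Running this continuously is what lets facts link to decisions and events link to goals without anyone curating the graph by hand.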
## Key Takeaways
- Three-process architecture: Channel (user face), Branch (reasoning fork), Worker (execution)
- Cortex memory system synthesizes briefings every 60 minutes across 8 dimensions
- Built in Rust: single binary, no runtime deps, predictable resource usage
- 10 LLM providers with auto-routing and fallbacks
- Self-host with one Docker command, or managed cloud from $29/mo
- OpenClaw migration path: MEMORY.md and skills folder compatible
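The three-process split in the takeaways can be sketched as a delegation loop: the Channel hands work to spawned Branches and Workers and relays results as they arrive, never blocking on reasoning or execution. All names here are invented for illustration, and plain threads stand in for what the post suggests would be Tokio tasks.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message types mirroring the Channel / Branch / Worker split.
// None of these names come from Spacebot's actual API.
enum Task {
    // Branch: forks the conversation context, reasons privately,
    // and returns only a conclusion.
    Reason { context: String },
    // Worker: gets a task and tools; no personality, just execution.
    Execute { task: String },
}

// What a spawned process does with its task.
fn handle(t: Task) -> String {
    match t {
        Task::Reason { context } => format!("conclusion from: {context}"),
        Task::Execute { task } => format!("done: {task}"),
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // The Channel only delegates; Branches and Workers run off-thread,
    // so the user-facing loop never blocks.
    let tasks = vec![
        Task::Reason { context: "full conversation history".into() },
        Task::Execute { task: "fetch the report".into() },
    ];

    let mut handles = Vec::new();
    for t in tasks {
        let tx = tx.clone();
        handles.push(thread::spawn(move || tx.send(handle(t)).unwrap()));
    }
    drop(tx); // close the last sender so the receive loop can end

    // The Channel relays results as they stream back in.
    for result in rx {
        println!("channel relays: {result}");
    }
    for h in handles {
        h.join().unwrap();
    }
}
```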
## The Bottom Line
Spacebot gets it. Most agent frameworks are chat interfaces with delusions of grandeur. This is actual infrastructure: Rust, dedicated process roles, memory graphs that learn. If you're building AI employees for a team rather than just playing around with prompts, this is the stack. As an OpenClaw alternative it isn't just viable; it may be the better-engineered option for production workloads.