Fronalabs just shipped the first public release of Frona, a self-hosted personal AI assistant platform built around a single Rust engine, a single Cedar-based policy language, and per-principal sandboxing for every actor in the system. If you've been burned by AI agents that run wild with your credentials or leak data to third-party providers, this architecture deserves your attention.
A Security Model That Doesn't Make Excuses
Frona's security approach is refreshingly paranoid, and that's a compliment. Every agent, MCP server, deployed app, and messaging channel runs as its own principal with isolated policies. CLI tool calls, MCP servers, and apps execute in sandboxed Linux processes with policy-driven syscall, filesystem, and network filtering, spawned and reaped on demand. No Docker containers per agent. No daemon to babysit.

The credential vault integration is where this gets interesting: agents request secrets at the moment they're needed, and you approve or deny in real time. Supported vaults include 1Password, Bitwarden, HashiCorp Vault, KeePass, and Keeper. Critically, credentials never enter agent memory or LLM provider traffic. If you're running an AI agent that needs to access your infrastructure, this is how it should work.

Dual LLM dispatch for inbound messages adds another layer: untrusted channel inbounds route to a quarantined LLM with a restricted tool registry, so a hostile message can't talk the agent into running tools or leaking data on its behalf. Combined with isolated browser profiles per user and credential context, you get real isolation rather than theater.
Built-In Agents, Real Delegation
Frona ships with four built-in agents at install time: Assistant, Researcher, Developer, and Receptionist, and custom agents are first-class citizens. The agent-to-agent delegation feature lets you chain specialized agents together with structured handoff and result return: your Researcher agent can hand off findings to your Developer agent, which can hand off a deployed app to your Receptionist for monitoring. Persistent memory includes automatic compaction and deduplication. User-scoped facts are shared across all of an owner's agents; agent-scoped facts stay private. Spaces group related conversations and feed summarized cross-chat context into new ones. Skills package reusable instructions that you can install from the built-in set, share across agents, or scope to a single agent.
Tools That Actually Ship
The tool set reads like a power user's wishlist: browser automation via Browserless with persistent profiles, web search through SearXNG (self-hosted), Tavily, or Brave Search, and code execution in sandboxed shell, Python, and Node.js environments with per-principal filesystem, network, and resource caps. Agents can build and deploy web apps and services with an approval gate before anything goes live—including auto-hibernation of idle apps and supervised restart on failure. Voice calls work via Twilio integration with speech recognition and DTMF navigation. Scheduling and heartbeats handle cron-driven tasks and ongoing checklists. Notifications surface task completion, app deployments, and credential approvals into a top-bar feed so you're never wondering what your agents are doing.
Deployment and Provider Flexibility
The platform runs in a single rootless OCI container that handles the API server, embedded SurrealDB with RocksDB storage, scheduler, and tool execution—no per-agent containers even at scale. Real-time streaming of token, tool-call, and tool-result events goes over Server-Sent Events. LLM support at launch covers Anthropic, OpenAI, Google Gemini, DeepSeek, Mistral, Cohere, xAI (Grok), Groq, OpenRouter, Together, Perplexity, Hyperbolic, Moonshot, Hugging Face, Mira, Galadriel, and Ollama for local models. Channels at launch include Telegram and SMS with pairing flows that lock channels to your devices by default.
Key Takeaways
- Frona's per-principal sandboxing means every agent runs in its own isolated Linux process with policy-driven access controls—no container sprawl
- Credential vault integration keeps secrets out of agent memory and LLM provider traffic entirely
- Cedar-based policy language handles tool authorization, file paths, network destinations, and port binds at one decision point
- Built-in agents (Assistant, Researcher, Developer, Receptionist) plus full custom agent support with delegation between them
- Self-hosted by design: pick your LLM provider, run on your infrastructure, data never leaves your servers
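To make the "one decision point" bullet concrete, here is what a Cedar-style policy could look like. The entity types, action names, and path are invented for illustration; Frona's actual policy schema is not documented in this article:

```cedar
// Illustrative Cedar-style policy (hypothetical schema): allow the
// Developer agent to run the shell tool, but only against files
// under its own workspace.
permit(
  principal == Agent::"developer",
  action == Action::"tool/shell",
  resource
)
when {
  resource.path like "/workspaces/developer/*"
};
```

Putting tool authorization, file paths, and network rules behind one policy evaluation means there is a single place to audit what an agent can do.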
The Bottom Line
Frona isn't trying to be everything to everyone—it's a focused, security-first platform for people who want AI agents that actually respect boundaries. The credential vault alone makes it worth evaluating if you've been manually managing what your agents can access. The BSL 1.1 license (converts to Apache 2.0 in 2029) gives you time to evaluate before any commercial implications kick in. This is the kind of project that gets interesting when you start chaining agents together with tight policies.