On February 25, 2026, Alex tasked Claude (the Opus‑tuned LLM) with a hands‑off deployment of OpenClaw on a 6‑core EPYC VPS. The goal was a daily Hacker News AI‑agent digest sent to Telegram at 5:20 CET, all documented without human intervention. Claude spun up Node.js, installed OpenClaw, created a Telegram bot, and hooked the platform to OpenRouter. The experiment lasted ten hours, generated 16 distinct incidents, and cost roughly $1.50 in API usage.
Deployment Overview
The first three steps finished by midnight: nvm installed Node 18, pnpm pulled OpenClaw globally, and the onboarding wizard generated a default config. A self‑signed TLS certificate behind nginx secured the dashboard, and the Telegram bot token was stored in .env. Claude also connected the platform to OpenRouter so the agent could call Mistral Small for chat and DeepSeek for heavy lifting.
Heartbeat Scheduling Failure
Claude wrote the schedule in HEARTBEAT.md, expecting the platform to treat the file as a cron definition. In reality HEARTBEAT.md is read only as context, so the model itself had to parse “08:30 CET” and compare it with the current time. Mistral Small 3.2 kept returning HEARTBEAT_OK even after the scheduled time had passed, because it cannot reliably perform numeric time arithmetic; three prompt rewrites with explicit extraction steps did not help.
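A deterministic check would have sidestepped the problem entirely. Below is a minimal sketch (the helper name and its hook into OpenClaw are hypothetical, not platform API): plain code decides whether the heartbeat is due, and the model is only consulted afterwards about what to do.

```python
from datetime import datetime

def heartbeat_due(now: datetime, schedule: str = "08:30", window_minutes: int = 5) -> bool:
    """True iff `now` (assumed already in CET) falls inside the schedule window.

    This is exactly the arithmetic Mistral Small kept failing at; it belongs
    in deterministic code, not in a prompt.
    """
    hh, mm = map(int, schedule.split(":"))
    due = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
    delta_min = (now - due).total_seconds() / 60
    return 0 <= delta_min < window_minutes
```

With a five-minute window, 08:32 fires and 08:36 does not, regardless of which model sits on top.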
Infinite Retry Loop Incident
When the schedule never fired, Claude was instructed to “Execute the digest task NOW,” which triggered a tool‑execution error. The agent entered an infinite retry loop at 05:43 CET, spamming the Telegram channel with 271 identical error messages. Alex’s “Stop” and “/stop” commands were ignored because the turn queue was locked; the loop stopped only when the OpenRouter API key was manually revoked, after $0.42 in wasted tokens.
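The takeaway about retry limits further down exists because nothing like the following was in place. This is a hedged sketch in Python rather than OpenClaw’s actual (Node.js) internals, and the names are hypothetical, but it shows the shape of the missing circuit breaker.

```python
import time

class CircuitOpen(Exception):
    """Raised when the breaker trips; the caller should page a human, not retry."""

def with_breaker(task, max_attempts=3, backoff_s=2.0):
    """Run `task` with a hard retry cap and exponential backoff.

    A cap like this at the tool-execution layer would have turned 271
    identical Telegram messages into at most `max_attempts` of them.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts:
                raise CircuitOpen(f"gave up after {attempt} attempts: {exc}") from exc
            time.sleep(backoff_s * 2 ** (attempt - 1))  # 2s, 4s, 8s, ...
```

The important design choice is that `CircuitOpen` is terminal: it propagates to an alerting path instead of being fed back into the agent loop.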
Dual‑Model Strategy
The root cause was model selection. Mistral Small is cheap but hopeless at scheduling and multi‑step tool orchestration. By switching the heartbeat and cron agents to DeepSeek Chat v3.1 (685 B MoE) via the hidden agents.defaults.heartbeat.model key, the platform gained reliable time parsing and structured output. The final stack used Mistral Small for casual chat and DeepSeek for all autonomous tasks, a configuration the docs only hint at.
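For reference, the override lives under the agents block of the config. The dotted key is the one named above; the surrounding file layout and the exact model slugs are illustrative guesses, since the docs never show a complete example.

```json
{
  "agents": {
    "defaults": {
      "model": "mistralai/mistral-small-3.2",
      "heartbeat": {
        "model": "deepseek/deepseek-chat-v3.1"
      }
    }
  }
}
```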
Thinking Mode Pitfall
DeepSeek’s default thinking: low mode forces the model to do all reasoning inside a hidden thinking block, which can leave the final, user‑visible reply empty. That is harmless for internal planning steps, but for jobs whose output is delivered verbatim, like the digest message itself, an empty reply is a silent failure, hence the takeaway below: disable thinking mode for anything that must produce outward text.
Announce Pipeline Mystery
The announce sub‑agent, built on the primary default model, cannot be re‑targeted via configuration; attempts to set agents.defaults.announce.model are ignored without warning. Consequently the pipeline silently swallows empty summaries and still marks the job successful. The only clue is a DEBUG‑level log entry, which makes troubleshooting a needle‑in‑a‑haystack exercise.
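Since the announce model cannot be swapped, the practical defense is to validate its output before the job is marked successful. A sketch follows; the hook point into OpenClaw is hypothetical, but the check itself is plain Python.

```python
def validate_summary(text, min_chars=40):
    """Fail loudly on empty or near-empty announce output.

    The platform records the empty-summary case only at DEBUG level and still
    reports success; raising here surfaces it as an ordinary job failure.
    """
    stripped = (text or "").strip()
    if len(stripped) < min_chars:
        raise ValueError(
            f"announce produced only {len(stripped)} chars; refusing to mark job successful"
        )
    return text
```

A threshold of 40 characters is arbitrary; the point is that “empty but successful” becomes impossible.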
Key Takeaways
- Use a dedicated, higher‑capacity model for heartbeat and cron tasks.
- Disable thinking mode for any job that must produce outward text.
- Implement explicit retry limits and circuit breakers; OpenClaw currently has none.
- Verify schedule logic with a real cron or external timer, not a markdown prompt.
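On the last point, the markdown schedule can be replaced with an ordinary cron entry that triggers the digest task. The file path and command below are hypothetical, and CRON_TZ requires a cron daemon that supports it (e.g. cronie); it matches the 5:20 CET delivery target.

```
# /etc/cron.d/openclaw-digest (hypothetical): fire the digest at 05:20 CET daily
CRON_TZ=Europe/Paris
20 5 * * * openclaw /usr/local/bin/openclaw-digest
```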
The Bottom Line
OpenClaw’s architecture is impressive – a modular agent framework that can spin up a functional AI digest in under a minute once tuned – but its defaults hand you enough rope to hang yourself. Expect a full day of model juggling, prompt hacks, and safety nets before you see reliable output.