A week ago, developer lmgamaral committed a YAML file listing 50 features, set Claude Code Routines to wake up twice daily, and went to bed. By morning, a new feature was live in production. Zero lines of code written by human hands. Five days in, the pipeline kept running. This is what broke first.
The Example Problem Nobody Warns You About
Day three brought strange HTML comments leaking into production page source. Nine pages went live with internal prompt scaffolding visible to anyone who viewed source. The cause: the original instruction included a literal example comment showing the target format, and by default AI models copy examples, especially working ones, even when explicitly told not to include them in output. "A concrete example beats an abstract warning almost every time," lmgamaral noted. The fix: delete the example entirely. No template. Just the abstract rule. Zero leaks after that.
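To make the failure mode concrete, here is an invented before/after of such an instruction. The wording and comment format below are mine for illustration; the article does not reproduce lmgamaral's actual template.

```
Before (the model copies the template verbatim into output):
  Mark internal scaffolding with a comment, e.g.:
  <!-- pipeline: item #12, do not render -->

After (abstract rule only, nothing to imitate):
  Never emit internal annotations or scaffolding comments
  in the rendered page source.
```

The "before" version hands the model a working artifact to reproduce; the "after" version leaves it nothing to copy.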
Cloud Sandboxes Have Rules They Don't Tell You
Push attempts started returning 403 errors on day 14 of running the pipeline. Branches with other names still worked fine; only pushes to main were blocked. The culprit: Anthropic's Claude Code cloud sandbox restricts pushes to branches prefixed "claude/" by design, a security measure to keep runaway agents from rewriting protected branches. This is documented in the security docs. Nobody reads those first. The workaround, on the agent side: explicit fallback instructions to create session branches named with the "claude/run-" prefix and push there instead whenever a push to main is rejected.
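A minimal shell sketch of that fallback. The "claude/" prefix is the sandbox's hard requirement; the run-timestamp suffix and function names are my illustrative choices, not lmgamaral's actual scheme.

```shell
# Derive a session branch name that satisfies the sandbox's
# "claude/" prefix rule (suffix scheme is hypothetical).
session_branch() {
  echo "claude/run-$(date -u +%Y%m%d-%H%M%S)"
}

push_with_fallback() {
  # Try main first; if the sandbox rejects it (the 403 above),
  # push a claude/-prefixed session branch instead.
  if ! git push origin main; then
    branch="$(session_branch)"
    git checkout -b "$branch" && git push origin "$branch"
  fi
}
```

Everything after the required prefix is free; a timestamp keeps concurrent sessions from colliding on branch names.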
Build Artifacts Don't Belong in Git
The auto-merge Action started failing with "Your local changes to public/sitemap.xml would be overwritten by checkout". The cause was mundane: next-sitemap regenerates sitemap.xml on every build with a fresh timestamp. Tracked in git, this meant CI builds kept introducing local changes that blocked the next branch checkout. This problem doesn't exist when only humans run builds—your laptop's sitemap matches git because you committed it last time. Two minutes to fix once understood: "git rm --cached public/sitemap.xml" plus one line in .gitignore.
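The two commands from the article generalize to any generated file; a small helper makes the pattern reusable (the function name is mine, the commands are the ones from the fix):

```shell
# Stop tracking a build artifact while keeping it on disk, then ignore it.
untrack_generated() {
  path="$1"
  git rm --cached "$path"      # remove from the index only; file stays on disk
  echo "$path" >> .gitignore   # keep future builds from re-adding it
}
```

Calling `untrack_generated public/sitemap.xml` and committing reproduces the two-minute fix.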
Race Conditions Hit the Merge Gate, Not the Agent
At 15:02, lmgamaral manually triggered the agent, not noticing that a run scheduled for 15:00 was still pending its typical few-minute startup delay. Both runs started within two minutes of each other. Both read the same backlog. Both built the same next item. The first finished, pushed its branch, and the Action merged it successfully. The second finished minutes later with the same result, until its merge step failed on work the first run had already landed. Making the agent itself idempotent would require locks, queues, and coordination logic, expensive to design and easy to get subtly wrong. Instead, the fix lives at the boundary: "Detection at the boundary beats correctness everywhere upstream." Collision detection in the merge step handles every race condition automatically.
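A hedged sketch of what boundary detection can look like in a merge step. This is my reconstruction of the principle, not lmgamaral's actual Action: attempt the merge, and treat a conflict as a detected collision to discard rather than an error to debug.

```shell
# Merge a finished session branch into main, or drop it on collision.
merge_or_skip() {
  branch="$1"
  git checkout main
  if git merge --no-ff -m "merge $branch" "$branch"; then
    echo "merged: $branch"
  else
    git merge --abort          # duplicate/conflicting work: discard, don't retry
    echo "collision: dropped $branch"
  fi
}
```

The agent stays simple and oblivious; whichever run reaches the gate second gets detected and dropped there, regardless of how the race arose.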
What Actually Changed
"A month ago, my model of AI in development was 'the AI helps me code'. Pair programming. Always in the loop," lmgamaral wrote. Now it's different: "The AI ships code while I sleep." The engineering work shifted from writing prompts to answering three questions: What can the agent do without me? When it goes wrong, how will I know? And when I know, what tools do I have to recover?
Key Takeaways
- Don't show models examples of unwanted behavior—even as warnings. Delete the template entirely.
- Every service in an automated pipeline has hidden defaults and restrictions. Build fallbacks before you need them.
- Build outputs are derivatives of source, not source itself. Never track them in version control.
- Push collision detection at merge time beats idempotent agent logic—it's cheaper, simpler, and robust to unknown race conditions.