Claude Code just got a serious upgrade for developers who want to set it and forget it. The new /goal command, documented at code.claude.com, lets you define a completion condition that Claude keeps working toward automatically, cycling through turns until the condition is satisfied rather than handing control back after every response. It's a game-changer for substantial refactors, test migrations, and labeled issue queues you'd otherwise grind through by repeatedly copy-pasting the same prompt.
How Autonomous Goal-Taking Works
When you run /goal followed by your condition—something like 'all tests in test/auth pass and the lint step is clean'—Claude immediately starts working toward that end state. After each turn, a separate fast model (defaulting to Haiku) evaluates whether the transcript shows the condition has been met. If it hasn't, Claude fires off another turn automatically without waiting for your input. A small ◎ indicator shows how long the goal has been running. The evaluator returns a short reason explaining its decision, so you can see what Claude thinks it's still missing.
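Conceptually, that cycle looks something like the shell sketch below. This is purely illustrative, not Claude Code's actual implementation: run_turn and condition_met are hypothetical stubs standing in for Claude's work turn and the fast evaluator model.

```shell
#!/bin/sh
# Illustrative sketch of the /goal cycle. run_turn stands in for one work
# turn; condition_met stands in for the evaluator that reads the transcript
# after each turn. Neither is something Claude Code exposes.
turns=0
max_turns=20          # a bound like 'or stop after 20 turns'

run_turn() {
  turns=$((turns + 1))
  echo "turn $turns: working toward the goal"
}

condition_met() {
  # Stub evaluator: pretend the condition reads as satisfied on turn 3.
  [ "$turns" -ge 3 ]
}

until condition_met; do
  if [ "$turns" -ge "$max_turns" ]; then
    echo "stopping: turn limit reached"
    break
  fi
  run_turn
done
echo "goal evaluated as met after $turns turns"
```

The important structural point the sketch captures: the loop's exit test is separate from the work step, which is exactly why /goal doesn't rely on the working model deciding it's done.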
/goal vs Other Autonomous Approaches
The documentation clarifies where /goal fits in Claude Code's broader autonomy toolkit. /loop runs on a time interval and stops when you kill it or when Claude decides the work is done. Stop hooks let your own script or prompt decide whether to continue. Auto mode approves tool calls within a single turn but doesn't start new ones, and in each of these setups Claude still judges its own completion rather than checking against an explicit condition. The key difference with /goal: the evaluator is a separate model from the one doing the work, so you're not relying on Claude's own assessment of when it's finished.
Writing Conditions That Actually Work
The evaluator can't run commands or read files independently—it only judges what Claude has already surfaced in the conversation. This means your condition needs to describe something demonstrable through Claude's output: test results, build exit codes, file counts, git status checks. Effective conditions typically include one measurable end state, a stated check method like 'npm test exits 0', and any constraints that must hold along the way such as 'no other test file is modified'. You can also bound runtime with clauses like 'or stop after 20 turns' to prevent runaway loops.
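Putting those pieces together, a condition with a measurable end state, explicit check methods, a constraint, and a turn bound might read like this (the wording is illustrative, not taken from the docs):

```
/goal all tests in test/auth pass ('npm test' exits 0) and the lint step
is clean ('npm run lint' exits 0); no file outside test/auth is modified
('git status' confirms); or stop after 20 turns
```

Each clause names something Claude can surface in the transcript, so the evaluator has concrete evidence to judge against rather than a vague notion of "done."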
Status Tracking and Session Persistence
Run /goal with no arguments to see current status: the condition itself, running duration, turn count, token spend, and the evaluator's most recent reasoning. If you close a session while a goal is still active, resuming with --resume or --continue restores it—the condition carries over but timers reset. The command also works non-interactively through 'claude -p "/goal [condition]"', letting you queue up unattended work that runs to completion in a single invocation.
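An unattended run composes the slash command into the -p argument. The snippet below only prints the invocation it would make (the condition text is an example; swap printf for running the command directly when you mean it):

```shell
#!/bin/sh
# Illustrative: compose the documented non-interactive form,
#   claude -p "/goal [condition]"
# The condition text here is an example, and printf just shows the
# resulting command line rather than executing it.
condition="all tests in test/auth pass ('npm test' exits 0), or stop after 20 turns"
printf 'claude -p "/goal %s"\n' "$condition"
```

Note the double quotes around the whole slash command: the condition is one shell argument, so inner quoting (like the single quotes around 'npm test') survives intact.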
Requirements and Limitations
The feature requires accepting the trust dialog since it's built on the hooks system, and it's unavailable if disableAllHooks is set or allowManagedHooksOnly is configured in managed settings. Each session supports one active goal at a time; running /goal while one exists simply replaces it. Conditions max out at 4,000 characters, which should cover most use cases but might cramp complex multi-part requirements.
Key Takeaways
- /goal automatically triggers follow-up turns until your specified condition is satisfied
- The evaluator runs on a separate small model (default Haiku) for objective assessment
- Write conditions around Claude's transcript-visible outputs—test results, exit codes, file states
- One goal per session; use 'or stop after X turns' to prevent runaway execution
The Bottom Line
This is exactly the kind of practical autonomy that makes CLI tools actually useful in real development workflows. No more babysitting a migration or refactor with repetitive prompts—just define the finish line and let Claude sprint toward it.