There's a dirty secret circulating through dev shops everywhere: AI agents don't get better with time—they get worse. A new analysis from developer blog Hold The Robot breaks down what many in the industry are quietly experiencing but haven't articulated clearly.

The Expert-to-Novice Inversion

Hand a green human developer a fresh project and they start out useless, growing into an expert over months of struggle and learning. AI agents do the exact opposite. On small projects, these systems present like seasoned architects—quickly analyzing codebases, answering complex questions, shipping real functionality. But as the codebase expands, context window limitations force increasingly desperate behavior: punching through abstractions, duplicating logic, ignoring established guidelines, and making changes without grasping the implications.
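The mechanism is easy to see with a toy model (every number and name below is invented for illustration, not taken from the article): give an agent a fixed token budget and greedily pack files into it, newest first. On a small repo the whole project fits; on a large one, most of the codebase falls outside the window, so the agent acts on a shrinking slice of reality.

```python
# Toy sketch: a fixed context budget covers less and less of a growing repo.
# CONTEXT_BUDGET and the repo sizes are assumptions for illustration only.

CONTEXT_BUDGET = 100_000  # tokens the agent can see at once (assumed)

def visible_fraction(file_sizes, budget=CONTEXT_BUDGET):
    """Greedily pack files (assumed ordered newest-first) into the
    context window; return the fraction of the repo that fits."""
    packed = 0
    for size in file_sizes:
        if packed + size > budget:
            break  # window is full; everything else is invisible
        packed += size
    total = sum(file_sizes)
    return packed / total if total else 1.0

small_repo = [2_000] * 20    # 40k tokens total: fits entirely
large_repo = [2_000] * 500   # 1M tokens total: only 10% visible
print(visible_fraction(small_repo))  # 1.0
print(visible_fraction(large_repo))  # 0.1
```

Real agents use retrieval and summarization rather than this naive cutoff, but the underlying constraint is the same: past a certain codebase size, every decision is made with most of the project out of view.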

Amazon's Senior Review Requirement

The clearest signal that something fundamental is broken comes from Amazon itself. The company now requires senior engineers to review all code changes made with AI assistance—not as a nice-to-have, but as mandatory policy. That requirement exists because someone high up recognized that handing an agent the wheel produces output that needs expert oversight. Your trust in a human working on a project grows over time. With agents, it recedes. Fast.

The Positioning Mistake

"A mistake I keep seeing and making is positioning the AI above the human," Hold The Robot notes. "It's a pretty good advisor but a terrible lead." This reframing matters. When you start feeling the pull toward 'sure agent, whatever you think sounds good,' that's the moment to power it off. Good judgment requires understanding, and letting that understanding slip is how you end up with a spaghetti codebase held together by duct tape and prayers.

The Paved-Path Problem

Here's where it gets philosophically interesting. AI doesn't just enable laziness—it also enables deeper exploration. Being able to instantly synthesize information and drill into fuzzy concepts is genuinely incredible for learning. But you're one keystroke away from 'getting the thing done' in a way that will probably work fine, for now. The technology simultaneously makes offloading your thinking easier AND pursuing genuine understanding more accessible. Which path you take depends entirely on discipline.

Key Takeaways

  • AI agents perform like experts at project start but degrade as context windows fill with accumulated code and history
  • Context window limitations force increasingly desperate coding patterns: abstraction violations, duplicated logic, guideline drift
  • Amazon's mandatory senior review of all AI-assisted changes signals industry-wide recognition that agent output requires human oversight
  • The critical mistake is positioning agents as leads rather than advisors—good judgment requires understanding the full picture

The Bottom Line

We're roughly six months past 'agentic coding' being acknowledged as useful by anyone not selling it. The formative period is happening now, and the mental models haven't caught up yet. Emphasize the 'assistance' in AI-assisted coding, or you'll spend your career debugging someone else's shortcuts. The full analysis with additional context is available at Hold The Robot.