If you've been letting AI coding tools write your projects end-to-end, there's a decent chance you're learning less than you think you are. Dr. Cat Hicks, a psychological scientist who studies software teams, just released learning-opportunities—a plugin for both Claude Code and Codex that fights back against the cognitive shortcuts AI assistants inadvertently encourage.
The Core Problem This Skill Addresses
The repository is straightforward about its purpose: after significant coding work (new files, schema changes, refactors, architectural decisions), Claude will offer optional 10-15 minute learning exercises. These aren't tutorials—they're active recall and generation tasks that force you to engage with what you've just shipped rather than passively accepting generated code.
The Learning Science Is Legit
What's refreshing here is the explicit grounding in evidence-based techniques. Hicks identifies five cognitive risks that AI coding amplifies:
- The generation effect: accepting code instead of writing it
- The fluency illusion: clean output feels like understanding
- Spacing problems: velocity crowds out any reflection cadence
- Metacognition gaps: no room to assess what you actually know
- Reduced retrieval practice: agents give complete answers, eliminating self-testing
The skill counters each with structured exercises: predictions before observation, sketching implementations from scratch, execution tracing with step-by-step reasoning, debugging scenarios, teach-it-back explanations, and session-starting retrieval check-ins.
How the Exercises Actually Work
After you accept an exercise prompt, Claude pauses and waits for your input rather than immediately answering its own questions. This is intentional friction—pushing against the model's default to provide full answers. Hicks acknowledges this can feel frustrating, but that's precisely the point. The skill includes suppression conditions so it won't bother you if you've already declined once or completed two exercises in a session.
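The suppression conditions described above are simple enough to sketch. The following is a hypothetical illustration of that logic only, not the plugin's actual code; the class name and structure are assumptions, with just the two stated rules (one decline, or two completed exercises per session) carried over:

```python
# Hypothetical sketch of the session-level suppression logic described
# above. Not the plugin's real implementation; names are illustrative.

class ExerciseGate:
    """Decides whether to offer a learning exercise this session."""

    MAX_COMPLETED_PER_SESSION = 2  # stated suppression threshold

    def __init__(self):
        self.declined = False
        self.completed = 0

    def should_offer(self) -> bool:
        # Suppress after one decline or two completed exercises.
        if self.declined:
            return False
        return self.completed < self.MAX_COMPLETED_PER_SESSION

    def record_decline(self):
        self.declined = True

    def record_completion(self):
        self.completed += 1


gate = ExerciseGate()
assert gate.should_offer()       # fresh session: offer an exercise
gate.record_completion()
assert gate.should_offer()       # one exercise done: still offer
gate.record_completion()
assert not gate.should_offer()   # two done: suppressed for the session
```

The point of the gate is the same as the skill's: friction when it helps, silence once you've engaged or opted out.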
Repo Orientation With Empirical Backing
The learning-opportunities package also includes orient, a plugin that generates repository orientation lessons using strategies from program comprehension research. It draws on how expert developers sample codebases strategically rather than reading them exhaustively. Running /learning-opportunities orient produces the orientation, and the main skill then offers targeted follow-up lessons.
A Measurement Playbook for Teams
For organizations experimenting with this at scale, MEASURE-THIS.md provides a companion playbook with validated survey items from peer-reviewed research on developer thriving and AI skill threat. It includes guidance on interpreting variance (not just averages), team boast templates for leadership communication, and statistical rigor nudges if you want Claude to help with analysis.
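The variance point deserves a concrete illustration. Two teams can report the same average on a survey item while having very different experiences; the numbers below are toy Likert-scale responses invented for this sketch, not data from the playbook:

```python
from statistics import mean, stdev

# Toy 1-5 Likert responses; illustrative only, not real survey data.
team_a = [3, 3, 3, 3, 3]   # everyone mildly positive
team_b = [1, 5, 1, 5, 3]   # polarized: same mean, very different story

# Both teams average 3.0, but the spread tells opposite stories.
assert mean(team_a) == mean(team_b) == 3
print(f"Team A: mean={mean(team_a)}, stdev={stdev(team_a):.2f}")  # stdev 0.00
print(f"Team B: mean={mean(team_b)}, stdev={stdev(team_b):.2f}")  # stdev 2.00
```

Reporting only the mean would make these teams look identical; the variance is where the signal lives.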
Installation Is Dead Simple
For Codex: `codex plugin marketplace add https://github.com/DrCatHicks/learning-opportunities.git`. For Claude Code: `/plugin marketplace add` with that same URL, then `/plugin install learning-opportunities@learning-opportunities`, restart, done. Optional Linux/macOS post-commit hooks via learning-opportunities-auto are available for automatic prompting after git commits.
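Git post-commit hooks are just executables dropped into .git/hooks, so the automatic-prompting idea is easy to picture. The following is a hypothetical sketch of such a hook, not the contents of learning-opportunities-auto, whose actual script may work quite differently:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a post-commit prompt hook (would live at
# .git/hooks/post-commit, marked executable). Not the actual
# learning-opportunities-auto hook; its real script may differ.
import subprocess


def latest_commit_summary() -> str:
    """Return the subject line and file stats of the most recent commit."""
    out = subprocess.run(
        ["git", "log", "-1", "--stat", "--format=%s"],
        capture_output=True, text=True,
    )
    return out.stdout


def nudge(summary: str) -> str:
    """Build the reminder shown after each commit."""
    return (
        "Commit recorded:\n"
        + summary
        + "\nConsider a 10-15 minute learning exercise on what you just shipped."
    )


if __name__ == "__main__":
    print(nudge(latest_commit_summary()))
```

The design choice worth noting: the hook only prints a reminder, leaving the decision to engage with the developer, which matches the opt-in spirit of the main skill.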
Key Takeaways
- Grounded in Bjork, Dunlosky, and Ericsson research on expertise development—citations aren't hand-wavy
- Forces active generation rather than passive consumption of AI output
- Built by someone who's actually studied thousands of developers in AI-assisted workflows
- Includes a team measurement layer for organizations that want to quantify the experiment
The Bottom Line
This is exactly the kind of meta-tooling the ecosystem needs right now. Everyone's shipping faster with AI, but nobody's measuring whether developers are actually getting better at their craft or just becoming better prompters. If you're serious about not plateauing as an engineer while using agentic coding tools, this is worth 15 minutes of friction.