A Hacker News thread is heating up after a developer documented what they call 'unacceptable' behavior from Anthropic's Claude: the model repeatedly refuses to generate AGPLv3 license text, blocking the output under its content filtering policy. The poster says this has happened across three separate projects and that they have reproduced the issue four times in total.
The Blocking Incident
When the developer attempted to add an AGPLv3 license header to their codebase, Claude returned a blunt error: 'API Error: Output blocked by content filtering policy.' That's it. No explanation, no workaround suggestions, just a wall. One developer noted they received the same block when simply asking Claude to generate standard open-source license text that countless projects ship verbatim.
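For anyone who wants to check the behavior themselves, here's a minimal reproduction sketch using Anthropic's official Python SDK. The model ID is a placeholder, and the exact failure mode (an API-level exception versus a filtered response) is an assumption based on the 'Output blocked by content filtering policy' error the poster reported.

```python
# Minimal reproduction sketch. Assumptions: the model ID is a placeholder,
# and the block may surface either as an API error or as a filtered response.
# Requires: pip install anthropic, with ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = (
    "Add the standard GNU AGPLv3 license header comment "
    "to the top of a Python source file."
)

try:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whichever model you test
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT}],
    )
    # If the request succeeds, inspect why generation stopped and what came back.
    print("stop_reason:", message.stop_reason)
    print(message.content[0].text)
except anthropic.APIError as err:
    # The poster's report suggests the block arrives as an API-level error:
    # 'API Error: Output blocked by content filtering policy.'
    print("Blocked:", err)
```

If the reports are accurate, the same prompt should succeed or fail inconsistently across runs and projects, which is consistent with an output-side filter rather than a prompt-side refusal.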
This Isn't New
The problem isn't isolated. The original HN poster references at least two prior reports on the issue, including an active GitHub issue (#12705) in Anthropic's official claude-code repository. Another Hacker News discussion (id: 46569652) from earlier also flagged similar behavior. Despite multiple reports over time, Anthropic hasn't publicly addressed why their model flags a widely used license, published by the Free Software Foundation and used by projects like Mastodon, Grafana, and GNU MediaGoblin, as problematic content.
The Irony Is Hard to Ignore
The developer didn't hold back on the irony: 'I would be highly surprised if they didn't train on AGPL code in the first place.' It's a fair point. Claude was almost certainly trained on repositories containing AGPL-licensed code, which means the model learned from the very open-source contributions whose license header it now refuses to help generate. The hypocrisy angle isn't subtle here: Anthropic built its foundation models partly on publicly available code that includes AGPL projects, yet its content filters treat that same license text as verboten.
Developers Are Watching
The poster announced they're 'going to start diversifying (or switching entirely to Codex)' and warned they could see a future where Claude refuses to work with any AGPL codebase. That's not an empty threat when you consider how many widely deployed software components are AGPL-licensed. If AI assistants start treating copyleft open-source licenses as policy violations, that's a massive can of worms for the developer ecosystem.
Transparency Remains Elusive
The poster also called out Anthropic's pattern of 'lack of transparency'—a complaint that resonates with anyone who's watched the company make controversial model behavior changes without clear communication. When your AI tool starts deciding which legal licenses it's allowed to help you use, you deserve an explanation. So far, that hasn't come.
Key Takeaways
- Claude's content filter is blocking AGPLv3 license generation across multiple projects and users
- The issue has been reported multiple times, with no official resolution or acknowledgment from Anthropic
- Developers are increasingly frustrated by the lack of transparency around model behavior changes
- The situation raises questions about AI training data ethics if models learned from open-source code whose license text they now refuse to output
The Bottom Line
Anthropic needs to come clean on this. Either AGPLv3 is genuinely problematic for their systems (in which case, explain why) or their content filters are overcautious and need adjustment. Letting a filter quietly block standard legal text without explanation erodes trust—and in the AI tooling space, that's a competitive disadvantage waiting to happen.