Back in February, Daniel Stenberg pulled the plug on curl's bug bounty program entirely. The deluge of AI-generated slop had become untenable: low-quality submissions were flooding in faster than maintainers could triage them, burning people out in the process. Now, just months later, something remarkable has happened: the junk has given way to reports that are actually useful.
From Slop to Signal
The curl project relaunched its program on HackerOne in March 2026 after determining that GitHub's infrastructure wasn't up to the task. What came back was fundamentally different from what they had shut down. Report frequency doubled compared to 2025, which was itself already more than double that of previous years. More importantly, the confirmed-vulnerability rate climbed back past pre-AI levels, reaching roughly 15-16%. Stenberg notes that plain bug reports (not vulnerabilities, but still legitimate issues) are also arriving at significantly higher rates than before.

The telltale signs of AI are everywhere: phrasings that scream AI assistant, detailed writeups that would take humans hours to produce. But the difference from early 2025 is stark. These aren't half-baked hallucinations anymore; they're coherent, reproducible, and actionable. Stenberg puts it bluntly in his blog post: 'Almost every security report now uses AI to various degrees', and most of them land at a quality level that actually helps.
A Universal Pattern Emerges
Curious whether curl was an outlier, Stenberg ran an informal poll on Mastodon. The response was overwhelming: representatives from Apache httpd, BIND, Django, Firefox, git, glibc, the Linux kernel, Python, Ruby, Wireshark, and dozens of other projects confirmed they were seeing identical trends. The exact numbers vary, but the pattern holds: AI-assisted security research is producing better results across the open source ecosystem. This isn't just curl's problem; it's everyone's now.
Volume Creates New Pressures
The irony isn't lost on Stenberg. Better quality means more confirmable vulnerabilities, which means more CVEs to publish and more patches to release. With curl 8.20.0 shipping in late April, the project expects at least six new vulnerabilities in that release alone; projected over the full year, that works out to roughly 50 CVEs, a record by a wide margin.

The maintainer burden is intensifying. 'This avalanche is going to make maintainer overload even worse,' Stenberg writes. Some projects simply don't have the human bandwidth to absorb this kind of backlog expansion without additional help. And an uncomfortable reality lurks underneath: attackers can use these same AI tools to find problems before they're disclosed and fixed.
Key Takeaways
- AI-generated security reports have flipped from low-quality noise to high-signal contributions, with confirmation rates hitting 15-16%
- Dozens of major open source projects—from the Linux kernel to Django—are experiencing the same surge in volume and quality
- Maintainer burnout remains a critical risk as vulnerability disclosure volumes hit historic highs across the ecosystem
The Bottom Line
The AI security research era isn't coming; it's here, and it's producing results. That's genuinely good news for software hardening. But unless open source projects get more hands on deck to handle the paperwork of responsible disclosure, this 'high quality chaos' could become just another way for attackers to stay ahead of the fix cycle.