Daniel Stenberg

Don’t Insert Crazy! On cURL and AI Slop - Daniel Stenberg

After AI-generated reports dropped the signal-to-noise ratio to less than one in twenty, the creator of cURL shut down his bug bounty program.

#1 · about 1 minute

Why the cURL project shut down its bug bounty program

The bug bounty program was closed due to an overwhelming volume of low-quality, AI-generated security reports that made triage unsustainable.

#2 · about 4 minutes

Understanding the problem of AI-generated "slop" reports

AI chatbots generate reports containing hallucinated vulnerabilities, made-up function names, and false positives triggered simply by the presence of common C functions such as strcpy.

#3 · about 3 minutes

The high operational cost of managing low-quality submissions

AI-generated reports are often long and elaborate, creating a significant time burden for maintainers who must manually verify each invalid claim.

#4 · about 7 minutes

Moving vulnerability reporting from HackerOne to GitHub

The new process for reporting vulnerabilities will be through GitHub, without the financial incentives previously provided by the Internet Bug Bounty fund.

#5 · about 11 minutes

How AI threatens the sustainability of open source projects

AI-generated code can disrupt the open source model by reducing feedback loops, creating licensing ambiguity, and undermining ad-based revenue streams.

#6 · about 3 minutes

Monetizing open source with commercial support contracts

A sustainable monetization model for foundational projects like cURL involves selling long-term support and expert assistance to businesses that rely on the software.

#7 · about 3 minutes

Planning for project continuity and the bus factor

The cURL project ensures its longevity through a core team of trusted contributors and a well-documented, open process, mitigating the risk of a single point of failure.

#8 · about 8 minutes

The future of cURL security without a bounty program

Maintainers are not worried that quality reports will dry up: genuine researchers are often motivated by more than money, and many reported bugs turn out to be historical issues or API misuse rather than current vulnerabilities.

#9 · about 5 minutes

The responsibility of researchers to validate AI findings

Security researchers using AI tools must take responsibility for verifying the claims and reproducing the issues before submitting reports to avoid wasting maintainer time.

#10 · about 2 minutes

How to spot AI-generated text in issue reports

AI-generated text can often be identified by its excessive length, perfect grammar, overuse of bullet points, and an unusually apologetic tone.
