TL;DR: LLMs are turbo-charging low-effort bug bounty hunters and script kids without any technical understanding.
Impact of LLMs on bug bounty
I spent many years processing security vulnerabilities reported via our security.txt at work. However, in recent months, there has been a surge in countless vulnerability reports that were obviously generated using tools such as ChatGPT. These “reports” not only read generically, but are laughably incorrect and vague. Filtering between critical vulnerabilities and AI slop takes several hours per week. It is an epedimia of useless work generated for already exhausted IT staff on the other side.
In this article, I would like to take a very subjective look at the negative effects on maintainers and security employees. Knowing that there have been cases in which valid vulnerabilities have been discovered completely autonomously by AI.
These “reports” follow some interesting patterns:
- Randomly generated Gmail addresses (e.g.
harry.potter476@gmail[.]com) - Low-effort findings found by cheap security scanner (missing security header, etc.)
- Nearly similar template structure of the reports
- Sometimes no mention of the vulnerable endpoint at all (no URL, IP, etc.)
- Non-existing vulnerabilities a LLM just made up
- LLM-generated responses and clearly no understanding of the conversation
- Prompt-Injection possible in the answer 😈
In the perspective of bug bounty “hunters” this is, if done “right”, a very lucrative business idea. It takes some effort to set up workflow automation for security scanners, OpenAI API and sending emails. Then you can lovely share your AI slop with exhausted IT staff.
Daniel Stenberg, author and maintainer of curl and libcurl also stated in his article Death by a thousand slops:
I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us. This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.
Update: On 14. January 2026 Daniel Stenberg announced the official ending of curls bugbounty program on Mastodon and GitHub. Apparently the reason was the growing AI slop to which he commented: „nothing can stop it, but we can hopefully slow it down by removing a strong incentive“. This is also evidence for the huge amount of useless work AI does generate on the other side. In the future we might see AI solving AI problems. I suggest this problem stems from pure monetary greed of some many individuals who try get some money through begbounty.
State of HackerOne
Since the introdustions of GPAI (General-Purpose AI) HackerOne saw surge of AI slop. Let’s have a look at an example:
Buffer Overflow in WebSocket Handshake (lib/ws.c:1287)
Source: hackerone.com
This “bug” was reported to curl. The user (the AI to be precise) found a vulnerability is this part of the source code:
CURLcode Curl_ws_request(struct Curl_easy *data, struct dynbuf *req)
{
unsigned int i;
CURLcode result = CURLE_OK;
unsigned char rand[16];
char *randstr;
size_t randlen;
char keyval[40];
//...
heads[2].val = &keyval[0];
/* 16 bytes random */
result = Curl_rand(data, (unsigned char *)rand, sizeof(rand));
if(result)
return result;
result = curlx_base64_encode((char *)rand, sizeof(rand), &randstr, &randlen);
if(result)
return result;
DEBUGASSERT(randlen < sizeof(keyval));
if(randlen >= sizeof(keyval)) {
free(randstr);
return CURLE_FAILED_INIT;
}
strcpy(keyval, randstr);
//... According to the reporter, this code is vulnerable to memory corruption and enables a remote buffer overflow via unsafe strcpy(). Have look at the hilarious exploit code yourself.
Enyone with a basic knowledge of C can see here that an overflow of keyval is impossible at this point. It is true that the use of strcpy is no longer recommended. This function may have been a trigger point for the LLM. However, in this case, it is implemented flawlessly.
When Daniel responded: “Can you please explain to us, preferably without an AI, how that strcpy() can copy more bytes than what fits in ‘keyval’ ?”
The reporter then politely apologizes for his analysis: “I need to withdraw this WebSocket vulnerability report. After reviewing your response, I realize my analysis was wrong – the bounds check if(randlen >= sizeof(keyval)) prevents the buffer overflow, so the strcpy() is actually safe. This is not a vulnerability. Sorry for the incorrect report I will be more thorough if I submit any in future!”
To which Daniel replied: “You will not submit any more issues to us, you are banned for violating the AI slop rules. AI slop deluxe“
This report produces two reactions in the reader: a hearty laugh and sympathy for Daniel.