AI Polluting Bug Bounty Platforms with Fake Vulnerability Reports


May 8, 2025 - 13:27

Bug bounty programs, once celebrated for incentivizing independent researchers to report real-world vulnerabilities, are now facing a significant challenge from AI-generated fake vulnerability reports.

These fabricated submissions, known in the industry as “AI slop,” are increasingly wasting maintainers’ time and, more concerningly, sometimes receiving actual monetary payouts.

The phenomenon represents a growing trend where malicious actors leverage large language models (LLMs) to generate technical-sounding but entirely fictitious security reports.

These AI-generated reports pose a particular threat because they often appear legitimate at first glance, especially to organizations without dedicated security experts.

The reports typically include technical jargon, references to security concepts, and even suggested patches, all designed to pass initial triage processes.

However, upon closer inspection by subject matter experts, these reports quickly reveal their fraudulent nature, as they describe vulnerabilities that cannot be reproduced and reference functions that do not exist.

Socket.dev researchers have identified this trend as particularly problematic for open-source projects and under-resourced organizations that lack the internal expertise to properly evaluate technical security reports.

Many organizations face a difficult choice: invest the time and resources to thoroughly investigate each report, or simply pay bounties to avoid potential security risks and negative publicity.

A recent high-profile case involved the curl project, which received a fraudulent vulnerability report via HackerOne. The report, identified as H1#3125832, was flagged by the curl team as AI-generated slop after it cited nonexistent functions and included unverified patch suggestions.

The attacker, linked to the @evilginx account, has reportedly used similar tactics against other organizations, successfully receiving bug bounty payouts in some instances.

Security researcher Harry Sintonen noted that curl, being a highly technical open source project with deep expertise, immediately recognized the deception.

“The attacker miscalculated badly,” Sintonen stated. “Curl can smell AI slop from miles away.”

Anatomy of AI Slop Vulnerability Reports

When examining these fabricated reports more closely, several telltale characteristics emerge. The reports typically maintain a veneer of technical legitimacy by referencing plausible-sounding functions or methods that don’t actually exist in the codebase.

For instance, in the curl case, the report referenced a non-existent function called “ngtcp2_http3_handle_priority_frame.”
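One reason expert teams spot these reports so quickly is that the cited symbols simply are not in the tree, and that is cheap to check mechanically. The following is a minimal sketch of such a check, not any project's actual tooling; the report text and the local clone path are placeholder assumptions:

```python
import re
from pathlib import Path

# Hypothetical inputs: a report body and a local checkout of the project.
REPORT_TEXT = "Heap overflow in ngtcp2_http3_handle_priority_frame() when ..."
REPO_ROOT = Path("curl")  # assumed path to a local clone

def extract_identifiers(text: str) -> set[str]:
    """Pull out C-style tokens that look like function calls in the report."""
    return set(re.findall(r"\b([a-z][a-z0-9_]{5,})\s*\(", text))

def identifier_exists(repo: Path, name: str) -> bool:
    """Naive scan: does the identifier appear anywhere in the C sources?"""
    for src in repo.rglob("*.[ch]"):
        try:
            if name in src.read_text(errors="ignore"):
                return True
        except OSError:
            continue
    return False

for ident in extract_identifiers(REPORT_TEXT):
    if not identifier_exists(REPO_ROOT, ident):
        print(f"SUSPECT: '{ident}' cited in report but absent from the tree")
```

In practice a `git grep` over the pinned release tag does the same job faster; the point is that a report citing a symbol that appears nowhere in the sources is a strong slop signal.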

When questioned, the attacker deflects by claiming the issue exists only in specific older or newer versions, often citing made-up commit hashes to further the deception.
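Fabricated commit hashes are just as easy to test against a local clone. The sketch below is illustrative rather than any platform's real triage code; the repository path and the hash are hypothetical:

```python
import subprocess

def commit_exists(repo_dir: str, sha: str) -> bool:
    """Return True if `sha` resolves to a commit in the given repository.

    `git cat-file -e <sha>^{commit}` exits 0 only when the object exists
    and peels to a commit.
    """
    result = subprocess.run(
        ["git", "-C", repo_dir, "cat-file", "-e", f"{sha}^{{commit}}"],
        capture_output=True,
    )
    return result.returncode == 0

# Hypothetical hash of the kind a slop report might cite:
claimed = "9a1b2c3d4e5f60718293a4b5c6d7e8f901234567"
if not commit_exists("curl", claimed):
    print(f"SUSPECT: commit {claimed} does not exist in the repository")
```

A hash that resolves to nothing, or to a commit that never touched the files named in the report, is another quick disqualifier that costs maintainers seconds rather than hours.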

These reports deliberately remain vague about reproduction steps, making it impossible for maintainers to verify the claimed vulnerability.

They often combine legitimate security concepts with fabricated implementation details, creating a narrative that seems plausible until subjected to expert scrutiny.

This approach specifically targets the weak points in bug bounty triage systems, where limited resources may prevent thorough investigation of each submission.

Seth Larson, the Python Software Foundation’s Security Developer-in-Residence, confirmed that open source maintainers’ time is increasingly being consumed by reviewing such AI-generated vulnerability reports.

“The issue is in the age of LLMs, these reports appear at first-glance to be potentially legitimate and thus require time to refute,” Larson explained, highlighting how this phenomenon strains already limited resources in the open source security ecosystem.

