INTERNATIONAL · 6 May 2026
AI Slop Floods Cybercrime Forums, Sparking a New Backlash
Cybercriminals are complaining that AI‑generated spam and phishing content are flooding their forums, threatening the quality of illicit knowledge exchange and prompting a push for better moderation.
The Editorial Team
The Vertex

Source: www.wired.com
In the shadowy forums where cybercriminals trade zero‑day exploits and ransomware playbooks, a new grievance has emerged: the deluge of AI‑generated “slop” that clutters every thread.
The rise of large language models has made it trivial to mass‑produce convincing phishing lures, automated exploit write‑ups, and synthetic chatter that mimics human expertise. Forums such as the now‑infamous “Exploit.in” report a 40% surge in low‑effort posts, forcing moderators to devote disproportionate resources to triage and eroding the quality of peer‑reviewed advice. The glut also fuels a feedback loop: AI‑generated content begets more AI‑generated content, degrading discourse further with each cycle.
Beyond the technical nuisance, the flood is reshaping the economics of crime. Low‑skill actors can now purchase AI‑crafted payloads on dark‑web marketplaces, democratizing attack vectors and intensifying competition. At the same time, the credibility of seasoned practitioners suffers: genuine discussions become indistinguishable from algorithmic filler, prompting a brain drain toward more exclusive, invitation‑only channels.
This backlash mirrors earlier cycles, such as the spam epidemic of the early 2000s and the botnet wars of the 2010s, where new communication tools repeatedly upended established trust models. The current AI slop crisis reflects the rapid diffusion of generative models across all layers of the internet, outpacing both platform governance and the community’s ability to self‑regulate.
Looking ahead, the sustainability of these underground spaces will depend on the emergence of AI‑aware moderation tools, reputation systems that penalize synthetic content, and possibly a cultural shift toward verified provenance. If the community cannot adapt, the very platforms that underpin modern cybercrime may fragment, pushing activity into more opaque corners of the darknet.