Cybercriminals Are Complaining About AI Slop in Cybercrime Forums


By Agustin Giovagnoli / May 6, 2026

Criminal forums are wrestling with a paradox. Generative AI is making attacks cheaper and more convincing, yet the same tools are filling discussion threads and marketplaces with repetitive, low-quality text that wastes time. Complaints about AI slop in cybercrime forums have grown as underground users encounter the same high-volume output trend that has hit the open web [1][2][3].

How generative AI is powering cybercrime

Security researchers and government analysts describe how dark or jailbroken LLMs support phishing, business email compromise, malware writing, and identity fabrication. Check Point details black-hat platforms that auto-generate content and manage large fleets of fake social media accounts used for scams, disinformation, and corporate impersonation, and notes that long-standing spam services are upgrading with generative models to craft tailored messages that bypass filters [4]. A Department of Homeland Security assessment reviews the broader impact of AI on criminal and illicit activities, including its role in lowering barriers for actors seeking to generate code or deception at scale [5]. Barracuda’s research similarly maps the threat landscape, highlighting tools marketed for criminals, such as WormGPT, and the use of generative AI to assemble realistic lures and package malware for operations [6].

For businesses, this translates into more credible phishing and faster turnarounds on campaigns that previously required specialized skills. The blend of dark LLMs for cybercrime, AI-enabled phishing campaigns, and tooling like WormGPT and malware writing services gives low-skilled actors a shortcut to higher-quality fraud [4][5][6].

Why AI slop in cybercrime forums is rising

The term “slop” has been used to describe high-volume AI text that prioritizes output over usefulness. Reporting on mainstream platforms shows how feeds get saturated with bland or redundant material, with Medium singled out for a flood of likely AI-written posts around popular tags [1]. Commentary and analysis echo the concern that the current internet is increasingly filled with AI-generated posts across forums, blogs, and email [2][3].

These patterns appear in underground markets too. As novice users lean on generic models, they post derivative tutorials, weak scripts, or template scams that add little value. The result is AI-generated spam in underground markets that drowns out credible resources and makes it harder to separate signal from noise [1][2][3].

Evidence: vendor research and government assessments

Multiple streams point to the same direction of travel. Check Point’s research highlights generative AI embedded across black-hat services, from automated persona farms to upgraded spam engines [4]. DHS frames AI as a force multiplier for criminal activity, documenting how it enables code and content generation that supports illicit operations [5]. Barracuda underscores the maturing ecosystem of dark-model tooling and services that criminals use to construct lures, write malware, and operationalize campaigns [6].

While methodologies differ, the throughline is clear: references to malicious AI tools appear frequently in security reporting and assessments, and activity around these capabilities has intensified in recent years [4][5][6].

Why the noise matters to businesses and threat intel teams

Underground forum signal-to-noise AI dynamics have real costs. Analysts who mine forums for early indicators of phishing kits, impersonation services, or exploit chatter must sift through more AI-produced filler to locate credible leads. Meanwhile, the offensive side keeps improving. Fake personas built with AI, plus phishing and BEC lures drafted by models, raise the baseline quality of social engineering and increase risk for brands and employees [4][5][6].

For additional context on business email compromise trends and defenses, see the FBI's IC3 public service announcements.

How to detect and monitor AI-driven threats and forum noise

  • Track mentions of jailbroken or dark models and branded tools across sources, then prioritize threads that include working code, reproducible steps, or verified IOCs over generic prompts [4][5][6].
  • Look for linguistic signatures of templated content to triage low-value posts faster, while escalating items that show testing artifacts or victim telemetry [1][3].
  • Correlate forum chatter with inbound email telemetry to spot AI-enabled phishing campaigns as they emerge [4][6].
  • Compare persona networks against known bot-farm traits described in vendor research to flag impersonation clusters earlier [4].
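The triage heuristics above can be sketched as a simple scoring pass over post text. This is a minimal illustration, not a vendor-published detector: the boilerplate phrase list, regexes, and weights are assumptions that a real team would tune against labeled forum data.

```python
import re
from collections import Counter

# Illustrative markers of templated, low-effort posts; real deployments
# would tune this list against labeled forum data (assumption, not a
# published signature set).
BOILERPLATE_PHRASES = [
    "in today's digital landscape",
    "as an ai language model",
    "unlock the power of",
]

# Markers that suggest substance worth an analyst's time: code, hashes, IPs.
SUBSTANCE_PATTERNS = [
    re.compile(r"```"),                          # fenced code block
    re.compile(r"\b[a-f0-9]{32,64}\b"),          # MD5/SHA-style hash
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # IPv4 address
]

def triage_score(post: str) -> float:
    """Return a score in [0, 1]; higher means more worth escalating."""
    text = post.lower()
    words = text.split()
    if not words:
        return 0.0

    # Repetitiveness: how much of the post is its single most common trigram.
    trigrams = [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]
    top = Counter(trigrams).most_common(1)
    repeat_ratio = top[0][1] / len(trigrams) if top and top[0][1] > 1 else 0.0

    boilerplate_hits = sum(p in text for p in BOILERPLATE_PHRASES)
    substance_hits = sum(bool(p.search(post)) for p in SUBSTANCE_PATTERNS)

    score = 0.5
    score -= 0.2 * min(boilerplate_hits, 2)   # templated filler
    score -= 0.3 * min(repeat_ratio * 5, 1)   # heavy repetition
    score += 0.25 * min(substance_hits, 2)    # working code / IOCs present
    return max(0.0, min(1.0, score))
```

A post carrying a hash and a callback IP will score well above a templated "unlock the power of AI" pitch, which is the point: the scorer orders a queue rather than making a final call.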

Analysts can also fold these checks into existing playbooks and collection workflows.

Operational recommendations for security leaders

  • Harden email security and BEC defenses with robust authentication and targeted training that reflects model-written lures [4][6].
  • Expand brand protection to include AI-assisted impersonation, including social persona monitoring tied to takedown workflows [4].
  • Integrate vendor and government reporting into intel cycles to keep pace with dark LLMs for cybercrime and service updates [4][5][6].
  • Formalize an intake process that scores underground posts for credibility to reduce time lost to slop while preserving coverage.
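A credibility-scoring intake like the one recommended above might look like the following sketch. The field names, weights, and routing thresholds are illustrative assumptions; a real program would calibrate them against which past posts led to confirmed findings versus dead ends.

```python
from dataclasses import dataclass

@dataclass
class ForumPost:
    """Minimal intake record; field names are illustrative assumptions."""
    author_age_days: int      # account age on the forum
    author_rep: int           # reputation / vouches
    has_working_sample: bool  # attached code or sample analysts verified
    has_iocs: bool            # hashes, domains, or IPs included
    duplicate_of_known: bool  # near-duplicate of content already indexed

# Weights are placeholders for the sketch; tune against outcomes.
WEIGHTS = {
    "established_author": 1.0,   # account older than 90 days with rep
    "working_sample": 2.0,
    "iocs": 1.5,
    "duplicate": -2.0,           # slop and reposts score down, not out
}

def credibility_score(post: ForumPost) -> float:
    score = 0.0
    if post.author_age_days > 90 and post.author_rep > 0:
        score += WEIGHTS["established_author"]
    if post.has_working_sample:
        score += WEIGHTS["working_sample"]
    if post.has_iocs:
        score += WEIGHTS["iocs"]
    if post.duplicate_of_known:
        score += WEIGHTS["duplicate"]
    return score

def route(post: ForumPost) -> str:
    """Map a score to a queue: escalate, review, or archive."""
    s = credibility_score(post)
    if s >= 2.5:
        return "escalate"
    if s >= 0.5:
        return "review"
    return "archive"
```

Keeping the duplicate penalty negative rather than a hard drop preserves coverage: a repost from an established author with verified samples can still reach an analyst, which matches the goal of reducing time lost to slop without blinding collection.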

Policy, vendor responsibility, and the road ahead

Mainstream platforms are grappling with moderation limits as AI content grows, highlighting how quickly slop can dominate feeds [1][3]. In parallel, vendor research and official assessments continue to surface the mechanics of cybercrime-as-a-service AI, informing defenders even as criminal tooling evolves [4][5][6]. Expect continued tension between more powerful attack kits and the drag of noisy forums that make collaboration harder for criminals and intelligence collection harder for defenders.

Conclusion

Generative AI is elevating the quality and scale of cybercrime while polluting the forums that help criminals coordinate. For defenders, the takeaway is twofold: prepare for better-crafted phishing and impersonation, and refine collection methods to cut through forum slop. Teams that do both will preserve their visibility and stay ahead of the next wave [1][4][5][6].

Sources

[1] AI Slop Is Flooding Medium | WIRED
https://www.wired.com/story/ai-generated-medium-posts-content-moderation/

[2] AI slop is ruining the current internet, including forums, email, blogs, announc… | Hacker News
https://news.ycombinator.com/item?id=47244004

[3] AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself? | Reuters Institute for the Study of Journalism
https://reutersinstitute.politics.ox.ac.uk/news/ai-generated-slop-quietly-conquering-internet-it-threat-journalism-or-problem-will-fix-itself

[4] Generative AI is the Pride of Cybercrime Services – Check Point Blog
https://blog.checkpoint.com/research/generative-ai-is-the-pride-of-cybercrime-services/

[5] [PDF] Impact of Artificial Intelligence (AI) on Criminal and Illicit Activities
https://www.dhs.gov/sites/default/files/2024-10/24_0927_ia_aep-impact-ai-on-criminal-and-illicit-activities.pdf

[6] The dark side of generative AI: Unveiling the AI threat landscape | Barracuda Networks Blog
https://blog.barracuda.com/2025/08/14/the-dark-side-of-generative-ai–unveiling-the-ai-threat-landscap
