AI hallucination bug spreads malware through “slopsquatting”

AI code hallucinations are creating a new cybersecurity threat as criminals turn invented package names into an attack surface. Research has identified over 205,000 hallucinated package names generated by AI models, particularly smaller open-source ones like CodeLlama and Mistral. These fictional software components give attackers an opening: by publishing malware under the same names, they can have malicious code installed whenever programmers request these non-existent packages through their AI assistants.

The big picture: AI code hallucinations have given rise to a new form of supply chain attack called “slopsquatting,” in which cybercriminals study the package names AI models invent and publish malware under those same names.

  • When AI models hallucinate non-existent software packages and a developer requests these components, attackers can serve malware instead of error messages.
  • The malicious code then becomes integrated into the final software product, often undetected by developers who trust their AI coding assistants, as the install sketch below illustrates.
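
To make the mechanics concrete, here is a minimal sketch of the risky install pattern the attack relies on. The package name “fastjson-parser” and the install loop are hypothetical, invented for this example rather than taken from the research:

```python
# Hypothetical illustration of the install pattern that slopsquatting exploits.
# The package name "fastjson-parser" is made up for this example and does not
# refer to any real project; nothing in this loop checks whether a suggested
# name was invented by the model or later registered by an attacker.
import subprocess

# Dependencies copied verbatim from an AI assistant's suggestion.
ai_suggested_packages = ["requests", "fastjson-parser"]

for name in ai_suggested_packages:
    # pip installs whatever currently exists on the index under this name.
    # If an attacker has registered the hallucinated name, the install
    # succeeds and the attacker's code lands in the project.
    subprocess.run(["pip", "install", name], check=True)
```

In practice the request is often buried in generated code or a requirements file produced by the assistant rather than typed out by hand, which is part of what makes the pattern easy to miss.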

The technical vulnerability: Smaller open-source AI models used for local coding show particularly high hallucination rates when generating dependencies for software projects.

  • CodeLlama 7B demonstrated the worst performance with a 25% hallucination rate when generating code.
  • Other problematic models include Mistral 7B and OpenChat 7B, which frequently create fictional package references.

Historical context: This technique builds upon earlier “typosquatting” attacks, where hackers created malware using misspelled versions of legitimate package names.

  • A notable example was the “electorn” malware package, which mimicked the popular Electron application framework.
  • Modern application development’s heavy reliance on downloaded components (dependencies) makes these attacks particularly effective.

Why this matters: AI coding tools automatically request dependencies during the coding process, creating a new attack vector that’s difficult to detect.

  • The rise of AI-assisted programming will likely increase these opportunistic attacks as more developers rely on automation.
  • The malware can be subtly integrated into applications, creating security risks for end users who have no visibility into the underlying code.

Where we go from here: Security researchers are developing countermeasures to address this emerging threat.

  • Efforts are focused on improving model fine-tuning to reduce hallucinations in the first place.
  • New package verification tools are being developed to catch these hallucinations before code enters production; a minimal version of such a check is sketched below.
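
As one example of what such verification could look like, the sketch below checks AI-suggested dependency names against PyPI’s public JSON API, which returns a 404 for names that have never been published. The package list is illustrative, and this is not a description of any specific tool built by the researchers:

```python
# Minimal sketch of a pre-install existence check for suggested dependencies.
# The names in `suggested` are illustrative; the check itself uses PyPI's
# public JSON API (https://pypi.org/pypi/<name>/json).
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if the name resolves to a published PyPI project."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # A 404 means the name has never been registered.


# Dependencies proposed by an AI assistant (hypothetical example).
suggested = ["requests", "fastjson-parser"]

for name in suggested:
    if exists_on_pypi(name):
        print(f"{name}: found on PyPI")
    else:
        print(f"{name}: not found, possible hallucination")
```

An existence check of this kind only catches names nobody has registered yet; once an attacker publishes malware under a hallucinated name, the lookup succeeds, which is why research effort also goes into reducing the hallucinations in the models themselves.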