Nearly half of AI-generated code contains security vulnerabilities, claims study

Nearly half of AI-generated code contains security vulnerabilities despite appearing production-ready, according to new research from cybersecurity company Veracode that examined more than 100 large language models across 80 coding tasks. The findings reveal that even advanced AI coding tools create significant security risks for companies increasingly relying on artificial intelligence to supplement or replace human developers, and that newer and larger models perform no better on security than their predecessors.

What you should know: The security flaws affect all major programming languages, with Java experiencing the highest failure rate at over 70%.

  • Python, C#, and JavaScript also showed concerning failure rates, between 38% and 45%.
  • Large language models chose insecure coding methods 45% of the time across all tested scenarios.
  • The research found no correlation between model size or recency and security performance.

The vulnerabilities in detail: AI-generated code consistently fails to defend against common attack vectors that have plagued software development for years.

  • Cross-site scripting vulnerabilities appeared in 86% of cases where LLMs should have implemented proper defenses.
  • Log injection attacks succeeded 88% of the time against AI-generated code.
  • These failure rates occur even when the generated code appears functional and ready for production use.

In plain English: Cross-site scripting is like leaving your front door unlocked—it allows malicious actors to inject harmful code into websites that then runs on visitors’ computers. Log injection is similar to someone tampering with a building’s security logbook to hide their tracks or plant false information.
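To make the two flaw classes concrete, here is a minimal Python sketch; the function names and payloads are illustrative, not drawn from Veracode's test suite. The unsafe renderer shows the pattern behind cross-site scripting, and the two safe helpers show the kind of one-line defenses the study found models routinely fail to include:

```python
from html import escape


def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable: attacker-supplied text is dropped straight into the
    # page's HTML, so a comment like "<script>...</script>" executes
    # in every visitor's browser (cross-site scripting).
    return f"<p>{user_input}</p>"


def render_comment_safe(user_input: str) -> str:
    # Defended: html.escape() converts <, >, &, and quotes into HTML
    # entities, so the input is displayed as text, never run as code.
    return f"<p>{escape(user_input, quote=True)}</p>"


def sanitize_for_log(value: str) -> str:
    # Defended against log injection: stripping carriage returns and
    # newlines keeps an attacker from forging extra entries in the
    # "logbook" or hiding their tracks.
    return value.replace("\r", " ").replace("\n", " ")


if __name__ == "__main__":
    payload = "<script>alert('xss')</script>"
    print(render_comment_unsafe(payload))  # script tag survives intact
    print(render_comment_safe(payload))    # rendered inert, as plain text

    forged = "bob\nFORGED 2025-01-01 INFO admin login succeeded"
    print("login attempt by:", sanitize_for_log(forged))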

Why this matters: The security gaps coincide with AI’s growing role in software development, creating a potentially dangerous combination.

  • As much as one-third of new code at Google and Microsoft is now AI-generated, according to the research.
  • “The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” explained Veracode CTO Jens Wessling.
  • AI also enables attackers to exploit vulnerabilities faster and at scale, amplifying the impact of insecure code.

What they’re saying: Security experts warn that the current trajectory could create massive technical debt if left unaddressed.

  • “Our research shows models are getting better at coding accurately but are not improving at security,” Wessling noted.
  • “AI coding assistants and agentic workflows represent the future of software development… Security cannot be an afterthought if we want to prevent the accumulation of massive security debt,” he concluded.

Recommended solutions: Veracode suggests several measures to mitigate the security risks while still leveraging AI development tools.

  • Enable security checks in AI-driven workflows to enforce compliance and security standards (a minimal sketch follows this list).
  • Adopt AI remediation guidance to train developers on secure coding practices.
  • Deploy firewalls and detection tools that can identify flaws earlier in the development process.
  • Implement systematic security reviews for AI-generated code before production deployment.
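As one hedged illustration of the first recommendation, the Python sketch below wires a static security scanner into a build step so that flagged code cannot ship. It uses the open-source Bandit linter as a stand-in; the tool choice and the src/ path are assumptions for the example, not Veracode's prescription.

```python
import subprocess
import sys


def security_gate(paths: list[str]) -> None:
    # Scan the (possibly AI-generated) code with Bandit, a Python
    # security linter, and print its report. Bandit exits nonzero when
    # it reports findings, which we treat as a hard failure.
    result = subprocess.run(
        ["bandit", "-r", *paths],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        sys.exit("Security gate failed: fix flagged findings before deploying.")


if __name__ == "__main__":
    security_gate(["src/"])
```

Run in a CI pipeline, a nonzero exit here blocks the merge, which is the shift-left posture the recommendations describe: flaws are caught before AI-generated code reaches production rather than after.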