Biosecurity concerns mount as AI outperforms virus experts

AI models now outperform PhD-level virologists in wet lab problem-solving, according to a groundbreaking new study shared exclusively with TIME. The development cuts both ways for science and security: these systems could accelerate medical breakthroughs and pandemic preparedness, but they could also democratize bioweapon creation by offering expert-level guidance to individuals with malicious intent, regardless of their scientific background.

The big picture: AI models significantly outperformed human experts in a rigorous virology problem-solving test designed to measure practical lab troubleshooting abilities.

  • OpenAI's o3 model achieved 43.8% accuracy and Google's Gemini 2.5 Pro scored 37.6% on the test, compared with human PhD-level virologists, who averaged just 22.1% in their declared areas of expertise.
  • This marks a concerning milestone as non-experts now have unprecedented access to AI systems that can provide step-by-step guidance for complex virology procedures.

Why this matters: For the first time in history, virtually anyone has access to non-judgmental AI virology expertise that could guide them through creating bioweapons.

  • The technology could accelerate legitimate medical and vaccine development while simultaneously increasing bioterrorism risks.

The researchers’ approach: The study was conducted by a multidisciplinary team from the Center for AI Safety, MIT’s Media Lab, Brazilian university UFABC, and pandemic prevention nonprofit SecureBio.

  • The researchers consulted virologists to create an extremely difficult practical assessment that measured the ability to troubleshoot complex laboratory protocols.
  • The test focused on real-world virology knowledge rather than theoretical understanding.

Voices of concern: Seth Donoughe, a research scientist at SecureBio and study co-author, expressed alarm about the dual-use implications of these AI capabilities.

  • Experts like Dan Hendrycks and Tom Inglesby are urging AI companies to implement robust safeguards before these models become widely available.

Proposed safeguards: Security experts recommend multiple measures to mitigate potential misuse while preserving beneficial applications.

  • Suggested protections include gated access to advanced models, input- and output-filtering systems (sketched after this list), and more rigorous testing before new models are released.
  • The challenge lies in balancing scientific advancement with responsible AI development in sensitive domains.
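
To make the filtering idea concrete, here is a minimal Python sketch of how an input and output filter might wrap a model. Everything in it is an illustrative assumption: the keyword list, the 0.2 threshold, and the dual_use_risk_score function stand in for the trained risk classifiers a real deployment would use, and none of it comes from the study or from any vendor's actual safeguards.

# Minimal sketch of input/output filtering around a chat model.
# All names and thresholds are hypothetical stand-ins.

DUAL_USE_KEYWORDS = {
    "viral rescue", "reverse genetics", "gain of function",
    "aerosolization", "passaging protocol",
}

REFUSAL = (
    "This request involves potentially hazardous virology methods. "
    "Access requires identity verification and institutional review."
)

def dual_use_risk_score(text: str) -> float:
    """Crude stand-in for a trained classifier: the fraction of
    dual-use keywords that appear in the text."""
    lowered = text.lower()
    hits = sum(1 for kw in DUAL_USE_KEYWORDS if kw in lowered)
    return hits / len(DUAL_USE_KEYWORDS)

def filtered_reply(prompt: str, generate) -> str:
    """Screen the prompt, generate a draft, then screen the draft
    before returning it to the user."""
    if dual_use_risk_score(prompt) > 0.2:   # input filter
        return REFUSAL
    draft = generate(prompt)
    if dual_use_risk_score(draft) > 0.2:    # output filter
        return REFUSAL
    return draft

# Example with a stub generator that just echoes canned advice.
print(filtered_reply(
    "How do I titer a phage stock?",
    generate=lambda p: "Use a standard plaque assay and count plaques.",
))

In practice, the keyword heuristic would be replaced by a trained risk classifier, and flagged requests would route verified researchers through the kind of gated-access review the experts describe rather than a blanket refusal.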
Source: TIME, "Exclusive: AI Bests Virus Experts, Raising Biohazard Fears."
