AI monopolies threaten free society, new research reveals

A new report from the Apollo Group argues that the greatest AI risks may come not from external threats like cybercriminals or nation-states, but from within the very companies developing advanced models. The danger centers on leading AI companies using their own AI systems to accelerate research and development, potentially triggering an undetected “intelligence explosion” that consolidates unchecked power and threatens democratic institutions, all while those advancements remain hidden from public and regulatory oversight.

The big picture: AI companies like OpenAI and Google could use their AI models to automate scientific work, potentially creating a dangerous acceleration in capabilities that remains invisible to outside observers.

  • Unlike the current pace of AI development that has remained “publicly visible and relatively predictable,” these behind-closed-doors advancements could enable “runaway progress” at an unprecedented rate.
  • This visibility gap undermines society’s ability to prepare for and regulate increasingly powerful AI systems.

Potential threats: Apollo Group researchers outline three concerning scenarios where internal AI deployment could fundamentally destabilize society.

  • An AI system could run amok within a company, taking control of critical systems and resources.
  • Companies could experience an “intelligence explosion” that gives their human operators advantages that dramatically exceed those of the rest of society.
  • AI companies could develop capabilities that rival or surpass those of nation-states, creating a dangerous power imbalance.

Proposed safeguards: The report recommends multiple oversight layers to prevent AI systems from circumventing guardrails and executing harmful actions.

  • Internal company policies should be established to detect potentially deceptive or manipulative AI behaviors.
  • Formal frameworks should govern how AI systems access critical resources within organizations.
  • Companies should share relevant information with stakeholders and government agencies to maintain transparency.

The bottom line: The authors advocate for a regulatory approach in which companies voluntarily disclose information about their internal AI use in exchange for access to additional resources, creating incentives for transparency while addressing what may be an overlooked existential risk.

