AI could make humans more altruistic, DeepMind CEO suggests

Artificial general intelligence (AGI) development is accelerating rapidly, with Google DeepMind's CEO Demis Hassabis forecasting a 50% chance of achieving human-equivalent AI capabilities within the next decade. Such a shift would mark one of the most significant technological inflection points in human history, promising solutions to humanity's greatest challenges while introducing unprecedented risks that will demand careful governance and ethical frameworks to navigate.

The big picture: Hassabis believes AGI will arrive incrementally rather than through a sudden breakthrough, giving society time to adapt to increasingly capable AI systems.

  • He defines AGI as AI with cognitive capabilities equivalent to humans across all important domains.
  • The DeepMind CEO expects transformational AGI applications to emerge gradually rather than through a dramatic “hard takeoff” scenario.

Why this matters: AGI could potentially solve humanity’s most pressing problems while fundamentally reshaping society.

  • Hassabis envisions AGI helping cure diseases, discover new energy sources, and potentially extend human lifespans.
  • He suggests truly advanced AI could create “radical abundance” that might reduce human selfishness by solving zero-sum economic problems.

Key details: Google DeepMind’s development approach balances competitive pressures against safety considerations.

  • Hassabis acknowledges a technological race against other companies and nations, particularly China.
  • He advocates for “smart, nimble” international regulations that enable innovation while mitigating risks.
  • The DeepMind CEO believes AGI development requires careful governance to prevent misuse by bad actors.

Behind the numbers: Hassabis’s 5-10 year timeline for AGI (with 50% probability) reflects accelerating progress in AI capabilities.

  • This estimate is more aggressive than many other AI leaders have publicly stated.
  • The timeline suggests DeepMind may have internal research progress that supports this optimistic forecast.

Reading between the lines: Hassabis’s philosophical motivations extend beyond commercial interests to a deeper quest for understanding reality.

  • His personal journey from chess prodigy to neuroscience researcher to AI pioneer reflects a lifelong intellectual pursuit.
  • The interview suggests Hassabis views AGI as a tool for exploring fundamental questions about consciousness and the nature of intelligence.

Potential risks: Despite his optimism, Hassabis acknowledges significant dangers associated with AGI development.

  • Technical risks include the possibility of uncontrolled AI systems with unpredictable behaviors.
  • Societal risks include potential disruption to economic systems and power structures.

Google DeepMind’s CEO Thinks AI Will Make Humans Less Selfish
