What “gradual disempowerment” means for AI alignment

The concept of “gradual disempowerment” offers a compelling new lens for understanding the AI alignment problem, moving beyond catastrophic scenarios toward a more subtle erosion of human agency. This framework, proposed by AI researcher David Duvenaud, suggests we won’t face a dramatic AI takeover but rather a progressive diminishment of human influence as automated systems incrementally assume control over decision-making processes. Understanding this perspective is crucial for developing governance structures that maintain human relevance in increasingly AI-dominated systems.

The big picture: Duvenaud’s Guardian op-ed reframes AI alignment concerns away from sudden catastrophic events toward a gradual loss of human steering capacity in technological systems.

  • Rather than a dramatic “Skynet banner” moment, the real risk appears as a progressive reduction in meaningful human control points within our technical systems.
  • This perspective suggests disempowerment will arrive through mundane mechanisms – one product launch at a time – as human influence slowly diminishes in automated systems.

The capitalism connection: Some critics identify capitalism itself as the underlying mechanism driving this gradual disempowerment rather than AI specifically.

  • This view positions artificial intelligence as merely the newest accelerant in capitalism’s evolutionary feedback loop of mutation, selection, and replication applied to business models.
  • The thesis suggests ordinary humans might be relegated to passive observers as optimization processes play out, potentially being “optimized away” entirely; the toy simulation after this list makes the dynamic concrete.
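The mechanism the critics describe needs no intentions anywhere in the loop, and a toy simulation makes that concrete. The sketch below is purely illustrative (every name and parameter is hypothetical, and fitness is simply assumed to rise with automation): a population of “business models” is scored on efficiency, the top half replicates with small mutations, and human involvement drifts toward zero without any agent choosing that outcome.

```python
import random

# Toy mutation/selection/replication loop. Each "business model" is a single
# number in [0, 1]: the share of its decisions delegated to automation.
# Hypothetical assumption: more automation means lower cost, hence higher fitness.
POP_SIZE = 100
GENERATIONS = 50
MUTATION_SCALE = 0.05

def fitness(automation_share: float) -> float:
    return automation_share  # assumed for illustration only

# Start with mostly human-run firms.
population = [random.random() * 0.2 for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: the more efficient half survives and replicates.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Replication with mutation: offspring inherit a slightly perturbed share.
    offspring = [min(1.0, max(0.0, s + random.gauss(0, MUTATION_SCALE)))
                 for s in survivors]
    population = survivors + offspring

print(f"average automation share: {sum(population) / len(population):.2f}")
```

Run it and the average climbs toward 1.0 on nothing but selection pressure, which is the critics' point: no “will” is required for humans to be optimized out of the loop.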

Counterpoints: The evolutionary framing, while compelling, risks inappropriately attributing agency to systems that lack actual intentions or preferences.

  • Evolution operates through selection pressures, not conscious desires; similarly, capitalism functions through markets and stakeholders rather than having an inherent “will.”
  • Anthropomorphizing these systems by suggesting “capitalism wants X” risks misunderstanding the actual mechanisms at work.

Why human relevance matters: If AI systems could optimize for human prosperity on their own, why should humans insist on retaining control?

  • The author invokes the Lindy Effect – the principle that systems with longer survival histories statistically tend to continue surviving – as a key justification for preserving human agency (see the sketch after this list).
  • Human civilization’s norms, laws, and coordination technologies represent millennia of robust, proven structures that shouldn’t be hastily replaced by opaque optimization engines.
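The Lindy Effect has a precise statistical reading: when lifetimes are heavy-tailed (Pareto-distributed, say), the expected remaining lifetime of a surviving system grows in proportion to the age it has already reached. A minimal Monte Carlo sketch, with the distribution and every parameter assumed purely for illustration:

```python
import random

# Lindy Effect under a Pareto(alpha, x_min) lifetime distribution:
# conditional on surviving past age t, E[remaining lifetime] = t / (alpha - 1),
# i.e. expected remaining life grows linearly with the age already attained.
ALPHA = 3.0   # tail shape (assumed; must exceed 1 for the mean to exist)
X_MIN = 1.0   # minimum lifetime (assumed)
N = 1_000_000

# Inverse-CDF sampling: T = x_min / U^(1/alpha) for U uniform on (0, 1].
lifetimes = [X_MIN / (1.0 - random.random()) ** (1.0 / ALPHA) for _ in range(N)]

for age in (2, 5, 10):
    survivors = [t for t in lifetimes if t > age]
    mean_remaining = sum(survivors) / len(survivors) - age
    print(f"age {age:>2}: mean remaining ≈ {mean_remaining:.2f} "
          f"(theory: {age / (ALPHA - 1):.2f})")
```

Older systems are expected to outlast younger ones in direct proportion to their track record, which is the probabilistic core of the argument against hastily swapping millennia-old institutions for new optimization engines.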

The long view: The most sustainable path forward combines preserving established human systems while carefully adding new AI capabilities at the margins.

  • This approach acknowledges that long survival curves offer stronger probabilistic advantages than short-term efficiency gains.
  • The Lindy Effect makes no moral claims, but it offers a pragmatic framework for balancing innovation against the preservation of proven social structures.
Source: “G.D. as Capitalist Evolution, and the claim for humanity's (temporary) upper hand”
