How extreme rationalism and AI fear contributed to a mental health crisis

The rationalist community, an influential but insular intellectual movement in technology circles, has come under scrutiny following a series of tragedies linked to Ziz LaSota, a onetime adherent, and her followers. The story of mental health struggles, suicide, and the psychological toll of rationalist thinking reveals a darker side of a philosophy embraced by many Silicon Valley leaders working on artificial intelligence safety. The case shows how ideological extremism, even when intellectually sophisticated, can profoundly affect vulnerable individuals, and it raises questions about the mental health impacts of communities focused on existential risk.

The big picture: A small but influential rationalist splinter group led by Ziz LaSota has been linked to several suicides and concerning mental health outcomes among followers who embraced extreme versions of rationalist thinking.

  • Maia Pasek, a 24-year-old Polish immigrant and follower of LaSota, died by suicide in January after documenting her psychological struggles with rationalist concepts in detailed online writings.
  • At least three other people connected to LaSota’s circle have died by suicide, while others report experiencing psychological distress after engaging with the group’s ideas.

Who they are: The rationalist community developed around the writings of Eliezer Yudkowsky and his blog LessWrong, attracting followers who aim to improve human reasoning through overcoming cognitive biases.

  • Rationalism has gained significant influence in Silicon Valley, particularly among AI safety researchers, and has shaped thinking at organizations like the Machine Intelligence Research Institute (MIRI).
  • LaSota, previously known as Edward, formed a splinter group known as “Zizians” that took rationalist ideas to more extreme conclusions, particularly around AI risk and ethical frameworks.

The case of Maia Pasek: Pasek’s trajectory from promising student to her death by suicide illustrates the potential psychological dangers of certain rationalist concepts when taken to extremes.

  • Before her death, Pasek wrote extensively about experiencing “philosophical zombie” thoughts—the terrifying sense she wasn’t fully conscious or real—after engaging deeply with rationalist thought experiments.
  • Her writings directly connected her mental health struggles to concepts she encountered in rationalist circles, particularly those promoted by LaSota.

What they’re saying: Former members describe LaSota’s group as having developed cult-like characteristics despite its intellectual foundation.

  • “She takes these very intellectual ideas and weaponizes them. She turns them into spiritual ideas and religious ideas,” said Jay Winterford, a former follower who later became critical of LaSota.
  • LaSota rejects responsibility for the tragedies, writing that she does not believe she caused the deaths while acknowledging that they represent a “horrifying pattern” within her circle.

The bigger concerns: Mental health professionals warn that certain philosophical ideas can be particularly dangerous for vulnerable individuals.

  • Concepts like simulation theory, philosophical zombies, and extreme utilitarian ethics can trigger existential crises in people already predisposed to mental health struggles.
  • The rationalist community’s focus on AI risk and potential human extinction creates a high-stakes mindset that can exacerbate psychological distress.

Between the lines: The tragedy exposes tensions within the broader rationalist community about responsibility, influence, and the real-world impacts of abstract philosophical ideas.

  • Many leaders in the community have distanced themselves from LaSota while acknowledging the need for more attention to mental health.
  • The case raises difficult questions about how intellectual communities should handle potentially dangerous ideas and support vulnerable members.

Source: Before killings linked to cultlike ‘Zizians,’ a string of psychiatric crises befell AI doomsdayers
