
How AI threatens our trust networks

In a riveting interview with WIRED, historian Yuval Noah Harari unpacks the existential challenges artificial intelligence poses to humanity's future. Moving beyond the typical AI discourse of job displacement or economic change, Harari delves into something more fundamental: how AI disrupts the trust networks that have defined human civilization since our earliest days.

The premise is both elegant and terrifying. Throughout history, humans have dominated Earth not through individual intelligence but through our unique ability to create vast networks of cooperation among strangers. We achieve this through shared stories and trust systems, from religions to currencies to national identities. Now, for the first time, we face an intelligence that might outperform us not just in calculation or pattern recognition, but in the very thing that made us dominant: creating compelling narratives that connect millions.

Key insights from Harari's analysis:

  • AI is fundamentally an agent rather than a tool—unlike previous technologies like printing presses that required human operators and decision-makers, AI can independently create content, make decisions, and form new ideas without human guidance.

  • Trust networks underpin civilization—humans cooperate in massive groups of strangers because we believe in shared fictions like money, religions, and nations, creating intricate trust networks AI can potentially manipulate or replace.

  • The "paradox of trust" drives dangerous acceleration—tech companies and nations race toward superintelligence because they don't trust each other, yet contradictorily believe they can trust the alien intelligence they're creating.

  • Information isn't inherently truthful—in free information markets, fiction outcompetes truth because it's cheaper to produce, easier to customize, and often more pleasurable to consume.

The most profound insight from Harari's conversation is the alarming "paradox of trust" driving the AI race. When asked why they're developing increasingly powerful AI despite known risks, tech leaders consistently claim they can't slow down because they don't trust competitors or rival nations to do the same. Yet in the next breath, these same leaders profess confidence that the superintelligent systems they're building will be trustworthy.

This contradiction exposes a dangerous blind spot in our AI development approach. We have millennia of experience understanding human psychology, motives, and power dynamics, and we still struggle with international cooperation. Yet we naively assume we can trust an alien intelligence we have barely begun to understand.
