6 reasons why “alignment-is-hard” discourse seems alien to human intuitions, and vice-versa
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be power-seeking, ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things …