6 reasons why “alignment-is-hard” discourse seems alien to human intuitions, and vice versa
TL;DR: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be power-seeking, ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things …
Recent Stories
Jan 13, 2026
Anthropic Introduces Claude Cowork
The general-purpose agent is intended as a more accessible version of the company's Claude Code.
Jan 13, 2026
Grok AI deepfake victim says UK government should have acted faster
Presenter Jess Davies says the UK government has been "dragging its feet" on creating AI deepfake laws.
Jan 13, 2026
China Restricts Nvidia Chip Purchases to Special Circumstances
The Chinese government this week told some tech companies it would only approve their purchases of Nvidia’s H200 AI chips under special circumstances, such as for university research and development labs, according to two people with direct knowledge of the situation. The latest communication ...