In a political landscape where objectivity seems increasingly rare, turning to artificial intelligence for an unbiased assessment might sound appealing. This is exactly what Joseph Steinberg attempted in his recent video experiment, using ChatGPT to evaluate former President Donald Trump's fifth month in office—a fascinating intersection of AI capabilities and political analysis that reveals as much about our technology as it does about our politics.
The most revealing aspect of this experiment wasn't the score itself but what it demonstrates about how large language models approach political topics. Unlike human pundits, who often reveal their biases in the first sentence, ChatGPT delivered an assessment that carefully balanced achievements against criticisms, a kind of political analysis that's increasingly rare in our media ecosystem.
This matters significantly because as AI becomes more integrated into our information environment, its approach to political topics will shape how millions understand complex issues. What appears as "neutrality" in AI systems is actually a carefully engineered balance of perspectives that doesn't necessarily reflect objective truth but rather an averaging of viewpoints designed to minimize controversy.
What Steinberg's experiment doesn't address is how different AI models might score the same political performance differently based on their training data. OpenAI has been particularly careful about political neutrality, but other models might not be. For example, Meta's LLaMA or Anthropic's Claude might produce substantively different assessments of the same presidential month based on their training methodologies and guardrails.
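To make that comparison concrete, here is a minimal sketch of how one might pose the same grading prompt to two different providers and set the answers side by side. It is purely illustrative: the model names, prompt wording, and helper functions are assumptions for the sake of the example, not anything from Steinberg's experiment, and it presumes the current OpenAI and Anthropic Python SDKs with API keys configured.

```python
# Illustrative sketch only: model names and prompt are assumptions, not from
# Steinberg's experiment. Requires `pip install openai anthropic` with
# OPENAI_API_KEY and ANTHROPIC_API_KEY set in the environment.
from openai import OpenAI
import anthropic

PROMPT = (
    "On a scale of 1-10, grade the president's performance this month. "
    "Give the number first, then a two-sentence justification citing both "
    "achievements and criticisms."
)

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # hypothetical choice
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    # Send the identical prompt to both providers and print the responses
    # side by side, so differences in scoring and framing are easy to see.
    for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```

Even a toy comparison like this needs repeated runs, since any single model can return different scores for the same prompt, which only underscores how fragile a one-off "AI grade" really is.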
This inconsistency reveals an important truth: we're increasingly relying on AI systems to help navigate complex information landscapes, but these systems reflect the values and priorities of their creators. When Anthropic recently faced criticism for