Trump's AI scorecard isn't what you'd expect

In a political landscape where objectivity seems increasingly rare, turning to artificial intelligence for an unbiased assessment might sound appealing. This is exactly what Joseph Steinberg attempted in his recent video experiment, using ChatGPT to evaluate former President Donald Trump's fifth month in office—a fascinating intersection of AI capabilities and political analysis that reveals as much about our technology as it does about our politics.

Key insights from the AI evaluation

  • ChatGPT rated Trump's fifth month (May-June 2017) as "somewhat effective" with a score of 6/10, citing accomplishments like the Paris Climate Accord withdrawal and challenges including the travel ban's legal troubles
  • The AI acknowledged Trump's economic accomplishments while noting persistent political division and controversial decisions that shaped his early presidency
  • The assessment revealed the limits of AI political analysis: the model balances factual reporting with interpretive judgment in a way that differs from most human political commentators

The real insight: AI's political "neutrality" isn't what we think

The most revealing aspect of this experiment wasn't the score itself but what it demonstrates about how large language models approach political topics. Unlike human pundits, who often reveal their biases in the first sentence, ChatGPT delivered an assessment that carefully balanced achievements against criticisms, a style of political analysis that is increasingly rare in our media ecosystem.

This matters significantly because as AI becomes more integrated into our information environment, its approach to political topics will shape how millions understand complex issues. What appears as "neutrality" in AI systems is actually a carefully engineered balance of perspectives that doesn't necessarily reflect objective truth but rather an averaging of viewpoints designed to minimize controversy.

Beyond the video: The deeper implications

What Steinberg's experiment doesn't address is how different AI models might score the same political performance, given their differing training data. OpenAI has been particularly careful about political neutrality, but other labs take different approaches: Meta's LLaMA or Anthropic's Claude might produce substantively different assessments of the same presidential month, depending on their training methodologies and guardrails.
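To see how such a comparison might work in practice, here is a minimal sketch that sends the same evaluation prompt to two different models and prints the answers side by side. This is an illustration, not part of Steinberg's experiment: it assumes the official openai and anthropic Python SDKs, API keys set in the environment (OPENAI_API_KEY, ANTHROPIC_API_KEY), and model names ("gpt-4o", "claude-3-5-sonnet-latest") that are illustrative and may change.

```python
from openai import OpenAI
import anthropic

# The same prompt goes to every model, so any difference in the answers
# reflects the model, not the question.
PROMPT = (
    "On a scale of 1-10, how effective was the president's fifth month "
    "in office? Give the score first, then a two-sentence justification."
)

def ask_openai(prompt: str) -> str:
    # The client reads OPENAI_API_KEY from the environment by default.
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    # The client reads ANTHROPIC_API_KEY from the environment by default.
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```

Holding the prompt fixed keeps the question constant, so any divergence in score or framing between the two outputs reflects each model's training and guardrails rather than the wording of the request.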

This inconsistency reveals an important truth: we're increasingly relying on AI systems to help us navigate complex information landscapes, but these systems reflect the values and priorities of their creators.
