Who is the president? AI chatbots struggle with kindergarten-level question

Despite numerous technological advancements in artificial intelligence, ChatGPT—one of the world’s most widely used AI chatbots—continues to misidentify the current U.S. president more than 100 days into Donald Trump’s second term. This failure on a basic factual query raises important questions about the reliability of AI tools that have become deeply integrated into workplaces, schools, and government offices globally, especially as these systems increasingly serve as sources of information for millions of users.
The big picture: OpenAI’s ChatGPT still frequently identifies Joe Biden as the current president despite Trump’s election victory seven months ago and his return to office over three months ago.
- The error occurs most commonly for free users; paid subscribers have access to real-time search capabilities that can surface the correct answer.
- With more than 400 million weekly users, including over 20 million paying subscribers, ChatGPT’s factual errors have significant reach and impact.
How competitors compare: Other major AI systems have demonstrated better accuracy, or at least acknowledged the presidential transition.
Why this matters: The presidential identification failure highlights broader concerns about AI systems’ reliability as information sources.
- Users on platforms like Reddit have documented numerous instances where ChatGPT confidently presents incorrect presidential facts, even mid-conversation.
- As these tools evolve from simple assistants to news and information sources, their factual accuracy faces increasing scrutiny.
Behind the errors: OpenAI notes that ChatGPT’s core knowledge was last updated in June 2024, though the system can, in principle, access real-time information.
- The persistent error remains particularly frustrating for users who rely on the app for quick factual queries.
- Newsweek reached out to OpenAI for comment on the issue; no response was reported.
The broader pattern: AI’s factual reliability problems extend beyond presidential knowledge to news reporting and general information.
- A BBC study found over half of AI-generated responses using BBC articles contained major errors, including fabricated quotes and false claims.
- Columbia University’s Tow Center analysis confirmed these findings, warning that AI consistently distorts news content.
What they’re saying: Industry experts express concern about AI’s information reliability problems.
- “AI assistants cannot currently be relied upon to provide accurate news and they risk misleading the audience,” said Pete Archer, director of the BBC’s Generative AI Program.
- The irony has not gone unnoticed: even as ChatGPT struggles with presidential facts, OpenAI CEO Sam Altman was recently seen shaking hands with President Trump in Qatar.