AI safety advocates need political experience before 2028 election, experts warn

AI safety advocates need to develop political expertise well before the 2028 U.S. presidential election if they want to effectively influence AI policy. The current lack of political knowledge and experience could severely hamper future electoral efforts around AI safety, particularly given the potentially existential stakes of upcoming elections.

The big picture: AI safety advocates need to gain practical political experience through earlier campaigns rather than waiting until the 2028 presidential election for their first major political test.

  • The author argues that either the 2024 or 2028 U.S. presidential election is “probably the most important election in human history,” with significant implications for AI governance.
  • With “short timelines” suggesting transformative AI could arrive before 2033, political momentum around AI safety needs to start building now rather than in 2027.

Key questions that require real-world testing: AI safety advocates currently lack critical political knowledge that experienced political operatives possess.

  • It remains unclear who constitutes their base, their swing voters, and their crucial institutional allies.
  • Effective messaging, voter trust issues, and potential failure modes are unknown without real campaign experience.
  • Understanding how opponents would answer these same questions is equally important for strategic planning.

Why this matters: These fundamental political questions cannot be answered through theoretical analysis alone; answering them requires actual electoral campaigns or public lobbying efforts that build practical expertise.

  • California’s SB 1047, a proposed AI safety bill, represented “a good start,” but more political experimentation is needed.
  • The 2026 U.S. midterm elections or upcoming international elections could serve as valuable testing grounds.

Where we go from here: Starting political groundwork immediately is essential for any serious attempt to influence the 2028 presidential race on AI safety issues.

  • The author acknowledges not having specific recommendations for immediate action but emphasizes the importance of beginning political efforts now rather than waiting.
  • While uncertain if this approach should be prioritized over other AI safety efforts, the author argues that political action, if pursued, should be done correctly with adequate preparation.
Source: “2028 Should Not Be AI Safety's First Foray Into Politics”
