Could a new political party fill America’s dangerous AI safety gap?

The artificial intelligence industry is advancing at breakneck speed, with companies racing to develop increasingly powerful systems that could reshape society within the next decade. Yet despite widespread public concern about AI’s potential risks—from mass unemployment to existential threats—the United States lacks a sustained political movement dedicated to ensuring these technologies develop safely.

This gap represents both a critical vulnerability and a significant opportunity. While AI companies invest billions in capabilities research, government spending on AI safety remains minimal. Meanwhile, the competitive dynamics driving AI development create powerful incentives for companies to prioritize speed over caution, potentially leading to catastrophic outcomes.

The solution may lie in electoral politics: creating a dedicated AI safety political party that can shift policy priorities, educate voters, and pressure mainstream parties to take these risks seriously. This approach has precedent—from environmental movements that birthed successful Green parties worldwide to Ross Perot’s Reform Party, which influenced American politics for decades despite never winning major office.

Three key factors make this strategy compelling: comprehensive government support could dramatically improve AI safety outcomes, current government action remains woefully inadequate, and a focused political party represents a cost-effective mechanism for driving change with reasonable prospects for success.

The competitive dynamics driving unsafe AI development

The race to develop transformative AI creates what economists call a “first-mover advantage” of unprecedented scale. The first organization to achieve artificial general intelligence (AGI)—AI systems that match or exceed human cognitive abilities across all domains—could potentially secure overwhelming economic and strategic power. This advantage could compound rapidly through recursive self-improvement, where AI systems help design even more capable successors, creating an exponential leap in capabilities.

These high stakes distort incentives throughout the industry. Major AI laboratories face intense pressure to move quickly or risk being overtaken by competitors. Safety measures that slow development feel like “unilateral disarmament” when rivals might not impose similar constraints on themselves. Even well-intentioned companies find themselves trapped in this dynamic.

Anthropic, one of the AI companies most focused on safety research, acknowledges this reality in its corporate communications: “Our hypothesis is that being at the frontier of AI development is the most effective way to steer its trajectory towards positive societal outcomes.” In other words, even safety-conscious companies believe they must win the race to have any influence over how the technology develops.

The evidence of this competitive pressure is everywhere. Meta has reportedly spent hundreds of millions of dollars poaching talent from other firms, while industry analysis suggests many AI companies barely meet basic safety standards. This creates exactly the environment where government oversight becomes essential—when market forces alone cannot ensure responsible development.

Leading AI researchers have been increasingly vocal about this problem. Geoffrey Hinton, who won the 2024 Nobel Prize in Physics for his foundational work on neural networks, told The Guardian in 2024: “My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation.”

Yoshua Bengio, recipient of the Turing Award (computing’s highest honor), echoed this sentiment in a 2024 interview with Live Science: “There’s a conflict of interest between those who are building these machines, expecting to make tons of money and competing against each other with the public. We need to manage that conflict, just like we’ve done for tobacco, like we haven’t managed to do with fossil fuels.”

The magnitude of potential risks

Beyond competitive dynamics, the sheer scale of potential AI impacts demands democratic oversight rather than leaving critical decisions to a handful of technology executives. If AI systems could permanently transform society or pose existential risks, pluralistic institutions should guide their development rather than unaccountable corporate leaders.

Consider a hypothetical but illustrative scenario: imagine a major AI company CEO faces a personal medical crisis—perhaps a family member develops terminal cancer that current treatments cannot address. The executive might push for accelerated AI development, hoping advanced systems could discover new treatments, even if this increases safety risks for society. While employees might object, the concentration of decision-making power in these companies means individual leaders can override broader concerns.

This scenario illustrates a broader principle: the risk appetites, personal circumstances, and biases of a small number of technology leaders could materially affect global AI safety. Democratic institutions, while imperfect, distribute decision-making power more broadly and create accountability mechanisms that private companies lack.

Government’s unique capacity for AI safety

The federal government possesses resources that dwarf even the largest technology companies. While OpenAI, currently the most prominent AI company, generated an estimated $3.7 billion in revenue in 2024, the U.S. government could easily allocate $100 billion annually to AI safety research—roughly 27 times OpenAI’s revenue but less than 15% of defense spending.
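As a rough, back-of-envelope illustration of that comparison, the short Python sketch below reproduces the arithmetic. The OpenAI revenue estimate and the hypothetical $100 billion allocation come from the figures above; the roughly $850 billion annual defense budget is an outside assumption used only for scale.

```python
# Back-of-envelope check of the scale comparison above.
openai_revenue_2024 = 3.7e9          # estimated 2024 revenue, per the article
hypothetical_safety_budget = 100e9   # illustrative annual allocation, per the article
assumed_defense_budget = 850e9       # assumption: rough recent annual U.S. defense budget

print(f"Multiple of OpenAI revenue: {hypothetical_safety_budget / openai_revenue_2024:.0f}x")
print(f"Share of defense spending: {hypothetical_safety_budget / assumed_defense_budget:.1%}")
# Prints roughly 27x and about 11.8%, consistent with the "less than 15%" framing above.
```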

This scale matters because AI safety research requires massive investments in areas where private companies have limited incentives to invest. Companies naturally focus on capabilities that generate revenue, while safety research often requires fundamental advances in our understanding of how AI systems work—research with uncertain commercial applications but critical societal benefits.

Government funding could support independent safety research, create oversight institutions, and establish international coordination mechanisms that no single company could achieve. The question isn’t whether government has the capacity to make a difference, but whether it has the political will to deploy these resources effectively.

Current government action falls far short

Despite compelling reasons for government intervention, U.S. AI safety efforts remain minimal and are actually moving in the wrong direction under current leadership. The Trump administration’s “America’s AI Action Plan,” released in 2025, exemplifies this backward trajectory.

The plan’s first “pillar” is literally “Accelerate AI Innovation,” with the opening priority being to “Remove Red Tape and Onerous Regulation.” The document criticizes the Biden administration’s AI executive order as “dangerous actions,” despite that order’s modest scope—it primarily established frameworks for future regulation rather than imposing immediate constraints.

The action plan does propose government investment to advance AI capabilities, recommending that the federal government “Prioritize investment into theoretical computational and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI.” Meanwhile, AI safety receives roughly two paragraphs in the 28-page document, with most attention focused on “interpretability, control, and robustness breakthroughs”—technical improvements rather than comprehensive safety measures.

Even during the previous administration, which showed more awareness of AI risks, government investment remained negligible. The National Science Foundation spent only $20 million on AI safety between 2023 and 2024—approximately 0.00244% of Department of Defense spending in fiscal year 2023. To put this in perspective, the Pentagon’s budget for those two years totaled over $1.6 trillion, meaning AI safety received less funding than the military spends in about two hours.
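For readers who want to check those proportions, here is a minimal sketch using the figures above. The approximately $820 billion FY2023 Pentagon budget is an assumption chosen to be consistent with the 0.00244% figure cited in the text.

```python
# Rough check of the spending comparison above.
nsf_ai_safety_2023_2024 = 20e6     # NSF AI safety spending, per the article
assumed_dod_fy2023 = 820e9         # assumption: approximate FY2023 Pentagon budget
pentagon_two_year_total = 1.6e12   # two-year Pentagon total, per the article

share_of_dod = nsf_ai_safety_2023_2024 / assumed_dod_fy2023
pentagon_per_hour = pentagon_two_year_total / (2 * 365 * 24)

print(f"NSF AI safety share of FY2023 defense spending: {share_of_dod:.5%}")
print(f"Pentagon spending per hour: ${pentagon_per_hour / 1e6:.0f} million")
print(f"Minutes of Pentagon spending matching the NSF total: "
      f"{nsf_ai_safety_2023_2024 / pentagon_per_hour * 60:.0f}")
# Prints roughly 0.00244%, about $91 million per hour, and around 13 minutes,
# comfortably below the two-hour comparison in the text.
```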

This pattern reflects a broader challenge: while many policymakers acknowledge AI risks in principle, translating concern into sustained funding and regulatory action has proven difficult. Meanwhile, well-funded industry lobbying operations actively work to shape government AI policy in directions that prioritize innovation over safety.

Why a political party approach could succeed

Political science research and historical precedents suggest that a focused AI safety party could achieve significant influence even without winning major elections. Third parties have repeatedly demonstrated the ability to shift policy debates, educate voters, and pressure mainstream parties to adopt their priorities.

Lessons from environmental politics

Green parties worldwide provide the most relevant model for AI safety political organizing. Germany’s Green Party, for example, evolved from a protest movement in the 1970s to become part of the governing coalition in recent years, directly shaping national energy and environmental policy.

Even less electorally successful Green parties have driven significant policy changes. In the United States, Green Party candidate Jill Stein proposed a “Green New Deal” during her 2012 presidential campaign—a concept that later influenced Representative Alexandria Ocasio-Cortez’s 2019 Green New Deal resolution, which became a major Democratic Party priority.

Research confirms these parties’ broader impact. Studies have found a significant inverse relationship between Green Party presence in government and greenhouse gas emissions, while Green Party candidates in competitive elections have been associated with pro-environmental platform shifts in the Democratic Party.

The Ross Perot precedent

Ross Perot’s Reform Party campaigns in the 1990s demonstrate how third parties can reshape American political discourse around specific issues. While Perot never won office, his focus on deficit reduction, trade policy, and anti-interventionism moved these topics into mainstream political conversation.

The Reform Party’s influence extended far beyond Perot’s campaigns. Donald Trump first ran for president as a Reform Party candidate in 2000, and many Reform Party themes—skepticism of international trade deals, concern about budget deficits, and opposition to foreign interventions—became central to Trump’s successful 2016 and 2024 campaigns.

Political scientists Ronald B. Rapoport and Walter J. Stone documented this influence in their book “Three’s a Crowd: The Dynamics of Third Parties, Ross Perot, and Republican Resurgence,” concluding that the Reform Party had lasting effects on the two-party system through strategic platform shifts and voter migration.

Recent third-party influence

Robert F. Kennedy Jr.’s 2024 campaign provides the most recent example of third-party leverage. Despite averaging only about 5% in polling and having no significant electoral track record, Kennedy negotiated a position leading the Department of Health and Human Services in exchange for dropping out and endorsing Trump.

Kennedy’s campaign demonstrates how savvy third-party candidates can translate modest popular support into substantial policy influence. His focus on vaccine skepticism and health policy reform—regardless of one’s views on these positions—shows how specialized political movements can achieve outsized impact through strategic alliance-building.

Public support for AI safety measures

Polling data suggests Americans would support an AI safety political agenda, providing a foundation for party-building efforts. According to Pew Research Center surveys, 58% of Americans worry that government “will not go far enough in its regulation of AI.”

More detailed polling reveals widespread anxiety about AI development. A Reuters/Ipsos poll found that 47% of respondents agreed that “AI is bad for humanity,” compared to just 31% who agreed that “AI is good for humanity.” Additionally, 58% agreed that “AI could risk the future of humankind,” while 71% expressed concern that “too many people will lose jobs” due to artificial intelligence.

The AI Policy Institute, a nonpartisan research organization, found even stronger support for government oversight. In their survey of 1,481 voters, 80% supported a regulatory approach emphasizing government oversight of new AI model releases to ensure safety, compared to just 24% who favored minimal regulation.

These numbers suggest that an AI safety party wouldn’t need to convince Americans from scratch about AI risks—significant public concern already exists. The challenge lies in translating this diffuse anxiety into focused political action.

Cost-effectiveness analysis of political organizing

Launching a competitive third-party presidential campaign requires substantial resources, but the potential returns justify the investment. Analysis of Federal Election Commission data from 2004-2024 reveals that third-party campaigns typically achieve far better cost-effectiveness than major party efforts.

The average presidential campaign spends approximately $18 million per percentage point of vote share (adjusted for inflation). Third-party campaigns, however, average only about $10.4 million per percentage point, making them nearly twice as efficient as the overall average.

Excluding established parties like the Greens and Libertarians—which benefit from name recognition and existing infrastructure—newer third parties still maintain significant efficiency advantages. Even accounting for outliers and campaigns with limited ballot access, well-organized third parties consistently outperform major party campaigns in converting dollars to votes.

Realistic scenarios and costs

Based on this historical data, an AI safety party could achieve meaningful impact with relatively modest investments. In a worst-case scenario—performing no better than the average campaign—securing 5% of the national vote would cost approximately $90 million. While substantial, this investment pales compared to the potential benefits of redirecting even a small portion of government spending toward AI safety.

More optimistic scenarios suggest much lower costs. A campaign achieving the efficiency of Jill Stein’s 2024 Green Party effort would need only about $20 million to reach 5% vote share. At the high end of third-party efficiency—matching Chase Oliver’s 2024 Libertarian campaign—$12 million could potentially secure 10% of the national vote.

These figures represent the upper bound of required investment. Early organizing phases—gathering petition signatures for ballot access, building volunteer networks, testing message resonance—cost far less and provide natural checkpoints for evaluating campaign viability before major spending begins.
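As a rough sanity check of the scenario arithmetic above, the sketch below multiplies a target vote share by a cost per percentage point. The per-point figures are taken from, or derived from, the numbers cited in this section, and the linear scaling is an assumption for illustration only.

```python
# Rough reconstruction of the scenario arithmetic above (dollar figures in millions).
cost_per_point_musd = {
    "average presidential campaign": 18.0,  # per the article
    "third-party average": 10.4,            # per the article
    "Stein 2024 efficiency": 4.0,           # derived: about $20M for a 5% share
    "Oliver 2024 efficiency": 1.2,          # derived: about $12M for a 10% share
}

def campaign_cost(target_share_pct: float, cost_per_point: float) -> float:
    """Estimated cost in $ millions to reach a given national vote share,
    assuming cost scales linearly with vote share."""
    return target_share_pct * cost_per_point

for label, cpp in cost_per_point_musd.items():
    print(f"5% of the national vote at {label}: ${campaign_cost(5, cpp):.0f}M")
# Prints roughly $90M, $52M, $20M, and $6M respectively.
```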

Addressing common objections

“Voters won’t care about abstract AI risks”

While existential AI risks may seem abstract, the technology already affects employment, privacy, and security in tangible ways. Job displacement through automation, deepfake manipulation, and AI-enabled surveillance create immediate concerns that connect to longer-term safety issues.

Climate change faced similar challenges—it’s largely invisible, future-oriented, and scientifically complex—yet environmental movements successfully built political coalitions around these issues. AI safety organizing can learn from environmental campaigns while addressing more immediate, visible impacts of current AI systems.

“Third parties can’t win major elections”

Electoral victory isn’t necessary for significant policy influence. A focused party can elevate AI safety in national debates, legitimize pro-safety positions, attract media attention, and pressure mainstream parties to adopt safety priorities. These impacts don’t require winning offices and offer more realistic paths to influence than waiting for spontaneous policy changes.

“It’s too late to make a difference”

AI development timelines remain uncertain and face potential bottlenecks in hardware availability, energy supply, and algorithmic progress. Even modest delays in advanced AI development or improvements in safety measures could prove crucial. Additionally, government action can prevent harmful policies—such as subsidizing unsafe AI development or defunding existing safety research—even if proactive safety measures prove politically difficult.

“What about other policy issues?”

An effective AI safety party should address broader concerns while maintaining clear priorities. A platform emphasizing evidence-based solutions, democratic governance, and economic security could appeal to voters across traditional political divides. Connecting AI safety to job displacement concerns through policies like automation taxes or universal basic income creates natural bridges to kitchen-table economic issues.

“Third parties create spoiler effects”

Strategic campaign management can minimize spoiler risks while maximizing policy influence. In close elections between candidates with different AI safety positions, a responsible third party might endorse the more safety-friendly candidate in swing states or negotiate policy concessions in exchange for strategic withdrawals.

The spoiler dynamic can also work as leverage—mainstream candidates have strong incentives to adopt third-party positions to prevent vote defection. This transforms a potential weakness into a strategic advantage for extracting policy commitments.

International cooperation and competition concerns

The “AI race with China” narrative often assumes that safety measures inherently disadvantage American companies, but this framing misses crucial dynamics. An unaligned superintelligent AI system poses existential risks regardless of its country of origin—there are no winners in a race toward potentially catastrophic outcomes.

Chinese leadership has actually demonstrated more explicit concern about AI safety than recent U.S. administrations, creating opportunities for international cooperation rather than pure competition. The United States currently leads in both AI hardware and software capabilities, giving America significant leverage in negotiations for binding international safety frameworks.

Historical precedents for superpower cooperation on existential risks—from nuclear arms control to biological weapons treaties—demonstrate that even adversarial nations can coordinate when facing shared threats. A U.S. political movement focused on AI safety could advocate for international cooperation while maintaining American technological leadership through responsible development practices.

Alternative approaches and their limitations

Lobbying and advocacy limitations

Traditional lobbying faces structural disadvantages in AI safety advocacy. Research by Dr. Amy McKay shows that lobbying has a strong status quo bias—it takes approximately 3.5 lobbyists supporting change to counteract one lobbyist defending existing arrangements. AI safety advocates already face this uphill battle against well-funded industry lobbying operations that naturally oppose regulatory constraints.

Major AI companies possess vastly superior lobbying resources compared to safety advocacy organizations. Meta, Google, Microsoft, and other technology giants maintain sophisticated government relations operations with annual budgets that dwarf the entire AI safety advocacy ecosystem. Electoral politics offers a way to build countervailing power that doesn’t depend purely on matching corporate spending.

Working within existing parties

Attempting to influence major parties from within faces significant challenges. Once AI safety becomes associated with one party, partisan polarization could undermine bipartisan support for safety measures. Additionally, major party candidates face pressure to balance AI safety against other priorities and donor interests, potentially diluting safety commitments.

A dedicated third party maintains flexibility to prioritize AI safety without compromise while building coalitions across traditional political divides. This approach avoids premature politicization while developing public support for safety measures that mainstream parties can later adopt.

The path forward

An AI safety political party doesn’t need to transform American politics overnight to justify its existence. Even modest success—raising public awareness, pressuring mainstream candidates, or influencing policy debates—could yield enormous returns given the stakes involved.

The costs of launching such an effort, while substantial, remain manageable compared to potential benefits. Early organizing phases provide natural checkpoints for evaluating progress before major investments. If initial efforts fail to gain traction, resources can be redirected to other approaches with minimal sunk costs.

Meanwhile, the alternative—hoping that existing political institutions will spontaneously prioritize AI safety—seems increasingly unrealistic. Current government action remains minimal despite growing risks, while industry lobbying actively works to prevent stronger safety measures.

The window for political influence may be narrowing as AI capabilities advance, but it hasn’t closed. Democratic institutions still hold decisive power over regulation, funding, research priorities, and international coordination. An AI safety party represents one of the few mechanisms available for translating public concern about AI into sustained political pressure for responsible development.

The question isn’t whether such a party would face challenges—all political movements do. The question is whether the potential benefits justify the risks and costs involved. Given the stakes of advanced AI development and the inadequacy of current safety measures, the answer increasingly appears to be yes.

Electoral politics may be slow and imperfect, but it remains one of the highest-leverage tools available for ensuring that transformative AI technologies develop in ways that benefit rather than threaten human civilization. The time for considering this approach seriously has arrived.

