California and New York target frontier AI models with $1B damage thresholds

California and New York are poised to become the first states to enact comprehensive regulations targeting frontier AI models—the most advanced artificial intelligence systems capable of causing catastrophic harm. The legislation aims to prevent AI-related incidents that could cause at least 50 deaths (100 under New York's version) or more than $1 billion in damages, marking a significant shift toward state-level AI governance as federal oversight remains limited.

What you should know: Both states are targeting “frontier AI models”—large-scale systems like OpenAI’s GPT-5 and Google’s Gemini Ultra that represent the cutting edge of AI innovation.

  • California’s bill passed the state Senate and requires developers to implement safety protocols, create transparency reports, and publish risk assessments before deployment.
  • New York’s similar measure was approved by state lawmakers in June, with Democratic Gov. Kathy Hochul having until year-end to sign it into law.
  • The New York version sets a higher threshold, requiring safety policies to prevent harm involving 100 or more deaths or $1 billion in damages.

The big picture: These bills emerge as states fill regulatory gaps while federal AI oversight remains uncertain under changing administrations.

  • California previously attempted stricter AI regulations last year, but Democratic Gov. Gavin Newsom vetoed the measure, arguing it would apply “stringent standards to even the most basic functions” of AI systems.
  • The current legislation reflects recommendations from the Joint California Policy Working Group on AI Frontier Models, a state advisory group that emphasized balancing innovation with safety through empirical research.

Industry pushback: Tech developers and trade associations are strongly opposing both measures, arguing they could stifle innovation without improving safety.

  • Paul Lekas of the Software & Information Industry Association, a tech trade group, called California’s bill “an overly prescriptive and burdensome framework that risks stifling frontier model development without adequately improving safety.”
  • NetChoice, a trade association representing Amazon, Google, and Meta, urged New York’s governor to veto the legislation, warning it would “undermine its very purpose, harming innovation, economic competitiveness, and the development of solutions to some of our most pressing problems.”

Why this matters: These state-level initiatives could establish the first regulatory framework for preventing AI catastrophic risks in the United States, potentially setting precedents for other states and influencing future federal policy. The outcome will likely shape how the most powerful AI systems are developed and deployed, balancing innovation concerns with public safety imperatives.

