California Governor Gavin Newsom has signed into law a groundbreaking measure aimed at regulating advanced artificial intelligence models. The new legislation, Senate Bill 53 (SB 53), requires major AI developers—including OpenAI, Google, Meta, Nvidia, and Anthropic—to publicly disclose how they plan to mitigate catastrophic risks from their cutting-edge AI systems.
Newsom emphasized that California, home to the nation’s largest cluster of AI companies, is taking the lead in setting rules for a technology that will shape the economy and society. He stated the law balances innovation with public safety, positioning the state as a model for potential federal legislation.
The law applies to companies with over $500 million in annual revenue. These firms must conduct assessments on risks such as AI models escaping human control or being misused to develop bioweapons. They are required to release those assessments to the public, and violations may result in fines of up to $1 million.
This marks a shift from California’s earlier attempt at AI regulation, Senate Bill 1047, which Newsom vetoed in 2024 amid industry backlash. That proposal demanded costly third-party audits and imposed heavy financial penalties. SB 53, however, adopts a more balanced framework, which Jack Clark, co-founder of Anthropic, praised as promoting both safety and innovation.
Still, some in the tech sector are concerned. Critics, including Andreessen Horowitz’s head of government affairs, warned that state-led laws risk creating a fragmented regulatory environment across the U.S. With similar AI laws passed in Colorado and New York, industry leaders fear compliance challenges for startups and smaller firms.
Federal lawmakers are now under pressure to act. Representative Ted Lieu highlighted the need for national standards, asking whether Americans prefer “17 states” regulating AI independently or a unified approach from Congress. Meanwhile, Representative Jay Obernolte is drafting federal AI legislation that could preempt state-level rules.
The California law is widely seen as a catalyst that may accelerate Washington’s efforts to establish a nationwide AI regulatory framework.

