California State Senator Scott Wiener has released new amendments to his latest bill, SB 53, a significant step toward imposing transparency requirements on the world’s largest artificial intelligence (AI) companies. To protect public safety and security, the proposed legislation would require these companies to disclose their safety and security protocols and to publish reports whenever safety incidents occur.
The amendments to SB 53 come in response to recommendations from California’s AI policy working group, which called for industry-wide transparency as a matter of urgency. The group’s final report underscored the need for requirements that would compel AI developers to share information about the performance and capabilities of their systems. This push for accountability is intended to create a “robust and transparent evidence environment” around AI safety.
If the bill is signed into law, California would become the first state to impose meaningful transparency measures on leading AI developers, including OpenAI, Google, Anthropic, and xAI. The bill marks a notable shift from earlier proposals, taking a more measured approach while still answering calls for stronger AI safety.
Yet despite the clear need for transparency, Senator Wiener’s earlier bill, SB 1047, which would also have increased transparency, was vetoed by Governor Gavin Newsom. In his veto message, Newsom called on AI innovators and developers to work together, envisioning a policy advisory body that would establish objectives for the state’s AI safety efforts. That response reflects a growing recognition that AI development needs to be regulated thoughtfully.
Even as the bill moves forward, it remains a work in progress. “I look forward to working with everyone on all sides in the coming weeks to make this proposal into the most scientific and equitable law it can be,” said Senator Wiener.
In light of these developments, supporters of SB 53 emphasize the need for AI companies to clarify their safety measures. For Nathan Calvin, Vice President of State Affairs at Encode, it is essential that companies articulate precisely how they will mitigate risks. He views this added transparency as a basic but important first step toward opening a dialogue with the public and legislators, adding that these issues have been on the radar of advocacy organizations for years.
Geoff Ralston, former president of Y Combinator, echoed similar sentiments: “Ensuring AI is developed safely should not be controversial — it should be foundational.” His comments underscore the growing consensus on the need for regulatory action, especially in a field as fast-moving as artificial intelligence.
The urgency around SB 53 is only heightened by movement on this issue in other states. In an encouraging sign, New York Governor Kathy Hochul is reportedly weighing whether to sign the RAISE Act, legislation that would create a comparable framework for safety procedures and accountability in AI development. The trend reflects a broader push by state lawmakers to mitigate potential harms from AI technologies.
Take Anthropic, another top AI developer, which has publicly advocated for greater transparency across the industry. Its support highlights the growing momentum toward a more public conversation about AI safety and accountability.
As SB 53 progresses through the legislative process, it remains crucial for stakeholders to engage in meaningful discussions that prioritize public safety without stifling innovation. Should the legislation pass, it could set the terms for how AI companies operate not only in California, but across the country.