Introduction
The CSET blog post California's Approach to AI Governance details California's 2024 AI-related legislation and highlights how this approach builds upon the state's history of regulating new technologies. Of the 18 AI-related California laws enacted in 2024 and covered in that analysis, eight can be found in ETO's AGORA (as of this writing). AGORA is a living collection of AI-related laws, regulations, standards, and similar documents. Using AGORA's thematic tags, we find that these eight laws emphasize themes related to accessing and exchanging information about AI systems. Although they provide only a small window into California's legislative activity, these laws offer insight into several AI governance approaches that state legislators have considered. Looking ahead to 2025, the California legislature has considered several bills that emphasize themes similar to those of the eight 2024 laws examined here, setting the stage for California to expand its patchwork of AI-related laws.
California AI Laws in AGORA
In 2024, California enacted a total of 18 laws that govern AI. These laws cover diverse topics such as disclosure of AI use cases and protection of individuals' digital likenesses (see Table 1 in the accompanying CSET blog post). Some of these laws introduce new governance for AI, such as Senate Bill (SB) 1288, which requires the creation of new educational materials about AI. Others amend or clarify the scope of existing legislation, such as Assembly Bill (AB) 1008's extension of existing privacy protections to cover AI systems.
As of this writing, eight of these laws (SB 1120, AB 2885, AB 1831, SB 1288, SB 1381, AB 2013, AB 3030, and SB 896) have been annotated, or assigned thematic tags, in AGORA. While these laws are only a handful of the AI-related laws California enacted in 2024, they offer insight into the AI issues and strategies that state legislators were considering at the time.
AGORA is a collection of AI-related laws, regulations, standards, and similar documents. Each document in AGORA is either an entire law or a thematically distinct, AI-focused portion of a longer text. An AGORA document includes metadata, summaries, and thematic codes developed through rigorous annotation and validation processes. Thematic codes are organized under a taxonomy that consists of several dimensions, including risk factors and governance strategies.
Tables 1 and 2 show the types of risk factors and governance strategies that are addressed by the eight laws. Risk factors are characteristics of AI systems that make them more or less risky, whereas governance strategies refer to the approaches used in legislation and other documents to tackle different issues.
A complete list of AGORA's thematic codes and their definitions can be found in the AGORA codebook.
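To make the taxonomy concrete, below is a minimal, hypothetical Python sketch of how tagged AGORA-style records might be represented and queried. The AgoraDocument class, its field names, and the tag assignments are illustrative assumptions for this post, not AGORA's actual schema, tags, or API; the real dataset includes richer metadata and summaries.

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical, simplified stand-in for an AGORA record.
# AGORA's actual schema is richer (metadata, summaries, validation info).
@dataclass
class AgoraDocument:
    title: str
    risk_factors: set = field(default_factory=set)            # e.g., {"transparency"}
    governance_strategies: set = field(default_factory=set)   # e.g., {"disclosures"}

# Illustrative annotations only (not these laws' actual AGORA tags).
docs = [
    AgoraDocument("SB 1120", {"transparency", "safety"}, {"disclosures", "evaluations"}),
    AgoraDocument("AB 2013", {"transparency"}, {"disclosures"}),
    AgoraDocument("AB 3030", {"transparency", "privacy"}, {"disclosures"}),
]

# Tally how often each thematic tag appears across the collection.
risk_counts = Counter(tag for d in docs for tag in d.risk_factors)
strategy_counts = Counter(tag for d in docs for tag in d.governance_strategies)
print(risk_counts.most_common())      # e.g., [('transparency', 3), ('safety', 1), ...]
print(strategy_counts.most_common())

# Filter for documents carrying both a given risk factor and a given strategy,
# analogous to combining tag filters in the AGORA interface.
matches = [d.title for d in docs
           if "transparency" in d.risk_factors
           and "disclosures" in d.governance_strategies]
print(matches)
```

This kind of tag tallying and filtering mirrors the analysis in the next section, where we look at which risk factors and governance strategies appear most often across the eight laws.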
Tables 1 and 2 demonstrate the prevalence of transparency and disclosures across the eight laws. Both transparency and disclosures pertain to sharing relevant information about AI systems. Transparency governs whether, how, or to what extent people who are affected by or otherwise have an interest in an AI system have access to relevant information about the system, whereas disclosures involve exchanging information between a party who is familiar with an AI system and a third party. The presence of these themes indicates that the eight laws can help the California government and populace better understand how AI is being developed and used.
The next most prominent risk factors are safety and privacy, and the second most common governance strategy is evaluations. Safety pertains to an AI system's potential negative impact on human life, health, property, or the environment, whereas privacy refers to the use of personally identifiable information by AI systems and any related protections. Evaluations involve the systematic assessment of AI systems and the infrastructure into which they are integrated. The prevalence of these themes suggests that the eight laws are concerned with addressing risks to people and the environment, protecting personal information, and understanding the capabilities of AI systems and their supporting infrastructure.
Explore the 100+ documents in AGORA that highlight transparency as a risk factor and include disclosure as a governance strategy.
Looking Forward
In 2025, a number of AI-related bills have worked their way through the California legislature. We highlight three of these bills, which vary in scope and topic: SB 53, AB 412, and AB 1064. In October, SB 53 became law, and AB 1064 was vetoed.
SB 53 is a pared-down version of SB 1047, a bill that would have placed significant responsibilities on developers of powerful models, including adopting comprehensive safety protocols and undergoing independent audits. After the Governor vetoed SB 1047, he convened a policy working group on AI frontier models headed by some of the academics who had opposed the bill. The main differences between SB 53 and SB 1047 derive partly from the working group's report and reflect a shift in focus from strict safety requirements to greater transparency (see Table 3: Comparing Requirements of Vetoed SB 1047 and Enacted SB 53).
Table 3: Comparing Requirements of Vetoed SB 1047 and Enacted SB 53
| Vetoed SB 1047 Requirements | Enacted SB 53 Requirements |
|---|---|
| Requires developers and computing cluster operators to implement shutdown capabilities for their models | Requires developers to share their safety protocols with the public |
| Imposes penalties of up to 30% of a model's development cost for noncompliance | Introduces capped civil fines based on the severity of the violation |
| Requires developers to report any AI incidents that increase the risk of critical harm to the Attorney General | Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California's Office of Emergency Services |
Note: Table 3 contains a subset of requirements from both bills.
AB 412 also addresses transparency measures, but in the context of copyright infringement (see Table 4: Spotlighting AB 412). Newspapers, artists, and authors have been suing AI companies for using their works to train models without permission. AB 412 seeks to empower rights holders by requiring developers of generative AI models to disclose whether any "covered material," defined as content registered, preregistered, or indexed with the U.S. Copyright Office, was used in training their models. The bill also requires developers to respond to copyright owners' information requests. AB 412 will not move forward in 2025; it has been stalled until 2026.
Table 4: Spotlighting AB 412
| Summary | Protection offered by AB 412 | Challenge raised by AB 412 |
|---|---|---|
| Requires developers of generative AI models to disclose whether copyrighted material was used to train their models and respond to copyright owners' information requests | Empowers copyright owners to hold developers accountable for copyright infringement | May impede AI innovation by requiring any developer whose models are available for use by Californians to devote significant resources to identifying and disclosing covered materials |
Unlike SB 53 and AB 412, which broadly grapple with transparency measures to address systemic risks, AB 1064 is narrowly scoped to target teenage suicides linked to chatbots. More specifically, AB 1064 prohibits developers from creating AI chatbots intended for use by children if they pose risks of emotional or psychological harm and authorizes the recovery of civil penalties for violations of the bill.
Big Picture
We find that eight AI laws enacted in California in 2024 emphasize themes related to accessing and exchanging information about AI systems. Specifically, these laws focus on addressing transparency and safety risk factors and predominantly deploy disclosures and evaluations as governance strategies. A couple of bills that have moved through the California legislature in 2025 also cover transparency themes. As more bills are proposed, a more coherent picture of California's legislative priorities for AI will emerge.
If you would like to explore different subsets of AI governance documents in AGORA, you can peruse AGORA's thematic collections, conduct a keyword search for topics that interest you (such as "medical devices"), or apply relevant thematic or metadata tags (such as "interpretability and explainability risk factors") to the full collection of documents.
As always, we're glad to help - visit our support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 🤖

