Exploring State AI Governance in AGORA: Utah

Announcing new state AI governance blog series

⭐️ New State AI Governance Blog Series

We are excited to announce the launch of a new AI governance blog series powered by AGORA. The series will provide a window into the AI governance approaches of U.S. states, with future posts potentially exploring other jurisdictions. U.S. state AI governance is an underexplored but key part of the AI governance ecosystem. Most AI policymaking in the United States occurs at the subnational level, where state AI laws directly shape how companies develop and deploy AI technologies. States effectively act as laboratories of democracy, experimenting with new regulatory frameworks and piloting best practices and novel initiatives that can be scaled up nationwide.

👉

AGORA is a collection of AI-related laws, regulations, standards, and similar documents. Each document in AGORA is either an entire law or a thematically distinct, AI-focused portion of a longer text. An AGORA document includes metadata, summaries, and thematic codes developed through rigorous annotation and validation processes. Thematic codes are organized under a taxonomy that consists of several dimensions, including risk factors and governance strategies.

Each post will cover the AI governance approach of a particular state or a cross-cutting AI governance theme that spans multiple states. Posts that spotlight a state will be accompanied by links to the full set of AI-related laws enacted in that state since 2020 to support deeper analysis.

Our first post in the series spotlights Utah, a state that has created an office dedicated to studying AI and enacted laws governing deepfakes, AI companions, and AI use in high-risk interactions. After reading the post, check out the full set of Utah's AI-related laws enacted since 2020 in AGORA.

🏛️ Utah's Model of AI Governance

Utah illustrates a lighter-touch, institution-first model of AI governance emerging from a Republican-led state in the West. With a population of roughly 3.5 million, Utah is smaller than the coastal states that host major tech hubs and differs from them in political leadership. Rather than imposing broad, cross-sector compliance duties on private actors, Utah's approach emphasizes state capacity building, experimentation, and targeted consumer protections.

That orientation is clearest in Utah SB 149 (the Artificial Intelligence Policy Act), enacted in March 2024. The Act creates governance infrastructure by establishing an Office of Artificial Intelligence Policy within the Department of Commerce, which administers an artificial intelligence learning laboratory program designed to study AI risks, benefits, and regulatory options in collaboration with industry, academia, and state agencies. The laboratory is paired with regulatory mitigation agreements, which allow companies to pilot AI systems under negotiated limits, safeguards, and reporting requirements. Participation does not constitute state endorsement, and the statute explicitly shields the state from liability arising out of these pilots. In effect, Utah treats AI governance as an iterative, participatory process.

Liability and responsibility under Utah's framework are correspondingly narrow and actor-centric. The law makes clear that using generative AI does not excuse violations of consumer protection statutes: "the AI did it" is not a defense. Entities that deploy or prompt AI remain responsible for deceptive or unlawful conduct. Core obligations center on disclosure, especially when AI systems interact directly with consumers or provide services tied to regulated occupations. The statute also adds a criminal provision covering offenses committed with the aid of generative AI, reinforcing the principle that human actors remain accountable for misuse. Enforcement flows through existing consumer protection channels, including administrative fines, civil penalties, and injunctive relief.

Importantly, Utah's AI policy has evolved rather than expanded wholesale. Amendments enacted in March 2025 (SB 226 and SB 332) refined the framework. The sunset date was extended to July 1, 2027, underscoring the provisional nature of the regime. Disclosure requirements were clarified for consumer transactions and high-risk use cases, with many general AI interactions now requiring disclosure only upon a clear and unambiguous consumer request. The amendments also introduced disclosure-based safe harbors, signaling that compliance hinges on transparency rather than continuous risk assessment or impact documentation. Two other laws, HB 452 and SB 271, were enacted in March 2025 and introduced more specific rules in sensitive areas: mental health chatbots are now subject to tailored advertising, disclosure, and privacy requirements, and restrictions on unauthorized AI-generated impersonations expand the state's rules around deepfakes.

Taken together, these measures make Utah a sandbox-oriented governance model: the state builds institutional scaffolding, narrows liability to disclosure and consumer protection contexts, and uses time-limited experimentation to inform future regulation. Utah's approach, with state-led learning mechanisms that keep regulatory burdens light while reserving the option to harden rules later, warrants close attention.

Table 1 shows the types of governance strategies addressed in four of the Utah laws discussed above (Utah SB 271 is excluded because it contains no governance strategies as of this writing). Governance strategies refer to the approaches used in legislation and other documents to tackle different issues.

👉

A complete list of AGORA's thematic codes and their definitions can be found in the AGORA codebook.

Governance development and disclosure are common governance strategies across the four laws. Governance development refers to encouraging or imposing conditions on the creation of additional AI-related governance documents, whereas disclosure involves exchanging information between a party familiar with an AI system and a third party. The prevalence of these strategies suggests that most laws in Table 1 both require disclosure of generative AI usage in certain scenarios and support the creation of follow-on documentation and resources to manage AI risks.

✉️ Get in Touch

As always, we're glad to help you get the most out of AGORA and our other resources. Visit ETO's support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 👋
