In this two-part analysis, we use data from the Emerging Technology Observatory's AGORA to explore AI-related legislation that was enacted by Congress between January 2020 and March 2025. This first blog explores the origin and application domains of the AI-related legislation we reviewed. The second blog examines the governance strategies, risk-related concepts, and harms covered by this legislation.
Key Findings
Contrary to conventional wisdom, we find that Congress has enacted many AI-related laws and provisions. These often apply to military and public safety contexts, reflecting the fact that a large portion of our dataset is drawn from National Defense Authorization Acts (NDAAs).
About the Data
The data in this analysis was drawn from AGORA, a collection of AI-related laws, regulations, standards, and similar documents. Each document in AGORA is either an entire law or a thematically distinct, AI-focused portion of a longer text, such as a section or title in a package (a large document that mixes AI-related and unrelated material). Documents therefore vary in length, but each represents a discrete unit of analysis. AGORA collects information from a wide variety of domestic and international sources, but for this analysis, we track AI-related legislation enacted in the United States at the federal level.
Each document in AGORA includes metadata, summaries, and thematic codes developed through rigorous annotation and validation processes. Thematic codes are organized under a taxonomy that consists of five dimensions:
- Application domains
- Governance strategies
- Risk-related concepts
- Harms
- Incentives for compliance
This series analyzes each of the dimensions above except incentives for compliance, which was rarely addressed by the documents in our dataset.
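To make the structure of these thematic codes concrete, the sketch below shows one way a document's annotations could be represented and iterated over by dimension. The field names, tag values, and record layout here are our own illustrative assumptions, not AGORA's actual schema or API.

```python
# Illustrative sketch only: field names, tag values, and layout are assumed
# for exposition and do not reflect AGORA's actual schema or API.
example_document = {
    "title": "Example: NDAA section establishing an AI logistics pilot",
    "source_law": "National Defense Authorization Act (illustrative)",
    "themes": {
        "application_domains": ["government: military and public safety"],
        "governance_strategies": [],
        "risk_related_concepts": [],
        "harms": [],
        "incentives_for_compliance": [],
    },
}

# Dimensions covered in this series; incentives for compliance is excluded
# because it is rarely addressed by the documents in our dataset.
ANALYZED_DIMENSIONS = [
    "application_domains",
    "governance_strategies",
    "risk_related_concepts",
    "harms",
]

for dimension in ANALYZED_DIMENSIONS:
    print(dimension, example_document["themes"][dimension])
```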
Origin of Legislative Documents
AGORA contains 147 documents drawn from laws enacted by Congress between January 2020 and March 2025. Figure 1 shows that the majority of these AI-related legislative documents come from NDAAs, laws that authorize appropriations for defense-related activities. This reflects the breadth of NDAAs, which are omnibus legislative packages that authorize spending on a wide range of programs, departments, and policies relevant to national defense.
The share of AI-related documents drawn from each NDAA does not increase steadily by year; in fact, the NDAA for fiscal year 2023 (FY 2023) accounts for the largest share of AI-related legislative documents. The remaining 36 documents come from a collection of 19 laws, 14 of which fall under the Miscellaneous category and can be found in this GitHub repository. Some of these laws are primarily dedicated to governing AI, whereas others include AI-related provisions but focus on topics such as energy conservation or airport operations.
Application Domains
Legislative documents in AGORA address AI's development, deployment, or use in a range of application areas. This is true of the 147 congressionally enacted documents in our analysis, which discuss the use of AI in a variety of domains (Figure 2). The most common application domain is government: military and public safety, which covers uses of AI such as intelligence, weaponry, or disaster response. The prominence of this domain is expected given the prevalence of documents drawn from NDAAs related to defense spending.
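As a rough illustration of how domain counts like those in Figure 2 could be tallied, the sketch below counts application-domain tags across document records shaped like the example above. It assumes, purely for illustration, that each record lists its domain tags and that a single document may carry more than one domain tag; the tag names are likewise illustrative.

```python
# Hypothetical tally of application-domain tags across document records;
# the record format continues the illustrative schema sketched earlier.
from collections import Counter

def tally_domains(documents):
    """Count how many documents are tagged with each application domain."""
    counts = Counter()
    for doc in documents:
        # A set ensures a document is counted at most once per domain.
        for domain in set(doc["themes"]["application_domains"]):
            counts[domain] += 1
    return counts

# Toy example with two records (tag names are illustrative):
docs = [
    {"themes": {"application_domains": ["government: military and public safety"]}},
    {"themes": {"application_domains": ["government: military and public safety",
                                        "energy"]}},
]
print(tally_domains(docs).most_common())
# [('government: military and public safety', 2), ('energy', 1)]
```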
Below, we showcase text from legislative documents in our analysis that address AI in this domain.
Legislation covering AI applications in government: military and public safety
(a) Design of Pilot Program.—
(1) Design.—Not later than 90 days after the date of the enactment of this Act, the Chief Digital and Artificial Intelligence Officer of the Department of Defense … shall design a pilot program to optimize the logistics of aerial refueling and fuel management in the context of contested logistics environments through the use of advanced digital technologies and artificial intelligence (in this section referred to as the "pilot program").
(a) Draft Policy.—Not later than 1 year after the date of the enactment of this Act, the Director of National Intelligence … shall draft a potential policy to promote the intelligence community-wide use of code-free artificial intelligence enablement tools.
(b) Elements.—The draft policy under subsection (a) shall include the following:
(1) The objective for the use by the intelligence community of code-free artificial intelligence enablement tools.
(2) A detailed set of incentives for using code-free artificial intelligence enablement tools.
(3) A plan to ensure coordination throughout the intelligence community, including consideration of designating an official of each element of the intelligence community to oversee implementation of the policy and such coordination.
Note: Text lightly edited for clarity.
In this blog, we use AGORA to identify a subset of AI-related legislative documents that reveal how Congress has approached lawmaking for AI. We find that most of these documents come from NDAAs and address AI's development, deployment, or use in the military and public safety application domain. In the second part of the series, we will explore the governance strategies, risk-related concepts, and harms addressed by this national security-focused collection of legislative documents.
If you would like to explore different subsets of AI governance documents in AGORA, you can peruse AGORA's thematic collections, conduct a keyword search for topics that interest you (such as "medical devices"), or apply relevant thematic or metadata tags (such as "interpretability and explainability risk factors") to the full collection of documents.
As always, we're glad to help - visit our support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 🤖