
Exploring AI legislation in Congress with AGORA: Risks, Harms, and Governance Strategies


2025-07-29

Mina Narayanan, Sonali Subbu Rathinam

Using AGORA to explore AI legislation enacted by the U.S. Congress since 2020

In the second installment of our blog series analyzing 147 AI-related laws in AGORA that Congress enacted between January 2020 and March 2025, we explore the governance strategies, risk-related concepts, and harms addressed in the legislation. In the first blog, we showed that the majority of these AI-related legislative documents were drawn from National Defense Authorization Acts and apply to national security contexts.

Key Findings

In this blog, we find that Congress is in the early days of governing AI. Legislative documents primarily focus on laying the groundwork to harness AI’s potential rather than scaling up AI systems or placing concrete demands on them or their developers. We also find that fewer legislative documents mention risk or harm concepts than propose strategies related to AI development and adoption, possibly indicating that Congress is more focused on understanding and harnessing AI’s potential than addressing its downsides.

Specifically, we find that

  • Most of Congress’s 147 enactments focus on commissioning studies of AI systems, assessing their impacts, providing support for AI-related activities, convening stakeholders, and developing additional AI-related governance documents
  • Very few enactments leverage performance requirements, pilots, new institutions, or other governance strategies that place concrete requirements on AI systems or represent investments in maturing or scaling up AI capabilities
  • Fewer legislative documents directly tackle risks or undesirable consequences from AI (such as harm to infrastructure) than propose strategies such as government support, convening, or institution-building

Governance Strategies

As shown in Figure 1, government studies, government support, evaluations, governance development, and convening are the most common governance strategies in the enacted Congressional documents in AGORA.

👉
In AGORA, governance strategies refer to the approaches used in legislation and other documents to tackle different issues.

Figure 1 does not include several governance strategies that rarely appeared in the dataset, but a complete list of governance strategies can be found in the AGORA codebook.

Evaluations are a common strategy to examine the feasibility of AI-related projects and weigh the benefits and risks of AI adoption. Legislative documents that leverage evaluations sometimes require the creation of benchmarks or metrics to assess AI usage. The two most common types of evaluations called for in the dataset are conformity assessments and impact assessments (Box 1).

  • Conformity assessments test whether a deployed AI system meets the expectations specified prior to deployment
  • Impact assessments systematically predict the real-world consequences of an AI system before it is deployed

The popularity of these evaluations is unsurprising given that impact and conformity assessments can take many forms, ranging from studying the impact of smart manufacturing technologies on manufacturing jobs to certifying that unmanned surface vessels comply with legal requirements. The science of AI evaluation is also relatively new and rapidly changing, so policymakers may be wary of mandating more tightly scoped evaluations and instead opt for evaluations that provide a general understanding of system performance or impact.

Similarly, the popularity of government studies may reflect policymakers’ desire to learn more about the technology. Government studies, reports, or plans typically

  • Assess the feasibility of governing AI
  • Outline steps to study or govern AI and the broader systems that incorporate the technology
  • Document progress on AI governance

Government support is also a popular governance strategy that can enable the development and adoption of AI systems (Box 1). Government support often takes the form of investments in talent or research and may involve

  • Establishing a scholarship to fund students in critically needed technology areas
  • Creating programs to advance research in microelectronics and high-performance computing

Like government studies and support, the convening of different stakeholders (such as an event for military departments to collaborate with the private sector on innovations in autonomous systems) and governance development (the creation of additional AI-related governance documents such as risk management frameworks) are strategies that help lay the groundwork for effectively harnessing AI. These strategies may involve

  • Bringing stakeholders together to exchange information related to AI systems
  • Developing processes to govern AI systems and their supporting infrastructure

Box 1: Examples of AI legislative document governance strategies

Conformity assessment: Section 811, FY 2024 NDAA

(a) Modernizing the Department of Defense Requirements Process. — Not later than October 1, 2025, the Secretary of Defense … shall develop and implement a streamlined requirements development process for the Department of Defense, to include revising the Joint Capabilities Integration and Development System, in order to improve alignment between modern warfare concepts, technologies, and system development and reduce the time to deliver needed capabilities to warfighters.

(b) Reform Elements. — The process required by subsection (a) shall — …

(6) establish a process to rapidly validate the ability of commercial products and services to meet capability needs or opportunities …

Source: https://agora.eto.tech/instrument/188

Impact assessment: Division A, Title LXXXIV, Section 8411, FY 2021 NDAA

(a) ASSESSMENT. — The Commandant, acting through the Blue Technology Center of Expertise, shall regularly assess available unmanned maritime systems and satellite vessel tracking technologies for potential use to support missions of the Coast Guard.

(b) REPORT. —

(1) IN GENERAL. — Not later than 1 year after the date of the enactment of this Act, and biennially thereafter, the Commandant shall submit … a report on the actual and potential effects of the use of then-existing unmanned maritime systems and satellite vessel tracking technologies on the mission effectiveness of the Coast Guard.

Source: https://agora.eto.tech/instrument/160

Government support: AI workforce-related: Division A, Title II, Section 513, FY 2021 NDAA

(a) AUTHORITY. — The Secretary, in consultation with the Secretary of Education, may carry out a program to make grants to eligible entities to assist such entities in providing education in covered subjects to students in the Junior Reserve Officers’ Training Corps …

(2) The term 'covered subjects' means — "(A) science; "(B) technology; "(C) engineering; "(D) mathematics; "(E) computer science; "(F) computational thinking; "(G) artificial intelligence; "(H) machine learning; "(I) data science; "(J) cybersecurity; "(K) robotics; "(L) health sciences; and "(M) other subjects determined by the Secretary of Defense to be related to science, technology, engineering, and mathematics."

Source: https://agora.eto.tech/instrument/145

Government support: R&D-related: Section 25005, Infrastructure Investment and Jobs Act

(b) Establishment of Program. — The Secretary shall establish a program, to be known as the "Strengthening Mobility and Revolutionizing Transportation Grant Program", under which the Secretary shall provide grants to eligible entities to conduct demonstration projects focused on advanced smart city or community technologies and systems in a variety of communities to improve transportation efficiency and safety.

Source: https://agora.eto.tech/instrument/164

Note: Text lightly edited for clarity.

On the other hand, performance requirements, pilots, and new institutions are less common in the dataset. These strategies place concrete demands on AI systems and in some cases represent a significant commitment to maturing AI capabilities. Although some institutions and pilots in the dataset are relatively minor (such as a working group for AI-related information technology infrastructure or a pilot project to evaluate job applicants based partly on their electronic portfolio), others require significant effort to create and maintain (such as an AI Security Center within the National Security Agency and a pilot program to develop AI for biotechnology applications through public-private partnerships).

Risks and Harms

Some AI systems can have unintended negative consequences, and many of the legislative documents in our dataset include provisions aimed at addressing these consequences.

👉
In AGORA, AI "risk-related concepts" are characteristics of AI systems that make them more or less risky (e.g. bias, transparency), whereas "harms" are real world consequences of risks that arise from the development or use of AI.

We find that there are approximately twice as many legislative documents that directly address AI risks as there are documents that directly address AI harms (Figure 2), suggesting that through enacted legislation, Congress emphasizes governing risks more than addressing harms.

Overall, fewer legislative documents mention risk or harm concepts than propose strategies such as government support, convening, or institution-building, possibly indicating that Congress is more focused on harnessing AI’s potential than addressing its downsides. Figures 3 and 4 depict the distribution of concepts related to AI risks and harms.

The most common risk-related concept is security, which is likely attributable to the prevalence of national defense-related documents in our collection. Legislative documents tagged with governing security range from addressing the security vulnerabilities of military AI systems to securing the entire AI supply chain. More than half of the legislative documents tagged with governing security address risks to cybersecurity, for example by instructing government departments to sponsor an analysis of cybersecurity tools and capabilities.

The second most common risk-related concept is reliability, which pertains to the ability of an AI system to perform normally under the conditions of expected use and over a given period of time. Legislative documents govern reliability through a variety of methods, including through the creation of standards and testbeds for robust system development.

Safety, the third most common risk-related concept, concerns an AI system’s potential negative impact on human life, health, property, or the environment. For example, legislative documents tagged with governing safety seek to ensure that military systems can operate autonomously in GPS-denied environments or require a review of methods to protect against AI-enabled biological attacks. Box 2 provides additional examples of some of these risk-related concepts.

Box 2: Examples of AI legislative documents governing risk-related concepts

Governing security: Section 6504, FY 2025 NDAA

(b) Establishment. — Not later than 90 days after the date of the enactment of this Act, the Director of the National Security Agency shall establish an Artificial Intelligence Security Center …

(c) Functions. — The functions of the Artificial Intelligence Security Center shall be as follows:

(1) Developing guidance to prevent or mitigate counter-artificial intelligence techniques.

(2) Promoting secure artificial intelligence adoption practices for managers of national security systems (as defined in section 3552 of title 44, United States Code) and elements of the defense industrial base.

Source: https://agora.eto.tech/instrument/1739

Governing security: cybersecurity: Section 1515, FY 2025 NDAA

(a) In general. — The Secretary of Defense shall carry out a detailed evaluation of the cybersecurity products and services for mobile devices to identify products and services that may improve the cybersecurity of mobile devices used by the Department of Defense, including mitigating the risk to the Department of Defense from cyber attacks against mobile devices.

Source: https://agora.eto.tech/instrument/1742

Governing reliability: Title II, Subtitle B, Research and Development, Competition, and Innovation Act

(g) Testbeds. — In coordination with other Federal agencies as appropriate, the private sector, and institutions of higher education (as such term is defined in section 101 of the Higher Education Act of 1965 (20 U.S.C. 1001)), the Director may establish testbeds, including in virtual environments, to support the development of robust and trustworthy artificial intelligence and machine learning systems, including testbeds that examine the vulnerabilities and conditions that may lead to failure in, malfunction of, or attacks on such systems.

Source: https://agora.eto.tech/instrument/76

Note: Text lightly edited for clarity.

The most common AI harms addressed in legislative documents are harm to health or safety, violation of rights or liberties, and harm to infrastructure. Legislative documents that address harm to health or safety may reference

  • The appropriate use of military applications such as unmanned aerial systems and autonomous weapon systems, which have the ability to inflict physical harm on people
  • The development of guidance to implement human oversight of AI systems
  • The investigation of the impact of AI on the cognitive development and health of children

Legislative documents that address harm to infrastructure also cover military operations and may

  • Require threat assessments of unmanned aerial swarms or cyber operations and the design of methods to counter such threats
  • Propose ways to strengthen the nation's transportation and electric grid infrastructure

Legislative documents that address violations of rights or liberties or detrimental content focus on defending against online harms such as

  • Cyber exploitation
  • Digital content forgery
  • Detrimental deepfakes

A couple of legislative documents that address violations of rights or liberties also focus on developing tools to improve veteran healthcare and investigating Chinese AI businesses that support reeducation camps in Xinjiang. Box 3 contains text that addresses violations of rights or liberties and detrimental content.

Box 3: Examples of AI legislative documents addressing harms

Addressing violations of rights or liberties: Division A, Title V, Section 589F, FY 2021 NDAA

(a) STUDY. — Not later than 150 days after the date of the enactment of this Act, the Secretary of Defense shall complete a study on —

(1) the cyberexploitation of the personal information and accounts of members of the Armed Forces and their families; and

(2) the risks of deceptive online targeting of members and their families.

Source: https://agora.eto.tech/instrument/152

Addressing detrimental content: IOGAN Act

The Director of the National Science Foundation, in consultation with other relevant Federal agencies, shall support merit-reviewed and competitively awarded research on manipulated or synthesized content and information authenticity, which may include —

(1) fundamental research on digital forensic tools or other technologies for verifying the authenticity of information and detection of manipulated or synthesized content, including content generated by generative adversarial networks …

Source: https://agora.eto.tech/instrument/60

Note: Text lightly edited for clarity.

Future Outlook

Based on our sample of 147 legislative documents, we find that over the last five years, Congress has passed legislation to better understand the impact of AI systems and to support AI-related activities and infrastructure, largely within national defense contexts. Congress has placed less emphasis on addressing specific, undesirable consequences of AI or imposing precise requirements on AI systems through enacted legislation. Nevertheless, Congress may decide to enact laws that engage more broadly with the risks and harms AI systems pose as they become increasingly embedded in society. Over time, the aim of legislative documents may shift from understanding and promoting the technology to tailoring governance requirements to different AI-related entities and activities.

If you would like to explore different subsets of AI governance documents in AGORA, you can peruse AGORA's thematic collections, conduct a keyword search for topics that interest you (such as "medical devices"), or apply relevant thematic or metadata tags (such as "interpretability and explainability risk factors") to the full collection of documents.

As always, we're glad to help - visit our support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 🤖
