The state of global AI safety research

A stylized image of a map and compass.


Insights from ETO's Research Almanac and Map of Science

Over the past several months, we've been busy improving and updating the research data that powers many of our tools, such as the Map of Science and Research Almanac. Today, we're launching a new series featuring topic-by-topic insights from the data, beginning with the increasingly high-profile field of AI safety research.

Key findings

  • AI safety research is growing fast, but is still a drop in the bucket of AI research overall.
  • American schools and companies lead the field, with Chinese organizations less prevalent than in other AI-related research domains.
  • Notable clusters of AI safety research from ETO's Map of Science covered themes including data poisoning, algorithmic fairness, explainable machine learning, gender bias and out-of-distribution detection.
  • According to the latest estimates from the Research Almanac, about 30,000 AI safety-related articles were released between 2017 and 2022. (This total, and the other Research Almanac-derived findings in this post, are based on articles with English titles or abstracts in our Merged Academic Corpus; they omit articles published solely in Chinese and non-public research. For further details and caveats, see the Almanac documentation.)
  • AI safety research grew 315% between 2017 and 2022.
  • Despite this rapid growth, we estimate AI safety research comprises only 2% of all research into AI.
  • Pound for pound, AI safety research is highly cited: the average AI safety-related research article has been cited 33 times, compared with 16 times for the average article across all AI fields.
  • 40% of the AI safety-related articles in the Research Almanac dataset had American authors. 12% had Chinese authors, and 19% had European authors. (Note that some articles lack information about author nationality, and articles without English titles or abstracts are omitted, which could affect the numbers for Chinese authors.)
  • Looking only at highly cited articles, America continues to lead in research production. 58% of top-cited AI safety articles (defined as the 10% of articles in each publication year with the most citations) had American authors, compared to 20% with Chinese authors and 15% with European authors.
  • Relative to their U.S. counterparts, Chinese authors tend to be less prevalent in AI safety research than in AI research overall or in other AI-related subfields (in all cases, counting only research articles with English titles or abstracts). That said, China still claims the number two spot overall, and AI safety research is a much smaller "slice of the pie" for both the United States and China.
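The "top-cited" cutoff used above (the 10% of articles in each publication year with the most citations) is straightforward to compute. Here is a minimal sketch of that kind of per-year top-decile selection; the `top_cited` function and the synthetic sample data are our own illustration, not ETO's actual pipeline or data.

```python
def top_cited(articles, fraction=0.10):
    """Return the top `fraction` of articles by citation count,
    computed separately within each publication year."""
    # Group articles by publication year.
    by_year = {}
    for art in articles:
        by_year.setdefault(art["year"], []).append(art)

    selected = []
    for year, group in by_year.items():
        # Rank each year's articles by citations, most-cited first.
        group.sort(key=lambda a: a["citations"], reverse=True)
        # Keep the top slice; always keep at least one article per year.
        k = max(1, round(len(group) * fraction))
        selected.extend(group[:k])
    return selected

# Synthetic example: 20 articles split across two publication years.
articles = [
    {"id": i, "year": 2017 + (i % 2), "citations": i * 3}
    for i in range(20)
]
top = top_cited(articles)  # one top-decile article per year here
```

Ranking within each publication year, rather than across the whole corpus, matters because older articles have had more time to accumulate citations; a global cutoff would systematically favor them.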

👉 To view the next five leading countries in AI safety research and see how authorship trends have evolved over time for all countries, visit the "Countries" section in the Research Almanac.

Top organizations

  • The biggest producers of AI safety-related articles include several American universities well known for strength in artificial intelligence, such as Carnegie Mellon, MIT, and Stanford. U.S. and Chinese organizations round out the top ten.
  • When only highly cited articles are counted, Google rises to the top of the table, followed by Stanford and MIT.

👉 To view the top ten companies active in AI safety research, visit the "Patents and industry" section in the Research Almanac.

Top research clusters

Using the Map of Science's recently revamped subject search along with other filters, we developed a query to identify especially prominent, fast-growing clusters of research within the broader field of AI safety. Highlights from the list included clusters focused on data poisoning, algorithmic fairness, explainable machine learning, gender bias, and out-of-distribution detection.

For more insight into the fast-growing field of AI safety, visit its subject page in the Research Almanac or explore AI safety research with the Map of Science subject search. As always, we're glad to help: visit our support hub to contact us, book live support with an ETO staff member, or access the latest documentation for our tools and data. 🤖
