
With ‘AI Jam,’ Anthropic, OpenAI pursue work at US national labs

Scientists from the country’s national labs will experiment with AI tools from the private sector.

Through a new project, about a thousand government scientists will analyze AI models from companies like OpenAI and Anthropic.

The initiative, called a “Scientist AI Jam,” will focus on these chatbots’ scientific and research capabilities.

The announcement isn’t surprising. The national laboratories, some of which house the world’s fastest supercomputers, are increasingly interested in both studying and deploying large language models. While they’ve developed AI technology of their own, federal researchers are also focused on probing other generative AI systems — and building relationships with the private sector.  

In a similar vein, the new project is supposed to focus on “real-world research problems” and study whether these systems can help cut down the time scientists spend on research tasks.


“Together, we organized a ‘1,000 Scientist AI Jam Session’ — a first-of-its-kind event taking place today across nine national labs, bringing together over 1,000 scientists for a day to use AI to accelerate scientific discovery,” OpenAI said in a blog post.

Anthropic said in its press release that “scientists from multiple laboratories will explore Claude’s capabilities across a range of scientific tasks — from problem understanding and literature search to hypothesis generation, experiment planning, code generation, and result analysis.”

The company continued: “These scientists will test Claude’s abilities using real-world research problems from their respective domains. This testing offers a more authentic assessment of AI’s potential to manage the complexities and nuances of scientific inquiry, as well as evaluate AI’s ability to solve complex scientific challenges that typically require significant time and resources.” 

The project builds on Anthropic’s previous work with the Energy Department’s National Nuclear Security Administration, which focused on security testing and sensitive information. The ability of AI systems to generate and disseminate nuclear information remains a major concern for federal officials. Anthropic has also submitted its technology for evaluation by the U.S. and U.K. AI Safety Institutes.

Last month, OpenAI announced that it had partnered with the national laboratory system to help boost scientific research.


Anthropic emphasized that focusing on Claude’s scientific and research capabilities will help boost American competitiveness, an argument that the Trump administration, as well as other large AI players, has echoed.

This story was updated Feb. 28, 2025, to note that OpenAI and other companies are also part of the program.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
