
DOE launches AI testbed to evaluate models for energy operations

Lawrence Livermore National Lab and the Energy Department’s CESER office designed the platform to assess vulnerabilities and risks.
The U.S. Department of Energy building is seen behind a sign marking the location of the agency's headquarters on March 18, 2025, in Washington, D.C. (Photo by J. David Ake/Getty Images)

The Department of Energy’s Office of Cybersecurity, Energy Security and Emergency Response has partnered with Lawrence Livermore National Laboratory to develop an AI testbed capable of identifying model weaknesses, the agency said in a blog post Tuesday. 

Energy-sector stakeholders, including utilities, grid operators, vendors, national labs and research organizations, can use the platform to better understand model risk and how to integrate AI into critical systems. 

Users will upload AI models to the platform and perform adversarial tests to assess security posture. 

“The testbed enables users to observe the effects of attacks and quantify how vulnerable the model is to manipulation and leaked information,” DOE said in the blog post. “This facilitates apples-to-apples comparisons between models, showing users which model options are most robust and by what margin.”


Named after the Norse god Thor’s hammer, the Mjölnir AI Testbed will give energy-sector players a look at whether an AI model behaves unsafely or exposes sensitive data at a time when AI models are becoming more integrated into critical workflows. 

The technology is a high-value target for cyberattacks, underlining the need for resilient models. Anthropic, for example, says that its models have been targeted by Chinese competitors in attempts to steal information about how the technology works. 

“As AI systems handle increasingly sensitive data and perform critical societal functions, failures in AI security could result in severe consequences, including privacy violations, operational disruptions, economic damages, and threats to public safety,” researchers from the Japan AI Safety Institute said in a July 2025 report.

Even when not targeted directly, AI systems are subject to the downstream impacts of attacks elsewhere in the supply chain. OpenAI had a near miss this month following a widespread supply-chain attack that infected a popular open-source library, CyberScoop reported Monday. 

Threat actors are also expected to lean on AI to create more efficient exploit attempts at higher volumes. 


Amid the high stakes, the Mjölnir AI Testbed offers energy-sector players a way to test the resilience of their AI models as they work to comply with presidential provisions outlined in the AI Action Plan and Genesis Mission.


Written by Lindsey Wilkinson

Lindsey Wilkinson is a reporter for FedScoop in Washington, D.C., covering government IT with a focus on DHS, DOT, DOE and several other agencies. Before joining Scoop News Group, Lindsey closely covered the rise of generative AI in enterprises, exploring the evolution of AI governance and risk mitigation efforts. She has had bylines at CIO Dive, Homeland Security Today, The Crimson White and Alice magazine.
