Energy Department offloads some emergency operations to AI

The technology was “used in place of a large team that would be required for coding and development of emergency services facilitation,” according to DOE’s AI inventory.
(Photo: An entrance to the James Forrestal Building, the headquarters of the Department of Energy, in Washington, D.C., on June 22, 2022.)

The Department of Energy began offloading some emergency operations to AI last year, according to the agency’s inventory of use cases posted last week. 

Using natural language processing, the Energy Department is tapping AI from Dataminr to provide information about incidents occurring around sites of the National Nuclear Security Administration and other DOE locations. The tool is said to speed up emergency operations reporting and could save the federal government money, according to the inventory.

“The AI is being used in place of a large team that would be required for coding and development of emergency services facilitation,” the agency said in its inventory, referring to the use case that was first deployed in January 2025. 

The Department of Energy is leaning into three explicitly stated priorities of the current Trump administration: speed, efficiency and AI. But as federal agencies ramp up adoption and hand over to technology tasks previously completed by humans, some experts are skeptical that the efforts are wholly positive.


“Using AI to replace human expertise in emergency services can improve efficiency but also introduces serious risks like unclear decision-making, data bias and weakened oversight,” said Dia Adams, board chair at Washington, D.C.-based think tank The AI Table. Adams emphasized the need for adequate governance. 

“Guardrails such as keeping humans in the loop, requiring mandatory transparent audit trails, and stress-testing models under rare conditions are essential for ensuring public safety,” Adams said. 

The Energy Department did not include any information in the inventory regarding its risk-mitigation practices for the AI-powered emergency operations workflow. The agency also didn’t respond to a request for comment prior to publication. 

Use cases like the one at DOE, in which AI is substituted for human labor, are becoming more common, according to Stephen Weymouth, professor and faculty affiliate with the AI, Analytics, and the Future of Work Initiative at Georgetown University’s McDonough School of Business.

“It appears that this use case substitutes for federal employees building and staffing a comparable in-house system,” Weymouth said in an email to FedScoop. “No matter who develops it, the system is designed to augment human expertise, not necessarily replace it — [though] there are risks.”


Weymouth warned of the federal government becoming too reliant on an automated system before its efficacy is established. Leaning on vendors can also introduce unforeseen security and privacy vulnerabilities, Weymouth said, citing the public-safety concerns associated with both pitfalls.

Lawmakers have developed legislation that, if passed, would provide improved insight into the intersection of AI adoption and the workforce. The bipartisan AI Workforce Prepare Act, introduced in December, for example, would lay the groundwork for modernized AI-related labor market data.

Legislators have also proposed bills that aim to protect those potentially impacted, such as the Workforce of the Future Act introduced in December by a trio of Senate Democrats that would kickstart related research if passed.  

The scramble for better data and worker protections comes amid an unrelenting push for AI innovation, encapsulated best by the launch of the Energy Department-led Genesis Mission.

The concern for agencies, experts said, is that leaders might sacrifice safety standards for speed. There’s a delicate balance to be struck in the trade-off, according to Jason Hausenloy, policy lead at the San Francisco-based nonprofit Center for AI Safety.


“Having more AI automation can dramatically increase government efficiency … [but] you want to be very careful if you’re deferring decisions to these AI models,” Hausenloy said. “If we defer more and more decisions to the AI, then this, in fact, gives an unjust amount of power to the AI corporations that decide the values of these systems.”