Energy Department would host AI risk program under Senate bill

As President Donald Trump veers away from the safety-centered approach to artificial intelligence favored by his predecessor, a bipartisan pair of senators is pitching a new Department of Energy AI testing program to assess risks posed by the technology.
Introduced Monday by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., the Artificial Intelligence Risk Evaluation Act would establish a risk evaluation program at DOE to track AI safety concerns tied to national security, civil liberties and labor protections.
The legislation would require AI developers to submit product information to the agency before deploying any new technology.
“As Big Tech companies continue to develop new generations of artificial intelligence, the wide-ranging risks of their technology continue to grow unchecked and underreported. Simply stated, Congress must not allow our national security, civil liberties, and labor protections to take a back seat to AI,” Hawley said in a press release. “This bipartisan legislation would guarantee common-sense testing and oversight of the most advanced AI systems, so Congress and the American people can be better informed about potential risks.”
The bill, first reported by Axios, would charge the new DOE Advanced Artificial Intelligence Evaluation Program with examining advanced AI systems and collecting data on “adverse AI incidents” — defined as loss-of-control scenarios, risk of weaponization by foreign adversaries, threats to critical infrastructure, scheming behavior or “a significant erosion of civil liberties, economic competition, and healthy labor markets.”
Developers of advanced AI systems would be required to take part in the DOE program and provide information about those systems if requested by the agency. Such systems would be sidelined until the developer has demonstrated that its AI product is in compliance with the program.
Some of the information AI developers may be asked to provide includes the underlying code of the AI system, what data was used to train the system, model weights or other adjustable parameters, and details on training, model architecture and other aspects of the system.
Finally, the bill calls on the Energy secretary to deliver a yearly report to Congress, detailing plans for federal oversight of advanced AI systems informed by the results from the DOE program.
“AI companies have rushed to market with products that are unsafe for the public and often lack basic due diligence and testing,” Blumenthal said in the press release. “Our legislation would ensure that a federal entity is on the lookout, scrutinizing these AI models for threats to infrastructure, labor markets, and civil liberties — conducting vital research and providing the public with the information necessary to benefit from AI promises, while avoiding many of its pitfalls.”
The bill earned plaudits from Americans for Responsible Innovation. The nonprofit’s president, Brad Carson, called the legislation “a welcome show of bipartisan support for creating rules of the road to protect the public.”
“If the innovators in Silicon Valley are right about what they’re building, and AI has the capacity to supersede human intelligence, we need Congress to get serious about federal oversight that safeguards our national security, our families, and our workforce,” Carson said in a statement. “Sens. Hawley and Blumenthal’s new bill moves the debate forward with a serious attempt at creating transparency, accountability, and guardrails for AI developers at the highest level.”
Unveiled in July, the Trump administration’s AI Action Plan emphasized innovation and the elimination of barriers to AI development over the guardrails and safety measures emblematic of the Biden administration’s general philosophy on the technology.
From an energy perspective, that plan sought to expedite permitting for a nationwide buildout of AI data centers, expanding energy capacity and cutting clean air and water regulations along the way.
This story was updated Sept. 30, 2025, with comments from ARI’s Carson.