
NRC report pushes nuclear sector on monitoring and boundaries for AI use

The U.S., U.K. and Canadian nuclear regulatory agencies said in a report that the industry should focus on monitoring, securing and modularizing AI systems.

The nuclear energy industry must prioritize continuous monitoring of systems and strong boundaries for artificial intelligence tools as the technology’s use grows, the sector’s U.S. federal regulator and its U.K. and Canadian counterparts said in a report released this week.

The Nuclear Regulatory Commission, along with the United Kingdom’s Office for Nuclear Regulation and the Canadian Nuclear Safety Commission, outlined in the document potential requirements for AI use and governance in the nuclear sector. Top priorities included continuously monitoring systems to ensure security and integrity, establishing system boundaries that define the scope of an AI system, and taking a modular approach to AI in nuclear applications, according to the report.

“The rapid pace of recent AI development is somewhat antithetical to the slow and methodical change process that the nuclear industry traditionally follows,” the report states. “Nevertheless, the primary goal for the nuclear industry and regulators with respect to AI systems will be maintaining adequate safety and security while benefiting from their deployment.”

While the agencies recognize that AI has the potential to benefit the sector, the report focuses in part on the safety and security of its use, stating that an AI system is difficult to trust to perform a function with any level of integrity because “no method exists to quantify the failure probability of an AI component within a system.”


According to the report, safe and consistent operation of AI systems can be achieved by implementing boundaries, such as limiting data availability to optimize resources, constraining software inputs and outputs through conventional control systems so that trust rests on the wider architecture, and deploying “diverse, redundant and isolated systems” to minimize unintended actions.
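To illustrate the kind of boundary the report describes, here is a minimal Python sketch, not drawn from the report itself, in which a conventional, deterministic check bounds what a hypothetical AI component’s output is allowed to do; the limits, function names and values are all assumptions for illustration only:

```python
# Illustrative sketch (not from the report): wrapping a hypothetical AI advisory
# output with conventional, deterministic bounds so that trust rests on the
# surrounding architecture rather than on the model itself.

from dataclasses import dataclass


@dataclass
class OperatingLimits:
    """Fixed limits enforced by conventional (non-AI) logic."""
    min_setpoint: float
    max_setpoint: float


def bounded_recommendation(model_output: float, limits: OperatingLimits,
                           current_setpoint: float) -> float:
    """Accept the AI suggestion only if it stays within conventional limits;
    otherwise fall back to the current, conventionally controlled setpoint."""
    if limits.min_setpoint <= model_output <= limits.max_setpoint:
        return model_output
    return current_setpoint  # reject out-of-bounds output, keep the known-safe value


# Example: an out-of-range suggestion is discarded in favor of the current value.
limits = OperatingLimits(min_setpoint=0.0, max_setpoint=100.0)
print(bounded_recommendation(model_output=250.0, limits=limits, current_setpoint=75.0))  # -> 75.0
```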

To prevent failures and maintain system integrity while also detecting potential anomalies, the nuclear sector would benefit from continuous monitoring of AI systems, according to the report. Such monitoring includes applying anomaly detection algorithms, mechanisms for detecting and alerting on potential adversarial attacks, and tracking metrics like model accuracy.
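As a rough illustration of that kind of continuous monitoring, the Python sketch below tracks a model-accuracy metric with a simple rolling z-score check; the metric stream, window size and alert threshold are assumptions for illustration, not details taken from the report:

```python
# Illustrative sketch (not from the report): continuously tracking a model
# accuracy metric and flagging readings that deviate sharply from recent history.

from collections import deque
from statistics import mean, stdev


class AccuracyMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling window of recent accuracy readings
        self.z_threshold = z_threshold

    def observe(self, accuracy: float) -> bool:
        """Record a new accuracy reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # require some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(accuracy - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(accuracy)
        return anomalous


monitor = AccuracyMonitor()
for reading in [0.95, 0.94, 0.96, 0.95, 0.94, 0.95, 0.96, 0.95, 0.94, 0.95, 0.70]:
    if monitor.observe(reading):
        print(f"Alert: accuracy {reading:.2f} deviates from recent history")
```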

The agencies agreed that operating AI at a modular level, or dividing a system into “smaller, independent modules,” each with a “well-defined function,” could be more beneficial than monolithic AI. This would allow any issues or errors to be isolated to the problematic module, the report states, without adversely affecting the larger functionality of the operational system.
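The hypothetical Python sketch below shows one way that kind of modular isolation can work in practice; the module names, readings and pipeline are invented for illustration and do not come from the report:

```python
# Illustrative sketch (not from the report): composing small, independent modules
# with well-defined functions so a failure is isolated to the module that raised it
# rather than taking down a monolithic pipeline.

from typing import Optional


def sensor_validation(raw: dict) -> dict:
    """Check raw sensor readings against plausible physical ranges."""
    if not 0 <= raw.get("temperature_c", -1) <= 400:
        raise ValueError("implausible temperature reading")
    return raw


def anomaly_scoring(validated: dict) -> float:
    """Assign a simple anomaly score to validated readings."""
    return abs(validated["temperature_c"] - 300) / 300


def run_pipeline(raw: dict) -> Optional[float]:
    """Run modules in sequence, confining any failure to the module that raised it."""
    try:
        validated = sensor_validation(raw)
    except ValueError as exc:
        print(f"sensor_validation module failed in isolation: {exc}")
        return None  # downstream modules never receive bad data
    return anomaly_scoring(validated)


print(run_pipeline({"temperature_c": 310}))   # normal reading -> small anomaly score
print(run_pipeline({"temperature_c": 9999}))  # failure isolated to the validation module
```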


Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.
