
Energy Department will draft its own ethical AI principles

The civilian agency follows on the heels of the Department of Defense and the intelligence community.
The Department of Energy's Cheryl Ingstad. (Cheryl Ingstad / Twitter)

The Department of Energy intends to draft its own set of ethical AI principles that will regulate how it develops, deploys and shares the technology, said the director of the Artificial Intelligence & Technology Office.

AITO has been looking at separate ethical AI principles released by the Department of Defense and the intelligence community for inspiration, Cheryl Ingstad said during the Microsoft Federal Science & Research Summit on Tuesday. Energy’s national security portfolio includes nuclear weapons programs and research by the National Laboratories system for the DOD.

As a new office, AITO will play a critical role in addressing AI across the department while establishing ethical AI processes that can also help combat adversarial AI, Ingstad said.

“I think about our data, and I think about our algorithms,” Ingstad said. “Have we really reviewed the data and made sure that we understand the provenance of these data, and do we have a way to monitor changes to these data?”


Ingstad wants DOE to establish an immutable record of any changes made to its data and who approved them. She also wants a tool that can examine AI algorithms for bias and that has itself been tested to ensure its own training was free of bias.
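One way to realize such an immutable change record is an append-only, hash-chained log in which each entry captures the dataset's content hash, the approver and a link to the previous entry, so any retroactive edit breaks the chain. The sketch below is illustrative only; the class and field names are assumptions, not DOE's actual design.

```python
import hashlib
import json
import time


def _digest(payload: str) -> str:
    """SHA-256 hex digest of a string."""
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


class ProvenanceLog:
    """Append-only, hash-chained record of dataset changes and approvals (illustrative)."""

    def __init__(self):
        self.entries = []

    def record_change(self, dataset_id: str, data_bytes: bytes, approved_by: str) -> dict:
        """Append an entry linking the new data hash to the previous entry."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "GENESIS"
        entry = {
            "dataset_id": dataset_id,
            "data_hash": hashlib.sha256(data_bytes).hexdigest(),
            "approved_by": approved_by,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = _digest(json.dumps(entry, sort_keys=True))
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Return True only if no entry has been altered or removed after the fact."""
        prev_hash = "GENESIS"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if _digest(json.dumps(body, sort_keys=True)) != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True
```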

Such a tool would serve a twofold purpose: enforcing ethical AI principles and promoting good cyber hygiene.

“This can address some of the problems that we encounter from adversarial AI, where an adversary is actively planting an attack inside our data or in our algorithm or somehow has corrupted the data,” Ingstad said.
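Continuing the hypothetical sketch above, a corrupted or poisoned dataset can be flagged by comparing its current hash against the last approved hash in the log, while the chain check guards the log itself. This only illustrates the concept, not any specific DOE tooling.

```python
import hashlib

# Uses the illustrative ProvenanceLog class from the earlier sketch.
log = ProvenanceLog()
log.record_change("training-set-v1", b"original training data", approved_by="data-steward")

# Simulate data an adversary has tampered with after approval.
tampered = b"training data with poisoned samples"
latest = log.entries[-1]
if hashlib.sha256(tampered).hexdigest() != latest["data_hash"]:
    print("Data no longer matches the last approved version -- review before training.")

assert log.verify_chain()  # the approval log itself has not been altered
```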

Increasing reliance on AI has opened federal agencies up to new lines of attack, in which state and non-state actors attempt to impede or confuse such systems, or manipulate them into releasing sensitive information.

While adversarial AI overlaps with cybersecurity, it remains less understood.


“Adversarial AI is an area where we need to do a lot of innovation around it and create new principles and new processes and methodologies to address it,” Ingstad said.

DOD officially adopted ethical AI principles in February with a focus on how the military will retain full control over, and understanding of, how machines make decisions. The process took more than four months, and since then the Joint AI Center has been working to make the conceptual guidance actionable with a shared vocabulary.

Meanwhile, the Office of the Director of National Intelligence issued its own ethical AI guidance in July, complete with a framework for determining how and when AI applications should be employed. The principles are consistent with DOD's.
