Department of Defense AI ethics principles still lack implementation guidance
The Department of Defense will produce guidance for its artificial intelligence ethical principles by late August, six months after an initial self-directed deadline for the creation of the guidance.
Officials had said that by February 2021 the DOD would detail how the bureaucracy should implement its five AI principles, which state that the technology should be responsible, equitable, traceable, reliable and governable. But that date has come and gone without any document explaining how offices should translate the principles into their daily work.
The principles were adopted by the department in February 2020 and were designed as a starting point for the DOD’s approach to building and using AI ethically. The new deadline for a draft of implementation guidance was mandated by a May memo from Deputy Secretary Kathleen Hicks which reiterated the department’s commitment to building “responsible AI.”
Alka Patel, head of responsible AI at the Joint AI Center, told FedScoop in September that the guidance was meant to give DOD offices working on AI a “shared vocabulary” for understanding and working with the principles. She said the guidance would be a critical part of turning the conceptual framework into rules to live by.
“We recognize the urgency around this work,” Patel said during a press conference Thursday. “We are making progress,” she and other officials added.
When the principles were first adopted, Lt. Gen. Jack Shanahan, the then-director of the Joint AI Center, said implementing them would be the hard part.
“Implementing the AI ethics principles will be hard work. The Department’s efforts over the next year will shape the DOD’s future with AI,” Shanahan said when the principles were adopted in February 2020.
But implementation goes beyond any one document, said Paul Scharre, vice president and director of studies at the Center for a New American Security, who focuses on autonomy and AI in warfare.
“The DOD runs on process,” he said. For something as novel and diverse as AI, “it looks like a more diffuse set of policies, procedures, offices, organizational knowledge.”
Patel has pointed to that diffuse set of knowledge in previous interviews, saying that AI ethics “is all part of our jobs” and that the guidance would support that work by building a shared vocabulary for AI ethics.
The May memo from Hicks reaffirms the DOD’s commitment to responsible AI. It tasks the JAIC with leading the development of responsible AI policy through working groups and adds more high-level tenets to how the department will approach building AI.
The JAIC is not the only group working on AI ethics. There is a responsible AI subcommittee of the DOD’s AI steering group, which meets monthly, and an international program for military-to-military collaboration on AI among 16 partner nations. There has also been progress in developing test and evaluation programs to assess the reliability of AI systems the DOD is working on.
The principles have also been written into contracting, with department solicitations asking industry how it would apply the principles in its work.
Scharre said the DOD is not the only institution struggling with how to implement AI ethics. Such a new set of technologies requires new procedures to put ethical frameworks into practice, he said.
“It’s not like the best AI researchers in the world don’t have this problem to solve,” he said.