How the DOD is developing its AI ethics guidance

"Frankly, it is all part of our jobs," the JAIC's head of AI ethics policy told FedScoop in her first interview.
It has been six months since the Department of Defense adopted ethical principles for artificial intelligence. Since then, the department’s Joint AI Center has faced the daunting challenge of taking that conceptual work and scaling it to develop actionable guidance for the rest of the military.

The goal is to give anyone who works in technology development — from contracting officers to software developers — a “shared vocabulary” for building ethics into any DOD work involving AI. What’s at stake, leaders say, is ensuring that the DOD uses the emerging technology in ways that uphold the department’s values while managing potentially huge shifts in the “character” of warfare.

The first step is to agree on a document that turns the principles into clear guidance. In the next six months, JAIC anticipates having a detailed guide in the hands of employees across the entire department. The document will explain the DOD’s five ethics principles — that the use of artificial intelligence should be responsible, equitable, traceable, reliable and governable — and describe how they apply in the development of any AI technology, from backend business applications to lethal weapon systems.

For now, though, the discussions are still happening primarily within the JAIC, through a small policy team working on ethics-training pilot programs. (The DOD’s AI working group, which is composed of employees from across the Pentagon, also has a subcommittee focused on ethics.)

Alka Patel, the center’s head of AI ethics policy, told FedScoop her goal is to make “ethics a natural part of the way we all think.” The center is drawing upon experience from several pilot programs, including one that pulled together 15 people who work in different parts of the AI development lifecycle in the JAIC and gave them detailed training to bring an ethical perspective back to their respective teams.

“The JAIC is a wonderful testbed,” Patel said in her first interview with a news outlet in her new role. “We definitely want it to continue forward.” Now, the center is figuring out how best to do that.

Patel said she is in talks with other DOD components and the military services about scaling the Responsible AI Champions program.

Confronting competitors

One of the ways JAIC officials think the DOD can lead in the ethical application of AI is through strong testing and evaluation, said Patel and Jane Pinelis, the center’s head of testing and evaluation. JAIC officials have predicted that the Pentagon will be a global leader in the testing and evaluation of AI due to the seriousness of its initiatives.

The department has to contend with foreign adversaries that may not be as willing to put ethical restraints on their own use of AI in military operations, or to properly test whatever restraints they do have. A global escalation of autonomy in lethal weapons is a top concern for Patel and others in the center, she said.

That pressure pulls the JAIC in different directions — between the need to move fast and the need to be thorough in testing.

“Ethics is not meant to be an obstacle or slow things down or create extra hurdles,” Patel said, but it might seem that way if U.S. competitors and adversaries are putting fewer constraints on their development of AI for warfare.

The JAIC’s testing strategy has multiple angles, including evaluating AI technology against preset requirements, checking its integration with existing systems, examining the human-machine relationship and direct operational testing.

“We have tried to build ethics into every piece of the test and evaluation process,” said Pinelis, who participated in the pilot cohort for the Responsible AI Champions program. Few programs in the JAIC have reached the level of maturity needed to run through all the different types of testing, she added.

Using humans and machines to test

The JAIC will look to both humans and digital tools to build ethics into AI, according to officials. Tools that could quantify how well a model fits a problem set, or “remind the human” that a model is not suited for certain inputs, would help augment users’ judgment, Pinelis said.
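As a rough illustration of what such a “remind the human” tool might look like, consider the minimal Python sketch below, which flags inputs that fall far outside the distribution of a model’s training data. The class, threshold, and data here are hypothetical, chosen for illustration; they are not drawn from any JAIC system.

```python
# Hypothetical sketch: a guardrail that warns a human operator when an
# input looks unlike the data a model was trained on. All names here
# are illustrative, not from any actual DOD or JAIC tool.
import numpy as np

class InputDistributionGuard:
    """Flags inputs that fall far outside the training distribution."""

    def __init__(self, training_data: np.ndarray, z_threshold: float = 3.0):
        # Record per-feature mean and standard deviation of the training set.
        self.mean = training_data.mean(axis=0)
        self.std = training_data.std(axis=0) + 1e-9  # avoid division by zero
        self.z_threshold = z_threshold

    def check(self, x: np.ndarray) -> list[str]:
        """Return human-readable warnings; an empty list means no flags."""
        z_scores = np.abs((x - self.mean) / self.std)
        return [
            f"Feature {i} is {z:.1f} standard deviations from training data"
            for i, z in enumerate(z_scores)
            if z > self.z_threshold
        ]

# Usage: surface a warning to the operator rather than silently predicting.
train = np.random.default_rng(0).normal(size=(1000, 4))
guard = InputDistributionGuard(train)
warnings = guard.check(np.array([0.1, -0.2, 8.5, 0.3]))
if warnings:
    print("Model may not be suited for this input:")
    for w in warnings:
        print(" -", w)
```

The design choice matters: the tool does not block the model or make the call itself; it only quantifies how unusual the input is and leaves the decision to the human, which is consistent with how officials describe the role of such tools.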

JAIC acting Director Nand Mulchandani has stressed he wants to see more testing tools come from the private sector.

“A lot of AI testing is being done manually,” he said last week. There are “not enough tools and products for testing.”

Despite the desire for tools to augment human judgment on when, how and where to ethically use AI, Patel said that inside the JAIC, ethical decisions will still rest in the hands of human operators. Even with tools that can show where bias exists, or how well a model fits a problem set, the final go or no-go decision comes down to the humans in the loop.

“I think about how do we have everyone think about ethics across the organization, and not think perhaps it is just the technologist’s job to address it,” Patel said. “Frankly, it is all part of our jobs.”