Former national security leaders want DOD to work more collaboratively on AI testing

In a new report, the Pentagon's former No. 3 Michèle Flournoy and other national security experts describe the need to ensure proper AI testing and evaluation in the DOD.
Under Secretary of Defense for Policy, Michele Flournoy, smiles at the enthusiasm of Soldiers, as she and U.S. Army Lt. Col. William Lindner, commander, 3rd Squadron, 7th Cavalry, 2nd Heavy Brigade Combat Team, 3rd Infantry Division, watch U.S. and Iraqi security forces conduct combined checkpoint training on Forward Operating Base Marez, near Mosul, Iraq, Jan. 9, 2010.

The Pentagon’s former No. 3 official, Michèle Flournoy, and other national security experts want the Department of Defense to establish a more collaborative approach to testing and evaluating artificial intelligence.

In a new report, Flournoy and her co-authors stress the need for robust methods of testing, evaluation, validation and verification of AI that can move through the development pipeline fast enough to achieve the department’s ambitious AI goals. And Flournoy’s recommendations carry special weight, as she has been floated by some as a frontrunner for the top job in the Pentagon if Joe Biden is elected president in November.

The report lays out new team structures for testing, ways Congress can help fund the DOD’s purchase of AI testing tools and other new approaches to speed up testing in a way that improves reliability. Avril Haines, a White House deputy national security adviser in the Obama administration, and Gabrielle Chefitz, a senior associate with Flournoy’s strategic advisory firm WestExec Advisors, co-authored the report.

Without trust in the AI systems the DOD builds, they won’t be useful, Flournoy said during a virtual event promoting the report, hosted by the Center for Security and Emerging Technology. “This is going to require much greater coordination across the entire [testing, evaluation, validation and verification] ecosystem,” she said.


Flournoy, who served as undersecretary of defense for policy in the Obama administration, and her co-authors suggest creating a cross-functional team that would report to the Office of the Deputy Secretary of Defense, pulling members from across the services and secretariats for a common purpose and allowing for greater flexibility and collaboration. The majority of the team’s work would center on testing and evaluation research, but it would also assess specific models.

“You really can’t have a one-size-fits-all approach in this area,” Flournoy said.

The report also floats the idea that Congress should grant the DOD new budgetary authorities for buying AI testing and evaluation tools, studies and other needed elements from the private sector. Current regulations do not allow for the needed flexibility, according to the report.

The Pentagon’s Joint AI Center has been working on new solutions to many of the challenges highlighted in the report, and some of its officials were interviewed for the report’s production.

Inside the DOD, there is a fear that testing could become a “bottleneck” to AI progress, Jane Pinelis, the JAIC’s head of testing and evaluation, said Tuesday. Flournoy and others echoed the concern, saying that if the DOD doesn’t balance the need for thorough testing with the need to field AI quickly, it risks losing its technical advantage to adversaries.
