Former DOD No. 2 on AI: ‘We should be a lot further along than we are’

"I can't shake the nagging feeling...that we should be a lot further along than we are and we're losing ground to our competitors," former Deputy Secretary of Defense Bob Work said.
Deputy Secretary of Defense Bob Work speaks during The Department of Defense Combined Federal Campaign of the National Capital Area Awards Ceremony at the Pentagon, Jan. 29, 2015. (Photo by Master Sgt. Adrian Cadiz)

The Pentagon has recently made a huge push into adopting operationalized artificial intelligence for military applications. But according to former Deputy Secretary of Defense Bob Work, the U.S. military is in grave danger of falling behind the Chinese in the race to develop AI and losing its competitive advantage if it doesn’t go all-in.

“I can’t shake the nagging feeling … that we should be a lot further along than we are and we’re losing ground to our competitors,” said Work, now a distinguished senior fellow for defense and national security at the bipartisan Center for a New American Security.

Responsible for the Third Offset Strategy during his time as deputy secretary of Defense, Work helped establish the Pentagon’s algorithmic warfare efforts that now serve as a precursor for the Joint AI Center, launched last summer to lead the military’s AI efforts. Earlier this year, he was named co-chair of the National Security Commission on Artificial Intelligence.

“If we’re going to succeed against a competitor like China that is all-in in this competition — I mean they are all in, from the top leadership down to the commanders in the field — we’re going to have to grasp the inevitability of AI and adapt our own innovation culture and behavior so that AI has a chance to take hold,” he said Wednesday at AFCEA’s Artificial Intelligence and Machine Learning Summit.

Specifically, Work said the DOD needs “small plays” going on departmentwide — “substantial, sustained experimentation using these technologies, widespread applications being applied by the services across all operating domains.”

“We will not be able, in my view, to defeat China in this competition, unless we change the way we’re going after this in a broad way,” he said.

In a large sense, that’s the point of the JAIC: to help DOD, from the departmental view, wrap its arms around the hundreds of AI projects ongoing around the military. Lt. Gen. Jack Shanahan, who heads the JAIC, agreed Wednesday that “we have to move faster, to do better, that’s what we’re really trying to do right now.”

“If we project 20 years into the future, and we’re on the cusp of a major conflict with a peer competitor, if at that point we have a truly AI-enabled DOD force, that by itself will not imply that we will win the conflict,” Shanahan said in a separate keynote. “If we don’t have a fully AI-enabled force, we will incur an unacceptably high risk of losing. That’s how important this is to our national security.”

Talking ethics

As they often do, the conversations Wednesday on AI found their way back to the ethical implications of the technology being used in connection with lethality.

Work and Shanahan both agreed that it’s healthy to have a dialogue about ethics — but they also pointed to misconceptions about the U.S. military’s ethical use of technology in general.

“I would argue that the United States military is the most ethical military force in the history of warfare, and we think the shift to AI-enabled weapons will continue this trend,” Work said. The existing policy, which predates any of this current work, he said, “is very clear that these weapons have to be consistent with the laws of armed conflict, supporting the principles of distinction and proportionality, and it has been DOD policy since 2012, three years before the Third Offset, that every weapon we field must be designed and deployed to allow commanders and operators to exercise appropriate levels of human judgment in the lethal application of force.”

Shanahan explained there “are grave misperceptions about what DOD is actually working on” with AI.

“In my experience, in the last two years, what I’ve found is there’s the assumption in some corners that the DOD in a back laboratory somewhere in a basement of a building has got a free-will AGI, artificial general intelligence, that’s going to roam indiscriminately across the battlefield,” he said. “We do not.”

Instead, he explained, DOD is looking to adopt applications of artificial narrow intelligence — “it’s for specific problems, and just like every other technology we ever work with in the department, from the beginning we take into account this question of what is the technology meant to be used for? What are the ethical, safety and law implications of using that technology?”

And with those applications, the DOD is looking at “minimizing risk of collateral damage, civilian casualties … minimizing the potential for blue-on-blue [attacks], it’s about how we use this to do better at our business of warfighting operations,” Shanahan said.

He said it’s up to the DOD to dispel the misconception that it will use AI otherwise.

“We will, as we have done with every technology in the history of the Department of Defense, take into account law of war, laws of armed conflict, international humanitarian law, rules of engagement, special instructions, and maybe, finally at the end of that, most important, the commander’s judgment,” Shanahan said. “Accountability and transparency matter. But somehow that conversation has gotten off track. … We know there’s work to do to continue a healthy dialogue about what our value system is, how we do adhere to international norms and how some of our potential adversaries are likely not. We are the good guys — at least we believe that.”

In addition to existing policy, the Defense Innovation Board is currently developing principles for DOD’s ethical use of AI.