Pentagon unveils strategy for military adoption of artificial intelligence

The strategy emphasizes the "urgency, scale, and unity of effort" needed to make AI transformational both for the DOD and the nation at an operational level.
DOD Chief Information Officer Dana Deasy and Director of the Joint Artificial Intelligence Center U.S. Air Force Lt. Gen. Jack Shanahan hold a roundtable meeting at the Pentagon on Feb. 12, 2019. (DOD / U.S. Army Sgt. Amber I. Smith)

The Department of Defense issued an unclassified summary Tuesday of its strategy for the accelerated adoption of artificial intelligence for military applications.

Then-Deputy Secretary Patrick Shanahan approved the classified strategy in June 2018. The summary expands on details DOD has revealed since then, such as the Joint Artificial Intelligence Center (JAIC), which is the focal point of the strategy.

The unclassified document emphasizes the “urgency, scale, and unity of effort” needed to make AI transformational both for the DOD and the nation. In that respect, CIO Dana Deasy told reporters Tuesday, it works nicely in step with the AI executive order President Donald Trump issued Monday, introducing the administration’s plan for American leadership in the development of artificial intelligence.

Perhaps the most central theme throughout the strategy summary, Deasy said, is “the need to increase the speed and agility, which is how we will deliver AI capabilities across every DOD mission.”


“We do not have time to waste,” he said. “We must be careful not to give a long-term advantage to our adversaries.”

Operationalized AI with JAIC

The Pentagon’s AI strategy is all about near-term, departmentwide operational capabilities supported through JAIC.

Lt. Gen. Jack Shanahan, JAIC’s first director, explained that his office underscores the importance of transitioning from research and development to operationally fielded capabilities. The JAIC will operate across the full AI application lifecycle, with an emphasis on near-term execution and AI adoption.

As revealed previously, JAIC is focused on “national mission initiatives” (NMIs) — which are broad, cross-functional programs that impact more than one mission or agency — as well as “component mission initiatives” (CMIs), which “are specific to individual components who are looking for an AI solution to a particular problem,” the general told reporters.


Already, JAIC is piloting two NMIs — one focused on predictive maintenance and another on humanitarian assistance and disaster relief. Others are in the works, such as a project with U.S. Cyber Command on a “cyberspace-related NMI,” Shanahan said.

From these projects, it is DOD’s intention to establish “a common foundation enabled by enterprise cloud with particular focus on shared data repositories, reusable tools, frameworks and standards, and cloud and edge services,” he said.

That, for example, is why one of DOD’s first NMIs is focused on disaster relief — particularly identifying fire lines in wildfire disasters, not something you might associate with DOD.

“This is a perfect example of us reaching out and working with other agencies. … When you actually get down to the real science of what we’re doing with this humanitarian one — it’s taking large, large areas using video, using still, and using AI to determine what is going on,” Deasy said. “A lot of this is going to be applicable elsewhere in the Department of Defense. In picking that as an NMI, we are also picking this as a great one where we can build the tools — we talk about tools that are going to end up inside of JAIC as an asset that others can use inside the Department of Defense. This is a perfect example of one that can help us build out our tools.”

The DOD is massive, however, and of course military services and other DOD components will explore AI for their own needs. This is where CMIs come into play: JAIC hopes to synchronize those disparate DOD AI activities and align them for enterprisewide efficiencies.


Deasy said JAIC’s level of success will be measured in its ability to attract those components to take advantage of its resources.

“At the end of the day, no matter who is standing up an AI capability, they’re going to need governance, they’re going to need processes, they’re going to need tools, they’re going to need infrastructure,” he said. “So what we’ve said is JAIC’s success is going to be based on our ability to create common tools, common processes, common development methodologies, common infrastructure where you can get jumpstarted a lot faster than going out and starting something on your own. The services are really keen on that.”

The strategy also comes with the stipulation that any DOD AI project that costs more than $15 million must be vetted by JAIC first.

Other research entities, like DARPA, also deal with AI, but on a longer-term horizon. Deasy said JAIC’s role will be to draw on that R&D to bring real, immediate applications to the military.

“This needs to be a balanced conversation” between R&D and operation, he said. “What JAIC is focused on is the actual applied application of taking all that science that’s available from either the academic world or the commercial world and then applying it to real-world solutions. But then keeping tightly linked to the research side of where this is going, where the art of the possible is going to take you.”


But JAIC is still in its early days, Deasy said. Though it’s up and running with some work underway, JAIC started small — the big transformation and scaling of the organization will take place in its second year, he said.

So far, it has received less than $90 million in funding, most of it for research and development, and is staffed by military detailees. The hope is that the fiscal 2020 budget will better reflect JAIC’s growth strategy.

The ethics of militarized AI, the Google problem and autonomous weapons

The biggest piece of the DOD AI strategy still missing is a set of ethical principles — but it’s in the works. Though DOD is moving forward standing up JAIC and launching mission initiatives, the Defense Innovation Board (DIB) is in the process of developing ethical principles for the department’s use of AI.

Some companies have been hesitant to work with DOD on AI and other high-tech initiatives because of the possibility that their tech could be used for lethal applications. Google is the poster company for this turmoil after it stepped away from work supporting DOD’s Project Maven, which uses algorithms and AI to help Air Force analysts make better use of full-motion video surveillance.


“I think it’s very important the Defense Innovation Board is doing this,” said Shanahan, who has led Project Maven for the past two years since its inception. “It doesn’t stop us from going forward. This is an important point … in every technology the department has ever introduced, we take this into account early and often. The idea of human judgment, safe, lawful use of any weapons system, and then, as important as anything else, is the issue of accountability. A human is held accountable. … We are taking this into account from the day we start working on any project.”

Deasy said the work on the front end of DOD adopting AI is too urgent and intensive to wait for the DIB’s development of principles. “There is a lot of work we have to do to stand up a joint artificial shared capability. So we can keep moving down the road setting that up while the DIB works through those big questions that they’ll eventually bring back to us with their thoughts on.”

While Google’s refusal to continue work with Project Maven, and then its decision not to bid on DOD’s Joint Enterprise Defense Infrastructure (JEDI) enterprise cloud, have gotten a lot of public attention, Deasy and Shanahan said that has been the exception, not the rule.

“Our experience has been, with very few exceptions, an enthusiasm about working with the Department of Defense,” Shanahan said, explaining the importance of being upfront with companies about the application of their technologies to the DOD mission. “There’s questions we ask in that problem framing to make sure we all understand on both sides of the line what they’re working on for the Department of Defense, what the Department of Defense intends to use that model for and we haven’t had any real problems with that.”

Deasy said the DOD accepts that some companies don’t want to work with it. “We’ve got some of the most difficult, challenging and important problems to solve. So far, we’ve seen no evidence that there is something we’re missing out on if a company chooses not to participate in our mission sets going forward. We are getting lots and lots of inquiries.”


That tune, however, may change the closer DOD gets to deploying futuristic things now only seen in movies and TV shows. Shanahan said the department’s adoption of AI doesn’t involve autonomous weapons “right now.”

“And that’s what people get most skittish about what the department is or is not doing,” he said, noting that the Pentagon has an existing directive for autonomous weapons. “We are taking those policies into account, but we haven’t progressed to the point in the JAIC or in Maven where that has become the driver of whether or not we go forward with a project. We are nowhere close to the full autonomy question that most people seem to leap to a conclusion when they think about DOD and AI.”

Deasy didn’t rule out the possibility either.

“We want JAIC to support the full breadth of what the Department of Defense needs to do to accomplish its missions — if that’s deterrence, if that’s lethality, if that’s reform, if that’s creating better alliances and partnerships,” he said.
