IC leaders want AI to be your coworker, not your replacement

Intelligence leaders want analysts to see AI as a "powerful sidecar," but its adoption is not without hurdles.

The U.S. intelligence community’s leader said Tuesday that it needs to direct “more creativity” toward a number of future challenges facing it, and other top officials from the IC proposed artificial intelligence as a tool for making it happen.

Agencies will face greater demand for economic intelligence, as well as pressure to collect and analyze mountains of data of all types, Director of National Intelligence Dan Coats said.

“As you discuss the various tasks of our intelligence craft, from technology to acquisition to our workforce, I ask that you do so with an eye toward these topics,” Coats said at the Intelligence & National Security Summit held at National Harbor near Washington.

Leaders from around the IC said that adding AI to the mix could help crack those problems not by replacing analysts, but by giving them powerful new tools so they can spend less time processing raw data and more time thinking about the bigger picture.

“I think the opportunity that we see, in terms of the application, is that [AI] can be a very powerful sidecar to our scarcest resources, which is really good analysts,” said Dawn Meyerriecks, CIA director of science and technology. “We don’t look at this as it’s going to suddenly make analytic talent obsolete. It takes our best people and it cues up for them the things that are going to fundamentally impact their judgments.”

Advancements in AI have come as the federal government's workforce ages and interest from younger talent pools wanes, prompting a familiar refrain from federal leaders, including in the IC: the emerging technology can help agencies do more with fewer personnel.

Agency leaders face a number of challenges, however, as they try to take advantage of AI.

The right skillsets 

For starters, the potential applications of AI are outpacing the workforce’s ability to train for them, said Mike Bender, director of the National Security Agency’s Lab for Analytic Science.

“To the extent of right now, I would say the capabilities [of AI], from what I see, are far evolving than where the training is going,” he said. “I don’t think we have enough training, but I don’t know that we know enough about how to train to be a good user.”

Neil Wiley, director of analysis at the Defense Intelligence Agency, said the advent of AI shouldn’t require a wholesale facelift of the intelligence workforce, because its applications could grow to be more user-friendly with increased adoption, just as the iPhone’s intuitive design didn’t require its users to be engineers.

He added that a recent crop of entry-level employees has shown even more acumen for quantitative analysis than previous generations, demonstrating a wealth of talent potential that just needs the right technological tools.

“I’m not really sure it’s a question of all of your workforce and sending them to re-education camps out in the countryside,” he said. “I think we can do this in such a way that it becomes approachable and that you need a comparatively small cadre of deep data science specialists and a much larger cadre of people who bring grammar, logic, rhetoric, intellectual passion, critical thinking and the ability to work well with others in a way that’s more quantitatively literate.”

But enabling such a workforce requires tools that are designed for users with those skillsets and that adapt and evolve quickly, which could require continued partnership with the private sector.

The Maven problem

Another challenge is getting Silicon Valley on board. As the intelligence community has looked to acquire AI tools to streamline its workloads, it has sought to partner with leading technology companies to test new applications. But those partnerships have seen some resistance, most notably when Google declined to compete for a new contract for the Air Force’s Project Maven program following employee backlash over its use of machine learning to analyze full-motion video surveillance.

As a result, IC agencies must keep a wary eye on the companies they choose to partner with to ensure the solutions they plan to use will remain available.

Meyerriecks said that though Google decided not to pursue another Project Maven contract, technology companies have still shown an interest in working with the IC on implementing new technologies.

“Companies are going to make decisions that are in their own best interests,” she said. “I would say that, yes, there have been some very public sort of backing away. I would also say that we’ve also had closer partnerships than I’ve ever seen in my career in other instances.”

Meyerriecks added that the CIA’s Commercial Cloud Services (C2S) contract with Amazon Web Services has provided a good example of how intelligence agencies can partner with private industry to incorporate new solutions and innovate with increasing speed.

“We just have to be thoughtful about how we craft those partnerships in order to [support] those national security missions,” she said.

Don’t blame the AI

Though AI applications are continuing to advance, there are still things the technology cannot be counted on to do, chief among them assessing the context of information.

It’s for this reason that agency leaders have tried to assure employees that they won’t be replaced by the new technology: the human element is still essential to intelligence gathering.

“Unless you are a drudge someplace that does alphabetize things in the library, no, you are not going to be replaced by AI,” Meyerriecks said. “There is no reason that a human being should look at videos and photos and spend their eight hours [on that]. Just from a behavioral mechanics perspective, you can’t spend that much time looking for something, that performance will start to decay.”

But in an age where the speed of data analysis becomes the currency of the intelligence realm, AI presents a tantalizing opportunity where technology could outpace the speed of human analysis, if the nation’s leaders allow it.

“There’s a question from a policy decision leader’s ethical perspective: at what point are we comfortable not having human accountability in warfighting decisions,” said Wiley. “That’s not an intelligence question, that’s a national leadership question. But the extent to which ultimately the leadership makes the determination will drive what we are then required to do.”
