Sue Gordon: Government must invest in AI assurance, security and ‘sensemaking’

The federal government has to be a "fast follower" and "invest in the gaps in AI that the private sector isn't looking at because our needs are different," the former principal deputy DNI says.
Sue Gordon at the IC Women's Summit in Bethesda, Maryland, on March 22, 2019. (ODNI)

As artificial intelligence permeates the modern world, it is the role of government — which is mostly trailing behind the progress in industry — to invest in the gaps where commercial companies are turning a blind eye, says Sue Gordon.

Namely, those areas are AI assurance and security, and the technology’s ability to make sense of a massive volume of information, not just sort through it in search of some known thing, said Gordon, the former principal deputy director of national intelligence and longtime champion of intelligence community IT.

The federal government has to be a “fast follower” and “invest in the gaps in AI that the private sector isn’t looking at because our needs are different,” she said at the 2019 Kalaris Intelligence Conference at Georgetown University. “The biggest one that comes to mind is artificial intelligence assurance and security.”

Gordon, who recently resigned from her role as the no. 2 in Trump’s intelligence community, explained the need for assurance by comparing the mission of a company like Netflix to that of the IC, and the consequences when an AI system makes a wrong decision. “It’s pretty cool if Netflix recommends to me the wrong film. It will not work if what the intelligence community does is get a recommendation for a target that is not sound.”

“How do we ensure that the algorithms — that are really just instantiations of human opinion — are both well crafted and well secured so they can’t be corrupted,” Gordon said. “Because it’s not just theft of data that our adversaries would seek to influence. It will be the decisions that our machines are making.”

The answer isn’t simply inserting a human into that decision-making loop, she said. “So many things are going to happen so fast that humans aren’t going to be able to be in the loop. So how do you make it explainable and protected and known?”

Currently, AI in the intelligence community is mostly being used to go “through massive amounts of information to find known things faster in broader areas,” Gordon said, such as locating things geospatially and identifying voices — “more about being able to find things you care about that you understand in large swaths of data.”

The government needs to invest more to move beyond finding things that are already known and begin making intelligent decisions and sense of correlations, she argued. “We’re going to have to have the government invest much more into the research into sensemaking. Because right now AI is really good at counting things, getting better at seeing change.” But it is “not yet very good at seeing what significant change is. It isn’t able to cross domains. Can’t tell me the deepfakes” versus authentic media.

It has always been incumbent upon the national security community to make such critical investments, Gordon implied. “The drive of national security has always driven what is societally advanced. And if intelligence makes those kinds of protections, that kind of sensemaking happen, you all will all feel better about getting into a self-driving car.”
