
DARPA director cautions on AI’s limitations

The Pentagon's R&D arm is heavily invested in driving the future of artificial intelligence and machine learning, but the agency's director warned the technology isn't without its limitations.

The Pentagon’s blue-sky research agency is heavily invested in driving the future of artificial intelligence and machine learning but knows the technology isn’t without limitations, its director said Monday. 

The best facial recognition systems out there are statistically better than most humans at image identification, for instance, but when they’re wrong, “they are wrong in ways that no human would ever be wrong,” Arati Prabhakar, director of the Defense Advanced Research Projects Agency, told an Atlantic Council event. 

“I think this is a critically important caution about where and how we will use this generation of artificial intelligence,” she said.

“You want to embrace the power of these new technologies but be completely clear-eyed about what their limitations are so that they don’t mislead us,” Prabhakar said. That’s a stance humans must take with technology writ large, she said, explaining her hesitance to take for granted what many of her friends in Silicon Valley often assume  — that more data is always a good thing.


“More data could just mean that you have so much data that whatever hypothesis you have you can find something that supports it,” Prabhakar said.

Artificial intelligence is still in its infancy, however, in what Prabhakar called “the second wave of AI.”

“It’s a wave that is about machines that learn. It’s been fueled by new GPU architectures, by new algorithms and especially by the vastness of data that’s available for these systems to train on,” she said, referring to technologies like facial recognition, autonomous Wall Street trading and the self-driving car, many of which are rooted in previous DARPA research. Weeks ago, the agency publicly christened its Sea Hunter, an autonomous submarine-hunting vessel.

Likewise, DARPA launched a grand challenge in March inviting teams to use artificial intelligence to address spectrum overload in military and commercial scenarios. Whereas teams in DARPA challenges generally compete against one another, those participating in this challenge will work collaboratively to “maximize the usage of that spectrum and get many more times more data out of a fixed band of spectrum,” Prabhakar said.  

The forthcoming third wave will be more about advancing machine learning to a point where software applications can learn from their own mistakes. Prabhakar said the machines will “explain themselves to us and tell us what their limitations are” based on causal models and “start learning how to take what they’ve learned in one domain and use it in different domains, something they can’t really do at all today.”


Even as she expects AI to progress radically, to a point where machine learning informs many of the decisions the Defense Department makes on the battlefield, Prabhakar believes there will always be a human element to it.

“It’s hard for me to imagine a future… where a machine just tells us what the right thing is to do,” Prabhakar said. “It keeps coming back to this synthesis of the insight humans can bring aided by machines that are able to digest and start building causal models and to give us hypotheses to start exploring the vastness of the space we’re in.”     
