Those who adopt AI will disrupt those who do not, CIA cyber policy adviser says

One thing that "is pretty clear is those entities that augment their activities with AI applications will likely disrupt those entities that do not," said Dan Richard, CIA's chief cyber policy adviser.

While many organizations are hesitant to be first adopters during the current wave of artificial intelligence, for the CIA, it’s “imperative” that the agency take swift action to begin working with the rapidly evolving technology or risk falling victim to those that do, according to the agency’s top cyber policy adviser.

One thing that “is pretty clear is those entities that augment their activities with AI applications will likely disrupt those entities that do not. So I think it’s imperative that we find a way to tap into this technology to support the activities that we are entrusted with,” Dan Richard, chief cyber policy adviser for the CIA, said during a Billington Cybersecurity event last week.

Long before the current craze around generative AI, the intelligence community, led by the Office of the Director of National Intelligence, developed guiding principles and a supporting framework for the ethical use of AI by intelligence agencies. Richard pointed to that work as evidence that the CIA and other intelligence agencies are ahead of the curve.

“The intelligence community has been working on AI and artificial intelligence issues for over a decade,” he said. “So this is an area that we have already been grappling with how to take advantage of this technology and apply it for our mission.”

Speaking about the current explosion in the use of generative AI tools, Richard said: “There are clearly areas of concern,” like AI-powered threats to cyberdefenses and disinformation, that need to be addressed “head on.”

“These are real issues that we need to deliberately work through and ensure that we are proactively seeking solutions for, but those solutions cannot be at the expense of the innovation that we really need to more effectively and efficiently conduct the mission that we are asked to do,” he said.

Richard’s comments come on the heels of news that the CIA has been developing its own generative AI tool to compete with China.

Despite being on the leading edge in adopting and creating guardrails for AI, the CIA still could do better with some of the basic blocking and tackling for its IT enterprise, including data management and partnering with commercial innovators, Richard said.

“I think one of the things that we are grappling with is data management,” he said. “We assemble and review large amounts of data information, and we are constantly looking for ways to be able to more effectively analyze, synthesize and provide insights that we can get from that information out to the private sector.”

Historically, the CIA has preferred to develop tools and applications in-house due to the sensitive and secretive nature of its work. But it’s clear that the agency is unable to keep pace with innovation from the commercial world and must find ways to better partner with those firms, Richard explained.

“Something that could take us several months to sort of assemble in terms of an application solution, the commercial sector has already taken advantage of it. And what we’re looking to do is leverage those solutions to more quickly address some of these problems that we’re currently facing,” he said.

Written by Billy Mitchell

Billy Mitchell is Senior Vice President and Executive Editor of Scoop News Group's editorial brands. He oversees operations, strategy and growth of SNG's award-winning tech publications, FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. After earning his journalism degree at Virginia Tech and winning the school's Excellence in Print Journalism award, Billy received his master's degree from New York University in magazine writing while interning at publications like Rolling Stone.