
CISA official: AI tools ‘need to have a human in the loop’

Lisa Einstein, the cyber agency’s chief AI officer, made the case at two D.C. events for “strong human processes” when using the technology.
CISA Chief AI Officer Lisa Einstein speaks during a panel discussion at the NVIDIA AI Summit in Washington, D.C., on Oct. 9, 2024. (Screenshot)

An abbreviated rundown of the Cybersecurity and Infrastructure Security Agency’s artificial intelligence work goes something like this: a dozen use cases, a pair of completed AI security tabletop exercises and a robust roadmap for how the technology should be used.

Lisa Einstein, who took over as CISA’s first chief AI officer in August and has played a critical role in each of those efforts, considers herself an optimist when it comes to the technology’s potential, particularly as it relates to cyber defenses. But speaking Wednesday at two separate events in Washington, D.C., Einstein mixed that optimism with a few doses of caution.

“These tools are not magic, they are still imperfect, and they still need to have a human in the loop and need to be used in the context of mature cybersecurity processes,” Einstein said during a panel discussion at NVIDIA’s AI Summit. “And in some ways, this is actually good news for all of us cybersecurity practitioners, because it means that doubling down on the basics and making sure we have strong human processes in place remains super critical, even as we use these new tools for automation.”

At Recorded Future’s Predict 2024 event later in the day, Einstein doubled down on those comments, noting that the “AI gold rush” happening across the tech sector has people perhaps overly excited about AI-generated code. In reality, there’s plenty to be concerned about, with AI-generated code observed “echoing previous generations of software security issues.”


“AI learns from data, and humans historically are really bad at building security into their code,” she said. “The human processes for all of these security inputs are going to be the most important thing. Your software assurance processes, it’s not going to be just fixed with some magical, mystical AI tool.”

Assessments of that kind from Einstein are possible thanks in part to CISA’s experience with commercial AI products, as well as the agency’s more recent work with a handful of bespoke tools. She specifically cited a malware reverse-engineering system that leverages machine learning to aid analysts in diagnosing malicious code.

For that AI tool and others like it, Einstein said, human review is still absolutely critical.

“We don’t yet have a situation where there’s some AI agent doing all of our cyber defense for us,” she said. “And I think we have to be realistic about how important it is to still keep humans in the loop across all of our cybersecurity use cases.”

CISA has been able in recent months to drive home that human-centered case through two tabletop exercises led by the Joint Cyber Defense Collaborative. Einstein spoke at both Wednesday events about JCDC’s AI efforts, highlighting the agency’s decision to enlist new industry partners specializing in the emerging technology. 


“AI companies are part of the IT sector, that’s part of critical infrastructure, and they need to understand how they can share information with CISA and with each other in the wake of possible AI incidents or threats,” she said.

The JCDC’s first AI security tabletop exercise was held in June and the second was completed “just a couple weeks ago,” Einstein said. Next up for the group will be the publication this fall of an AI security incident collaboration playbook, which she hopes will be “useful … in the context of future threats and incidents.”

“What we hope is that that community will be able to keep building this muscle memory of collaboration,” she said, “because it’s a terrible time to make new collaboration during a crisis. We need to have these strong relationships increase trust ahead of whatever crisis might happen.”

Part of CISA’s crisis planning in the months ahead will come in the form of its second set of risk assessments required by the White House’s AI executive order. Einstein said the agency is already “deep” into that second round of assessments, on track for a January delivery date. In the meantime, Einstein has a few words of advice for public or private-sector cyber officials as they consider using the technology.

“Don’t be a solution looking for a problem; become obsessed with the problem you’re trying to solve, and then use the best available automation or human to fix that problem,” she said. “Just because you have an AI hammer doesn’t mean that everything’s a nail, right?”

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity. Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star. You can reach him at matt.bracken@scoopnewsgroup.com.
