Federal cyber leaders proceed with caution on AI as a defensive tool

Agency IT leaders warn of the technology’s tendency to bring in bad data, underscoring the need for “risk-based approaches” and human involvement.
Department of Labor CISO Paul Blahusch, center, and NIST National Cybersecurity Center of Excellence Director Cherilyn Pascoe, right, participate in a CyberTalks panel on Nov. 16, 2023, in Washington, D.C. (Scoop News Group photo)

Three years ago, chief information security officers couldn’t go anywhere without hearing about zero trust. Today, artificial intelligence is the defensive measure du jour for those same government IT leaders. 

With a healthy dose of skepticism formed through years of protecting digital infrastructure from advanced threats, many federal cybersecurity practitioners have significant concerns about AI, viewing it as a technology that needs corralling. That’s especially true for large language models and other data sources, they say. 

“It’s garbage in, garbage out,” said Paul Blahusch, CISO for the Department of Labor. “If our adversary can poison that data, well, we’re going to start getting the wrong information back out from our artificial intelligence. It’s going to say, ‘Day is night, night is day. Black is white, white is black.’ And are we going to just take that and say, ‘Oh well, that must be what it says because the AI said so?’”

Speaking during an Advanced Technology Academic Research Center webinar last week, Blahusch and other government and industry cyber experts described AI as a technology that isn't entirely new, one that vaulted into the cultural zeitgeist thanks to ChatGPT, but also one that can and will be put to better use.


“I’m sure that my … antivirus [software] has been using some form of AI and machine learning for a long time,” Blahusch said. “The whole idea of artificial intelligence within cyber tooling has been there for a while — all our threat intel types of analyses use some of that. But we can certainly take it to the next level.”

That next level should come in the form of reducing burdens on the federal cyber workforce, Blahusch said. When it comes to data analysis, those employees can focus on “higher-value work” if AI systems are positioned to handle the rest. 

“I don’t have all the resources to have 100 people looking at streams,” he said. “I need technology to help me with that and have my limited number of people do the things that human beings need to do.”

Jennifer R. Franks, director of the Center for Enhanced Cybersecurity within the Government Accountability Office's Information Technology & Cybersecurity team, acknowledged during the panel that she's "not really an AI enthusiast," but said that as a cyber professional who also works in privacy and data protection, she recognizes the technology is "here to stay."

New uses of automation in government work are necessary given staffing shortages, but humans will still play a critical role since emerging technologies like AI also introduce additional vulnerabilities, she said.


“We can’t be naive to the risk-based approaches that we have to take, making sure that we still have human decision-making. You know that is going to help us in managing some of the complexities,” Franks said. “We have to make sure that … we’re managing some of the controls around the tools and technologies and the machine learning aspect of the codes that are going into the algorithms, [so they] are not compromised.”

As a former federal IT manager now on the industry side, Youssef Takhssaiti said government cyber officials need to embrace AI, leveraging the technology’s ability to analyze network traffic, detect anomalies, automate responses to standard attack scenarios and myriad other defensive techniques. 

But procurement officers also “have to be very careful when it comes to adopting or purchasing” AI products, according to Takhssaiti, a Treasury Department and Consumer Product Safety Commission alum who’s working on a PhD in artificial intelligence. 

“Everyone is focused on speed to market — how can I get my product and application out to the market and consumers,” said Takhssaiti, now global GRC director for Aqua Security. “Before adopting any [AI products], two key things to focus on: Are they a vulnerability for you or as vulnerability-free as they could be? And what do they do with my data? Is it being used to retrain these models?”

Whether it’s continuing to embrace zero-trust architectures, dabbling in AI or looking out for the next big defensive thing in cyber, federal security professionals agree that threat protection strategies need to take an “all of the above” approach while also leaning on tried-and-true mitigation methods.  


“We’re still actively deploying and implementing the initiatives as ZTA across our various environments. But now we have AI, right?” Franks said. “But we cannot still forget … the basic cyber hygiene strategies. … And then going forward, we have to redesign and strengthen where it is we need to go so that we can stay ahead of the vulnerability curve.”

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity. Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star.