DHS releases commercial generative AI guidance and is experimenting with building its own models

"Our strategy has been that because AI is such a broad technology — or set of technologies — we are issuing specific guidance on different types of AI technology," DHS CIO Eric Hysen tells FedScoop.
Department of Homeland Security Chief Information Officer Eric Hysen speaks at FedTalks 2022. (Image credit: Pepe Gomez / Pixelme Studio)

The Department of Homeland Security is leaning into the use of generative artificial intelligence by issuing new guidance on how its workforce should use commercial applications of the technology and experimenting with building its own models, the department’s top IT official told FedScoop.

DHS is rolling out a set of policies around specific AI technologies — a process that started in September with the publication of guidance on the use of facial recognition and face capture technologies and continues now with a policy governing how the department will use commercial generative AI models.

The memo is signed and dated Oct. 24 but was uploaded to the department’s website Thursday. DHS also recently issued a privacy impact assessment on commercial generative AI tools conditionally approved for use within the department, including ChatGPT, Bing Chat, Claude 2 and DALL-E 2.

During an interview on FedScoop’s Daily Scoop Podcast, CIO Eric Hysen said DHS developed the new policy using the White House’s AI executive order and the corresponding Office of Management and Budget draft memo on how federal agencies should implement the order “to make sure that we have a comprehensive governance approach to specific types of AI technology that are in use across the department.”


“Our strategy has been that because AI is such a broad technology — or set of technologies — we are issuing specific guidance on different types of AI technology,” Hysen said.

Within the new guidance, Hysen writes he has “determined that DHS must enable and encourage DHS personnel to responsibly use commercial products to harness the benefits of Gen AI and ensure we continuously adapt to the future of work.”

The memo lays out how the department will “develop and maintain a list of conditionally approved commercial Gen AI tools for use on open-source information only” — such as those included in the recent privacy assessment — and security requirements and standards that personnel must follow when using commercial generative AI models.

“Immediate appropriate applications of commercial Gen AI tools to DHS business could include generating first drafts of documents that a human would subsequently review, conducting and synthesizing research on open-source information, and developing briefing materials or preparing for meetings and events. I have personally found these tools valuable in these use cases already, and encourage employees to learn, identify, and share other valuable uses with each other,” the memo reads.

At the same time, DHS has also been “experimenting” with building out its own large language models in-house and with the support of industry, he said.


“What we’re really looking to do there is learn,” Hysen told FedScoop. “I want a portfolio of AI projects that use models from different companies that let us understand what the benefits of different types are, that use some closed proprietary models, that use some open-source models, that test out different ways of deploying these models, some that might be shared commercial cloud instances, some that we might deploy on our cloud infrastructure, some that we might deploy in-house on our own hardware. We’re really in a learning mode here and are looking to try many different things technically, which is, as I’ve talked with other CIOs across government and across the private sector, I think, really the mode everyone is in.”

“We want to be maximizing our ability to learn how we can leverage these technologies in support of our mission,” he said.

DHS also issued broader guidance in September governing how DHS components should acquire and use AI and machine learning technologies.

Around that same time, Secretary Alejandro Mayorkas named Hysen — who’s been CIO of the department since 2021 — as DHS’s first chief AI officer. And back in April, Mayorkas launched a DHS Artificial Intelligence Task Force, which — co-chaired by Hysen — is responsible for producing the policies.

As such, Hysen and DHS have been developing a vision for the adoption and responsible use of AI that predates the White House order and the draft OMB guidance, which, once finalized, will require federal agencies to name chief AI officers within 60 days. FedScoop has been tracking those CAIOs as they’re named.


“This is not something that just started when we added this title,” Hysen said. “This is work that has been going on for many years — many of our agencies and offices have been using AI and data science and machine learning in their operations for many years now. But … with the explosion of interest in generative AI and other topics over the last year, we’ve seen a need to really focus our approach across the department.”

The release of the executive order didn’t necessarily change any of that either, because DHS had been working closely with the White House and anticipated its requirements.

“As you can imagine, with any document as comprehensive as the president’s executive order, it had been in the works for quite a while and many parts of the department had been working closely with the White House and the interagency for some time on several aspects of it. So we were anticipating some of the requirements there,” he said.

Hysen said that Mayorkas has been a driving force behind the department’s embrace of AI adoption, in contrast to the hesitancy some other federal agencies have shown.

“Very early he was using ChatGPT and other tools, right after they were released, in his personal life, and asking me and others how we could be leveraging the benefits of those technologies to better empower our workforce and give them what they need to get their job done,” Hysen said.


“Every day, we interact with more members of the public on a daily basis than any other federal agency. And the workload that our employees have is only growing,” he said. “And so when it comes to our use of AI within the department, the secretary saw very early that this could be a tool that could act as a force multiplier for us that could allow our agents and officers on the frontlines to spend less time doing routine paperwork, and more time actually focused on their security missions that would ultimately enhance our homeland security.”

Written by Billy Mitchell

Billy Mitchell is Senior Vice President and Executive Editor of Scoop News Group's editorial brands. He oversees operations, strategy and growth of SNG's award-winning tech publications, FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. After earning his journalism degree at Virginia Tech and winning the school's Excellence in Print Journalism award, Billy received his master's degree from New York University in magazine writing while interning at publications like Rolling Stone.