AI transparency creates ‘big cultural challenge’ for parts of DHS, AI chief says

Transparency around AI may result in issues for DHS elements that are more discreet in their operations and the information they share publicly, CIO Eric Hysen said.
Department of Homeland Security Chief Information Officer Eric Hysen speaks at FedTalks 2022. (Image credit: Pepe Gomez / Pixelme Studio)

As the Department of Homeland Security ventures deeper into the adoption of artificial intelligence — while doing so in a transparent, responsible way in line with policies laid out by the Biden administration — it’s likely to result in friction for some of the department’s elements that don’t typically operate in such an open manner, according to DHS’s top AI official.

Eric Hysen, CIO and chief AI officer for DHS, said Tuesday at the CrowdStrike Gov Threat Summit that “transparency and responsible use [of AI] is critical to get right,” especially for applications in law enforcement and national security settings where the “permission structure in the public eye, in the public mind” faces a much higher bar.

But that also creates a conundrum for those DHS elements that are more discreet in their operations and the information they share publicly, Hysen acknowledged.

“What’s required to build and maintain trust with the public in our use of AI, in many cases, runs counter to how law enforcement and security agencies generally tend to operate,” he said. “And so I think we have a big cultural challenge in reorienting how we think about privacy, civil rights, transparency as not something that we do but that we tack on” to technology as an afterthought, but instead “something that has to be upfront and throughout every stage of our workplace.”

While President Joe Biden’s AI executive order gave DHS many roles in leading the development of safety and security in the nation’s use of AI applications, internally, Hysen said, the department is focused on “everything from using AI for cybersecurity to keeping fentanyl and other drugs out of the country or assisting our law enforcement officers and investigators in investigating crimes and making sure that we’re doing all of that responsibly, safely and securely.”

Hysen’s comments came a day after DHS published its first AI roadmap on Monday, spelling out the agency’s current use of the technology and its plans for the future. Responsible use of AI is a key part of the roadmap, which points to policies DHS issued in 2023 promoting transparency and responsibility in the department’s AI adoption and adds that “[a]s new laws and government-wide policies are developed and there are new advances in the field, we will continue to update our internal policies and procedures.”

“There are real risks to using AI in mission spaces that we are involved in. And it’s incumbent on us to take those concerns incredibly seriously and not put out or use new technologies unless we are confident that we are doing everything we can, even more than what would be required by law or regulation, to ensure that it is responsible,” Hysen said, adding that his office worked with DHS’s Privacy Office, the Office for Civil Rights and Civil Liberties and the Office of the General Counsel to develop those 2023 policies.

To support the responsible development and adoption of AI, Hysen said DHS is in the midst of hiring 50 AI technologists to stand up its new AI Corps, which the department announced last month.

“We are still hiring if anyone is interested,” Hysen said, “and we are moving aggressively to expand our skill sets there.”
