Department of Homeland Security reveals nearly 160 ways it’s using AI 

The agency made public 29 deployed use cases that are safety- or rights-impacting, with another 10 on the way.
Department of Homeland Security Chief Information Officer Eric Hysen speaks at FedTalks 2022. (Image credit: Pepe Gomez / Pixelme Studio)

The Department of Homeland Security on Tuesday released the latest version of its artificial intelligence use case inventory, reporting 158 active applications for the technology — a major jump from the 67 it made public last year. 

In an explanatory blog post outlining how the agency approaches artificial intelligence, Eric Hysen, the agency’s CAIO and CIO, said that 29 deployed AI use cases and 10 upcoming AI use cases were deemed to be rights- or safety-impacting, a new level of scrutiny established by recent White House guidance. Roughly half of those deployed use cases involve facial recognition and face-capture technologies, he said.

The use cases for Customs and Border Protection include technology from the company Babel, a passive body scanner meant to help with weapons detection at pedestrian border crossings, and an autonomous underwater vehicle. Immigration and Customs Enforcement is using a facial recognition-based biometric check-in tool, as well as facial recognition technology for identifying victims of child exploitation and a facial recognition service used in other investigations. The Transportation Security Administration has listed AI-enabled Axon body cameras and a generative AI system for employee workflows. More details about these use cases are available in the inventory.

The document also mentions an internal generative AI chatbot used by the Cybersecurity and Infrastructure Security Agency.

Hysen said in the blog post that he “reviewed and approved each use case to ensure they were meeting each practice, including testing performance in a real-world context, maintaining human oversight and accountability, conducting ongoing monitoring and mitigation for AI-enabled discrimination, among other requirements.” He noted that he did not need to issue any waivers of the required risk management practices. The Office of Management and Budget approved short-term compliance deadline extensions for five safety- or rights-impacting use cases, he added.

DHS said its goal with the release was to expand the inventory and err on the side of transparency, a decision based on guidance from the agency’s governing board to reveal as much information as possible, “even if some details about certain use cases cannot be publicly shared.” AI related to the intelligence community is excluded, based on a recent national security memorandum, but the agency will release more information about approaching those use cases in April. 

The full inventory, which also includes deactivated use cases, is available here.

FedScoop has continued to report on the accuracy and transparency of AI inventories, including changes to DHS’s inventory made last year. Researchers at Stanford have also tracked these inventories, and a government watchdog previously flagged issues with DHS’s tracking.  