
Homeland Security adds facial comparison, machine learning uses to AI inventory

Inventory now includes a facial comparison tool being used by the Transportation Security Administration and Customs and Border Protection.
(Scoop News Group photo)

The Department of Homeland Security recently updated its artificial intelligence use case inventory to reflect several uses of the technology that have already been made public elsewhere, including facial comparison and machine learning tools used within the department.

The additions include U.S. Customs and Border Protection’s use of its Traveler Verification Service, a tool that deploys facial comparison technology to verify a traveler’s identity, as well as the Transportation Security Administration’s deployment of that same tool for the PreCheck process.

The department also added the Federal Emergency Management Agency’s geospatial damage assessments, which use machine learning and machine vision to assess damage caused by a disaster, and CBP’s use of AI to inform port of entry risk assessment decisions.

While the four additions were picked up by a website tracker used by FedScoop on Oct. 31, all appear to have already been public elsewhere, three of them for at least a year, underscoring existing concerns that inventories do not reflect the full scope of publicly known AI uses.


When asked why the uses were added now, a DHS spokesperson pointed to its process for evaluating public disclosure.

“Due to DHS’s sensitive law enforcement and national security missions, we have a rigorous internal process for evaluating whether certain sensitive AI Use Cases are safe to share externally. These use cases have recently been cleared for sharing externally,” the spokesperson said in an emailed statement.

Aside from the Department of Defense, agencies in the intelligence community, and independent regulatory agencies, federal agencies are required to publicly post their uses of AI in an annual inventory under a Trump-era executive order. But they’ve so far been inconsistent in the categories included, format and timing. Among the concerns researchers and advocates have pointed to is the apparent exclusion of publicly known uses from the inventories.

Use of the Traveler Verification Service facial comparison technology has been referenced elsewhere on the TSA’s website since at least early 2021 and on CBP’s website since at least 2019, according to pages archived by the Wayback Machine. And according to a Government Accountability Office report, the Traveler Verification Service was developed and implemented in 2017. The use of AI for geospatial damage assessments has also appeared on FEMA’s website since August 2022, according to the Wayback Machine’s archive.

The spokesperson also noted that DHS Chief Information Officer and Chief AI Officer Eric Hysen testified on CBP’s port of entry risk assessment use case at a September hearing before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation.


Ben Winters, a senior counsel at the Electronic Privacy Information Center who also leads its AI and Human Rights Project, called the lack of promptness and completeness in the disclosure “concerning.”

“AI use case inventories are only as valuable as compliance with them is. It illustrates why the government does not have the adequate oversight, transparency, and accountability mechanisms in place to continue using or purchasing sensitive AI tools at this time,” Winters said in an emailed statement to FedScoop.

He added that he hopes the Office of Management and Budget guidance “does not broadly exempt these types of ‘national security’ tools and DHS chooses to prioritize transparency and accountability moving forward.”

Currently, there isn’t a clear process for agencies to add or remove use cases from their inventories. In the past, OMB has said that agencies “are responsible for maintaining the accuracy of their inventories.”

DHS previously added and removed several uses of AI in August. At that time, it added Immigration and Customs Enforcement’s use of facial recognition technology, as well as CBP’s use of technology to identify “proof of life” and prevent fraud on an agency app. It also removed a reference to a TSA system that it described as an algorithm to address COVID-19 risks at airports.


The agency also expects to soon release more information about its work with generative AI, according to the spokesperson.

“DHS is actively exploring pilots of generative AI technology across our mission areas and expects to have more to share in the coming weeks,” the spokesperson said.
