ICE drives AI use case growth within Homeland Security
The Department of Homeland Security is actively working on 200-plus artificial intelligence use cases, a nearly 37% increase compared to July 2025, according to its latest AI inventory posted Wednesday. Immigration and Customs Enforcement is a driving force behind the growth.
ICE has added 25 AI use cases since its disclosure last summer, including tools to process tips, review mobile device data relevant to investigations, confirm individuals' identities via biometric data, and detect intentional misidentification. Of the newly added uses at ICE, three are Palantir products; the company has been a notable, and at times controversial, technology partner for the U.S. government under the Trump administration.
“This inventory is coming out at a moment where there are significant, widespread questions about the legality of actions being taken by DHS and their potential infringement on the civil liberties and privacy of millions of people across the country,” said Quinn Anex-Ries, a senior policy analyst focused on equity and civic tech at the Center for Democracy and Technology, a nonprofit technology policy organization.
Anex-Ries added: “There are some initial indications that the inventory leaves us wanting for more.”
The annual inventory process stems from a 2020 executive order issued during the first Trump administration and later enshrined in federal statute. Early iterations of the inventories earned a poor reputation that proved hard to shake: in a report published three years after the presidential directive, the Government Accountability Office said most agency AI inventories were incomplete or inaccurate. Steps were taken in 2024 to strengthen the process.
Delays this year, however, are expected following the longest federal government shutdown in history. Only a few agencies published their 2025 inventories earlier this month, but a White House official told FedScoop last week that a consolidated federal resource would be published to GitHub “soon.” A shell for the document went live Wednesday. DHS’s inventory came as one of the first substantial uploads from an agency. The inventories for 2025 will be the first completed during either of President Donald Trump’s terms.
New mentions, additions
Palantir has long been known as a DHS vendor, but this is the first year the company has appeared in the department's AI inventory.
Notably, Palantir technology is being deployed for a use case titled “Enhanced Lead Identification & Targeting for Enforcement” or ELITE. That tool uses generative AI to help Enforcement and Removal Operations officers more easily extract information from records such as “rap sheets and warrants.”
Meanwhile, other Palantir tech is used for tip processing and software development.
The agency taps Palantir-provided AI to save time reviewing and categorizing incoming tips, relying on commercially available large language models that have not been trained on agency data, per the inventory. ICE also uses a Palantir generative AI coding assistant to debug code, query databases and analyze system metrics; those coding tools have not been trained on agency data, either.
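The inventory does not describe how that triage works under the hood. As a general illustration only, not DHS's or Palantir's actual implementation, a minimal tip-categorization sketch built on a commercial chat-completions API might look like the following; the category list, prompt and model choice are all hypothetical.

```python
# Minimal sketch of LLM-based tip triage. The category list, prompt and
# model choice are hypothetical, not DHS's or Palantir's actual setup.
from openai import OpenAI

CATEGORIES = ["fraud", "smuggling", "identity", "other"]  # hypothetical labels

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def categorize_tip(tip_text: str) -> str:
    """Ask a commercial off-the-shelf model to assign one category to a tip."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for any commercially available chat model
        messages=[
            {
                "role": "system",
                "content": "Classify the tip into exactly one category: "
                + ", ".join(CATEGORIES)
                + ". Reply with the category name only.",
            },
            {"role": "user", "content": tip_text},
        ],
        temperature=0,  # keep labels as stable as possible for triage
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "other"  # fall back on odd output

print(categorize_tip("Caller reports a suspected fraudulent document ring."))
```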
Another newly added use case is Mobile Fortify, a mobile application that compares biometric information, such as facial images and fingerprints, with agency records to assist agents in identity verification.
Use of that platform was initially reported by 404 Media and has since attracted scrutiny from Democratic lawmakers, who want to limit misuse. DHS listed uses of the Mobile Fortify application for Customs and Border Protection as well as ICE. According to the inventory, the agencies began using the tool in May 2025.
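Mobile Fortify's internals are not public. In general terms, though, the matching the inventory describes, comparing a captured face or fingerprint against enrolled records, typically reduces to a nearest-neighbor search over embedding vectors. The sketch below illustrates that generic technique with random stand-in data; nothing about it reflects the actual application.

```python
# Generic illustration of biometric matching via embedding comparison.
# Random vectors stand in for real face embeddings; the threshold is
# hypothetical. This is not Mobile Fortify's actual pipeline.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Gallery of enrolled records: record ID -> embedding produced upstream
# by a face-recognition model (random stand-ins here).
gallery = {f"record-{i}": rng.normal(size=512) for i in range(1000)}

def best_match(probe: np.ndarray, threshold: float = 0.6):
    """Return the closest enrolled record, or None if below the threshold."""
    rid, score = max(
        ((rid, cosine_similarity(probe, emb)) for rid, emb in gallery.items()),
        key=lambda kv: kv[1],
    )
    return (rid, score) if score >= threshold else None

probe = rng.normal(size=512)  # embedding of the face captured in the field
print(best_match(probe))  # likely None for random data
```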
‘Determined not high-impact’
DHS’s inventory appears to make heavy use of a process for exempting use cases that are presumed by default to be “high-impact” from increased risk management practices.
Under governance memos established by both the Biden and Trump administrations, certain use cases are presumed to fall into a category that demands increased risk management. Under Biden, those were called rights- and safety-impacting uses; under Trump, the category is called high-impact.
Like the Biden administration's memo, the Trump administration's AI governance memo defines certain uses that are presumed to be high-impact, such as those in law enforcement contexts or those that leverage biometrics, among many other categories.
However, agency officials have the power to determine that a presumed high-impact use case doesn’t actually meet the definition and therefore doesn’t need additional risk management practices. Under the Trump memo, a use is high-impact “when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.”
Of the active DHS use cases, 51 were listed as high-impact, 108 were not high-impact, and 46 were deemed “presumed high-impact but determined not high-impact.” Many of the use cases that fell in that latter category were within CBP and ICE, which immediately raised concerns among experts.
Among the uses with that classification was the enhanced lead, or ELITE, tool from Palantir. In line with its reasoning for other uses, DHS says the tool isn’t high-impact because “its outputs are limited to normalized address data and do not serve as a principal basis for decisions or actions with legal, material, binding, or significant effects on individuals.”
“It is jarring to see,” Varoon Mathur, a former senior adviser on AI at the Office of Management and Budget and a former Presidential Innovation Fellow, told FedScoop, pointing to the categorization of risks as well as the lack of identified risk management tactics.
“What’s most important for DHS is to be able to answer questions many will have on their AI use, not generate more questions to answer,” Mathur added.
For most of the use cases that fell into the “presumed high-impact but determined not high-impact” category, DHS credits the technology’s use as a supportive function — rather than the principal basis for decisions or actions — for creating the distinction.
“It’s a pretty high definitional bar,” Valerie Wirtschafter, a fellow in the Brookings Institution’s foreign policy, artificial intelligence and emerging technology initiative, said in an email.
DHS didn’t respond to a request for comment by the time of publication.
High-impact tasks
DHS still has work to do on the use cases that are designated high-impact.
Under the Trump administration memo, agencies must halt any high-impact uses that do not meet minimum risk management practices by April. Mobile Fortify, which is listed as high-impact in both the CBP and ICE entries, still needs to complete several of those minimum practices.
Despite the application being actively deployed in immigration activities, ICE is still working to complete an AI impact assessment, and the tool's potential impacts have yet to be identified, according to the inventory.
The DHS division also has not finished developing monitoring protocols that would alert ICE to adverse impacts on security or violations of privacy and civil liberties. The inventory further notes that ICE has not established an appropriate fail-safe to minimize the risk of significant harm, nor an appeal process for individuals who want to contest the AI system's outcomes.
“DHS has some of the use cases that pose some of the most significant threats to people’s fundamental rights and civil liberties, so there’s even greater importance on the agency’s ability to rigorously implement the risk management practices,” Anex-Ries said. “There’s bipartisan consensus across multiple administrations about what these minimum risk practices should be.”
Aside from in-the-field AI applications, the inventory provides more insight into how ICE is using the technology for human resources tasks. The new details come after NBC News reported that a system error in an AI resume-screening tool sent recruits to offices before they had been trained.
While it is not clear whether it’s the same use case, the inventory does report ICE’s use of an “AI-Assisted Resume Screening Tool.”
That use case began earlier this month and leverages OpenAI’s GPT-4 to review resumes and apply scores to candidates. Like Mobile Fortify, the tool is labeled as high-impact and is in the process of pre-deployment testing, an impact assessment, an independent review and monitoring protocol development.
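The inventory names the model but not the mechanics. A minimal sketch of what GPT-4-based resume scoring could look like is below; the rubric, prompt and 0-100 scale are illustrative assumptions, not drawn from the inventory.

```python
# Minimal sketch of GPT-4-based resume scoring. The rubric, prompt and
# 0-100 scale are hypothetical, not the configuration in the inventory.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_resume(resume_text: str, job_description: str) -> dict:
    """Ask the model for a numeric score plus a short rationale, as JSON."""
    resp = client.chat.completions.create(
        model="gpt-4",  # the inventory names GPT-4 specifically
        messages=[
            {
                "role": "system",
                "content": "Score the resume against the job description on a "
                '0-100 scale. Respond only with JSON: {"score": <int>, '
                '"rationale": "<one sentence>"}.',
            },
            {
                "role": "user",
                "content": f"Job description:\n{job_description}\n\nResume:\n{resume_text}",
            },
        ],
        temperature=0,  # keep scores as repeatable as possible
    )
    content = resp.choices[0].message.content
    try:
        return json.loads(content)
    except json.JSONDecodeError:  # model strayed from the JSON instruction
        return {"score": None, "rationale": content}
```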
“Especially with these really high-profile use cases, it’s important that the inventory contains a thorough description,” Anex-Ries said. “When the description falls short of at least what we’ve heard in terms of its use on the ground, [it] leaves open questions.”
Following the public release of DHS's AI inventory last year, the agency published a companion blog post and a simplified version of its initial entry. Stakeholders would like to see that pattern continue.
“It’s my hope that DHS now, and other agencies, will follow suit to take further steps to keep publishing more information about these tools, how they’re making governance and oversight decisions about them and make them easily available to the public,” Anex-Ries said.