140+ groups flag DHS’s AI use cases in new letter to Mayorkas

The letter to the secretary from a collection of immigrant and civil rights organizations, unions and more comes ahead of upcoming deadlines for agencies related to their AI use case inventories.
Secretary of Homeland Security Alejandro Mayorkas testifies before the House Homeland Security Committee on the fiscal year 2025 budget in Washington, D.C. on April 16, 2024. (Photo by Allison Bailey / Middle East Images / AFP via Getty Images)

More than 140 groups, including immigrant and civil rights organizations, are raising concerns about the Department of Homeland Security’s use of artificial intelligence, arguing in a letter sent to Secretary Alejandro Mayorkas on Wednesday that the agency is violating federal rules on the technology. 

The letter calls on DHS and many of its components to suspend several AI use cases, including systems deployed by Customs and Border Protection and Immigration and Customs Enforcement. It comes as federal agencies face two upcoming deadlines related to their AI use case inventories, which are required by law and under Biden and Trump administration executive orders. 

By Dec. 1, agencies must certify any waivers exempting them from some of the AI inventory guidance requirements and complete processes for determining whether use cases are considered rights- or safety-impacting. FedScoop first reported that the White House had released its final AI use case inventory guidance late last month.

“DHS’s use of AI appears to violate federal policies governing the responsible use of AI, particularly when it comes to AI used to make life-impacting decisions on immigration enforcement and adjudications,” the letter states. “We have serious concerns that DHS has fast-tracked the deployment of AI technologies in contravention of these federal policies, executive orders, and agency memoranda.”

The letter pointed to several AI use cases at DHS, including a “Predicted to Naturalize” AI tool, certain algorithms used by ICE, and CBP use of biometric surveillance. Ultimately, the signatories call on the agency “to cancel or suspend the use of any non-compliant AI tool” by the Dec. 1 deadline. 

DHS said in a statement to FedScoop that the agency “is committed to ensuring that our use of AI safeguards privacy, civil rights, and civil liberties, that it avoids inappropriate biases, and is transparent and explainable to our workforce and to those we serve. DHS is actively working to implement OMB’s requirements and guidance for AI governance, innovation, and risk management, including the minimum practices for rights and safety-impacting AI, and is on track to meet all associated timelines.”

The groups that signed the letter include the Service Employees International Union, the Virginia League of Women Voters, the AI Now Institute, and the American Jewish Committee, among many others. 

Paromita Shah, the founding executive director of Just Futures Law, which is leading the effort, said in an interview that “AI, essentially, is a black box. Things go in and you don’t know what the algorithm is doing. And then things come out, and you’re not really sure what they’re trying to say.” 

She continued: “The algorithm is not really disclosed to the public. We know nothing about who’s looking at these outputs from an AI program, and we don’t know what they’re doing, in real-time, to monitor it.” 

DHS said that though AI is used to assist its personnel in their work, the agency “does not use the outputs of AI systems as the sole basis for any law enforcement action or denial of benefits. We welcome dialogue with any interested party that wishes to learn more about how the Department is responsibly using AI to carry out and improve our mission.”

Shah expressed myriad concerns about DHS’s use of artificial intelligence, including issues of bias and discrimination. She added that the group had met with DHS about its concerns only recently, and said the agency has been slow to communicate with immigrant rights groups about its use of AI. 

“Many agencies have said that they’re not going to get rid of the human element as a safeguard,” Shah said. “I just don’t think the data is showing that the human element can really mitigate against the kind of bias and discrimination that can come out of these programs.” 

Another challenge is that some of these steps, required under the Biden administration’s AI rules, may not be completed before a new president is inaugurated next year, and Republicans have expressed growing interest in repealing the Biden administration’s AI executive order should former President Donald Trump win a second term. 

This story was updated Sept. 5, 2024 with comments from DHS.