
New DHS AI directive sets prohibited uses, expands acquisition governance

The directive, quietly issued Wednesday, sets forth DHS’s latest policy for using and acquiring the emerging technology.

The Department of Homeland Security unveiled a list of artificial intelligence uses that are prohibited for agency missions as part of a new directive quietly introduced this week.

The directive, which is DHS’s latest effort to create a guiding policy for the use and acquisition of AI, also sets governance requirements for how the department and its components should approach the technology — including how it should buy, test and operate it, and report any incidents involving its use.

While DHS has briefly noted in previous policy that department personnel are prohibited from using AI for discriminatory purposes, the latest policy expands on that, more thoroughly detailing the uses of AI and associated data that are forbidden.

Under the directive, DHS personnel are forbidden from:

  • Relying on AI outputs as the sole basis for determining law enforcement and civil enforcement actions, or for denying government benefits;
  • Using AI or its associated data to “make or support decisions based on the unlawful or improper consideration of race, ethnicity, gender, national origin, religion, sexual orientation, gender identity, age, nationality, medical condition, disability, emotional state, or future behavior predictions”;
  • Improperly profiling, targeting or discriminating against individuals based on the same considerations in the preceding bullet;
  • Using AI to conduct unlawful or improper systemic, indiscriminate, or large-scale monitoring, surveillance, or tracking of individuals;
  • Sharing department data or other outputs from using AI within the department with third parties to be used in ways that violate the law or other government or department policies; and
  • Using AI or its associated data in any way that violates the law or other government or department policies.

The remainder of the directive serves as a procedural update on the policy and requirements DHS parties must adhere to throughout the AI lifecycle, which encompasses “planning, designing, developing, deploying, and operating systems, services, techniques, software, and hardware by or on behalf of DHS,” it explains. That includes the acquisition of AI by or on behalf of the department and the development of requirements for AI acquisitions.

This latest directive, signed by DHS Under Secretary for Management Randolph Alles, supersedes a policy statement outgoing Secretary Alejandro Mayorkas issued in August 2023, when the department was taking its first steps toward institutionalizing AI operations, including launching a chief AI officer role.

In the waning days of the Biden administration, DHS has taken a flurry of AI-related actions as the department faces uncertainty over how the incoming Trump administration may address the use of the technology. Last week, the department issued an AI playbook. In December, DHS touted DHSChat, its new internal chatbot, as it released the latest version of its AI use case inventory, comprising 158 active AI applications.

Written by Billy Mitchell

Billy Mitchell is Senior Vice President and Executive Editor of Scoop News Group's editorial brands. He oversees operations, strategy and growth of SNG's award-winning tech publications, FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. After earning his journalism degree at Virginia Tech and winning the school's Excellence in Print Journalism award, Billy received his master's degree from New York University in magazine writing while interning at publications like Rolling Stone.
