DHS reveals new artificial intelligence playbook
The Department of Homeland Security released an artificial intelligence playbook on Tuesday aimed at guiding government use of generative AI systems. The document, written for both federal and local officials, represents the culmination of work on artificial intelligence under Secretary Alejandro Mayorkas' tenure before the upcoming change in administrations.
The guide emphasizes investing in mission-oriented pilots, building support from senior leadership, and assessing existing tools. For example, the guide notes that some AI systems are expensive to use through existing cloud contracts, while open-weight models might be less expensive. Law enforcement use cases, the document noted, might require models that can run in secure environments.
The guide also recommends coming up with a framework for responsible use of the technology, investing in technical talent, and measuring usability.
“The rapid evolution of GenAI presents tremendous opportunities for public sector organizations. DHS is at the forefront of federal efforts to responsibly harness the potential of AI technology,” Mayorkas said in a statement. “By sharing our experiences and best practices, we aim to empower other government agencies to leverage AI in a way that enhances their missions while safeguarding the rights and privacy of the individuals they serve.”
The guide also discusses pilot use cases for generative AI that DHS deemed successful, including a large language model-enhanced search function used by Homeland Security Investigations, which focuses on transnational criminal activity and violations of immigration law. The agency says that tool helped detect fentanyl networks and advance investigations into child exploitation. Other use cases included hazard mitigation planning and training immigration officers.
DHS has taken a series of steps to advance government use of artificial intelligence, including establishing an AI board with corporate and civil rights leaders, analyzing chemical and nuclear risks related to the technology, and releasing a new and far more detailed inventory of AI use cases.