DHS releases AI chemical safety recommendations, building off voluntary White House pledges

The report from the agency’s Countering Weapons of Mass Destruction Office offers guidance on how to combat AI-fueled chemical and biological threats.
The seal of the Department of Homeland Security is seen on a podium on Feb. 23, 2015, in Washington, D.C. (Photo by MANDEL NGAN/AFP via Getty Images)

The Department of Homeland Security has released its long-awaited report focused on reducing the ways artificial intelligence could exacerbate chemical and biological threats. 

The document, which was finalized in April but only made public after apparent suggestions from presidential advisers, details several new AI safety recommendations, including additional credentialing for access to high-risk scientific databases and standards for “unacceptably dangerous responses” from large language models. The recommendations build on voluntary White House AI safety commitments that several large technology companies, including OpenAI and Palantir, have signed onto. 

The report, which was created by DHS’s Countering Weapons of Mass Destruction Office, highlights elevated AI safety risks stemming from companies’ varied approaches, “inconsistent access” to chemical and biological threat expertise, and the dual-use nature of basic science. Known limitations in U.S. biological and chemical security regulations could also exacerbate dangerous research outcomes, the report notes. 

Mary Ellen Callahan, assistant secretary for the CWMD office, said in an interview with FedScoop last month that “all the frontier models have made voluntary commitments to the president from last year. Those [are] promises [like] safety and security, including focusing on high-risk threats, like [chemical, biological, radiological, and nuclear defense]. They all want to do a good job. They’re not quite sure exactly how to do that job.” 

Callahan continued: “We have to develop guidelines and procedures in collaboration with the U.S. government, the private sector, and academia to make sure that we understand how we try to approach these highly sensitive, high-risk areas of information.” 

The report makes a series of proposals, many of which focus on building consensus on chemical and biological threats within the national security, public health, and animal health communities. The recommendations include incorporating AI-specific CBRN risks into the National Biodefense Strategy Implementation Plan and National Security Memorandums 15, 16, and 19. The document also argues for building “common guidance among federal agencies on classification parameters” related to AI and CBRN. 

DHS is also recommending more specific actions that would directly affect those involved in AI, chemical, or biological research. For instance, the report pushes for government support for the development of “granular release practices” for source code and AI model weights, as well as “safe harbor” reporting processes. It suggests that biological labs consider designating a responsible official to “safeguard the digital-to-physical frontier.”

One notable proposal includes forming “criteria for tactical exclusion and/or protection of sensitive chemical and biological data — such as sequence information associated with pathogenicity or toxicity — from publicly accessible databases on which AI could train.” The agency is also recommending that the federal government create evaluation standards for LLMs “consisting of questions or lines of questioning and thresholds for unacceptably dangerous responses to improve the models.” It was not immediately clear from the report what this recommendation would entail. 

The report mentions “Know Your Customer” requirements for high-risk tools and services, an idea Callahan raised in her interview with FedScoop. Relatedly, the agency suggests forming government-focused CBRN threat awareness training — including training on specific high-risk materials and dissemination methods — for model evaluators and red teams, provided they participate in background checks or obtain security clearances. 

Additionally, the DHS office wants to build on the White House’s voluntary AI safety commitments by creating “a standard framework for the release of AI models for pre-release evaluations and red teaming of AI models by third parties and post-release reporting of potential hazards for foundation models to accrue information.” 

One of the main challenges outlined by DHS is that the country lacks an “overarching legal or regulatory framework” for AI research and development oversight. Because AI work is spread across agencies, it can also face differing information-sharing and regulatory hurdles.

The report identifies several opportunities to address these gaps. Existing law could be used to address AI’s impact on physical and life scientists, DHS said, while current frameworks for regulating commerce, including intellectual property, export controls, and foreign investment, could also help provide some AI oversight.
