DHS official: AI could exacerbate chemical and biological threats

The assistant secretary for DHS’s Countering Weapons of Mass Destruction office warned in an interview that AI could supercharge biological research — and invent new pathogens.
The Department of Homeland Security logo is seen at the ICE Cyber Crimes Center expanded facilities in Fairfax, Va., on July 22, 2015. (Photo by Paul J. Richards/AFP via Getty Images)

A Department of Homeland Security team dedicated to deterring the use of weapons of mass destruction is now studying how artificial intelligence could exacerbate these kinds of threats. In the wake of a report announced last month, one of the top officials with that office is pointing to a series of potential strategies to confront the ways AI tools could be deployed — even inadvertently — to synthesize dangerous chemical and biological materials.  

In an interview with FedScoop, Mary Ellen Callahan, the assistant secretary for the DHS Countering Weapons of Mass Destruction (CWMD) office, outlined how the U.S. government could deal with this kind of challenge, including looking at intellectual property and copyright enforcement and encouraging journals with large stores of biological and chemical research to introduce more stringent access requirements. The effort needs to be whole-of-government and international, she argued. 

“Both the [DHS] secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem,” she said. “We’re going to look to see if we can leverage the currently existing models.”

The interview comes after DHS submitted a report to the president looking at the intersection of chemical, biological, radiological, and nuclear (CBRN) threats and artificial intelligence. The president’s advisers have recommended making that report public, Callahan said, though only a fact sheet is available right now. AI labs were consulted, along with representatives of the Energy Department, think tanks, and model evaluators. The DOE is also working on a separate, classified report focused specifically on AI and nuclear threats. “The effort to produce the report regarding nuclear threats and AI is ongoing,” a spokesperson for the agency told FedScoop.

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: Can you start by explaining what the threat actually is, here? 

Assistant Secretary Mary Ellen Callahan: Artificial intelligence and generative artificial intelligence are about processing a lot of different data to try to find novel or new content. Let’s talk about biology, specifically: It is using artificial intelligence to update, enhance, and improve research. … We really want to maximize artificial intelligence for good for research while minimizing malign actors’ ability to leverage artificial intelligence for bad.

FS: Is the idea that someone could use something like OpenAI to just come up with something really bad instead of something really good?

MEC: We don’t want to give people ideas. But what we want to do is allow the really novel research, the important biological and chemical research breakthroughs, to happen, while still providing hurdles for bad guys trying to get their hands on, say, known recipes for pathogens [and] to make sure that we are leveraging the promise of AI while minimizing the peril.

FS: Are you making a distinction between chemical, biological, and nuclear? And is there a reason why one would be more relevant than another in terms of AI threats?

MEC: The Countering Weapons of Mass Destruction office here at DHS has been around for about five and a half years. It is intended to handle the prevention and detection of all weapons of mass destruction threats. That is usually summarized as chemical, biological, radiological, and nuclear (CBRN). It’s all on the prevention and detection side. We’re really focused on how we deter people before they actually trigger something. … The executive order asked us to talk about CBRN threats. We do in the report that is before the president right now generally talk about CBRN threats, and the fact sheet that is out publicly does talk about that.

We focus primarily on chemical and biological threats for two reasons: One is that access to chemical equations and bio-recipes is higher and more advanced. Both the bio and the chemical [information] are pretty available in common parlance and on the open internet, where they could be indexed by artificial intelligence models or frontier models.

… With regard to radiological and nuclear, the research associated with that is often on closed networks and maybe classified systems. The Department of Energy was asked to do a parallel report on nuclear threats specifically. Therefore, we’ve ceded that specific question about radiological or nuclear threats to the classified report the Department of Energy is working on right now.

FS: One of the points that’s made in the public fact sheet is the concern about companies taking heterogeneous approaches in terms of evaluation and red-teaming. Can you talk a little bit more about that?

MEC: All the frontier models have made voluntary commitments to the president from last year. Those [include] promises [about] safety and security, including focusing on high-risk threats like CBRN. They all want to do a good job. They’re not quite sure exactly how to do that job.

… We have to develop guidelines and procedures in collaboration with the U.S. government, the private sector, and academia to make sure that we understand how to approach these highly sensitive, high-risk areas of information, and that we create a culture of responsibility for the AI developers; those voluntary commitments are the first step in that. … But [we need] to make sure that all the folks within the ecosystem are looking at ways to deter bad actors from leveraging either new information or newly visible information that was distilled, as a mosaic, from elements generative AI identifies. So we’ve really got to look at the whole universe of how to respond to this.

FS: Another thing that was both interesting and worrisome to me was the concern that’s highlighted about limitations in regulation and enforcement, and where the holes might be when it comes to AI.

MEC: I am more sanguine now than I was when I started looking at that. So hopefully, that will give you some comfort. Really, we’re looking at a variety of different laws and factors. … We want to look at existing laws to see where they can have an impact, like, for example, export controls, intellectual property, tech transfer, foreign investments. [We want to] look at things that already exist that we could leverage to try to make this successful and useful.

Some of the authorities are spread throughout the federal government, but that actually could make the approach stronger, because then you have an ability to attack these issues in a couple of different ways, like the misuse of intellectual property, [or] if somebody is using something that is copyrighted in order to create a chemical or biological threat.

The international coordination piece is very important. There’s a really significant interest in leaning in together and working on this whole-of-community effort to establish appropriate guidelines, to provide additional restraints on models, and also to amplify that culture of responsibility.

We could look at updating regulatory requirements as the opportunity presents, but we’re not leading with regulations for a couple of reasons: Both the secretary and the president have said that regulation in AI may not be effective or helpful because it’s reactive. It’s also answering probably yesterday’s problem. 

FS: I’m curious about how you see the risks with open-source AI versus things that are not open source. I know that’s a big discussion in the AI community. 

MEC: There are pros and cons to open-source AI. From a CBRN perspective, understanding some of the weights may be helpful, but they also may reveal more information. … There’s a lot of information that’s on the internet and it’s going to be very hard to protect that existing content right now from AI. 

There are also a lot of high-end bio and chem databases that are behind firewalls, that are not on the internet, that are subscription-based, and that are really very valuable for biologists. One of the things we’re recommending [for] data that isn’t on the internet, or that isn’t readily available to use for models, is to actually have a higher standard, a higher customer standard, like a know-your-customer procedure. That benefits the promise of AI for good while deterring bad actors from trying to get access to it.
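To make the know-your-customer idea concrete, here is a minimal sketch in Python of how a tiered access check for a subscription research database might look. Everything in it is an assumption for illustration: the names (ResearchUser, grant_access), the collection labels, and the specific checks are hypothetical, not any real database’s API. The point is where the extra hurdle sits: sensitive collections require a verified institutional identity, a stated purpose, and an audit trail, while ordinary content passes through normal subscription checks.

# Hypothetical know-your-customer (KYC) gate for a subscription research
# database. All names and collection labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResearchUser:
    name: str
    institution: str            # claimed affiliation
    institution_verified: bool  # e.g., confirmed through institutional credentialing
    stated_purpose: str         # declared reason for the request

# Collections that get the higher, KYC-style bar (hypothetical labels).
SENSITIVE_COLLECTIONS = {"pathogen_genomics", "toxin_synthesis"}

def log_request(user: ResearchUser, collection: str) -> None:
    # Record the request so it can be reviewed later.
    print(f"AUDIT: {user.name} ({user.institution}) requested {collection}")

def grant_access(user: ResearchUser, collection: str) -> bool:
    """Apply a higher standard to sensitive collections than to ordinary ones."""
    if collection not in SENSITIVE_COLLECTIONS:
        return True  # ordinary subscription content: normal checks suffice
    # Sensitive content: require a verified identity and a recorded purpose.
    if not user.institution_verified or not user.stated_purpose.strip():
        return False
    log_request(user, collection)
    return True

alice = ResearchUser("Alice", "State University", True, "vaccine target screening")
bob = ResearchUser("Bob", "unknown", False, "")
print(grant_access(alice, "pathogen_genomics"))  # True, and the request is logged
print(grant_access(bob, "pathogen_genomics"))    # False: fails the higher bar

In practice, everything would turn on how identity is verified and who reviews the audit trail; the sketch only marks where the higher standard applies.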

FS: Have you had conversations with some of the academic organizations and what are those conversations like? Are they open to this?

MEC: We spoke to a lot of academic organizations, a lot of think tanks, and all the major models. I don’t want to answer the question specifically about high-end databases, but I can say that across the board, people were very supportive of having appropriate controls around sensitive data. 

FS: How do we deal with companies or countries that would not want to help with this? What’s the strategy there?

MEC: That’s the whole idea. Everyone has to work collaboratively on this whole-of-community effort. Right now, there is a real appetite for that. All of this is early, but I think that people understand that [this is] the year and the time to try to build a governance framework in which to think about these issues.

FS: I’m curious if you would call this a present threat or something that we should be worried about for the future. Is this something that could happen tomorrow, or a few years from now?

MEC: We tried to write the report to talk about present risk and near-term future risk. We can look at the speed with which AI models are developing and extrapolate what the impact will be. I want to highlight a couple of things with regard to the present-day risk and the near future. Right now, they say ChatGPT is like having an undergraduate biology student on your shoulder. There’s some discussion that, as these models develop, it would be like a graduate student on your shoulder.

I also want to note that we’re talking about CBRN harms that are created by AI, but there also could be unintentional harm. We very much want to put in what I’m calling hurdles, or obstacles, for people who want to do harm, for malign actors. But we also have to recognize that there could be unintentional harm created by well-intending actors.

The other thing that we want to do with this whole-of-community effort, with these guidelines and procedures that we’re encouraging to be created between international governments, the private sector, and academia, is to safeguard the digital-to-physical frontier. Right now, as I said, you could have an undergraduate student on your shoulder helping you search as you try to create a new chemical compound, but that, right now, is mostly on the computer and the screen; it is not yet able to be carried out in real life.

We’re really trying to make sure that the border between digital and physical remains as strong as it can be. That’s probably the … three-to-five-year risk: something happens and is capable of being translated into real life. It’s still going to be hard, though, hopefully.
