AI national security memo aims to avoid U.S. ‘squandering’ its lead
The Biden administration published its anticipated national security memo on artificial intelligence Thursday, establishing a roadmap that aims to ensure U.S. competitiveness with adversaries on the technology, while still upholding democratic values in its deployment.
Specifically, the memo details more responsibilities for the Department of Commerce’s AI Safety Institute, directs agencies to evaluate models for risks and identify areas in which the AI supply chain could be disrupted, outlines actions to streamline acquisition of AI used for national security, and defines new governance practices for federal agencies through a new framework.
In remarks on the memo delivered Thursday at National Defense University, National Security Advisor Jake Sullivan highlighted AI's potential to advance the country's national security but spoke in dire terms about the cost of inaction.
“The stakes are high,” Sullivan said. “If we don’t act more intentionally to seize our advantages, if we don’t deploy AI more quickly and more comprehensively to strengthen our national security, we risk squandering our hard-earned lead.”
The memo, called for in President Joe Biden’s October 2023 executive order on AI, sets up a strategy within the U.S. government to rapidly deploy AI as part of national security systems and to aid AI development through projects such as the National Science Foundation’s National AI Research Resource. The document also directs agencies to identify areas in the AI supply chain that could be compromised and to evaluate the technology for the radiological, nuclear, chemical and biological threats it poses.
Divyansh Kaushik, a vice president at Beacon Global Strategies who focuses on emerging technology and national security, said the memo “takes a very, very broad view of national security.”
The memo looks “at the national security ecosystem writ large,” from NSF to the Department of Defense and intelligence communities, Kaushik said. “This is essentially a reflection of how national security — the definition of national security — has evolved over the last decade.”
Among the actions outlined in the memo, the AI Safety Institute is required, within 180 days, to “pursue voluntary preliminary testing of at least two frontier AI models” before their public release. That directive follows agreements the institute already struck with OpenAI and Anthropic to evaluate their models before and after public release.
The AI Safety Institute, which the memo designates as the official point of contact for the private sector, will also be required to develop or recommend ways to assess AI systems in the context of national security and public safety, and to provide developers with guidance on testing and risk mitigation for dual-use foundation models.
Other agencies also have new objectives for testing, evaluation and research. The Department of Energy, for example, is newly required to develop the capability to rapidly test AI models for the nuclear and radiological threats they pose. Meanwhile, DOE, the Department of Homeland Security, the Department of Health and Human Services, DOD and NSF are also mandated to support high-performance computing efforts for AI systems that could improve biosecurity.
The memo also outlines requirements to identify areas in which the government could educate and train the workforce and underscores the need for effective acquisition and procurement of the technology, building on existing work in those spaces.
While the Office of Management and Budget already established some guidance for AI acquisition in a recent memo, for example, the national security memo includes specific requirements for buying the technology for national security purposes. DOD and the Office of the Director of National Intelligence are required to establish a working group on the topic with OMB and make recommendations on procurement.
Echoing AI governance requirements already in place for other agencies, the new memo also mandates that covered agencies designate a chief AI officer to coordinate use and risk management of the technology, as well as establish an AI governance board. While some of the governance components were included in the memo itself, the rest of that direction appears in a separate document.
In addition to the memo, the administration published a separate governance framework that largely sets up a national security-specific companion to the governance process already in place for other federal government AI uses. That document establishes new prohibited uses of the technology — such as uses for casualty estimates or detecting an individual’s emotional state — and risk management practices for those deemed “high impact.”
High-impact uses appear similar to the Biden administration’s existing rights- and safety-impacting uses of AI in that both trigger additional risk management practices. Covered agencies will also have to maintain an inventory of high-impact AI uses and systems, but unlike the inventories required of agencies outside the national security space, these don’t carry a public reporting requirement.
The memo received praise from organizations like the Information Technology Industry Council, which represents tech companies. In a statement, ITI President and CEO Jason Oxman said the memo ensures the U.S. can leverage the benefits of AI for national security and highlights the roles of the AI Safety Institute and the National AI Research Resource.
“As implementation of the memorandum begins, we urge policymakers to leverage industry’s expertise and ensure collaboration with the tech sector remains a top priority,” Oxman said.
But other organizations, such as the American Civil Liberties Union, said the policy doesn’t go far enough.
“National security agencies must not be left to police themselves as they increasingly subject people in the United States to powerful new technologies,” said Patrick Toomey, deputy director of the ACLU’s National Security Project. He specifically pointed to a lack of transparency and independent oversight.
Sen. Mark R. Warner, D-Va., who chairs the chamber’s Select Committee on Intelligence, said he was “heartened” to see the administration acknowledge the consequences AI has for the economy, national security and democracy. He also encouraged “the administration to work in the coming months with Congress to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the AI supply chain.”
The calls to work together on national security priorities run in both directions.
During his remarks, Sullivan called on Congress to take several actions to support the memo, including making it easier for STEM graduates to obtain visas. He also noted that Congress hasn’t fully funded the science portion of the CHIPS and Science Act, even as adversaries like China ramp up their science spending.
“We want to work with Congress to make sure this and the other requirements within the AI national security memorandum are funded, and we’ve received strong bipartisan signals of support for this from the Hill,” he said. “So it’s time for us to collectively roll up our sleeves on a bicameral, bipartisan basis and get this done.”
Correction, 10/25/24: An earlier version of this article misspelled the name of Divyansh Kaushik.