
NIST AI framework must take global catastrophic risk into account, say UC Berkeley researchers

The academics cite arguments by theorists that instability caused by new AI systems could increase the probability of nuclear war.
(Photo: The U.S. Department of Commerce's National Institute of Standards and Technology building in Boulder, Colorado. Dana Romanoff/Getty Images)

Researchers from the University of California, Berkeley have called on the National Institute of Standards and Technology (NIST) to explicitly consider the threat of AI-triggered global catastrophes in its new framework intended to govern use of the technology across the federal government.

In an evidence paper submitted to NIST on Sept. 15, the team of researchers warned that policymakers must consider the global risks posed by AI because of its unique ability to scale and the widespread application of the technology to areas of critical national importance.

“Increasingly advanced and general AI models such as GPT-3 could pose societal catastrophic risks, including potential for correlated robustness failures across multiple high-stakes application domains such as critical infrastructure,” the researchers said in their submission. GPT-3 is an artificial intelligence system built by OpenAI, a research company co-founded by Elon Musk; trained on vast amounts of text, it can generate strikingly human-like writing.

As an example of near-term global catastrophic risk, the UC Berkeley academics cited recent arguments by nuclear deterrence theorists that developments in the field of AI could increase the probability of nuclear war by reducing the stability of nuclear forces.


The submission was filed in response to a request for evidence NIST issued in June as part of its effort to identify and manage bias in artificial intelligence. That request feeds into NIST’s larger project to establish a framework governing the use of AI across federal government agencies.

Global catastrophic risk is one of three broad categories of concern to the researchers, along with the risks AI poses to human rights and well-being, and to democracy and security.

In their submission, the researchers also offered detailed responses to requests for information across 12 specific areas, including governance, inclusiveness, and the characteristics of trustworthy AI systems.

It comes after a group of civil rights, tech and other advocacy organizations last week called on NIST to recommend steps to ensure nondiscriminatory and equitable outcomes in the final draft of its Proposal for Identifying and Managing Bias in AI.

The draft document is part of NIST’s broader work to develop a risk management framework for trustworthy and responsible AI, an effort launched in late 2020.
