
OpenAI, Meta and other tech firms sign onto White House AI commitments

The commitments include pre-release security testing for AI models, along with insider threat safeguards and cybersecurity investments focused on unreleased and proprietary model weights.
WASHINGTON, DC - AUGUST 07: The exterior of the White House from the North Lawn on August 7, 2022 in Washington, DC. (Photo by Sarah Silbiger/Getty Images)

Seven major companies building powerful artificial intelligence software have signed onto a new set of voluntary commitments to oversee how the technology is used. These commitments are focused on AI safety, cybersecurity, and public trust, and come as the White House develops an upcoming executive order and bipartisan legislation focused on AI.

The seven companies participating in the effort are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, according to a White House official who spoke with reporters on Thursday. Several countries, including Brazil, the UAE, India, and Israel, have also been consulted on the voluntary commitments.

These latest updates reflect the Biden administration’s growing focus on artificial intelligence and come just weeks after Senate Majority Leader Chuck Schumer introduced his own SAFE Innovation Framework, which focuses on both regulating and incubating the technology.

The commitments include pre-release internal and external security testing for AI models, as well as insider threat safeguards and cybersecurity investments focused on unreleased and proprietary model weights. Weights are the numerical parameters a neural network learns during training and are central to how a model behaves.


Along with commitments to research bias and privacy risks associated with the technology, the companies have pledged to support the development of new tools that could automatically label AI-generated content, including through the use of “watermarking.”

The commitments follow concern from the Biden administration over the use of AI. The leaders of several major AI firms, including OpenAI, Microsoft, and Anthropic, visited the White House in May to meet with Vice President Harris.

The White House’s chief cyber advisor, Anne Neuberger, met with executives from several tech companies, including OpenAI and Microsoft, in April to discuss cybersecurity risks created by these tools. At the same time, Neuberger urged the companies to consider ways to deploy AI watermarking, FedScoop reported in May.

Notably, there’s growing skepticism toward using voluntary measures and commitments to rein in Big Tech companies. 

On a call with reporters, a White House official said that in some cases these commitments would represent a change in the status quo for the companies. The official said the White House is already in conversation with members of both parties on AI issues and emphasized the upcoming executive order as well.


Microsoft President Brad Smith, Google President Kent Walker, Anthropic CEO Dario Amodei and Inflection AI CEO Mustafa Suleyman will meet with White House officials today to discuss the new commitments.

They will be joined by Meta President Nick Clegg, OpenAI President Greg Brockman and Amazon Web Services CEO Adam Selipsky.

In a statement shared with FedScoop, Kent Walker, president of global affairs at Google and Alphabet, said: “Today is a milestone in bringing the industry together to ensure that AI helps everyone. These commitments will support efforts by the G7, the OECD, and national governments to maximize AI’s benefits and minimize its risks.”

Meta President Nick Clegg said: “Meta welcomes this White House-led process, and we are pleased to make these voluntary commitments alongside others in the sector. They are an important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”

He added: “AI should benefit the whole of society. For that to happen, these powerful new technologies need to be built and deployed responsibly. As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society.”


In a blog post commenting on the commitments, Microsoft President Brad Smith wrote: “The commitments build upon strong pre-existing work by the U.S. Government (such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights) and are a natural complement to the measures that have been developed for high-risk applications in Europe and elsewhere.”

He added: “We look forward to their broad adoption by industry and inclusion in the ongoing global discussions about what an effective international code of conduct might look like.”

Editor’s note, 7/21/23 at 11:30 a.m. ET: This story was updated to include comment from Google, Meta and Microsoft.


Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
