Where Biden’s voluntary AI commitments go from here
As President-elect Donald Trump prepares to return to the White House, only a handful of technology companies are publicly commenting on whether they’re sticking to President Joe Biden’s voluntary artificial intelligence safety commitments.
The Biden administration’s artificial intelligence commitments, which were published last July, were meant to unite some of the country’s most influential AI companies around a series of safety, security, and trust principles. Promises included conducting internal and external security testing, making progress toward information sharing, and developing technology for detecting AI-generated content. Since those commitments were signed by an initial group of backers — including OpenAI and Anthropic — Apple, Palantir, and a flurry of other companies have joined.
It’s not clear that companies pursued these efforts only because of the White House pledge. But now, as Trump prepares to enter office again — having promised to undo much of the Biden administration’s work on artificial intelligence — a stress test for this approach to AI policymaking may be coming.
“Trump has already vowed to repeal the Biden White House executive order on AI, and it’s safe to say that he will not honor the voluntary commitments in full,” Nicol Turner Lee, a senior fellow at the Brookings Institution who directs the think tank’s Center for Technology Innovation, told FedScoop.
She continued: “First, there will be less federal pressure for AI companies to act in the public interest, and second, compliance and safety will be deprioritized under a Trump presidency. This also means that other issues where progress has been made on ethics, bias mitigation, and other accountability measures will not be top of mind for the incoming administration.”
The Biden administration’s AI commitments were voluntary, so they did not include explicit rules and were largely future-oriented. Still, there’s some evidence they had at least pushed companies to be transparent about their progress toward those goals. In 2023, Google shared updates related to its progress on meeting the commitments. This fall, Salesforce reported similar updates, including that it had improved on red-teaming and made changes that led to a reduction in toxic output in one feature.
MIT Technology Review reported on the pledges this past summer, finding some progress toward the companies’ red-teaming, cybersecurity and insider threat commitments, as well as societal risk research and AI watermarking technologies. But the outlet found more work was needed on some of the other commitments, including public reporting on system capabilities and third-party reporting avenues.
It’s not clear where those efforts will go under the new administration. The White House did not respond to a request for comment.
How companies responded
FedScoop reached out to 16 signatories about whether they planned to stick to the commitments: Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, OpenAI, Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, Stability, and Apple. Anthropic and Microsoft both declined to comment.
Inflection AI, a tech company founded in 2022 by Reid Hoffman, Karén Simonyan, and Mustafa Suleyman, said its commitments remain. “We’ll be active participants in the forefront of safe and responsible AI,” a company spokesperson said in an email.
Scale AI, a data-labeling firm that just last week released a military-focused large language model, shared a similar statement. “Scale intends to follow through on our White House voluntary commitments,” a spokesperson told FedScoop. “We look forward to working with the Trump administration to ensure American leadership in AI.”
A spokesperson for Nvidia said the company’s position on the commitments had not changed.
A spokesperson for OpenAI said the company was committed to working with the Trump administration, just as it had with the Biden administration. The spokesperson pointed to two tweets from the firm’s CEO, Sam Altman, who congratulated Trump and emphasized that “it is critically important that the US maintains its lead in developing AI with democratic values.”
Shannon Kellogg, vice president of AWS Public Policy, Americas, said in an email that Amazon is “committed to continued collaboration with policymakers, here in the U.S. and globally, and the AI community to advance the responsible and secure use of AI. We are dedicated to driving innovation on behalf of our customers while also establishing and implementing necessary safeguards.”
A spokesperson for IBM shared the following statement: “IBM has always believed AI should be transparent and explainable, and we will continue advocating for U.S. approaches to AI policy that provide robust protections for individuals without creating burdensome regulations for American companies of all sizes, such as mandatory third-party audits, or discussions of existential risk that aren’t grounded in science or reality.”
A representative for Salesforce said the following: “Agents are the future of AI, and as the leading agent-first enterprise, Salesforce will continue to innovate while creating guardrails in collaboration with global stakeholders to ensure trustworthy AI — including standards and policies for privacy, safety, ethical use, and transparency.”
Notably, xAI, the AI company founded by Elon Musk, was not publicly identified as ever signing on to the commitments. It’s not clear if the company was ever asked to join by the White House or if leadership declined to participate.
What’s next
One challenge for the voluntary commitments was that they did not legally bind companies to specific goals — and many of the focus areas were issues the firms were likely already working on. Prem Trivedi, policy director for New America’s Open Technology Institute, told FedScoop that they were broad and not “game-changing commitments.” Still, it’s possible the Trump White House will put less pressure on companies to publicly outline their progress.
“I think pressure on companies to be forthcoming there and to talk about what they’ve done with their training, what their training data is, how model inputs are structured, how they’ve thought about downstream risk that have a biased or discriminatory effect — all those things I think are likely to receive much less pressure from the administration,” Trivedi said. “More generally, there’ll be less of a focus on responsible and meaningful transparency.”
The voluntary commitments established last summer were a step toward creating accountability mechanisms for leading AI companies, Turner Lee said, adding that Trump’s leadership appears to already be allied with certain companies — meaning not all other firms “may be invited or asked to return to the table.” Overall, the administration is likely to focus intensely on competition with China and “securing a more America-first supply chain” for the U.S. technology industry, she said.
It’s unlikely that companies will stop investing in many of the issues outlined in the voluntary commitments, said Cobun Zweifel-Keegan, managing director of the International Association of Privacy Professionals.
“Even if they were to go away as an explicit mechanism, I think those same standards are not really evolving or being taken away very dramatically,” Zweifel-Keegan said. “They’re only as good as the strength of the commitments on paper, even without any kind of explicit mechanism for enforcement.”
FedScoop reporter Madison Alder contributed to this article.