
White House science adviser defends ‘conflicting’ AI frameworks released by Biden admin

Arati Prabhakar said the White House AI Blueprint and the NIST AI framework "are not contradictory," in response to queries from House lawmakers.
Photo: Arati Prabhakar, then director of DARPA, speaks onstage with Jon Fortt, co-anchor of CNBC’s Squawk Alley, at Yerba Buena Center for the Arts in San Francisco on October 20, 2016. (Mike Windle/Getty Images for Vanity Fair)

The Biden administration’s AI ‘Bill of Rights’ Blueprint and the NIST AI Risk Management Framework do not send conflicting messages to federal agencies and private sector companies attempting to implement the two AI safety frameworks within their internal systems, according to the director of the White House Office of Science and Technology Policy.

In a letter obtained exclusively by FedScoop, Arati Prabhakar responded to concerns raised by senior House lawmakers on the House Science, Space and Technology Committee and the House Oversight Committee over apparent contradictions in definitions of AI used in the documents.

“These documents are not contradictory. For example, in terms of the definition of AI, the Blueprint does not adopt a definition of AI, but instead focuses on the broader set of ‘automated systems,’” Prabhakar wrote in a letter sent to House Science Chairman Frank Lucas, R-Okla., and Oversight Chairman James Comer, R-Ky., a few months ago.

“Furthermore, both the AI RMF and the Blueprint propose that meaningful access to an AI system for evaluation should incorporate measures to protect intellectual property,” Prabhakar added.


In the letter, Prabhakar also described the “critical roles” both documents play in managing risks from AI and automated systems, and said they illustrate how closely the White House and NIST are working together on future regulation of the technology.

The two Republican leaders sent a letter in January to the OSTP director voicing concern that the White House’s AI ‘Bill of Rights’ blueprint document is sending “conflicting messages about U.S. federal AI policy.”

Lucas and Comer were highly critical of the White House blueprint when compared with the NIST AI Risk Management Framework.

Prabhakar’s letter also pointed to the close partnership between NIST and OSTP on AI policymaking and to both offices’ extensive engagement with industry and civil society stakeholders in crafting AI policy.

She also highlighted that the AI ‘Bill of Rights’ document recognizes the need to protect technology companies’ intellectual property. Although it calls for the use of confidentiality waivers for designers, developers and deployers of automated systems, it says that such waivers should incorporate “measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate.”

Commerce Secretary Gina Raimondo said in April that NIST’s AI framework represents the “gold standard” for the regulatory guidance of AI technology and the framework has also been popular with the tech industry.


This came after the Biden administration in October 2022 published its AI ‘Bill of Rights’ Blueprint, which consists of five key principles for regulating the technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

Lucas and Comer’s engagement with OSTP earlier this year over conflicting messages from the Biden administration on AI policy followed concerns from industry and academia about the varying definitions in the two documents and how those definitions relate to the ones used by other federal agencies.

While both documents are non-binding, AI experts and lawmakers have warned about the chilling effect that a lack of specificity in framework documents could have on innovation both inside government and across the private sector.

“We’re at a critical juncture with the development of AI and it’s crucial we get this right. We need to give companies useful tools so that AI is developed in a trustworthy fashion, and we need to make sure we’re empowering American businesses to stay at the cutting edge of this competitive industry,” Chairman Lucas said in a statement to FedScoop.

“That’s why our National AI Initiative called for a NIST Risk Management Framework. Any discrepancies between that guidance and other White House documents can create confusion for industry. We can’t afford that because it will reduce our ability to develop and deploy safe, trustworthy, and reliable AI technologies,” he added.


Meanwhile, the White House has repeatedly said the two AI documents were created for different purposes but designed to be used side-by-side and noted that both the executive branch and the Department of Commerce had been involved in the creation of both frameworks.

OSTP spokesperson Subhan Cheema said: “President Biden has been clear that companies have a fundamental responsibility to ensure their products are safe before they are released to the public, and that innovation must not come at the expense of people’s rights and safety. That’s why the Administration has moved with urgency to advance responsible innovation that manages the risks posed by AI and seizes its promise—including by securing voluntary commitments from seven leading AI companies that will help move us toward AI development that is more safe, secure, and trustworthy.”

“These commitments are a critical step forward, and build on the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. The Administration is also currently developing an executive order that will ensure the federal government is doing everything in its power to support responsible innovation and protect people’s rights and safety, and will also pursue bipartisan legislation to help America lead the way in responsible innovation,” Cheema added.

Editor’s note, 8/2/23: This story was updated to add further context about NIST’s AI Risk Management Framework and prior concerns raised by AI experts.
