GOP House committee leaders probe ‘conflicting definitions’ in NIST AI framework and AI ‘Bill of Rights’

A House Science Committee spokesperson says lawmakers' concerns center on varying definitions of the technology included in the two documents.

Two Republican members of Congress sent a letter late last week to OSTP director Arati Prabhakar voicing concern that the White House’s AI ‘Bill of Rights’ blueprint document is sending “conflicting messages about U.S. federal AI policy.” 

House Science Chairman Frank Lucas, R-Okla., and Oversight Chairman James Comer, R-Ky., were highly critical of the blueprint when compared with the AI risk management framework published Thursday by the National Institute of Standards and Technology.

House Science Committee Senior Advisor for Strategy and Director of Communications Heather Vaughan earlier this week told FedScoop that their primary concern is the “conflicting definitions” of artificial intelligence technology contained within the two documents.

NIST in its AI Risk Management Framework defines an AI system as “an engineered or machine-based system that can, for a given set of human-defined objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.”  


Meanwhile, the Biden administration’s AI ‘Bill of Rights’ blueprint document uses the term “autonomous systems” which are defined as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” 

Vaughan said: “What we don’t want is for the competing guidance to muddy the waters and make it harder for industry, academia, and federal/state/local government organizations to implement the best policies.”

She added: “It’s our hope that when the final draft of NIST’s AI Risk Management Framework is released tomorrow, the White House will send a clear message that this technical guidance is the tool the U.S. industry and government should utilize to manage risks to individuals, organizations and society.”

The missive follows concerns expressed by industry and academia about varying definitions within the two documents and how they relate to the definitions used by other federal government agencies. While they are both non-binding, AI experts have warned about the chilling effect that lack of specificity within framework documents could have on innovation both inside government and across the private sector.

Speaking with FedScoop, AWS Global Director of AI and Machine Learning Policy Nicole Foster said: “I think there is some inconsistencies between the two [documents] for sure. I think just at a basic level they don’t even define things the same way.”


She added: “I’m not sure if there are massive implications to how the Bill of Rights and the Framework define AI, because it’s non-binding. But when you’re talking about things that create binding requirements or regulatory requirements, or liability, then the definition really matters. We need to know what you’re talking about.”

Another expert on AI regulation, speaking anonymously in order to offer their candid views, said: “The ‘Bill of Rights’, for example, fails to distinguish between the originators of AI systems and companies deploying the systems, which has serious implications from a liability standpoint.” The expert added that the lack of specificity could prevent certain private sector companies from making new technology publicly available.

In their letter to the White House, the GOP lawmakers also criticized OSTP’s process of creating the AI Bill of Rights, saying that “while the Administration collected input for a year with the Blueprint, the NIST Framework has gone through a longer and much more rigorous, transparent, and open process with workshops, open RFIs, and multiple public drafts. That’s the kind of process necessary to develop practical, technical guidance and OSTP failed to do that for the Blueprint.”

Speaking at the launch of the NIST AI Risk Management Framework, White House science policy leader Alondra Nelson said the document was designed to be used side-by-side with the AI ‘Bill of Rights’, and noted that both the executive branch and the Department of Commerce had been involved in the creation of both frameworks.

“The United States is taking a principled, sophisticated approach to AI that advances American values and meets the complex challenges of this technology,” Nelson said.


She added: “It’s why, at the same time, NIST was at the table as OSTP developed the Blueprint for an AI Bill of Rights … helping us set out specific practices that can be used to address one critical category of risks: the potential threats posed by AI and automated systems to the rights of the American public … Complementary frameworks.”
