Senate legislation to establish third-party AI audit guidelines is now bipartisan

The bill would direct the Department of Commerce’s NIST to work with federal agencies and stakeholders on developing guidelines for third-party AI evaluations.

A bill that would require the National Institute of Standards and Technology to create detailed guidance for third-party evaluators that work with artificial intelligence providers was officially introduced Wednesday night in the Senate, now with bipartisan sponsorship.

The legislation from Sen. John Hickenlooper, D-Colo., called the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act, now counts Sen. Shelley Moore Capito, R-W.Va., as its co-sponsor. Hickenlooper announced plans to introduce the bill earlier this month, but at the time it had no co-sponsor.

Under the bill, NIST would be required to work with agencies, industry, academia, and civil society to develop the guidance. According to a release from the lawmakers, the legislation would “create a pathway for independent evaluators, with a function similar to those in the financial industry and other sectors, to work with companies as a neutral third-party to verify their development, testing, and use of AI is in compliance with established guardrails.”

The legislation comes as members of Congress continue to eye various legislative approaches to address the risks and benefits of the booming technology. A bipartisan group of senators in May released a roadmap for AI policy that identified priorities for Senate action on the technology. That roadmap encouraged committees to support the “development and standardization” of testing and evaluation methods, including commercial auditing standards. 

Hickenlooper — who chairs the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security — said in a statement that “we have to move just as fast to get sensible guardrails in place to develop AI responsibly before it’s too late. Otherwise, AI could bring more harm than good to our lives.”

Capito, a member of the Senate Commerce Committee, said the “commonsense bill will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them.” Capito added that she looks forward to getting the VET AI Act and the bipartisan AI Research, Innovation, and Accountability Act (S. 3312), which would establish accountability mechanisms for high-impact AI applications, passed out of the committee soon. 

Specifically, the VET AI Act directs NIST to coordinate with the Department of Energy and the National Science Foundation to develop voluntary guidance for AI system developers and deployers so that they “conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.”

The bill would establish a 15-member advisory committee — including members from academia, consumer advocacy groups, public safety organizations, and those deploying, developing, and assessing AI — to recommend criteria for individual auditors and audit organizations seeking accreditation. It would also require NIST to study the capabilities of the entities that conduct internal and external AI assurance.

The legislation has early support from several policy and public interest organizations, including the Center for AI Policy, the Bipartisan Policy Center, the Federation of American Scientists, New America’s Open Technology Institute, and the Software & Information Industry Association. Booz Allen also offered its support for the legislation.

John Larson, executive vice president and head of Booz Allen’s AI business, said the legislation’s “emphasis on ‘consensus-driven and evidence-based guidelines’ resonates with our belief that a one-size-fits-all approach is impractical for AI systems.”

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.