Senate bill eyes agency reviews of AI systems before deployment

The legislation from Democratic Sens. Welch and Luján would require NIST to come up with a “trustworthy-by-design framework” for agencies to follow.
Sen. Peter Welch, D-Vt., speaks with Sen. Laphonza Butler, D-Calif., after a Senate Judiciary Committee hearing at the U.S. Capitol on July 11, 2024 in Washington, D.C. (Photo by Bonnie Cash/Getty Images)

A pair of Senate Democrats are pushing a new bill for a more rigorous, systematic review of artificial intelligence systems before those tools are put to use by the federal government. 

The Trustworthy by Design Artificial Intelligence Act from Sens. Peter Welch, D-Vt., and Ben Ray Luján, D-N.M., would task the National Institute of Standards and Technology with creating a “trustworthy-by-design framework” for AI products, a move that in some ways mirrors the Cybersecurity and Infrastructure Security Agency’s secure-by-design initiative.  

Under the TBD AI Act, NIST would also be required to come up with definitions for “key trustworthiness evaluation criteria” that federal agencies would follow, and select which AI system components would be subject to evaluations throughout every step of the development process. 

“With great power comes great responsibility — we need to ensure AI is used safely,” Welch said in a statement. “That includes taking action to ensure the government remains a leader in the ethical deployment of AI and setting clear guidelines that provide Americans with trustworthy and fair services. This bill will establish best practices and manage risks of AI.” 

Luján said the legislation would support AI innovations, including those he’s observed with the New Mexico AI Consortium. “These innovations have great potential if we establish clear guidelines on how to keep AI secure, resilient, transparent, and fair,” he said, adding that the bill should result in the accelerated “development of AI guidelines that build consumer trust and pave the road for innovation.” 

The TBD AI Act lays out how federal agencies would be expected to use the NIST-created framework to ensure that the AI systems and tools they use are deployed in a trustworthy manner. The bill calls for agency heads to publicly report on compliance measures and evaluation status for “all covered AI systems” and deliver an additional report to Congress within three years of the law’s enactment on AI deployment. Such systems already in use by agencies would be evaluated via the new framework and required to either meet the guidelines within two years or be sunset by the agency.

NIST’s director would have some discretion to pull from other guidelines and best practices when compiling this new framework. The legislation envisions that framework as a living document, requiring NIST’s director to make “periodic updates” to it, at least once a year. 

Trustworthiness is the North Star of the legislation, a concept defined in the bill text as AI systems built with validity and reliability, safety, security, resiliency, transparency and accountability, explainability and interpretability, privacy, fairness and absence of bias, and “other matters relating to safety, security, or trustworthiness as the [NIST] Director considers appropriate.”

The legislation is supported by several tech policy nonprofits and advocacy groups, including Public Citizen, Public Knowledge, Encode, Accountable Tech and the Transparency Coalition.

“It’s common sense that the government should ensure that the AI tools it is using are trustworthy, unbiased, fair and understandable. But we can’t rely on AI companies to deliver such trustworthy tools, without standards in place,” Robert Weissman, co-president of Public Citizen, said in a statement. “The Trustworthy by Design AI Act would convert common sense into policy, strengthening government operations and setting a standard for the private market.”

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity. Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star. You can reach him at matt.bracken@scoopnewsgroup.com.
