New House legislation would push agencies toward NIST’s AI framework

The bipartisan proposal follows similar legislation introduced in the Senate.
Rep. Ted Lieu, D-Calif., listens at a news conference at the U.S. Capitol Building on Sept. 19, 2023, in Washington, D.C. (Photo by Anna Moneymaker/Getty Images)

A bipartisan quartet of House lawmakers revealed new legislation Wednesday meant to rein in how federal agencies use and purchase artificial intelligence. 

The proposal from Reps. Ted Lieu, D-Calif., Zach Nunn, R-Iowa, Don Beyer, D-Va., and Marcus Molinaro, R-N.Y., follows similar legislation unveiled in the Senate in November. It also signals Congress’ growing effort to regulate how the government uses and acquires artificial intelligence, particularly as the Biden administration encourages federal agencies to adopt the technology.

The Federal Artificial Intelligence Risk Management Act would order the Office of Management and Budget to issue guidance requiring agencies to incorporate the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, which was introduced early last year, into their operations.

The Comptroller General, who leads the Government Accountability Office, would also study the impact of the framework on how agencies use AI, while OMB would report back to Congress on agency compliance. 

The legislation would also require contractors to adopt aspects of the framework, and agencies would be provided with contract language to ensure vendor compliance. The Federal Acquisition Regulatory Council would also create regulations applying the framework to solicitations, acquisition requirements, and contract clauses.

Among other additional measures, OMB, with the help of the General Services Administration, would also launch an initiative to help provide agencies with AI acquisition expertise. 

“As the federal government expands its use of innovative AI technology, it becomes increasingly important to safeguard against AI’s potential risks,” Beyer said in a statement. “Our bill, which would require the federal government to put into practice the excellent risk mitigation and AI safety frameworks developed by NIST, is a natural starting point.”

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.