New House legislation would push agencies toward NIST’s AI framework
A bipartisan quartet of House lawmakers revealed new legislation Wednesday meant to rein in how federal agencies use and purchase artificial intelligence.
The proposal from Reps. Ted Lieu, D-Calif., Zach Nunn, R-Iowa, Don Beyer, D-Va., and Marcus Molinaro, R-N.Y., follows similar legislation unveiled in the Senate in November. It also signals Congress’ growing effort to regulate how the government uses and acquires artificial intelligence, particularly as the Biden administration encourages federal agencies to adopt the technology.
The Federal Artificial Intelligence Risk Management Act would direct the Office of Management and Budget to issue guidance requiring agencies to incorporate the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, which was introduced early last year, into their operations.
The Comptroller General, who leads the Government Accountability Office, would also study the impact of the framework on how agencies use AI, while OMB would report back to Congress on agency compliance.
The legislation would also require contractors to adopt aspects of the framework, and agencies would be provided with contract language to ensure vendor compliance. The Federal Acquisition Regulatory Council would also create regulations applying the framework to solicitations, acquisition requirements, and contract clauses.
Among other measures, OMB, with the help of the General Services Administration, would launch an initiative to provide agencies with AI acquisition expertise.
“As the federal government expands its use of innovative AI technology, it becomes increasingly important to safeguard against AI’s potential risks,” Beyer said in a statement. “Our bill, which would require the federal government to put into practice the excellent risk mitigation and AI safety frameworks developed by NIST, is a natural starting point.”