Congressional panel outlines five guardrails for AI use in House 

Human oversight, comprehensive policies, and testing and evaluation are among the guardrails in a new “flash report” from the Committee on House Administration.
The U.S. Capitol on March 10, 2022. (Photo by Daniel Slim/AFP via Getty Images)

A House panel has outlined five guardrails for deployment of artificial intelligence tools in the chamber, providing more detailed guidance as lawmakers and staff explore the technology.

The Committee on House Administration released the guardrails in a “flash report” on Wednesday, along with an update on the committee’s work exploring AI in the legislative branch. The guardrails are human oversight and decision-making; clear and comprehensive policies; robust testing and evaluation; transparency and disclosure; and education and upskilling.

“These are intended to be general, so that many House Offices can independently apply them to a wide variety of different internal policies, practices, and procedures,” the report said. “House Committees and Member Offices can use these to inform their internal AI practices. These are intended to be applied to any AI tool or technology in use in the House.”

The report comes as the committee and its Subcommittee on Modernization have focused on AI strategy and implementation in the House, and is the fifth such document it has put out since September 2023.


According to the report, the guardrails are a product of a roundtable the committee held in March that included participants such as the National Institute of Standards and Technology’s Elham Tabassi, the Defense Department’s John Turner, the Federation of American Scientists’ Jennifer Pahlka, the House chief administrative officer, the clerk of the House, and senior staff from lawmakers’ offices.

“The roundtable represented the first known instance of elected officials directly discussing AI’s use in parliamentary operations,” the report said. The report added that templates for the discussion were also shared with the think tank Bússola Tech, which works on modernization of parliaments and legislatures.

Already, members of Congress are experimenting with AI tools for things like research assistance and drafting, though use doesn’t appear widespread. Meanwhile, both chambers have introduced policies to rein in use. In the House, the CAO has approved only ChatGPT Plus, while the Senate has allowed use of ChatGPT, Microsoft Bing Chat, and Google Bard — with specific guardrails.

Interestingly, AI was used in the drafting of the committee’s report, modeling the transparency guardrail the committee outlined. A footnote in the document discloses that “early drafts of this document were written by humans. An AI tool was used in the middle of the drafting process to research editorial clarity and succinctness. Subsequent reviews and approvals were human.”

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.