Republican lawmakers look to AI for agency rules review
Agencies could begin identifying redundant and outdated rules with the help of AI under legislation introduced in the House last week by Republican Rep. Blake Moore of Utah.
The Leveraging Artificial Intelligence to Streamline the Code of Federal Regulations Act of 2026 tasks the Office of Management and Budget with implementing an annual review process that uses an AI tool. The applicable agency would ultimately decide whether to act on any rule the tool identifies.
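The bill as described here doesn't specify how the tool would work, but the process it outlines follows a familiar human-in-the-loop pattern: an automated component flags candidate rules, and agency personnel make the final call. A minimal Python sketch of that pattern follows; all names, data, and the placeholder flagging heuristic are invented for illustration (a real system would put an AI model where the heuristic sits, and nothing here is drawn from the bill itself):

```python
from dataclasses import dataclass

@dataclass
class Rule:
    citation: str      # e.g., "00 CFR 1.1" (made up for the example)
    text: str
    last_revised: int  # year of last revision

@dataclass
class Finding:
    rule: Rule
    reason: str
    agency_decision: str = "pending"  # the agency, not the tool, decides

def flag_candidates(rules: list[Rule], cutoff_year: int = 2000) -> list[Finding]:
    """Stand-in for the AI component: flag rules that look outdated or redundant.

    A real tool would use a trained model; this placeholder applies two
    simple heuristics so the end-to-end flow is runnable.
    """
    findings: list[Finding] = []
    seen: dict[str, str] = {}  # rule text -> first citation with that text
    for rule in rules:
        if rule.last_revised < cutoff_year:
            findings.append(Finding(rule, f"not revised since {rule.last_revised}"))
        elif rule.text in seen:
            findings.append(Finding(rule, f"duplicates {seen[rule.text]}"))
        seen.setdefault(rule.text, rule.citation)
    return findings

def agency_review(findings: list[Finding], approved: set[str]) -> None:
    """Agency staff accept or reject each recommendation; nothing is cut automatically."""
    for f in findings:
        f.agency_decision = "revise" if f.rule.citation in approved else "retain"

if __name__ == "__main__":
    rules = [
        Rule("00 CFR 1.1", "Fax all forms to headquarters.", 1987),
        Rule("00 CFR 1.2", "Submit forms electronically.", 2019),
        Rule("00 CFR 9.9", "Submit forms electronically.", 2021),
    ]
    findings = flag_candidates(rules)
    agency_review(findings, approved={"00 CFR 1.1"})
    for f in findings:
        print(f"{f.rule.citation}: {f.reason} -> {f.agency_decision}")
```

The key design point, consistent with the bill's stated goal, is that the tool only produces recommendations; the decision to revise or retain stays with agency personnel.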
The goal is not to “make automatic cuts,” but rather to work in conjunction with agency personnel on recommendations, according to a Friday press release. The bill was co-sponsored by Republican Rep. Aaron Bean of Florida.
Companion legislation was also introduced in the Senate last year by a group of Republicans, including Jon Husted of Ohio, Joni Ernst of Iowa, Marsha Blackburn of Tennessee, Ted Budd of North Carolina, Jim Banks of Indiana, Pete Ricketts of Nebraska and Markwayne Mullin of Oklahoma.
The bill promoting AI as a rules review tool for federal agencies comes amid two tailwinds: a presidential push for agencies to expand AI adoption and a governmentwide focus on efficiency.
The Trump administration has worked to make it easier for agencies to adopt AI by tasking leaders with removing “unnecessary and bureaucratic requirements that inhibit innovation,” as outlined in an April 2025 memorandum. AI providers have also tried to speed adoption by lowering procurement costs.
At the same time, the Trump administration has pushed agencies to do more with less amid federal workforce reform and an overarching emphasis on efficiency. AI is part of that puzzle, frequently characterized as a way to speed up workflows. The administration’s Genesis Mission, for example, aims to “dramatically accelerate scientific discovery” via a national, integrated AI platform and AI agents.
But even as federal agencies increase their use of AI and the technology matures, drawbacks remain plentiful. AI systems and chatbots can fabricate information, misinterpret text or data, and raise data security concerns. AI agents are even less mature and more prone to failure, underscoring the need for guardrails.
“AI agents could misinterpret a user’s goal or take unethical actions to achieve a goal,” a September 2025 Government Accountability Office report found. “In one test, for example, AI agents tried to blackmail humans to avoid being shut down.”
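The GAO's findings don't prescribe a specific safeguard, but one common guardrail against exactly this failure mode is gating an agent's consequential actions behind explicit human approval. A minimal sketch of that pattern in Python, with all action names and interfaces hypothetical rather than taken from the report:

```python
from typing import Callable

# Actions the agent may take without review; anything else needs human sign-off.
SAFE_ACTIONS = {"search_rules", "summarize_rule", "draft_report"}

def guarded_execute(action: str,
                    handler: Callable[[str], str],
                    approver: Callable[[str], bool]) -> str:
    """Run an agent action only if it is pre-approved or a human signs off.

    `handler` performs the action; `approver` stands in for a human reviewer.
    A sketch of the human-in-the-loop guardrail pattern, not a policy engine.
    """
    if action in SAFE_ACTIONS or approver(action):
        return handler(action)
    return f"blocked: '{action}' requires human approval"

if __name__ == "__main__":
    run = lambda a: f"executed: {a}"
    deny = lambda a: False  # reviewer rejects everything in this demo
    print(guarded_execute("summarize_rule", run, deny))  # allowed outright
    print(guarded_execute("delete_rule", run, deny))     # blocked
```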
Agencies also grapple with adoption challenges around data access and quality, IT infrastructure complexities, talent gaps and resource constraints, according to 2025 AI compliance plans posted late last year.