A House oversight subcommittee plans to discuss the state of artificial intelligence use case inventories at a Thursday hearing, putting congressional focus on the process.
A 2020 Trump administration executive order directed agencies to create annual inventories of their current and planned artificial intelligence use cases and to make a version of each inventory public. But those inventories have lacked consistency.
Ahead of the hearing before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation, a committee spokesperson confirmed that staff were aware of a major Stanford research paper outlining a range of problems with the inventories and with compliance with the executive order. Staff were also aware of ongoing FedScoop reporting analyzing the CIO Council’s evolving guidance and agencies’ inconsistent approaches to the disclosures.
The spokesperson said the committee expects these topics to come up during the hearing, which will feature three witnesses: White House Office of Science and Technology Policy Director Arati Prabhakar, Defense Department Chief Digital and Artificial Intelligence Officer Craig Martell, and Department of Homeland Security Chief Information Officer Eric Hysen.
“The hearing is a great opportunity both for Congress to communicate the importance of consistent and accurate AI use case inventories and for the executive branch to communicate what it may need, if anything, from Congress to publish inventories as required by EO 13960 and the FY 2023 NDAA,” Christie Lawrence, one of the co-authors of the report and an affiliate at Stanford’s RegLab, told FedScoop in advance of the hearing.
She added: “Since Stanford’s RegLab first published its implementation assessment in December 2022, senior-level attention on these inventories seems to have increased, with more agencies publishing inventories and the CIO Council publishing more detailed guidance.”
The inventories have attracted attention as Congress ramps up its effort to both regulate and support the development of the technology through new legislative proposals. The White House is also expected to announce a new executive order focused on AI. And on top of that, the Office of Management and Budget is supposed to release new guidance for federal agencies using the technology.
“It’s been very spotty and very inconsistent. And part of that is certainly the timing of the executive order was unfortunate being at the end of one administration,” Lynne Parker, a former deputy US chief technology officer who helped craft the executive order and AI inventory requirement, told FedScoop in a recent interview.
“The CIO Council did come up with some basic reporting,” she said. “It was, frankly, trying to check a box in the sense of, ‘let’s make sure that we have something reported,’ as opposed to thinking through how the information that’s reported could be useful.”
The Stanford analysis, which was published in December 2022 by the university’s RegLab, noted myriad problems with agency compliance with the executive order, including among large agencies with previously known AI use cases. Many agencies, the researchers noted at the time of publication, had failed to publish even their first inventory.
Agencies have also taken varied approaches to completing their inventories. For example, the Department of Energy told FedScoop that a surge in use cases documented in its updated inventory was due to “enhanced” guidance from the White House. OMB, meanwhile, has acknowledged issues with the reporting process.
After FedScoop asked about a ChatGPT use case attributed to an office within the Federal Aviation Administration, the Department of Transportation quickly removed reference to the technology. The National Archives and Records Administration published its use case list publicly after FedScoop asked the agency and OMB about its decision to release the list only on Max.gov, a federal government information-sharing platform.
The hearing may shine more light on the state of these inventories.
“[E]xecutive branch officials can spotlight positive developments, including ways in which agencies are using AI to better realize their missions, and concretely identify obstacles that may impede consistent reporting and broader strategic deliberation about how agencies develop, procure, and deploy trustworthy AI,” noted Lawrence.