Trump White House issues internal federal guidance on AI reporting

The updated guidance on use case reporting broadly resembles that of the Biden administration, though it abbreviates the process.
The North Lawn of the White House in Washington, D.C., on Nov. 18, 2022. (Photo by MANDEL NGAN/AFP via Getty Images)

The Office of Management and Budget has issued its version of guidance on annual artificial intelligence use case reporting within agencies, outlining a process similar to the previous administration's, albeit a slimmer one.

The guidance, obtained by FedScoop, is dated June 27 and has been shared internally within the federal government but hasn't been made public. It's accompanied by a document breaking down the questions in the various fields.

The move suggests that despite the Trump White House’s markedly different tone on AI, some details may not look so different. Despite President Donald Trump’s criticism of President Joe Biden’s handling of AI, including the immediate rescission of his AI executive order, the updated process will ask agencies to provide much of the same information, including the stage of development, whether it was developed in-house or purchased, and whether the use case involves personally identifiable information maintained by the agency, among other categories.

Ultimately, it sets a compilation deadline of Nov. 4 and a publication deadline of Dec. 2, maintaining a similar schedule to the previous year.

The new guidance comes several years after the first Trump administration, in its final weeks, created the requirement for public, annual inventorying of AI uses within federal agencies. That requirement was later codified. While the first few years were rife with issues, such as inconsistency among agencies and even errors, the Biden administration made efforts to improve the process.

Notably, Lynne Parker, who played a major role in establishing the AI inventory process during the first administration, has now returned to the second Trump administration as principal deputy director of the Office of Science and Technology Policy and executive director of the President's Council of Advisors on Science and Technology.

In December 2024, the Biden administration published a consolidated list of public use cases across the civilian agencies in the federal government, initially totaling 1,700 uses — more than doubling the amount of uses reported for the previous year. Several agencies still struggled to meet the deadline, however, and an updated list of the data maintained on GitHub now reflects 2,133 uses for the 2024 reporting year.

Since its rollout, the AI inventory has revealed myriad surprising use cases, including the FBI’s use of Amazon Rekognition in its 2023 inventory. Such a use case didn’t appear on the Justice Department’s 2024 inventory, however. 

To be sure, the guidance does make changes and generally slims the data collection exercise down to fewer categories.

Among those differences, the inventory will no longer reflect the Biden era’s designation of specific risk management practices for use cases that are rights- and safety-impacting. Instead, the Trump administration has opted for a similar designation known as “high-impact,” which it defined in previous guidance on use of the technology in government. 

It isn’t clear how many use cases that were rights- and safety-impacting are also high-impact, but they’ll still get similar treatment in that agencies will be required to answer additional questions about them. Those questions include whether an independent review or pre-deployment testing has been completed, as well as whether the tool has fail-safes to minimize harm.

Also unclear is how the growing movement to adopt generative AI throughout the federal government will impact the AI inventorying process. For the past several years, agencies have seemed interested in providing chatbots directly to users, which may allow employees to find their own use cases for the technology. Chatbots are among the types of uses the inventory has traditionally encompassed.

Some categories have been tweaked to remove certain information or to consolidate fields. That includes replacing individual questions about dates for each phase of deployment with a single question about operational development.

The guidance also removed a question about what Procurement Instrument Identifiers were used, which had been a portion of a question about whether a system was part of a contract. That had provided detail about how agencies were procuring AI. However, the new version of the question adds a prompt to disclose the vendor, if applicable.

Other categories have been removed entirely. That includes information about whether agencies had the ability to request computing resources for training and development, whether model training documentation was available, and whether uses that disseminate information to the public were following federal quality guidelines.

In the renamed high-impact category specifically, those uses will no longer have to report information about whether reasonable notice is provided to people who interact with the AI-enabled service, whether the tool can take action without human intervention, and whether there is an option for people to opt out of the AI functionality.

High-impact uses will also not have to disclose whether there are significant disparities in the performance of the model across demographic groups, which appears to reflect the Trump administration’s policies rejecting consideration of diversity, equity and inclusion. 

Several new categories have been added for tracking, including prompts to add links to any data that’s publicly disclosed as an open data asset or privacy impact assessments that are made public. Agencies will also be asked to say what problem the AI solution is aimed at solving.

This story was updated July 1 with additional detail on changes between the two sets of guidance.
