Inventories to be ‘more central part’ of understanding how agencies use AI under White House guidance

The White House is specifically seeking input on what should be made public in agency AI use case inventories as part of the Office of Management and Budget’s new guidance.
President Joe Biden walks to sign an executive order after delivering remarks on advancing the safe, secure, and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023. (Photo by BRENDAN SMIALOWSKI/AFP via Getty Images)

Federal agencies’ artificial intelligence use case inventories are intended to become a more expansive resource for the government and public under new draft guidance from the Office of Management and Budget.

The White House is trying to “expand the use case inventory” in terms of information being reported and the function of the disclosure as “a repository of public documentation,” Conrad Stosz, director of artificial intelligence in the Office of the Federal Chief Information Officer, told FedScoop in an interview.

“We expect that the inventory will become a more central part of the way that the OMB and that the public have insight into what agencies are doing with AI,” Stosz said. 

The annual public inventories, which were initially required under a Trump-era executive order, have so far lacked consistency and drawn criticism from academics and advocates as a result. A major 2022 Stanford report detailed compliance issues with the inventories in the first year of reporting, and FedScoop has continued to report on inconsistencies in those disclosures.

The government has so far disclosed more than 700 uses of AI, according to a consolidated list of use cases posted to AI.gov last month and a similar list compiled by FedScoop.

New requirements for the inventories outlined in OMB’s draft guidance, which details how federal agencies should carry out President Joe Biden’s sweeping executive order on AI, include adding information on safety- or rights-impacting uses, the risks those uses pose and how agencies are managing those risks.

Efforts to expand the inventory are important for transparency about how AI is being used to make decisions that impact the public’s lives and “to create trust that the government is using AI responsibly,” Stosz said. 

During the comment period on the new guidance, Stosz said the White House wants to hear from people on what should be included, while keeping in mind that not every question can be asked and there needs to be a degree of prioritization to “maintain the accuracy and usefulness” of the disclosures.

As the disclosure process is largely for the purpose of transparency, he said “it’s something that’s critical for us to have public input into how we go about collecting and publishing this information.” 

Guidance goal

The inventories — which are required of all agencies, except for the Department of Defense, the intelligence community agencies and independent regulatory agencies — were initially intended to create public transparency, give agencies insight into other government uses of the technology and provide the White House with information about current uses to better craft its guidance.

“When we were developing OMB’s draft guidance on AI, the breadth of the use cases in the inventory I think really showed how diverse federal agencies’ uses of AI are, and how OMB’s policy can’t really treat them all in exactly the same way,” Stosz said.

When asked whether there were concerning use cases reported in the inventories, Stosz said there weren’t any individually that he would point to, but underscored that AI’s potential to fall short of expectations, such as by embedding historical biases, is significant to its future use in government missions.

“It’s certainly not our intent to say that these are bad use cases, but that in order for them to be relied upon to help execute the central government missions, that they need to have some degree of greater safeguards in place,” Stosz said. “And that the risks involved and the ways in which agencies are managing those risks need to be made as transparent as possible for the public to ensure trust and accountability.”

Since AI can be used in myriad ways — including in sensitive contexts that bear on protecting human life, such as the Transportation Security Administration’s use of facial recognition technology at airport security checkpoints — OMB has focused on establishing guardrails but has not ruled out the potential to ban certain uses.

“We’ve considered many ideas and the document is still out for public comment; it’s not finalized, so no approach has been set in stone yet,” Stosz said. “Banning is currently not the approach that the draft takes, because we really want to focus on establishing safeguards that help you mitigate these risks and empower agencies to use AI in ways that are most critical.”

Stosz said, however, that the final decision to use AI, subject to the processes and transparency laid out in OMB’s guidance, rests with federal agencies.

Inventory evolution

As AI evolves and the government’s use of the technology does too, Stosz said “we’re seeing that some aspects of the inventory clearly need to change and evolve with it.” 

For example, there is currently increasing interest in generative AI, Stosz said, adding that the White House is eager to work with agencies as uses change.

The Department of Energy previously told FedScoop that it was able to create a more comprehensive inventory following “enhanced” guidance from the White House for the second year of reporting. Its 2023 inventory was roughly four times larger than the previous year’s and appeared to include very few, if any, of the uses reported that earlier year.

Stosz said conversations with agencies led to improvements in the 2023 guidance for inventories. The feedback they’ve heard in those conversations includes requests for clarity on certain definitions or exclusions and “the need to create even some of the practical aspects of collection instructions that align with the various different ways that agencies are organized or that they publish information publicly,” he said.

The inventories thus far have mostly been completed and published by larger departments on behalf of their smaller components, using methods such as “data calls” to obtain information. Despite guidance specifying what the URL for the public inventory should look like and requiring agencies to ensure their inventory was reflected on the AI.gov page listing them, compliance varied.

“There’s a lot of small improvements that we think will make a difference in how agencies can … collect and create consistent datasets that can be more easily machine-readable and shared across agencies consolidated into a single list,” Stosz said. 

More clearly defined AI leadership could help, according to Stosz.

Stosz said they’re hopeful the appointment of chief AI officers, as required by the executive order and defined under the new guidance, will create a “stronger and more clearly accountable leader for AI” and will make it easier for agencies “to collect, report on and ensure consistency with their inventories each year.”

Rebecca Heilweil contributed to this article.
