AI risk management deadline hits federal agencies. Not all were ready.

DHS and DOJ are among the agencies lagging behind peers that have publicly posted updated AI inventories, as OMB requires, and outlined their changes to FedScoop.

The deadline for federal agencies to implement risk management practices for high-impact AI use cases — or terminate them — has come and gone, but a handful of departments are still working to complete their requirements.

FedScoop reached out to 28 federal agencies to inquire about the steps they have taken to ensure compliance by the April 3 deadline. Some agencies fulfilled the requirements, while others reclassified use cases or still have a couple of boxes to check. A few appear to have missed the deadline entirely.

As outlined by an Office of Management and Budget memorandum, uses considered high-impact are required to comply with minimum risk management practices, which include pre-deployment testing, impact assessments, adverse impact monitoring, adequate human training and assessments, appropriate fail-safes that minimize harm, consistent appeal processes, and options for end users to submit feedback. 

“Without these kinds of measures in place, some of the riskiest tools used by federal agencies are left without real oversight or mechanisms for agencies to validate performance and efficacy,” said Quinn Anex-Ries, a senior policy analyst focused on equity and civic tech at the Center for Democracy & Technology. “It’s not only heightening the risk of failed AI projects and wasted taxpayer dollars, but it also opens the American public to a host of potential AI harms.”

The Department of Labor said it has implemented all the required risk management practices for high-impact use cases, consistent with the OMB memo. The agency posted an updated inventory earlier this week with the appended risk management categories. 

“At this time, DOL has no active non‑compliant high‑risk AI use cases,” a spokesperson told FedScoop. “Any use case that did not meet federal standards has been paused or discontinued.”

While the agency did not identify risk levels of use cases in its prior inventory, a newly published version categorizes just one use case as high-impact: an AI-powered scrubber of personally identifiable information.   

Last week, NASA also posted an updated AI inventory. An agency spokesperson said in an email that AI use cases that don’t meet requirements have been removed, and NASA’s one high-impact AI use case has the proper guardrails in place. In the updated inventory, however, the agency said the development of monitoring protocols is still in progress and an independent review of the AI use case has not been completed.

Similarly, the Department of Veterans Affairs told FedScoop that it has complied with OMB’s requirements. The agency quietly uploaded an updated version of its AI inventory that featured new sections for the risk management practices. The VA filled in the nine compliance-related fields for its 90 deployed high-impact use cases, which include technology that assists in reviewing results from colonoscopies and breast exams. Four high-impact use cases were rolled back from deployed to pre-deployment or retired, making them exempt from the compliance requirements.

While a Department of State spokesperson said none of its AI use cases were decommissioned, one of its high-impact use cases has been retired in the inventory uploaded this month after appearing in a pilot phase in the inventory posted earlier this year. The agency’s other two high-impact use cases had fulfilled some of the risk management requirements, but neither had information about an established appeal process in the updated inventory.

The General Services Administration did not post an AI inventory when other agencies did earlier this year but has since published one. A GSA spokesperson said it has thus far not had to terminate any AI use case for non-compliance with its requirements, although its inventory does not categorize use cases by risk level.

The Environmental Protection Agency said it is “coordinating across internal offices to confirm the status of existing AI use cases and apply the necessary oversight and risk‑management practices.” The endeavor includes routine verification steps and ongoing monitoring to determine risk level. 

“In line with OMB guidance, EPA will take any necessary corrective actions as part of this standard process,” a spokesperson told FedScoop. “This effort is underway and reflects the Agency’s commitment to responsible and compliant use of emerging technologies.”

The EPA posted an updated AI inventory Tuesday. In it, the agency changed its designation of an AI-powered records management system from high-impact to not high-impact. The agency’s one deployed high-impact use case has not yet met all the requirements. 

The Department of Energy also reclassified the two deployed high-impact use cases in its April inventory upload. The two use cases, Copilot Studio and Copilot for Microsoft 365 at Oak Ridge National Laboratory, went from high-impact to not high-impact, thereby exempting them from risk management requirements. The Department of Energy did not respond to FedScoop’s request for comment.

Some agencies did not identify any high-impact use cases during their inventory processes, including the National Science Foundation, the Nuclear Regulatory Commission and the Small Business Administration.

Waivers

As part of the OMB memo, agencies must also publicly report determinations and waivers that they’ve submitted for high-impact use cases. 

Waivers act as a system-specific and context-specific determination that fulfilling the risk management requirements would “create an unacceptable impediment to critical agency operation,” according to the memo. 

Most agencies don’t mention waivers in their use cases, and a few include them as a possible option but never invoke one.

The DOE is one of the few agencies with a waiver in use. The use case, deployed at Los Alamos National Lab, is described as a tool to extract lessons learned from searches of relevant documents. It is categorized as not high-impact, though the risk management requirements are filled out. For the independent review requirement section, DOE said: “Agency CAIO has waived this minimum practice and reported such waiver to OMB.”

While used minimally now, waivers could begin cropping up more often as agencies continue to hone their approach, according to Anex-Ries. 

“Waivers are worth paying attention to,” Anex-Ries said. “That could give an indication about how risk management implementation is really going.”

Agencies without public updates

Some agencies did not respond to FedScoop’s request for comment and have not posted updated AI inventories. The departments of Transportation, Commerce, Health and Human Services and Justice are part of this group of laggards, as is the Department of Homeland Security. 

“Everybody is interested in how DHS is using AI,” said Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation. “There have been a number of reports about contracts with vendors that use these tools in very intrusive ways … so people want to know how DHS is monitoring things.”

One of the most controversial high-impact AI use cases currently deployed at DHS is an app called Mobile Fortify, used in law enforcement operations where agents take photos of an individual and confirm their identity through biometric matching against a database. The agency is also using AI to sift through tips, review mobile device data relevant to investigations and flag intentional misidentification.

Notably, DHS has been operating in a limited fashion during the partial government shutdown, which has lasted more than 50 days. The agency was impacted by the 43-day government shutdown last year, too.

“I don’t think the shutdown is an excuse for not meeting legal obligations,” West said. “People knew this deadline was coming up.”

In addition to the 365-day heads-up provided by the OMB memo, sources told FedScoop the risk management practices outlined are similar to those that the Biden administration put forth. 

“The current version issued under the Trump administration has some changes to the underlying previous memo, but in large part, retained a significant number of these risk management practices,” Anex-Ries said. 

This continuity, he said, indicates widespread support for implementing guardrails, rather than treating them as partisan issues.

“I highlight that because, really, agencies have been working on what it means to implement risk management practices since late March of 2024,” Anex-Ries said. “That’s a pretty decent runway even for government agencies.”

“At this point,” he added, “it’s pretty concerning to me if there are still federal agencies that are struggling to report out progress.”

Madison Alder and Matt Bracken contributed reporting to this article.
