HHS reports 66% increase in AI uses as agencies release inventories
The Department of Health and Human Services was among the first federal agencies to release its 2024 AI use case inventory Monday, reporting a roughly 66% increase in uses from the previous year.
In a post accompanying the inventory, Steven Posnack, HHS’s principal deputy assistant secretary for technology policy and principal deputy national coordinator for health IT, highlighted the health agency’s increase from 163 to 271 use cases and noted the varying stages of development. The new figure, for example, includes uses in the operation and maintenance phase and in acquisition and development, as well as 16 uses that have been retired.
The inventory comes as agencies across the federal government are expected to release new and expanded lists of AI use cases for 2024. Agencies were first required to release annual and public AI use case inventories under a Trump-era executive order, but the reporting has been inconsistent and in some cases even incorrect, as documented by Stanford researchers, FedScoop reporting, and a government watchdog review.
This year’s inventories, which were due Monday, were anticipated to be more expansive than in years past, following a memorandum from the Office of Management and Budget and accompanying instructions that clarified the existing process and expanded the information agencies are required to report.
In addition to HHS, agencies including the Department of Veterans Affairs, the Department of Agriculture, the Environmental Protection Agency, the Social Security Administration, the Department of Homeland Security, and the Department of Housing and Urban Development have also released their 2024 inventories. All of those agencies similarly reported more use cases than they did the previous year, per their totals on a consolidated list of agency inventories the White House released in October 2023.
Artificial intelligence has been a focus for President Joe Biden, who last year issued an executive order aimed at promoting use of the technology while ensuring its safety and trustworthiness. That effort carried into the inventory process through the introduction of specific risk management practices for use cases that could impact Americans’ rights and safety.
Per the OMB memo, all rights- or safety-impacting AI had to be brought into compliance with the specific risk management practices — such as assessing the risks and data quality — by Dec. 1. If they weren’t in compliance, and the agency hadn’t received approval on a year-long extension, those uses had to be halted.
Agencies must also disclose on their inventories whether a use case is rights- or safety-impacting. HHS, for example, listed two rights-impacting and two safety-impacting uses. The two rights-impacting uses, by the Health Resources and Services Administration, are tools to support evaluation of scholarship applications for medical students. Both use cases were in the “initiated” phase.
Meanwhile, the two safety-impacting uses focused on data about family separations and children who have left the care of the Office of Refugee Resettlement. Both uses were under the Administration for Children and Families and were in an acquisition and/or development phase.
HUD, for its part, reported a use case that was both rights- and safety-impacting. That use case deploys Google Translate on the department’s public-facing website to provide information in other languages. It is currently in an operation and maintenance phase.
The White House didn’t immediately respond to a request for comment on the findings and what to expect from this year’s inventories.