The Centers for Disease Control and Prevention is weighing the use of so-called “model cards” to detail key information about generative AI models it deploys, an agency data official said.
As part of its broader approach to AI governance, the CDC is considering, “at least as a minimum,” deploying model cards alongside its generative AI tools, Travis Hoppe, associate director for data science and analytics at the agency’s National Center for Health Statistics, said Tuesday. Model cards document what’s in a model and how it was made.
“There’s always a risk when running a model, and you need that context for use,” Hoppe said at AFCEA Bethesda’s Health IT Summit 2024. “You need all of the quantitative metrics … but you also need this kind of qualitative sense, and the model card does capture that.” That information could be useful for evaluating potential risks when someone is considering new uses for a system years after it was initially deployed, Hoppe explained.
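As an illustration of the qualitative context Hoppe describes, a model card can be as simple as a structured record pairing a model’s quantitative metrics with its intended use and limitations. The sketch below is hypothetical — the field names follow the widely used model-card format (Mitchell et al.), and none of the values reflect an actual CDC system:

```python
import json

# A minimal, hypothetical model card represented as a plain Python dict.
# All values are illustrative placeholders, not real agency data.
model_card = {
    "model_details": {
        "name": "example-summarizer",          # hypothetical model name
        "version": "1.0",
        "type": "generative language model",
    },
    "intended_use": "Drafting internal summaries; not for clinical decisions.",
    "training_data": "Description of the corpus the model was trained on.",
    "quantitative_metrics": {"rougeL": 0.41},  # placeholder evaluation score
    "limitations": "May produce inaccurate text; outputs require human review.",
}

def missing_fields(card, required=("model_details", "intended_use", "limitations")):
    """Return the required model-card sections absent from a card."""
    return [f for f in required if f not in card]

print(json.dumps(model_card["model_details"], indent=2))
print(missing_fields(model_card))
```

A check like `missing_fields` is one way an agency could flag incomplete documentation before a model — or a new use for an old one — is approved.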
The agency’s consideration of model cards comes as the CDC, along with many other federal agencies, works out its own approach to governing generative AI use. The guardrails that agencies develop will ultimately shape how the government interacts with a rapidly growing technology it is already using.
The CDC, for example, has started 15 generative AI pilots, Hoppe said, though he noted that those projects “are not particularly focused on public health impact.” Hoppe said the agency wanted to “tease out” things like security, how its IT infrastructure worked, and how employees interact with the tools before thinking about expanding uses in the rest of the agency.
Meanwhile, Hoppe said the agency is in the process of developing guidance for generative AI. While the CDC is looking to executive orders, NIST’s AI Risk Management Framework, and the Department of Health and Human Services’ Trustworthy AI Playbook, he said much of what already exists isn’t “fully prescriptive” of what agencies should do.
“So we’re starting to write out some of these very prescriptive things that we should be doing, and kind of adapting it for our specific mission, which is obviously focused on public health,” Hoppe said.
The panel discussion about generative AI featured several other HHS officials and was moderated by Paul Brubaker, deputy chief information officer for strategic integration of emerging concepts at the Department of Veterans Affairs Office of Information Technology.
Kevin Duvall, the Administration for Children and Families’ chief technology officer, said during the panel that his agency’s approach to generative AI is detailed in an interim policy that permits employee use of those tools with some constraints. That approach contrasts with the outright prohibitions on third-party generative AI tools that some other agencies have imposed.
Duvall said he doesn’t find it useful for the “government to artificially constrain something,” though he said there needs to be “checks and balances.”
“I really make a comparison to probably discussions we were having 20, 25 years ago about search engines. You know, search engines can give unreliable results, so can gen AI,” Duvall said.
One use case the agency has explored is grant-making, much of which is done through text, Duvall said, adding that the agency sees the technology as a “decision-assisting tool” and “not a decision-making tool.”