CDC releases AI strategy, guidance with eye toward ‘agentic’ uses
The Centers for Disease Control and Prevention launched a strategy and guidance for use of artificial intelligence on Thursday, setting a direction for the agency’s own work and providing resources for public health officials across the country.
Those documents point to a desire to promote adoption of the technology, empower the workforce to use it, and ensure the tools are governed properly. But, notably, the publications encourage “agentic” or “deep research” uses of AI — those that can autonomously carry out specific tasks — which CDC is already tapping into.
Almost 10% of CDC’s roughly 100 AI use cases were agentic tools in 2025, according to the Department of Health and Human Services’ recently reported AI use case inventory. CDC’s agentic uses account for roughly a third of such deployments across the department.
As a result, the CDC’s new strategy includes specific language to leverage that technology to support public health, strengthen research and data management, and improve access to data. At the same time, the agency released specific guidance for state, tribal, local and territorial (STLT) public health authorities on the use of AI agents for research, based on experiences from its own exploration.
“One of the number one asks that we get from our partners is guidance around this technology,” Travis Hoppe, the CDC’s acting chief AI officer, told FedScoop. The AI inventory showed where the agency was using the technology, and the new materials followed through with more information, he said.
The promise of agentic tools for agencies like CDC is that they can go beyond uses like email summarization and prescriptive tasks to move things forward, Hoppe said. For example, the CDC is already using a deep research tool to speed up the process of reviewing literature, data, policy and other sources to inform decision-making.
A common task for CDC staff is doing a three- to four-hour deep dive into a subject, Hoppe said. That could range from something that’s emerging or topical, to something more long-term and aimed at building a set of references. That’s where the agentic AI use known as “deep research” comes in.
As part of its exploration, Hoppe said CDC did a long internal study on the use of deep research and found the tool to be effective in some applications and limited in others. The agency then applied what it learned and began deploying the technology in those areas.
“What’s really exciting about this is that we found so much good use through this tool and so many places where it worked and places where it fell short,” Hoppe said. “We wanted to communicate that out to our STLT partners.”
Specifically, the guidance recommends that those partners use the technology when there’s a clearly defined scope to a problem, if rapid information synthesis is needed, when an expert can validate those responses, or if information-rich sources are being used, such as CDC’s publications.
CDC recommended avoiding deep research when data sources are restricted or contain sensitive information about people, when professional judgment is required, and when a comprehensive literature review is needed, among other things.
A CDC first
While the broader HHS already has its own AI strategy, this publication is CDC’s first. That’s not to say AI is a new technology to the public health agency, however.
CDC has been using AI and machine learning “for decades” for public health at various levels, Hoppe told FedScoop, so the document is more of a “formalization of the policy.”
The HHS strategy, which came out in December, outlined the higher-level priorities for the department, but CDC wanted to focus on what was specific for public health, he said. The CDC’s plan is aimed at guiding the agency for the next five years.
Beyond encouraging agentic uses, the strategy includes objectives to establish controls for third-party AI systems, prioritize enterprise solutions from the department level, and leverage options through the Office of Personnel Management to recruit top AI talent, among other things.
A throughline in the department’s strategy is support for state and local partners.
Some agencies serve the public through service delivery or relationships with industry, Hoppe said, but CDC, in addition to serving the public, also serves its state and local partners.
In addition to the agentic research guidance, CDC also released considerations for the implementation of generative AI, similarly based on its own journey. Notably, CDC says it was the first agency in the federal government to roll out ChatGPT to all of its workers back in 2023, and counts uses aimed at stopping Legionnaires’ disease and its AI chatbot among its successes.
As for where CDC goes next, Hoppe said the agency is “staying at the forefront.” Anything within the CDC’s mission is being examined to see if the technology can help its “operational readiness,” he said. That includes identifying how a model would need to improve before the agency considers adopting it.
According to Hoppe, thinking about what needs to change is key because, as the technology continues its rapid evolution, the agency can revisit those tools to see if its needs have been met.
“I think that’s probably the most important thing for us, and how I’ve been pushing all the technologies,” Hoppe said. “We don’t just look at a technology and say: ‘Oh, this sucks. It doesn’t meet our criteria.’ Rather, it could meet the criteria if these new things would happen.”