
CDC’s generative AI pilots include school closure tracking, website updates

The Centers for Disease Control and Prevention is testing out use cases for generative AI and sharing its approach with other federal partners as it plans to develop an agencywide AI strategy.
The David J. Sencer CDC Museum in Atlanta, Ga. (CDC photo)

An artificial intelligence service deployed within the Centers for Disease Control and Prevention is being put to the test for things like modernizing its websites and capturing information on school closures, the agency’s top data official said. 

The tool — Microsoft's Azure OpenAI service, configured for CDC use within the agency's cloud infrastructure — has both a chatbot component for employees to use and the ability for more technical staff to develop applications that connect to the service via an application programming interface (API), Alan Sim, CDC's chief data officer, said in an interview with FedScoop. 
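Sim did not describe the underlying code, but an application that connects to an Azure OpenAI deployment over its API typically looks something like the minimal Python sketch below. The endpoint, environment variable names, API version and deployment name here are illustrative assumptions, not CDC's actual configuration.

```python
# Minimal sketch of calling an Azure OpenAI deployment through its API.
# All names below (env vars, deployment name, API version) are hypothetical.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4o-internal",  # hypothetical deployment name inside the agency's Azure tenant
    messages=[
        {"role": "system", "content": "You are an internal assistant for agency staff."},
        {"role": "user", "content": "Summarize the key points of this draft announcement: ..."},
    ],
)

print(response.choices[0].message.content)
```

Keeping the deployment inside the agency's own cloud tenant is what allows staff to experiment, as Sim describes below, without sending data to third-party sites.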

“The idea here is that we can allow for our CDC staff to practice innovation and gen AI safely, within CDC boundaries, rather than going out to third-party sites,” Sim said. 

In total, CDC has 15 pilots using the agency’s generative AI capabilities, primarily through the Azure OpenAI service, a spokesperson said.


The exploration of generative AI uses comes as the CDC, like agencies throughout the federal government, works to craft its own approach to artificial intelligence. Roughly a year ago, CDC leadership got together to develop an AI roadmap, Sim said, and since then, it’s prioritized goals like working on the chatbot and developing guidance that it’s shared with others in the federal government.

Now, the agency is planning to develop an AI strategy that Sim said he’s hopeful will be released in late spring to early summer. That strategy will aim to set “high-level principles” for how the CDC wants to use AI to support the public, Sim said. 

“We’re still learning, but we’re trying our best to be innovative, responsive, and obviously sharing as we learn with our partners,” he said.

Piloted uses

The CDC’s pilots vary in application and topic, spanning HIV, polio containment, communications, public comment analysis, and survey design. So far, there’s been positive feedback from the pilots that generative AI has “significantly enhanced data analysis, efficiency, and productivity,” Sim said.


In one of the more operational pilots, for example, communications staff is using AI to assist with updates to the CDC’s websites across the agency.

That process tends to be “tedious” and “manual,” Sim said. To help make it easier, the Office of Communications is using an application, built by a data scientist at the agency, that connects to the Azure OpenAI API.

“This has allowed staff to begin summarizing, leveraging … the benefits of generative AI to help speed up the work,” Sim said. 
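FedScoop’s reporting doesn’t detail how that application is built, but a summarization helper written against the same API might look like the sketch below; the deployment name, environment variables, and truncation limit are assumptions for illustration only.

```python
# Hedged sketch of a web-page summarization helper built on Azure OpenAI.
# Deployment name, env vars, and the 12,000-character cutoff are hypothetical.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

def summarize_page(page_text: str) -> str:
    """Return a short summary of one web page's extracted text."""
    response = client.chat.completions.create(
        model="gpt-4o-internal",  # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Summarize this web page's content for an editor updating the page."},
            {"role": "user", "content": page_text[:12000]},  # truncate very long pages
        ],
    )
    return response.choices[0].message.content

# Example usage with placeholder text:
print(summarize_page("CDC recommends ... (extracted page text goes here)"))
```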

CDC is also looking to AI for tracking school closures, which it did during the COVID-19 pandemic to watch for potential outbreaks. 

That tracking — which included monitoring thousands of school district websites and various types of closures, from weather to disease outbreaks — was done manually. And although the funding for those efforts stopped in December 2022, Sim said, there’s “a recognition that it’s still important from a public health perspective to keep track of school closure information.” 


As a result, CDC developed an AI prototype to collect information from social media about closures at roughly 45,000 school districts and schools. The prototype is still being evaluated for effectiveness and for whether it can be scaled, but it’s an approach the agency is looking into, Sim said.
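The article doesn’t describe how the prototype works internally, but one speculative way a generative AI service could support this kind of tracking is by classifying social media posts as closure reports. The sketch below illustrates that idea; the labels, prompt, and deployment name are assumptions, not CDC’s design.

```python
# Speculative sketch: label social media posts as school-closure reports
# using an Azure OpenAI chat deployment. All names here are hypothetical.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

LABELS = ("closure", "not_closure")

def classify_post(post_text: str) -> str:
    """Label one post as reporting a school closure or not."""
    response = client.chat.completions.create(
        model="gpt-4o-internal",  # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: 'closure' if the post announces "
                        "a school or district closure, otherwise 'not_closure'."},
            {"role": "user", "content": post_text},
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer if answer in LABELS else "not_closure"

print(classify_post("All Springfield district schools closed Friday due to flu outbreak."))
```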

While the CDC isn’t using agency data with the generative AI service, training against relevant datasets could happen in the future, Sim said. “We haven’t gotten there yet, but that’s part of our roadmap is to sort of mature and learn from these initial pilots, and then just build upon that work,” he said. 

Generative AI guidance

In addition to working toward potential uses, CDC has also developed guidance for generative AI. That document “gets into some of the details” of leveraging generative AI tools responsibly, safely and equitably, Sim said. 

It’s also something the agency is sharing. Sim said CDC presented that guidance at the Chief Artificial Intelligence Officers Council and he’s shared the guidance with “many federal agencies.”


“We are just trying to do our part,” he said. “We are not necessarily experts, but we are sharing the progress that we’ve made.” 

Throughout the federal government, agencies have been creating their own generative AI policies for their employees that detail things like whether third-party tools are prohibited, what information shouldn’t be used in queries, and processes for approving potential uses of the technology. A recent Office of Management and Budget memo further directs agencies to “assess potential beneficial uses” of generative AI and establish safeguards. 

CDC declined to share a copy of its guidance.

Even though deploying an AI tool within CDC’s cloud infrastructure provides more security, Sim said there are always concerns. One of the reasons the agency is focused on machine-learning operations is so it can explore and provide guidance on best practices on things like ensuring developers are being transparent, being able to detect “model drift,” and certifying that a model isn’t amplifying bias.

Ultimately, CDC wants to take a proactive approach to AI and machine learning so the agency is prepared for the next outbreak response and to empower state, local, tribal and territorial partners to leverage their data to gain efficiencies where it’s possible, Sim said.


“Any efficiencies that we can gain through these types of innovations, we’re always trying to support and encourage,” Sim said. 


Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.
