Government developing AI use case repository for agencies facing challenges

The AI Community of Practice meets next month to identify "practice areas" where use cases are hitting hurdles.

The General Services Administration’s tech services arm plans to quickly develop a library of artificial intelligence use cases that agencies can refer to as they start to invest in the emerging technology.

GSA’s Technology Transformation Services launched a community of practice that will see agencies meet Feb. 12 to co-define “practice areas” where they see challenges adopting AI.

A use case library may reveal additional practice areas, Steve Babitch, head of TTS’s AI portfolio, said at a GSA event Wednesday.

“The harder we start to build that repository of use cases and build in a searchable database, if you will, that can sort of blossom into other facets as well — different themes or aspects of use cases,” Babitch said. “Maybe there’s actually a component around culture and mindset change or people development.”


Early practice areas TTS identified are acquisition, ethics, governance, tools and techniques, and possibly workforce readiness. Common early use cases across agencies include customer experience, human resources, advanced cybersecurity, and business processes.

For instance, analysts with the Census Bureau’s Economic Indicators Division (EID) developed a machine learning model to automate data coding.

The division releases economic indicators for monthly retail and construction data, and its construction programs are based on a data set of all the projects in the country. Using the data requires assigning a code that identifies the type of construction taking place, which was once a manual process.

“If you think about this, it’s the perfect machine learning project. If you can automate that coding, you can speed up and you can code more of the data,” said Rebecca Hutchinson, big data leader at EID. “And if you can code more of the data, we can improve our data quality and increase the number of data products we’re putting out for our data users.”

The model EID’s analysts created works with about 80% accuracy — meaning only 20% still needs to be manually coded.
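The workflow Hutchinson describes, automatically coding most records while routing the rest to human coders, can be sketched as a text classifier with a confidence threshold. Everything below (the construction codes, training examples, threshold value, and the naive Bayes approach itself) is an invented illustration; the article does not describe the bureau’s actual model.

```python
# Hypothetical sketch of auto-coding construction project descriptions.
# Codes, examples, and the 0.8 threshold are invented for illustration.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, code) pairs. Returns a naive Bayes model."""
    word_counts = defaultdict(Counter)  # code -> word frequencies
    code_counts = Counter()             # code -> number of examples
    vocab = set()
    for text, code in examples:
        words = text.lower().split()
        word_counts[code].update(words)
        code_counts[code] += 1
        vocab.update(words)
    return word_counts, code_counts, vocab

def classify(model, text, threshold=0.8):
    """Return (code, confidence) if confident enough to auto-code,
    else (None, confidence) to route the record to a human coder."""
    word_counts, code_counts, vocab = model
    total = sum(code_counts.values())
    scores = {}
    for code in code_counts:
        # log prior plus Laplace-smoothed log likelihoods per word
        score = math.log(code_counts[code] / total)
        denom = sum(word_counts[code].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[code][w] + 1) / denom)
        scores[code] = score
    # convert log scores to normalized probabilities
    m = max(scores.values())
    probs = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(probs.values())
    best = max(probs, key=probs.get)
    confidence = probs[best] / z
    if confidence >= threshold:
        return best, confidence
    return None, confidence  # below threshold: manual coding

examples = [
    ("new single family home construction", "RESIDENTIAL"),
    ("apartment building renovation", "RESIDENTIAL"),
    ("office tower construction downtown", "COMMERCIAL"),
    ("retail store build out", "COMMERCIAL"),
]
model = train(examples)
code, conf = classify(model, "single family home renovation")
```

In this toy setup, records the classifier labels with high confidence are auto-coded, and the remainder (the article’s roughly 20%) still goes to analysts, so accuracy gains translate directly into more data coded per release.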


Some of the analysts who helped develop the ML model came out of the bureau’s data science training program. Developed almost two years ago to train the bureau’s existing workforce of largely statisticians and survey analysts, the program is an alternative to hiring data scientists, which is “hard,” Hutchinson said.

All interested staff can apply to learn Python, ArcGIS and Tableau through a Coursera course. One-third of the bureau’s staff has completed training or is currently enrolled, walking away with ML and web scraping skills.

“Once you start training your staff with the skills, they are coming up with solutions,” Hutchinson said. “It was our staff that came up with the idea to do machine learning of construction data, and we’re just seeing that more and more.”

GSA is working to educate agencies that AI is an embedded technology that needs to be adopted within the acquisition framework, not a solution in itself, said Omar Saeb, Alliant 2 program manager in GSA’s Office of Information Technology Category.

Agencies also need to realize AI is still being invented and reinvented in academia, labs and the private sector, meaning there’s a lot of risk involved, said Sukumar Iyer, CEO at Brillient Corporation.


“There might be repeated failures before you zero in on one or a combination of two or three algorithms that might give you the results you desire,” Iyer said. “Sometimes, much to your surprise, you’re going to find the AI predictions are wrong … because the underlying data had issues in it.”
