GSA touts federal AI adoption despite unanswered ROI questions

Zach Whitman, GSA’s CAIO and chief data scientist, said his team is still working to quantify the value agencies get from using generative AI chatbots.
The General Services Administration (GSA) Headquarters building. (SAUL LOEB/AFP via Getty Images)

The General Services Administration has worked to make it easier for federal agencies to adopt the AI models of their choosing, but the agency has yet to determine what leaders can expect for return on investment. 

It’s still a work in progress, according to Zach Whitman, chief data scientist and chief AI officer at GSA.

“The big issue that we’re facing is how do we calculate ROI,” Whitman said during a panel discussion Thursday at the AI+ Expo in Washington, D.C. 

Questions around measuring AI’s “effectiveness and utility,” efficiency savings and impact on service delivery are still “very open,” he added, but “we know something is there.”

While the agency works to quantify the value federal agencies get from generative AI chatbots, it has already facilitated adoption through a centralized platform and discounted pricing. Federal agencies use GSA’s USAi platform to test AI models before procurement. After deciding which model to deploy, agencies gain access to it at near-zero prices through GSA’s OneGov deals.

“We’ve seen a lot of success so far in terms of general adoption,” Whitman said. 

GSA isn’t the only entity to double down on AI before figuring out how to measure success. In a Deloitte survey of 1,850 executives around the world, just 6% said they were able to measure an AI use case’s ROI within a year of deployment. Even so, nearly two-thirds of those surveyed planned to increase AI investment by between 6% and 19%.

As it tries to solve the ROI equation, GSA is working to answer other questions, too. 

One of the risks of deploying AI is model drift, the degradation in model performance that can occur over time as input data or other conditions change.

The USAi platform acts as a central hub where federal agencies can rerun model evaluations. Through the platform’s dashboard, agencies can see how a model performs on specific use cases and how likely it is to disclose information it shouldn’t, among other metrics.
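The drift check Whitman describes, rerunning evaluations and comparing the results over time, can be sketched roughly as follows. This is an illustrative outline, not GSA's implementation; the metric names and the threshold are assumptions chosen for the example.

```python
# Hypothetical sketch of a model-drift check: compare a model's latest
# evaluation scores against a stored baseline and flag metrics whose
# performance drop exceeds a tolerance. Metric names and the 0.05
# threshold are illustrative assumptions, not published GSA details.

def detect_drift(baseline_scores, current_scores, threshold=0.05):
    """Return per-metric drift (baseline minus current) and a flag that is
    True when any metric has degraded by more than the threshold."""
    drift = {}
    for metric, baseline in baseline_scores.items():
        current = current_scores.get(metric, 0.0)
        drift[metric] = baseline - current  # positive = performance dropped
    exceeded = any(d > threshold for d in drift.values())
    return drift, exceeded

# Example rerun: accuracy has slipped well past the tolerance,
# so the check flags the model for closer review.
baseline = {"accuracy": 0.91, "safe_refusal_rate": 0.98}
latest = {"accuracy": 0.84, "safe_refusal_rate": 0.97}
drift, flagged = detect_drift(baseline, latest, threshold=0.05)
print(drift, flagged)
```

How much drift is acceptable is exactly the per-use-case judgment the article describes: a threshold tolerable for one workload may be unacceptable for another.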

“Our goal is really just to give the agencies the data and then empower them to rerun it as much as they want,” Whitman told FedScoop on the sidelines of the event. 

While GSA hasn’t given agencies specific recommendations on how often to check for model drift, he said, it is working on a case-by-case basis with agencies to determine how much drift is detrimental to a given use case. 

“Our guidance would be kind of a platitude until we understand exactly their use case,” Whitman said. 

Written by Lindsey Wilkinson

Lindsey Wilkinson is a reporter for FedScoop in Washington, D.C., covering government IT with a focus on DHS, DOT, DOE and several other agencies. Before joining Scoop News Group, Lindsey closely covered the rise of generative AI in enterprises, exploring the evolution of AI governance and risk mitigation efforts. She has had bylines at CIO Dive, Homeland Security Today, The Crimson White and Alice magazine.