With generative AI, GSA encouraged its workforce to ‘explore this space’

General Services Administration CIO Dave Shive said agency IT leaders had “fulsome visibility” into workers’ experimental gen AI phase, setting the stage for internal policy on the technology’s usage.
General Services Administration CIO Dave Shive speaks during the Elastic Public Sector Summit, produced by Scoop News Group, on March 13, 2024. (Scoop News Group photo)

When generative AI tools exploded into the public consciousness, technology leaders at the General Services Administration made the decision to let agency workers “explore this space.” 

For longtime GSA CIO Dave Shive, that decision led to some eye-opening — and head-scratching — observations of the GSA workforce. But it also provided a foundation for what would inform the agency’s generative AI policy.

“It was that transparency and visibility into what people were doing and how they were doing it and increasingly why they were doing it, in connection [with] the internal compute assets that exist in GSA that allows us to have a very broad exploration for things like generative AI across the agency now,” Shive said of the GSA’s experimental gen AI phase. 

Speaking Wednesday at the FedScoop-produced Elastic Public Sector Summit, Shive recalled the “fulsome visibility” that he and other GSA IT leaders had in the early days of the agency’s generative AI test phase. Thanks to GSA’s zero trust architecture, Shive said agency leaders were able to observe “every single action” taken by those workers who wanted to look into the tech’s capabilities.
Some workers simply explored beta sites to test how various gen AI tools functioned, Shive said, while others were “writing term papers using generative AI” and others connected those gen AI tools to “external public-facing data, which was perfectly fine.”

But there was a downside to GSA’s gen AI testing approach. 

“You also saw people trying to connect those tool sets to internal private datasets, and we needed to cut everything off,” Shive said. “We realized that we had to remind people of their data use obligations [and] their privacy obligations in this space.” 

To prevent any similar data-related miscues with gen AI, Shive and his team created a “lightweight compliance framework” for GSA workers who wanted to continue dipping their toes into the generative AI waters. Creating that ask-and-approval process seemed to fix the issue, Shive said.

“Unsurprisingly, people aren’t doing stupid things anymore, [like] trying to connect to private internal databases and things like that,” Shive said.  
For an agency that historically has had more data needs and requests than it can handle, “AI has been a force multiplier in allowing us to extract value out of our data stores in ways that were just never previously possible,” Shive said, citing AI’s automation and augmentative capabilities that enable GSA to “do deep inspection.”  

Ultimately, Shive said the GSA approach to generative AI aligns with its mission as a service provider to the federal government operating in a “data-rich environment” in which “nothing is trusted — no person, no device, no piece of data, and no application.” With those assumptions baked into GSA’s mission, a curious-yet-cautious approach to AI has formed. 

“We built an ecosystem of not just the monitoring and inspection, but proofing and authentication,” Shive said, “that every day a piece of data, every device, every human, or every artificial intelligence component is fully monitored, and we can act against that.”

Written by Matt Bracken

