Federal leaders tailor ‘sales pitch’ to curb AI hesitancy in agencies

As agencies continue to pursue the use of artificial intelligence technologies for their missions, federal leaders say they’re figuring out ways to address skepticism from their workers.
During panels and presentations at recent government technology-focused events, leaders from various federal agencies said they’ve run into hesitancy about AI from staff and provided examples of how they’re building trust in the emerging tech to encourage use.
Among their approaches: Providing education to get people comfortable, targeting AI solutions toward tasks no one wants to do, and thoroughly testing capabilities to build confidence. And while the leaders generally indicated AI use was beneficial, some said hesitancy isn’t necessarily a bad thing.
With “five generations in the workforce, there are skeptics, and there should be skeptics about AI in particular,” David Salvagnini, NASA’s chief AI and data officer, said at an ACT-IAC panel last week. “There’s a lot of cause to be skeptical about some of the AI and some of what’s being presented to us, and then there’s people who are super excited and see vast opportunity and potential in what AI has to offer.”
Building trust within the federal workforce comes as AI use among Americans is slowly growing. AI users are still in the minority in the United States, with roughly 39% indicating they use the technology, according to a recent Gallup survey of more than 3,100 U.S. adults. Even fewer use AI in their jobs. Moreover, that same survey found that people who use AI are twice as likely to trust the technology as non-users.
Mark Gorak, director of the Cyber Academic Engagement Office and principal director for Resources and Analysis at the Department of Defense, told those gathered at FedScoop’s FedTalks last week that reskilling workers to incorporate large language models and generative AI will be “key.” However, Gorak said anecdotally that when he asks federal workers how many of them have used AI that day, “almost no hands go up.”
“I have to encourage them to use AI. I use it for everything,” Gorak said.
Educating workers
At the Department of State, education has been crucial to adoption. In remarks at two recent events, Kelly Fletcher, State’s chief information officer, said that while the internal generative AI chatbot StateChat is currently being used by about half the agency’s workers, adoption wasn’t immediate.
During a discussion at the ACT-IAC event last week, Fletcher said that prior to StateChat’s rollout to the entire 80,000-worker agency, she was asking questions about things like the chatbot’s capability to handle thousands of users at a time. However, volume turned out not to be an immediate issue; instead, the challenge was educating people about the tool.
“It has taken a huge amount of education and training and very detailed and very focused education and training to get even up to like 45 to 50 thousand people,” Fletcher said.
Despite the chatbot sitting in the agency’s environment, bearing the “StateChat” name and being hosted on an internal homepage for government employees, Fletcher said that just one month ago an employee asked her whether using it was allowed.
Fletcher, who also made similar remarks about the tool at the Billington Cybersecurity Summit earlier this month, said something she “wildly underestimated with” the technology is “the amount of training and education and conversation required to get folks who would benefit greatly from it to use it.”
John Salamone, chief human capital officer at the House of Representatives’ Office of the Chief Administrative Officer, pointed to lack of information as a general hurdle to using new technologies, and said strategic planning can help address that.
“As a leader, we have to understand and appreciate that sometimes staff don’t know what they don’t know, and we have to put them in a situation where they’re going to learn — where they’re going to adapt to new technologies or new programs,” Salamone said at the ACT-IAC event.
Salamone said his team has a two-year strategic plan based on the congressional term, broken into six-month increments, so his team knows where it’s going. That approach has helped whether they were running a new program or rolling out new technology, he said.
Targeting toil
For some agencies, using AI to target burdensome administrative toil has been a selling point to get workers on board with use.
Allison Page, acting director of the people experience division at the Advanced Research Projects Agency for Health (ARPA-H), said during the ACT-IAC event that having their IT group work “right at the hip” with the human resources team has benefited the rollout of AI technologies.
As an example, Page said that when implementing an AI tool, the IT group went around to all the teams to ask workers about the hardest or most time-consuming tasks on their plates.
“It was a little bit of a sales pitch, honestly,” Page said. She said she encouraged her team to hear that pitch out, telling them “AI might sound scary to some people, but let’s give it a whirl, right? We’re supposed to be in a high-risk, high-reward organization. Let’s do it.”
Since then, ARPA-H has held information sessions for everyone at the agency about the tool to address how it could be used and what workers should be using it for.
“They had instructional designers also help with the messaging and those info sessions,” Page said. “So it was all very well thought out and also consistent.”
Similarly, Andrea Brandon, deputy assistant secretary of budget, finance, grants and acquisition at the Department of the Interior, said on a panel at ACT-IAC that when the agency approached workers about automating contract closeouts, for example, no one argued.
No one wants to do contract closeouts, so their reaction was to “have at it,” Brandon said. Now, the department is looking into grant closeouts as well.
The department has also used AI to help oversee grant summaries on USASpending.gov, ensuring they met the criteria for summaries that describe the project. Workers now have access to ChatGPT and Microsoft Copilot Chat, she said, which they’ve been encouraged to use to create those summaries.
Brandon said the department is “using a lot of AI now” and the technology is “slowly winning hearts and souls,” but management used it first for its oversight function.
“These are the kinds of things that I think is what helps win people … letting them know you don’t need to fear the new technology,” Brandon said. “We’re not trying to replace your jobs, but we do want to be efficient, and we want to make sure we’re compliant to regulations and laws.”
Building trust
At NASA, Salvagnini said his approach is giving workers access to tools they need and creating intellectual curiosity about how to use them, describing it as “really kind of a change management focus.”
He pointed to the process of developing an AI-based medical resource for astronauts as an example of building acceptance for a tool.
The use case is a large language model-enabled tool designed to help astronauts get answers to medical questions when they might be out of communication with mission control. That tool was initially called “Doc in a Box” but was eventually renamed the Crew Medical Officer Decision Assistant. “You can see the nuance difference, right?” he said.
“We’re not going to replace doctors with some AI, but what we are going to do is we’re going to augment a crew with some medical data that could help them, you know, in a time of need,” Salvagnini said.
He also called acceptance of the technology “multifaceted,” and highlighted the careful work that went into the development process. Building the tool involved an “enormous amount of data curation” by doctors to ensure the outputs were trustworthy.
With that process, NASA’s chief doctor became more comfortable with it and decided to rename the tool, Salvagnini said. “So it is emerging as a well-accepted and respected resource for that particular type of a use case.”