
NASA chatbots, Treasury coding, OPM drafting: How agencies have deployed Claude

Federal agencies are working to halt their use of Anthropic tools amid a battle between the Claude maker and President Trump over how those services should be used.
The Claude AI logo displayed on a smartphone screen. (Photo by Samuel Boivin/NurPhoto via Getty Images)

A range of AI use cases — from coding assistance to workflow automation — face alteration or retirement as federal agencies work to comply with a Trump administration directive to remove Anthropic tools from their systems within the next six months. 

The recent clash between the Claude maker and President Donald Trump comes after federal officials have spent years building up AI capabilities in government, including tools from Anthropic. Now, a growing list of agencies are immediately dropping use of those services and, in some cases, replacing them with other providers.

In recent days, the Department of Treasury, the Office of Personnel Management, NASA, and the International Trade Administration all indicated to FedScoop they have stopped or plan to stop using Anthropic technologies in the wake of the ban announced via Truth Social. That adds to previous statements and internal communications at the Department of Health and Human Services, the State Department, and the General Services Administration.

Trump’s directive is the result of an escalated disagreement between Anthropic and the Department of Defense over how the technology should be used. While Trump accused Anthropic in his social media statement of attempting to “strong-arm” the DOD with its terms of service, CEO Dario Amodei said the company simply wanted to maintain safeguards to ensure that its technology would not be used in mass surveillance or fully autonomous weapons.


While the dispute began as a spat over defense-related uses, the breadth of the mandate is having widespread impact beyond the military. Many of the publicly known applications on the chopping block were aimed at saving the workforce time. 

Uses halting

Treasury Secretary Scott Bessent indicated on social media that the department would terminate use of the company’s products. A spokesman told FedScoop that the most common use of Anthropic products at the department was Claude Code for its software developers. 

According to the spokesman, “roughly 100 Treasury engineers were using Anthropic products for coding, and migration to alternatives has proceeded easily.” Software engineers are now using OpenAI’s Codex, Google’s Gemini, and are testing out xAI’s Grok, the spokesman said.

Meanwhile, at the State Department, the directive impacted its internal chatbot, StateChat, from which officials will be removing Claude, a source with direct knowledge of that system told FedScoop on the condition of anonymity. Per that source, the directive validates the department’s approach of basing that tool on multiple models. 


StateChat, which leverages Palantir, is the department’s premier AI use case and is used internally by thousands of staff for tasks like summarization, drafting, and translation. While a State Department spokesperson acknowledged the agency was taking action, they didn’t respond to questions about StateChat specifically. 

The directive is poised to similarly impact two NASA chatbots, though it’s not clear those tools have alternative models to which they could immediately pivot.

According to its AI use case inventory for 2025, NASA uses Claude for two such systems: a pilot chatbot to assist Goddard Space Flight Center employees with tasks like document editing and explaining code, and another chatbot planned for use at Langley Research Center to help workers process controlled unclassified information. Both entries note use of Claude 3.5 Sonnet.

NASA spokeswoman Cheryl Warner told FedScoop the agency “is evaluating its AI environment and will meet the six-month phase out requirement.” 

The Office of Personnel Management, for its part, halted its previously disclosed use of Claude across the agency “for summarization, drafting, and decision support.” OPM spokeswoman McLaurine Pinover told FedScoop in an email that use stopped shortly after the president’s Friday announcement. 


“We were still in the initial steps of implementing the tool and this should not affect functions at OPM,” Pinover said. According to the agency’s 2025 inventory, that use was in a sandbox phase.

The Department of Commerce’s International Trade Administration was aiming to use Claude to automate “comprehensive analytical reports, data visualizations, and documentation for research and policy analysis workflows,” per its 2025 AI use disclosure. In response to a FedScoop request for comment, a spokesperson said “ITA no longer uses Claude.”

Inventory disclosures

A FedScoop review of 2025 AI use case disclosures from 20 agencies found that roughly half mention at least one use of Claude or Anthropic tools specifically. That’s almost certainly an undercount.

Although the inventories provide a useful window into AI in government, they exclude most research and development uses, those for national security, and any within the Department of Defense. There’s also substantial variation in detail among agencies, meaning some disclosures may not name the model being leveraged for a particular use case, or its maker at all. Treasury, for example, doesn’t mention Claude anywhere in its disclosure.


Moreover, like other AI companies, Anthropic has partnered with mainstay government cloud service providers to offer its technology to federal agencies more rapidly as it works through the process of becoming independently FedRAMP certified. Anthropic, for example, has partnerships with Palantir, Amazon Web Services, and Google Cloud. It’s possible that publicly reported use cases are listed under those cloud providers, or under other companies leveraging those tools, without mentioning Claude.

Nevertheless, the inventories provide some indication of other agencies that may be similarly working to untangle Anthropic from their systems, including the Department of Energy’s national labs, the Department of Homeland Security, the Department of Labor, and the Department of the Interior.

For its part, the Department of Veterans Affairs was using Claude Sonnet for its Cybersecurity Operations Center, per its inventory. The agency attributed the use case to Andesite AI, which it said is using “AWS Bedrock (via Claude Sonnet) to help review potential incidents and perform investigations.” 

The VA, DOE, DHS, DOL and Interior didn’t respond to FedScoop’s requests for comment on plans to phase out Claude. 

$1 deal


Notably, Anthropic was one of several companies to offer its AI services to the U.S. government at a substantially discounted rate of $1 annually per agency via GSA’s OneGov initiative.

It’s not clear how many agencies had taken the provider up on that offer, but at least one of the adopters was the Department of Health and Human Services. It shut off access to Claude across the department immediately, but remains in a holding pattern absent federal guidance, according to communications viewed by FedScoop.

As FedScoop reported last week, acting deputy chief AI officer Arman Sharma told staff in an internal email that HHS would be “disabling enterprise Claude” as a result of the ban. In a subsequent email from Monday obtained by FedScoop, the office of the CAIO said the department had “temporarily disabled enterprise access to Claude” but that further action would come after more formal guidance.

Officials “are still awaiting more detailed federal guidance regarding the future use of applications and systems that leverage Claude or other Anthropic technologies,” per that email. 

Chief information officers for individual divisions were told to start contingency planning to ensure there aren’t disruptions if that guidance requires a transition from technologies that Anthropic supports, the message said.


“This planning should focus on understanding dependencies and identifying potential alternatives — not on implementing changes now,” the email said.

Additional reporting by Lindsey Wilkinson and Matt Bracken.
