Anthropic makes its pitch to DC, warning China is ‘moving even faster’ on AI

Anthropic is on a mission this week to set itself apart in Washington, pitching the government’s adoption of artificial intelligence as a national security priority while still emphasizing transparency and basic guardrails on the technology’s rapid development.
The AI firm began making the rounds in Washington, D.C., on Monday, hosting a “Futures Forum” event before company co-founders Jack Clark and Dario Amodei head to Capitol Hill to meet with policymakers.
Anthropic is one of several leading AI firms seeking to expand its business with the federal government, and company leaders are framing the government’s adoption of its technology as a matter of national security.
“American companies like Anthropic and other labs are really pushing the frontiers of what’s possible with AI,” Kate Jensen, Anthropic’s head of sales and partnerships, said during Monday’s event. “But other countries, particularly China, are moving even faster than we are on adoption. They are integrating AI into government services, industrial processes and citizen interactions at massive scale. We cannot afford to develop the world’s most powerful technology and then be slow to deploy it.”
Because of this, Jensen said government adoption of AI is “particularly crucial.” According to the Anthropic executive, hundreds of thousands of government workers are already using Claude, but many ideas are “still left untapped.”
“AI provides enormous opportunity to make government more efficient, more responsive and more helpful to all Americans,” she said. “Our government is adopting Claude at an exciting pace, because you too see the paradigm shift that’s happening and realize how much this technology can help all of us.”
Her comments come as the Trump administration urges federal agencies to adopt automation tools and improve workflows. As part of a OneGov deal with the General Services Administration, Anthropic is offering its Claude for Enterprise and Claude for Government models to agencies for $1 for one year.
According to Jensen, the response to the $1 deal has been “overwhelming,” with dozens of agencies expressing interest in the offer. Anthropic’s industry competitors, including OpenAI and Google, have announced similar deals with the GSA to offer their models to the government at a steeply discounted price.
Beyond the GSA deal, Anthropic’s federal push this year has made its models available to U.S. national security customers and to staff at Lawrence Livermore National Laboratory.
Anthropic’s Claude for Government models have FedRAMP High certification and can be used by federal workers handling sensitive but unclassified work. The AI firm announced in April that it had partnered with Palantir through Palantir’s FedStart program, which assists with FedRAMP compliance.
Jensen pointed specifically to Anthropic’s work with the Pentagon’s Chief Digital and AI Office. “We’re leveraging our awarded OTA [other transaction agreement] to scope pilots,” she said. “We’re bringing our frontier technology and our technical teams to solve operational problems directly alongside the warfighter and to help us all move faster.”
However, as companies including Anthropic seize the opportunity to collaborate with the government, Amodei emphasized the need for “very basic guardrails.” Congress has grappled with how to regulate AI for months, but efforts have stalled amid fierce disagreements.
“We absolutely need to beat China and other authoritarian countries; that is why I’ve advocated for the export controls. But we need to not destroy ourselves in the process,” Amodei said during his fireside chat with Clark. “The thing we’ve always advocated for is basic transparency requirements around models. We always run tests on the models. We reveal the tests to the world. We make a point of them; we’re trying to see ahead to the dangers that may present themselves in the future.”
The view differs notably from that of some of Anthropic’s competitors, which are instead pushing for light-touch regulation of the technology. Amodei, on the other hand, said a “basic transparency requirement” would not hamper innovation, as some other companies have suggested it would.