
Anthropic eyes FedRAMP accreditation in quest to sell more AI to government

The AI company is in talks with federal agencies, its head of global affairs said in an interview with FedScoop.
Close-up of phone screen displaying Anthropic Claude, a Large Language Model (LLM) powered generative artificial intelligence chatbot.
(Photo by Smith Collection/Gado/Getty Images)

Anthropic, a major developer of large language models, has entered talks with some federal agencies about a potential sponsorship for a FedRAMP authorization that would allow the company to sell AI systems directly to the government.

Michael Sellitto, Anthropic’s head of global affairs, said in an interview with FedScoop that the company wasn’t ready to name the agencies it’s in conversations with, but emphasized that it is interested in expanding the use cases for its technology within civilian federal agencies. The company’s technology is already under evaluation at the Department of Homeland Security as a potential way to help Customs and Border Protection agents with training for asylum interviews. Anthropic says it’s also had talks with other federal agencies about making its chatbot technology available on their networks.

“It’s something we’re actively considering right now. We would also like to be able to directly provide services to governments and not necessarily go through a partner at all times,” said Sellitto, who previously served as director for cybersecurity policy on the National Security Council under the Obama and Trump administrations. “The process is long and complicated and particularly for a company that has pretty limited experience selling into the government at this point.” 

Beyond DHS, Anthropic has partnered with federal agencies on other work, including a nuclear information safety evaluation conducted alongside the National Nuclear Security Administration and the Department of Energy, as well as a research agreement with the AI Safety Institute. 


Anthropic currently provides its technology to the government through partnerships with two companies — Palantir and Amazon Web Services — that already have their cloud systems certified via FedRAMP, a governmentwide security compliance authorization program that is a requirement for cloud vendors to work with federal agencies. Cloud vendors can pursue a FedRAMP authorization via two routes: one that includes a federal agency partner to help navigate the process and another that requires accreditation by the FedRAMP Joint Authorization Board, though the latter method will soon be phased out.

OpenAI, a top competitor that has made its technology available through Microsoft’s government cloud service, is also pursuing FedRAMP Moderate accreditation for ChatGPT Enterprise. 

As Washington prepares for a second Trump administration, Sellitto said he believes the AI policy landscape won’t see radical changes, despite the former president’s promise to repeal President Joe Biden’s executive order on the technology. Sellitto also spoke about how Anthropic formulates exceptions to its usage policy for government clients — and its ongoing efforts on civil rights and examining algorithmic bias. 

This interview was edited for clarity and length. 

FedScoop: I wanted to ask you about the announcement that you recently shared about the National Nuclear Security Administration’s testing, which involved looking at whether large language models like Claude might leak sensitive information. How did you set up a test bed for that?


Michael Sellitto: Nuclear-related information is obviously very sensitive, given the consequences if that information were to get into the wrong hands. This was really a unique partnership with NNSA and [Energy] to make our models available in a top-secret, classified cloud environment for them, so that they would have confidence that anything that was going into or out of the models would be protected appropriately. 

This was a pretty significant engineering effort that we led, along with [Amazon Web Services], which already offers substantial classified cloud computing services to the government.

FS: Anthropic has said it’s not going to share findings. Were there initial determinations made from this test — or changes that are coming because of what you found?

MS: There’s been a number of really good outcomes from this. We’ve built a lot of capacity within DOE to just conduct this kind of red-teaming and evaluation. There’s somewhat of an art currently to developing effective model evaluations — and we’re really impressed with how quickly our partners at the labs got up to speed on this and were able to not only learn how to do testing and evaluation themselves, but now become a resource to others that are looking to do something similar. 

We are expecting some information to be shared with us that will help us understand the nature of risks, if any, and where there may be some opportunities to mitigate some of the concerns. 


To the extent that we get actual information from DOE, what we’ll try to do is investigate different research avenues to strengthen model guardrails, improve trust and safety classifiers, the [systems] that observe how the models are being used, or otherwise look at other efforts that might be undertaken to reduce the risk of these threats. To the extent we learn anything about mitigations, we hope to, working with DOE, share that information with other frontier model developers. 

FS: Your CEO has talked a lot about this idea of a “race to the top” in terms of best practices for safety and security. 

I’m curious now that we’re seeing a change in administration, whether you think that the voluntary commitments and the reporting requirements that were outlined through various Biden administration efforts were helpful for advancing those kinds of goals, and whether you think there’s anything that should be changed or adjusted. 

MS: On “race to the top,” what I would say is that one thing that Anthropic tries to do is prototype things, test them out, and then get them out there in the world to create public knowledge. For example, we were the first company to release a responsible scaling policy last year. Subsequently, a number of other companies have followed suit. At the Seoul AI Summit, 16 companies, including Anthropic, from around the world, committed to having something that looks similar to a responsible scaling policy, as well as the frontier AI safety commitments that were announced there. … Regardless of what happens in the U.S., companies have a commitment already that’s kind of international that will go beyond this.

With respect to the Trump administration, I don’t want to speculate on specific policy decisions the administration may make. It’s clear at a high level, some of the focus will be on competition with China and supporting domestic growth will be a big piece of that. Ensuring the adequate energy supply as a key input into advanced manufacturing, AI, and just general affordability [will be, too]. …


The incoming Trump administration has made some comments around maintaining the U.S. lead in AI energy. I’m sure that they’ll have other things on their mind. … I don’t think we should expect dramatic departures in terms of making sure that we’re in a competitive environment. I also expect that the government will continue to want to adopt and deploy the technology, which has been something that the national security memorandum and other government documents have been promoting. 

FS: Are you worried at all about the discussion about repealing the executive order, or aspects of the executive order, in particular? When it was signed, the Biden administration was saying this was one of the biggest policy efforts ever on AI released by a government.

MS: Some things that were in the executive order align with priorities that we have, including a number of initiatives to advance innovation and adoption, particularly government adoption, including the appointment of chief AI officers. There are also some pieces in there that direct organizations like [the National Institute of Standards and Technology] to provide red-teaming guidance and for the NSA AI Security Center to provide some cybersecurity guidance for developers. Ultimately, I think these efforts are really just aimed at enabling companies’ abilities to test and secure their systems, which I think is apparently a bipartisan idea. There certainly will be some changes that come along, but I think that there’s reason to expect that some of the key components of what’s been AI policy today will stay in place. 

FS: Just generally, what do AI companies like Anthropic make of the FedRAMP process? Is this going to be a burden as the government does try to deploy artificial intelligence? What changes would be needed?

MS: The things that could make working with the government easier for new entrants are probably streamlining contracting and making that process more transparent and obvious. With respect to FedRAMP and other government certifications, including in the classified space, I think it would help if the government were able to have some clear and specified timelines — what you might call SLAs or service level agreements — where we’re going to guarantee that we’re going to make certain decisions or determinations in a particular time frame. That we’re going to get you the information you need if there are deficiencies or changes you need to make to your program. …


What we’ve found is that in some cases, we submit things for review and then they go into an unknown black box for some period of time, and it’s unclear who’s responsible for moving [something] along or what the holdups may be.

When you’re talking about startups and small companies that need to move quickly and need to generate revenue and show market traction … just generally, the pace of working with the government and all its uncertainties make it really hard for companies to plan on anything particular working out. … We’re engaging with the government because we think it’s important and it’s part of our values. But I think there’s a lot of other startups out here in the Valley that probably just completely ignore the government because it’s too hard to navigate, too slow, and too opaque. 

FS: Can you talk a little bit about the process of determining exceptions to your usage policy for government customers?

MS: There are a lot of really important uses related to protecting national security that are unique to government missions. We’ve created some exceptions that we’re making on a case-by-case basis for certain national security agencies that are particularly around the analysis of intelligence. We’ve maintained our restrictions in areas like targeting to inflict harm, conducting information operations, censorship, or domestic surveillance. But there’s these other aspects of the national security mission [where] we think our models can be used in a safe and responsible manner.

Some of the factors that we look at in making these determinations include whether these models are suitable for these particular use cases. [We want to check that] there will be a reasonable level of certainty that the particular use case and the way it’s going to be implemented will have appropriate protections and guardrails around it to make sure it’s not doing things that would be risky, outside the agency’s legal authorities, or otherwise in conflict with what we think a reasonable use case might be.


We also maintain a really close dialogue with the organizations that we work with to understand what they’re finding. … The last piece is really making sure that there’s appropriate democratic oversight of the organizations in question, which could include things like internal inspectors general, congressional oversight, and in the case of the U.S., internal processes that monitor user behavior to ensure that they’re following all the appropriate requirements. 

FS: Some of our sources have said that while they think the Trump administration will follow the Biden administration in focusing on topics like U.S. leadership in the technology and competitiveness with China, there will be less of a focus on civil rights and algorithmic discrimination. Can you talk about that?

MS: Our company’s commitments and values stay the same. We are committed to making sure that technology is adopted in a responsible manner. One of the things that is a key piece of our usage policy is ensuring that the technology is not used inappropriately to make decisions about individuals in a way that could affect rights or otherwise discriminate based on protected characteristics or other reasons. 

We have our societal impact research team that develops, implements and [conducts] evaluations to look for things like bias, discrimination, and other potential negative implications of how the technology could be used. … There are some bills in Congress looking at algorithmic impact assessments. In Europe, which is obviously a major market for AI, the EU AI Act has a bunch of requirements for high-risk use cases. … We’re definitely engaged there as well. 

FS: What are the prospects of Europe and the United States harmonizing on these kinds of AI requirements?


MS: One of the important pieces for the U.S. is for the federal government to figure out its own policy regulatory environment. When the U.S. lacks a similar kind of domestic basis, it’s very hard to engage internationally. … Having some baseline set of requirements for the technology at the federal level actually will help engage internationally, because it gives U.S. diplomats, the Commerce Department, and others a footing for that engagement. … Having a trade policy and the right personnel in place that are going to push for harmonization or the interests of U.S. companies is [important] and is something I expect to see from the Trump administration, based on what they’ve said in the campaign. 

This story was updated Nov. 20, 2024, with details about Anthropic’s talks with agencies to make its chatbot available on networks.
