Claude, Llama can now be used with highly sensitive data in Amazon’s government cloud

Amazon has received federal authorizations that allow Anthropic’s Claude and Meta’s Llama AI models to be used within high-sensitivity government computing environments, the company’s cloud computing division announced Wednesday.
The company has achieved FedRAMP “High” authorization, as well as authorization at the Defense Department’s Impact Levels 4 and 5, for use of the two foundation models in AWS GovCloud, its government cloud environment, according to a blog post by Liz Martin, Department of Defense director at Amazon Web Services.
That means AWS has met the security requirements needed for the AI models to be used with some of the government’s most sensitive civilian and military information, and per Martin, it is the first cloud provider to receive that level of authorization for Claude and Llama.
“This achievement represents a pivotal moment in public sector innovation by ensuring government agencies have secure, compliant access to AI tools with scalable capabilities and advanced features,” Martin said.
The announcement came during AWS’s annual summit in Washington, which kicked off Tuesday with news that the tech giant plans to launch a second secret cloud region, which Dave Levy, vice president of worldwide public sector, said would be a boon for the nation’s AI leadership. Wednesday’s announcement builds on that theme by making foundation models available in high-security environments.
“With this achievement, AWS is expanding the potential for Meta’s open source Llama models to power mission-critical applications in secure or disconnected environments at a lower cost,” Molly Montgomery, director of public policy at Meta, said in a statement included in the blog, adding that the company is “proud to support America’s defense agencies” with its technology.
Similarly, Thiyagu Ramasamy, head of public sector at Anthropic, said the authorizations allow Claude to be used for some of the most sensitive missions within defense agencies. “This authorization opens new possibilities for responsible AI use in scenarios where both performance and security are essential for serving the public interest,” Ramasamy said in a comment also included in the post.
FedRAMP, which stands for Federal Risk and Authorization Management Program, sets the security standards for federal cloud services, and its “High” designation is generally reserved for data used in law enforcement, finance, health, emergency services, and similar systems. The DOD’s Impact Levels, meanwhile, are a corresponding but separate authorization process for defense cloud environments. Clearing Impact Levels 4 and 5 means the models can be used with controlled unclassified information and national security systems.