The government wants to fast-track security reviews for AI companies

Anthropic and OpenAI, two of the country’s leading AI companies, recently announced that they’re offering their powerful models to federal agencies for $1 for the next year.
But the new deals, which are both available through a General Services Administration OneGov contract vehicle, don’t on their own clear the way for widespread government adoption of artificial intelligence. Instead, the financial incentive seems designed to dare government officials to approve the technology as quickly as possible.
Currently, no major AI provider is authorized under FedRAMP, a critical security program that allows agencies to use a company’s cloud services — including software or models offered in the cloud — across government. While several companies — including Anthropic, xAI and OpenAI — have released government-focused product suites, they’re still somewhat dependent on cloud providers like Microsoft and Amazon that have already cleared the FedRAMP process. If AI companies want to sell much of their technology directly to the government, they need their own authorization to operate, or ATO.
What’s changed, though, is that federal officials now have a new reason to move through security review processes more quickly, a former GSA employee and another person familiar with the matter both told FedScoop. That strategy could involve obtaining an authorization to operate from an agency’s authorizing official — typically, its chief information officer — as well as completing the security review process laid out by FedRAMP, both people said.
GSA is now looking at strategies to speed up the process. An agency spokesperson confirmed that these companies still need to seek FedRAMP authorization if they want to offer their technology directly. To make that happen faster, GSA is consulting with the Chief Information Officers Council and the board that oversees FedRAMP about “prioritization for AI companies” added to GSA’s multiple award schedule. Any criteria for prioritization will eventually be published on the FedRAMP website, the person added.
“We welcome the partnership we’ve had from American AI companies to offer their products at significantly discounted prices,” Josh Gruenbaum, commissioner of the GSA’s Federal Acquisition Service, told FedScoop. “Our preference is for a short initial term as agencies educate and familiarize themselves with these transformative AI tools and the technology continues to evolve. With GSA centralizing purchasing of common goods and services, including IT, we fully expect companies to deliver favorable pricing in the future.”
GSA appears to be working with OpenAI on developing a separate authorization to operate for ChatGPT Enterprise, too. Even as OpenAI still pursues authorization through the FedRAMP process, GSA is open to sharing its ChatGPT Enterprise authorization letter and package with officials who have authorization powers at other federal agencies, a spokesperson for the AI company told FedScoop. Authorization is needed before an agency can input federal information into the tool, the person said.
Another person familiar with the matter confirmed that GSA is considering sharing an ATO for such generative AI systems with other agencies. Under this pathway, those agencies could work on early adoption of AI technologies based on GSA’s authorization while awaiting longer-term approval through FedRAMP. Agencies can still pursue separate authorizations for the technology, as some are doing, the person added.
But the companies’ attractive pricing comes with a tight deadline. Both Anthropic and OpenAI have promised to offer the technology for $1, but only for 12 months. The approach mimics perks that other major technology companies, like Microsoft, have offered to speed up government adoption.
After a year, agencies will either need to enter a paid agreement for ChatGPT Enterprise or see their access end when the trial concludes, the OpenAI spokesperson said. They emphasized that, given the potential scale of federal deployment, they expect government clients to have “better pricing” than commercial customers.
In a similar vein, Anthropic plans to work with the government on pricing that balances “accessibility with affordability,” a company spokesperson said. (The spokesperson emphasized that the offer includes Claude through government-specific accounts, and that agencies that need access to the company’s API can use Google Cloud and Amazon Bedrock.)
Both OpenAI’s and Anthropic’s $1 deals are part of the GSA’s OneGov initiative, launched earlier this year to modernize how the government buys goods and services at scale. Leaders at GSA touted the collaborations this month as directly supportive of the White House’s AI Action Plan, which calls for widespread adoption of AI in the federal government.
Just days before these deals were announced, the GSA revealed that ChatGPT, Anthropic and Google Gemini models had been approved for the agency’s multiple award schedule, opening the door for other federal agencies to potentially access them as well. The GSA spokesperson did not address FedScoop’s question about pricing for Grok. (xAI announced that Grok for Government was available to federal agencies earlier this summer, four days after FedScoop reported that government coders seemed primed to test the tech.)
In the meantime, the FedRAMP program is undergoing a major overhaul, called FedRAMP 20x, that aims to simplify its review process with automation and expand the private sector’s role. The effort follows a 2024 Office of Management and Budget memo emphasizing plans to modernize the FedRAMP process and identify criteria for emerging technologies that might be prioritized for review.
The Biden administration had previously created a FedRAMP program called the Emerging Technology Prioritization Framework, which also aimed to hasten the review of AI cloud services, but the program was effectively scrapped when Trump rescinded Biden’s 2023 AI executive order.
Miranda Nazzaro contributed reporting.
This story was supported by the Tarbell Center for AI Journalism.