
Microsoft Azure OpenAI service approved for use on sensitive government systems

The service has received FedRAMP High approval, meaning it can be used in cloud environments that hold sensitive, unclassified data.
Screens display the logos of OpenAI and ChatGPT, Jan. 23, 2023. (Photo by LIONEL BONAVENTURE/AFP via Getty Images)

Microsoft’s recently launched Azure OpenAI service on Thursday received Federal Risk and Authorization Management Program (FedRAMP) High authorization, giving federal agencies that manage some of the government’s most sensitive data access to powerful language models, including ChatGPT, FedScoop has learned.

The authorization will allow government departments’ cloud apps to integrate with and adapt models such as GPT-4, GPT-3.5, and DALL-E for specific tasks, including content generation, summarization, semantic search, and natural-language-to-code translation.
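
For illustration, a summarization request against such a deployment might look like the minimal Python sketch below, which uses the AzureOpenAI client from the openai package; the endpoint, deployment name, and API version shown are placeholder assumptions, not values taken from the article or Microsoft.

    # Minimal sketch of a summarization call to an Azure OpenAI deployment.
    # Endpoint, API version, and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://example-agency.openai.azure.us",  # placeholder endpoint
        api_key="<your-api-key>",
        api_version="2023-05-15",                                 # placeholder API version
    )

    response = client.chat.completions.create(
        model="gpt-35-turbo",  # placeholder: the agency's model deployment name
        messages=[
            {"role": "system", "content": "Summarize the user's text in three sentences."},
            {"role": "user", "content": "...agency document text..."},
        ],
    )

    print(response.choices[0].message.content)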

FedRAMP is a security framework that allows cloud providers to obtain governmentwide authorization for their products. The High authorization permits the use of a product in cloud computing environments that hold some of the government’s most sensitive, unclassified data, such as data held by law enforcement agencies or financial regulators.

Microsoft in early June launched its Azure OpenAI Service for government, allowing federal agencies to run powerful language models within Azure Government, the company’s cloud service for U.S. government agencies.


“The FedRAMP High authorization demonstrates our ongoing commitment to ensuring that government agencies have access to the latest AI technologies while maintaining strict security and compliance requirements,” Bill Chappell, CTO for Microsoft’s Strategic Missions and Technologies, told FedScoop in a statement.

“We look forward to empowering federal agencies to transform their mission-critical operations with Azure OpenAI and unlocking new insights with the power of Generative AI,” he added. 

The new FedRAMP authorization comes as Microsoft faces intense scrutiny after hackers based in China breached the email accounts of senior U.S. officials, an operation that exploited a flaw in a Microsoft product and was discovered thanks to a logging feature that costs customers extra.

Biden administration officials, security researchers and members of Congress have questioned the company’s commitment to security in the aftermath of the hack and asked why Microsoft charges customers extra for core security features.

Microsoft’s Azure OpenAI service this week also received a Department of Defense Impact Level 2 (IL2) Provisional Authorization (PA) issued by the Defense Information Systems Agency (DISA).


Notably, Microsoft says all traffic within the Azure OpenAI service stays entirely on its global network backbone and never enters the public internet. The technology giant’s network is one of the largest in the world, comprising more than 250,000 km of lit fiber-optic and undersea cable systems.

The tech company added that the Azure OpenAI Service does not connect with Microsoft’s corporate network, and that government agency data is never used to train OpenAI models.

The Azure OpenAI Service can be accessed using REST APIs, the Python SDK, or Microsoft’s web-based interface in Azure AI Studio, and all Azure Government customers and partners will be able to access all models.
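
As a rough sketch of the REST path, the example below sends the same kind of chat-completion request with the standard requests library; the resource endpoint, deployment name, and api-version value are again placeholders rather than details confirmed by the article.

    # Minimal sketch of a chat-completion request over the REST API.
    # Endpoint, deployment name, and api-version are placeholders.
    import requests

    endpoint = "https://example-agency.openai.azure.us"  # placeholder resource endpoint
    deployment = "gpt-35-turbo"                           # placeholder deployment name

    resp = requests.post(
        f"{endpoint}/openai/deployments/{deployment}/chat/completions",
        params={"api-version": "2023-05-15"},             # placeholder API version
        headers={"api-key": "<your-api-key>"},
        json={"messages": [{"role": "user",
                            "content": "Write a SQL query that lists invoices over $10,000."}]},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])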

Microsoft is doubling down on the data, privacy, and security protections it offers government customers: all Azure traffic within a region or between regions is encrypted using MACsec, which relies on the AES-128 block cipher.
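
MACsec is applied at the network link layer inside Microsoft’s infrastructure rather than in application code, but as a rough illustration of the primitive the article names, the sketch below encrypts a payload with AES under a 128-bit key (in GCM mode, via the third-party cryptography package); it is an illustration only, not Microsoft’s implementation.

    # Illustration of AES with a 128-bit key (GCM mode), the cipher MACsec relies on.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # 128-bit AES key
    nonce = os.urandom(12)                     # 96-bit nonce, unique per message
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"example agency traffic", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"example agency traffic"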
