OpenAI further expands its generative AI work with the federal government
The federal government is continuing to invest in generative AI technology produced by OpenAI, with a handful of agencies recently inking deals to use the enterprise version of the firm’s ChatGPT platform.
The increased activity comes as policymakers weigh potential concerns with the technology while also trying to harness its benefits. It also shows how OpenAI is emerging as an early frontrunner in providing the government with generative AI technologies on both the defense and civilian sides.
Federal contract records show that the National Gallery of Art purchased OpenAI licenses earlier this fall.
NASA, too, appears to have doubled down on the technology. After beginning tests of OpenAI tools last year, the agency purchased an annual license for ChatGPT Enterprise this past summer.
Those agencies had not provided information on how, specifically, they're using the technology by the time of publication.
Earlier this fall, the Internal Revenue Service purchased 150 ChatGPT Enterprise licenses for use within the Department of the Treasury through a federal contractor, though it's not clear how many have been used. A person familiar with the matter said that the Treasury Department did at one point test ChatGPT among a small user base.
The Los Alamos National Laboratory, which has a research partnership with OpenAI, is also using ChatGPT Enterprise.
This summer, Anna Makanju, OpenAI’s vice president of global affairs, told FedScoop that the U.S. Agency for International Development had become the first federal customer for ChatGPT Enterprise. She said the company was trying to make ChatGPT Enterprise more accessible to federal agencies by making the technology available through a series of governmentwide acquisition contracts — and the company is actively seeking FedRAMP Moderate accreditation.
That work comes as the company continues to try to build stronger relationships with the U.S. government. OpenAI recently touted its work with federal agencies in a blog post responding to last month's national security memo on advancing artificial intelligence leadership. Along with fellow AI developer Anthropic, the company in August also signed a memorandum of understanding with the National Institute of Standards and Technology focused on AI safety.
Earlier this year, the company brought on Felipe Millon, a former senior manager focused on federal sales at Amazon, to work on its DC-based government business. The company explored potential relationships with federal agencies in anticipation of the AI national security memo, a spokesperson told FedScoop.
On the defense side, the company is entering a limited ChatGPT Enterprise partnership with the Air Force Research Laboratory, which does research and development work for the military service. Similar to USAID’s application of the technology, the partnership will focus on using generative AI to reduce administrative burdens and increase efficiency, experimenting with using the technology to improve access to internal resources and basic coding, for instance.
This work marks OpenAI’s first ChatGPT Enterprise partnership within the Defense Department, though it’s previously done other work with some DOD components, including the Defense Advanced Research Projects Agency. The company recently hired Sasha Baker, who previously served as the acting undersecretary of defense for policy in the Pentagon, to lead its national security policy team.
“As laid out in the recent National Security Memorandum, government adoption of AI is essential to maintaining U.S. leadership in this field. We are looking forward to working with the Air Force Research Laboratory to leverage our ChatGPT Enterprise tools for administrative use cases, including improving access to internal resources, basic coding, and supporting AI education efforts,” the OpenAI spokesperson said.
They continued: “This collaboration is limited to unclassified systems and data, and is consistent with OpenAI’s usage policies prohibiting the use of our technology to harm people, destroy property, or develop weapons.”