USAID warned employees not to share private data on ChatGPT, memo shows

As of April, the international development agency does not have an outright ban on the generative AI tool.
USAID Administrator Samantha Power speaks during the Summit for Democracy on March 30, 2023, in Washington, D.C. (Photo by Anna Moneymaker/Getty Images)

Back in April, the U.S. Agency for International Development warned employees that they should only input information from “publicly-available sources” into generative artificial intelligence tools like ChatGPT. Until now, it wasn’t clear how, exactly, USAID was approaching the rapidly developing technology. 

Federal agencies have started crafting and solidifying their strategies for generative AI. Still, their approaches have varied. The Social Security Administration has temporarily banned the technology on its devices, while the Agriculture Department determined that ChatGPT’s risk was “high” and established a board to review potential generative AI use cases. NASA, which is using a version of OpenAI software provided through the Microsoft Azure cloud system, has set up a secure testing environment to study the technology.

Notably, the White House’s recent executive order on artificial intelligence discouraged agencies from outright forbidding the technology. 

The USAID memo, which FedScoop obtained through a public records request, was sent by an official within the agency’s Office of the Chief Information Officer and titled “Usage of ChatGPT and Large Language Models (LLMs).” Its approach appears to mirror that of the General Services Administration, as well as some other agencies, in avoiding an outright ban, though it’s not clear if the agency has made any updates since last year. USAID did not respond to a request for comment.


The general notice stated that “USAID has neither approved nor banned the use of ChatGPT or any LLMs for Agency Use.” For that reason, the memo explained, only information that is already public should be entered into these tools — and any content created with their help should be “referenced as output” from a large language model. 

“Artificial Intelligence (AI) and LLMs are powerful tools with enormous value, but the Agency should exercise a degree of caution in their use as their reliability, accuracy and trustworthiness are not proven,” the memo stated. “Additionally, LLMs have not demonstrated their compliance with Federal and USAID security requirements, provided transparency around the data collected, and addressed the resulting Privacy and Records Management implications.” 

USAID has released an action plan related to artificial intelligence, and the agency’s responsible AI official appears to have spoken about how generative AI tools can be used by the government. 

Still, a data governance page for the agency notes that “emerging technologies such as generative AI raise new questions around data ownership, the ethical use of data, and intellectual property rights, among others,” and USAID’s public list of AI use cases does not appear to include any generative AI applications. 

Madison Alder contributed to this article. 

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously, she was a reporter at Vox's tech site, Recode. She's also written for Slate, Wired, the Wall Street Journal, and other publications. Message her if you'd like to chat on Signal.