Social Security Administration issues temporary block on generative AI

A recent AI executive order encourages agencies to explore use of the technology, while an SSA spokesperson said the move was precautionary.
A view of a Social Security Administration building in Burbank, Calif., on Nov. 5, 2020. (Photo by Valerie Macon /AFP via Getty Images)

The Social Security Administration has banned the use of generative artificial intelligence-based tools on agency devices, FedScoop has confirmed. The block, which is temporary, is meant to ensure the security of data and systems.

The agency’s block of these third-party tools was disclosed earlier this month in a management advisory report for this fiscal year, published by the SSA’s inspector general’s office. The report noted that the decision was made to protect personally identifiable information, along with health, sensitive and other non-public information, that risked being shared through use of the technology. 

The Social Security Administration said the block was a precautionary measure and that the agency has yet to use generative AI. When asked if the ban applies to agency laptops and mobile devices, a spokesperson said the block “is designed to be agency-wide.” 

“The temporary block was a necessary precaution to ensure the agency’s data and systems remained secure while we plan for future endeavors,” Darren Lutz, an SSA spokesperson, said in an email to FedScoop. “The agency continually assesses potential endeavors including potentials for AI modernization.”

The move comes as agencies wrestle with how to approach the technology. Some agencies, like NASA and the Department of Energy, are interested in testing generative AI in a secure environment. The State Department has considered using the technology for contract writing, and the Justice Department has weighed using these kinds of tools to improve its IT service desk. 

A recent executive order on artificial intelligence discouraged agencies from issuing “broad general bans or blocks” on generative AI. Instead, agencies are supposed to conduct more tailored risk assessments and create guidelines for the technology, among other measures to prevent misuse of federal government information. 

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. Message her if you’d like to chat on Signal.
