
State Department reveals new interagency task force on detecting AI-generated content

More than 20 agencies are currently involved in the effort.
An image of a man using ChatGPT software. (Photo by NICOLAS MAETERLINCK/BELGA MAG/AFP via Getty Images)

The State Department on Wednesday announced a new, government-wide task force focused on content authentication. The group, which includes more than 20 federal agencies, is supposed to streamline the government’s international outreach on content authentication, which could help combat technology like deepfakes. 

The task force is charged with working with foreign governments and partners on developing the technical standards and capacities to detect this category of content, according to a statement shared with FedScoop ahead of the announcement. The goal is to make it easier for the public and governments to understand when digital materials have been created or altered using artificial intelligence, a department spokesperson said.

“The U.S. government is committed to seizing the promise and managing the risks of AI. Improving digital content transparency measures at home and abroad is a critical component of that effort,” a spokesperson said in a statement.

The statement added: “Digital content transparency is vital to strengthening public confidence in the integrity of official government digital content, reducing the risks and harms posed by AI-generated or manipulated media, and countering digital information manipulation.” 


It’s not clear whether any companies are participating or which specific federal agencies are involved in the effort.

Building systems for detecting AI-altered content has remained a key interest for the federal government, particularly as the cost of producing this kind of content with tools like Stable Diffusion, DALL-E, and Midjourney has fallen. There’s also interest in detecting content provenance, which focuses on determining information about the source of an AI-generated output.

Through the Biden administration’s voluntary AI commitments, top AI companies including OpenAI and Anthropic have pledged to work on content provenance technology. And the Global Engagement Center, an outfit within the State Department focused on fighting disinformation spread abroad by foreign actors, has also expressed interest in the technology. 

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously, she was a reporter at Vox’s tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
