Meta unveils new rules for AI-generated content in political ads

Ahead of the 2024 election, the parent company of Facebook and Instagram is instituting disclosure policies regarding AI-generated political ads.

Meta, the social media giant that operates Facebook and Instagram, has released new rules for AI-generated content that appears in political advertisements. The new requirements, which apply globally and take effect in 2024, require disclosures when the technology is used to depict something that hasn't actually happened, a growing concern among election disinformation researchers.

Specifically, political advertisers will need to disclose when AI is used to alter footage of a real event, produce fake footage of a real event, depict a person who does not exist or an event that never happened, or make it appear that someone said or did something they did not. If an advertiser does not make these disclosures as required, the platform says it will reject the ad and may penalize the advertiser within its systems.

“Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” the company said in a blog post. “This may include image size adjusting, cropping an image, color correction, or image sharpening, unless such changes are consequential or material to the claim, assertion, or issue raised in the ad.”

The announcement comes amid growing concern about the potential impact of AI-generated content on elections. In September, Sens. Amy Klobuchar, D-Minn., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new legislation focused on banning the use of deceptive AI-generated content in elections.

This past summer, Meta was one of the first seven companies to sign a nonbinding agreement on AI safety created by the White House, though more companies have joined since. Those principles included a commitment to developing "technical mechanisms," such as watermarking, to alert users to AI-generated content.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
