Meta, the social media giant that operates Facebook and Instagram, has released new rules for AI-generated content in political advertisements. The requirements, which apply globally and take effect in 2024, emphasize disclosure when the technology is used to depict something that hasn't actually happened — a growing concern among election disinformation researchers.
Specifically, political advertisers will need to disclose when AI is used to alter footage of a real event, produce fake footage of a real event, depict a person who does not exist or an event that never happened, or make it appear that someone said or did something they did not. If an advertiser fails to make these required disclosures, the platform says it will reject the advertisement and may penalize the advertiser within its systems.
“Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” the company said in a blog post. “This may include image size adjusting, cropping an image, color correction, or image sharpening, unless such changes are consequential or material to the claim, assertion, or issue raised in the ad.”
The announcement comes amid growing concern about the potential impact of AI-generated content on elections. In September, Sens. Amy Klobuchar, D-Minn., Chris Coons, D-Del., and Susan Collins, R-Maine, proposed new legislation focused on banning the use of deceptive AI-generated content in elections.
This past summer, Meta was one of seven companies to initially sign a nonbinding agreement on AI safety created by the White House; more companies have joined since. Those principles included a commitment to developing "technical mechanisms," such as watermarking, to alert users to AI-generated content.