Bipartisan House bill seeks labeling and disclosures for AI deepfakes

The Protecting Consumers from Deceptive AI Act would also charge NIST with developing standards to identify and label AI-generated content.
Rep. Anna Eshoo, D-Calif., listens to Health and Human Services Secretary Alex Azar in the Rayburn House Office Building on Capitol Hill on Feb. 26, 2020 in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

As nearly half the world’s population heads to the polls this year amid the proliferation of AI-generated audio and visual content intended to trick the public, new bipartisan House legislation released Thursday takes direct aim at the identification and labeling of deepfakes.  

The Protecting Consumers from Deceptive AI Act from Reps. Anna Eshoo, D-Calif., and Neal Dunn, R-Fla., attempts to mitigate the risks posed by AI-generated content through a series of provisions that put the onus for disclosures on private sector actors, while calling on the National Institute of Standards and Technology for fresh guidelines to govern the tech.

“AI offers incredible possibilities, but that promise comes with the danger of damaging credibility and trustworthiness,” Eshoo, co-chair of the House AI Caucus and an AI task force member, said in a statement. “AI-generated content has become so convincing that consumers need help to identify what they’re looking at and engaging with online. Deception from AI-generated content threatens our elections and national security, affects consumer trust, and challenges the credibility of our institutions.”  

AI has the “potential to do some major harm if left in the wrong hands,” added Dunn, also a member of the House AI task force. “The Protecting Consumers from Deceptive AI Act protects Americans from being duped by deepfakes and other means of deception by setting standards for identifying AI generated content. Establishing this simple safeguard is vital to protecting our children, consumers, and our national security.” 

The legislation requires NIST to develop standards to identify and label AI-generated content by leveraging technical elements such as watermarking, digital fingerprinting and provenance metadata.
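The bill leaves the technical details to NIST, but the building blocks it names are well established. As a rough, hypothetical sketch of how digital fingerprinting and provenance metadata can work together, the snippet below hashes a piece of generated content and wraps that fingerprint in an authenticated provenance record. The field names and signing key are illustrative only; they are not drawn from the bill or from any existing NIST standard.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key held by the generator; in practice this would
# be a properly managed cryptographic key, not a hardcoded secret.
SIGNING_KEY = b"example-generator-key"

def provenance_record(content: bytes, model_name: str) -> dict:
    """Build an illustrative provenance record for AI-generated content.

    The digital fingerprint is a SHA-256 hash of the content bytes, and
    the record is authenticated with an HMAC so tampering with either
    the content or the metadata becomes detectable.
    """
    fingerprint = hashlib.sha256(content).hexdigest()
    record = {
        "generator": model_name,                      # which model produced it
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": fingerprint,                # the digital fingerprint
        "ai_generated": True,                         # machine-readable disclosure
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

fake_image_bytes = b"...generated pixels..."
print(json.dumps(provenance_record(fake_image_bytes, "example-model-v1"), indent=2))
```

A verifier that re-hashes the content and re-computes the HMAC can confirm both that the content is unchanged and that the provenance claim came from the key holder.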

Within 90 days of the bill’s enactment, NIST’s director would be charged with creating task forces to support the development of those standards. Those task forces would be made up of relevant federal agency representatives, generative AI developers, detection standards entities, AI testing experts, web browser and mobile developers, search engine and social networking service providers, privacy advocates, creator associations, human rights lawyers, media organizations, academics and digital forensics experts.

The bill also calls on gen AI developers to embed machine-readable disclosures within content generated by their AI applications, while giving users the option to include metadata with additional information. Online platforms would then be required to label AI-generated content with those disclosures.
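What might such a machine-readable disclosure look like in practice? One common approach for images is to embed it in the file's metadata, for example a PNG text chunk, which a platform can read back without altering the pixels. The sketch below uses the Pillow library; the `ai_disclosure` key is an invented name, since the bill leaves the exact format to NIST.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Generator-side step: stamp a disclosure into the image metadata.
img = Image.new("RGB", (64, 64))  # stand-in for generated output
meta = PngInfo()
meta.add_text("ai_disclosure",
              '{"ai_generated": true, "generator": "example-model-v1"}')
img.save("generated.png", pnginfo=meta)

# Platform-side step: read the disclosure back when content is uploaded.
uploaded = Image.open("generated.png")
print(uploaded.text.get("ai_disclosure"))  # -> the embedded JSON string
```

A caveat the legislation would have to grapple with: plain metadata like this is trivially stripped by re-encoding or screenshotting, which is why the bill also points NIST toward more robust techniques such as watermarking.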

Voluntary pledges made by major AI companies last year would serve as a building block for the legislation, as would “the work of many experts and global stakeholders,” per the press release announcing the bill. 

The legislation — which also claims House AI task force members Reps. Don Beyer, D-Va., and Valerie Foushee, D-N.C., as original co-sponsors — comes amid broad agreement on Capitol Hill on the need for action on deepfakes. During a House Oversight subcommittee hearing in November, lawmakers across the aisle expressed general support for legislation targeting AI-generated content, as well as increased funding for federal agencies to combat deepfakes and invest in detection technologies.

AI watermarking has also spurred plenty of conversation on the Hill and beyond. Senate Majority Leader Chuck Schumer, D-N.Y., said during one of his AI Insight Forums that watermarking “has a quicker timetable maybe than some of the others, and it’s very important to do.” AI scholars, meanwhile, have said that Adobe and TikTok have so far led the way on the labeling and disclosure of AI-generated content.

“We welcome this legislation’s thoughtful consideration of the unique yet complementary roles that generative AI models and social media platforms can each play in giving people critical context about the things they see online,” Jace Johnson, Adobe’s vice president for public policy and ethical innovation, said in a statement. “We are heartened too by the proposal to build on the good work the National Institute of Standards and Technology is already doing in collaboration with industry to promote responsible innovation in AI technology.”

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity. Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star. You can reach him at matt.bracken@scoopnewsgroup.com.
