Senate lawmakers see a need to legislate against deepfakes threatening elections, but can’t agree on how
Senate lawmakers agree they need to take action on the use of artificial intelligence-generated deepfakes ahead of the 2024 election cycle — but so far they can’t see eye to eye on how best to tackle it through legislation.
Members of the Senate Committee on Rules and Administration, during a hearing Wednesday, disagreed largely along party lines about the appropriate role the government should play in regulating deceptive, AI-generated political content. They did agree, however, that within a narrow set of dangerous circumstances, some action was needed.
Sen. Jon Ossoff, D-Ga., said there is great fear around bad actors using AI to “willfully, knowingly and with extreme realism falsely depict you or anyone of us or a candidate challenging us making statements we never made that’s indistinguishable to the consumer of the media from a realistic documentation of our speech.”
Substantial legal precedent would support regulating speech in extreme cases involving willfully deceptive or fabricated content about candidates or public figures, according to Ossoff and Trevor Potter, an expert witness at the hearing who is a former chairman of the Federal Election Commission and founder of the nonprofit watchdog group the Campaign Legal Center.
Potter noted that protected speech has historically included intentional mischaracterizations of candidates by other public officials, but he believes the courts would draw a line at someone falsely depicting another person doing or saying something that never actually happened.
The two Republican-chosen witnesses at the hearing, who are skeptical of government regulation or curtailment of political speech, agreed broadly with Democrats that certain willfully deceptive content should not be allowed, but disagreed on how best to address it through legislation.
“There are circumstances in which I would probably agree with you that things crossed the line,” said Ari Cohn, free speech counsel at TechFreedom, a libertarian-leaning technology think tank.
“I think that drawing the statute narrowly enough is an exceedingly difficult task. In principle, as a pie-in-the-sky concept, I think I agree with you but I’m just not sure how to get from point A to point B in a manner that will satisfy strict [legal] scrutiny,” Cohn said in response to questions from Sen. Ossoff.
Neil Chilson, a senior research fellow at the Center for Growth and Opportunity at Utah State University, advised in his testimony that Congress should “increase its ability to understand and respond.”
Conservatives are generally skeptical of curtailing political speech, including content created with the help of AI, a technology that, they say, has no agreed-upon legal or technical definition despite its widespread daily use across the federal government and the private sector.
“I hope that we see more political speech,” Chilson told FedScoop after the hearing. “Free speech often is uncomfortable, but especially around elections, it’s important. So, I think AI can help elevate voices that otherwise couldn’t create sophisticated content because they didn’t have the resources.” Chilson previously served for a year as acting chief technologist at the Federal Trade Commission during the Trump administration.
The hearing focused part of its discussion around the Protect Elections from Deceptive AI Act — a bipartisan bill that Sens. Amy Klobuchar, D-Minn., Josh Hawley, R-Mo., Chris Coons, D-Del., and Susan Collins, R-Maine, introduced earlier this month to protect candidates from misinformation and harmful generative AI content that would potentially lead voters to make uninformed decisions.
The bill would allow election candidates to have materially deceptive AI content taken down and to seek damages in federal court. The ban would extend to any person, political committee or other entity that distributes such fraudulent material.
The legislation would set a precedent against harmful generative AI content while allowing for parody and satire. It would also have an exception for the use of AI-generated content in news broadcasts.