Senators discuss new legislation focused on deceptive AI and elections

"Hot off the press, Sen. Hawley and I have introduced our bill today ...] to ban the use of deceptive AI-generated content in elections," Sen. Amy Klobuchar said during a hearing.
Sen. Josh Hawley, R-Mo., asks questions as Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. (Photo by Win McNamee/Getty Images)

During a Tuesday hearing hosted by the Senate Judiciary Privacy, Technology and Law Subcommittee, senators from both parties touted their emerging plans to regulate artificial intelligence.

The ideas included new legislation from Sen. Amy Klobuchar, D-Minn., that would focus on the use of deceptive generative AI in elections, and a previously announced framework from Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., which would involve a new government body charged with providing licenses to companies developing high-risk AI systems.

Klobuchar’s legislation, which is called the Protect Elections from Deceptive AI Act, is designed to ban the use of AI to develop deceptive impersonations of federal political candidates in political ads, according to a press release. The bill would function as an amendment to the Federal Election Campaign Act of 1971 and would allow candidates to address deceptive AI-generated content in federal court. There are satire, parody, and news broadcast exceptions, the release noted.

The legislation was developed alongside Hawley and Sens. Chris Coons, D-Del., and Susan Collins, R-Maine.

“Hot off the press, Sen. Hawley and I have introduced our bill today with Sen. Collins,” said Klobuchar at the beginning of the hearing, “and Sen. Coons to ban the use of deceptive AI-generated content in elections.”

Klobuchar said the legislation would work in concert with a watermarking-based system, but would focus on AI used to impersonate elected officials. Her office did not respond to a request for more information by the time of publication but released the press release shortly afterward.

Speaking about the legislation, Blumenthal added at the hearing: “I’m very focused on election interference because elections are upon us. And I want to thank my colleagues, Sens. Klobuchar, Hawley, Coons and Collins for taking the first step toward addressing the harms that may result from deepfakes, impersonation, all the potential perils we’ve identified here.”

Witnesses at the hearing included Microsoft President Brad Smith, NVIDIA chief scientist and senior vice president of research William Dally, and law professor Woodrow Hartzog.

The discussion emphasized the wide range of ideas — and concerns — about how legislators might rein in emerging AI systems. It comes as the effort to regulate AI continues to ramp up, including a new executive order on the technology expected from the White House and the onset of Sen. Chuck Schumer’s AI-focused “insight forums.” Earlier on Tuesday, the White House also announced that eight more companies had joined its voluntary AI commitments.

Hawley and Blumenthal’s framework would involve a new government body focused on overseeing AI. In particular, the proposal would require a government license for companies working on AI systems deemed particularly sensitive. These include large language models, such as the kind of systems OpenAI used to develop ChatGPT, as well as other high-impact applications of the technology, like facial recognition.

In order to obtain licenses, companies working on these models would need to follow certain risk management, pre-development testing, and incident reporting practices, among other requirements, according to a brief released by the senators.

Editor’s note, 9/12/23 at 5:29 p.m.: This piece was updated to include new details about a legislative proposal, which were revealed in a press release.