House lawmakers consider legislative options to combat AI-generated sexually exploitative content

Republicans and Democrats on the House Oversight and Accountability Cybersecurity, Information Technology and Government Innovation Subcommittee seek to focus on legislative definitions to allow law enforcement to apply existing laws to artificial intelligence.

House lawmakers across the aisle agreed during a Tuesday hearing on the need for a legislative framework addressing artificial intelligence-generated content that depicts child sexual abuse material, or CSAM, while acknowledging gaps left by existing statutes that do not explicitly address AI.

During a House Oversight and Accountability subcommittee hearing that featured testimony from a New Jersey mother whose high school-aged daughter was the victim of deepfake pornographic content created and distributed by other students, lawmakers appeared sympathetic to pleas for legislative action at both the state and federal levels that would hold bad actors civilly and criminally accountable. 

Rep. Nick Langworthy, R-N.Y., said he is working on legislation with state inspectors general to create a commission that would consider generative AI safeguards, assess current legislation and recommend revisions to strengthen law enforcement’s ability to prosecute AI-related CSAM crimes. 

Additionally, Cybersecurity, Information Technology and Government Innovation Subcommittee Chairwoman Nancy Mace, R-S.C., shared that she is co-sponsoring a bipartisan, bicameral bill introduced by Rep. Alexandria Ocasio-Cortez, D-N.Y., that deals with non-consensual, sexually explicit deepfake media.


Langworthy said during the hearing that while he strongly supports innovation and wants to ensure that the U.S. doesn’t lose its edge to China on AI, lawmakers “must hold accountable unethical creators, criminal actors and especially those who are creating child pornography and child sexual abuse material. Emerging technology should always be used in ethical ways and tech companies, alongside Congress, need to ensure that this happens.”

Carl Szabo, vice president and general counsel for the technology trade association NetChoice, said during his testimony that laws covering offline crimes are applied online as well, pointing to increased law enforcement activity and prosecutions of bad actors. Szabo argued that AI is “highly regulated today” and that the technology is not “an escape clause for criminals.”

In an interview after the hearing, Mace said Congress can fill in legislative gaps through technical changes in laws, noting that “small parts” can make a “big difference.”

“We need to change a word or two here to make it applicable to deepfakes in AI, non-consensual pornography,” she said. “You can’t go build a dirty bomb; that’s against the law, so AI can’t do it either. How are [existing laws] impacted by AI?”

Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.