
Senate Democrat pushes to expand copyright act exemption to cover generative AI research

In a letter to the Library of Congress, Sen. Mark Warner, D-Va., proposed expanding the existing “good-faith security research” exemption to cover generative AI.
Sen. Mark Warner, D-Va., talks to reporters at the U.S. Capitol on March 7, 2023 in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

An exemption under the Digital Millennium Copyright Act should be expanded to include generative artificial intelligence research focused specifically on embedded biases in AI systems and models, a top Senate Democrat argued in a new letter to the Library of Congress.

In the letter, shared exclusively with FedScoop, Sen. Mark Warner, D-Va., urged the LOC’s copyright office to expand an existing “good-faith security research exemption” to cover research into issues beyond traditional security concerns, such as bias, arguing that doing so would be the best path for ensuring a “robust security ecosystem” for tools such as generative AI.

The letter from Warner, co-chair of the Senate Cybersecurity Caucus, is in response to a petition from Jonathan Weiss, founder of the IT consulting firm Chinnu Inc., that asked the LOC to establish a new exemption to address security research on generative AI models and systems. 

A spokesperson for Warner said in an email that an expansion to the exemption rather than an entirely new exemption “is the best way to extend the existing protections that have enabled a robust cybersecurity research ecosystem to the emerging issues surrounding safe AI.”


Warner’s letter mirrors a Department of Justice response to the same petition last month. The Computer Crime and Intellectual Property Section of the DOJ’s Criminal Division wrote that “good faith research on potentially harmful outputs of AI and similar algorithmic systems should be similarly exempted from the DMCA’s circumvention provisions.”

Said Warner: “It is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse and misoperation may lead to an increased risk of physical or psychological harm.”

The Virginia Democrat, who has introduced bipartisan legislation on artificial intelligence security and emerging tech standards, pointed to the National Institute of Standards and Technology’s AI Risk Management Framework to acknowledge that AI’s risks “differ from traditional software risks in key ways,” opening the door for not only security vulnerabilities but also dangerous and biased outputs. 

The use of generative AI for fraud and non-consensual image generation is among the deceptive practices Warner cited as reasons for consumer protections, such as watermarks and content credentials. Additionally, the lawmaker asked the LOC to ensure that the potential expanded exemption “does not immunize” research that would intentionally undermine protective measures.

“Absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption,” Warner said. 


The senator also asked the LOC to include security and safety vulnerabilities, especially involving bias and additional harmful outputs, in its expanded good-faith security research definition.

In response to Warner’s letter, Weiss said in an email to FedScoop that he doesn’t “care whether the existing exemption is expanded to include research on AI bias/harmful output, or whether an entirely new exemption is created. Our main concern is to secure protections for good faith research on these emerging intelligent systems, whose inner workings even the brightest minds in the world cannot currently explain.”

The Weiss petition and letters from DOJ and Warner were prompted by the LOC Copyright Office’s ninth triennial rulemaking proceeding, which accepts public input for new exemptions to the DMCA.
