Billions flooding into private AI creation poses regulatory challenge, lawmakers told

Stuart Russell says he supports mandatory recall provisions that would force companies whose AI systems violate regulations to recall their products.
WASHINGTON, DC - JULY 25: Professor of computer science at the University of California, Berkeley, Stuart Russell testifies during a hearing before the Privacy, Technology, and the Law Subcommittee of Senate Judiciary Committee at Dirksen Senate Office Building on Capitol Hill on July 25, 2023 in Washington, DC. (Photo by Alex Wong/Getty Images)

A top AI researcher told Senate lawmakers Tuesday that the billions being invested in private sector artificial intelligence systems present a hurdle for government regulation of the technology.

“No government agency is going to be able to match the resources that are going into the creation of these AI systems,” Stuart Russell, a computer science professor at the University of California Berkeley who focuses on AI, said at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on AI oversight.

Russell, in response to a question from Sen. Richard Blumenthal, D-Conn., said he’d seen figures showing roughly $10 billion a month going into the creation of startups focused on artificial general intelligence (AGI), or human-like AI.

“Just for comparison, that’s about ten times the amount of the entire National Science Foundation of the United States,” Russell said, adding that the NSF also covers things like physics, chemistry and basic biology. “So how do we get that resource flow directed toward safety?”

Russell said he supports mandatory recall provisions that would force companies whose systems violate regulations to recall their products until they can show the violation won’t happen again. “So they have a very strong incentive to actually understand how their systems work and if they can’t, to redesign their systems so that they do understand how they work,” Russell said.

The comments come as lawmakers and the Biden administration grapple with how to go about reining in the nascent but potentially powerful technology. Two senators last month introduced legislation that would end Section 230 immunity for generative AI, and the White House is working to create a government body to support research and development of the technology in the U.S.

Dario Amodei, chief executive officer of the AI research company Anthropic, who also testified Tuesday, voiced support for measurement and enforcement related to AI in response to Blumenthal’s question.

Amodei said his company has supported funding the National Institute of Standards and Technology to oversee the AI research and measurement process, and creating a National AI Research Resource (NAIRR) — a proposed research body backed by the Biden White House.

“I think this idea of being able to even measure that the risk is there is really the critical thing,” Amodei said in reference to threats posed by AI. Without measurement, he said, regulations would be “a rubber stamp.”

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.