A top AI researcher told Senate lawmakers Tuesday that the billions being invested in private sector artificial intelligence systems present a hurdle for government regulation of the technology.
“No government agency is going to be able to match the resources that are going into the creation of these AI systems,” Stuart Russell, a computer science professor at the University of California Berkeley who focuses on AI, said at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on AI oversight.
Russell, in response to a question by Sen. Richard Blumenthal, D-Conn., said he’d seen figures that show roughly $10 billion a month going into the creation of startups focused on artificial general intelligence (AGI), or human-like AI.
“Just for comparison, that’s about ten times the amount of the entire National Science Foundation of the United States,” Russell said, adding that the NSF also covers things like physics, chemistry and basic biology. “So how do we get that resource flow directed toward safety?”
Russell said he supports involuntary recall provisions that would force companies whose systems violate regulations to recall their product until they can show that it won’t happen again. “So they have a very strong incentive to actually understand how their systems work and if they can’t, to redesign their systems so that they do understand how they work,” Russell said.
The comments come as lawmakers and the Biden administration grapple with how to go about reining in the nascent but potentially powerful technology. Two senators last month introduced legislation that would end Section 230 immunity for generative AI, and the White House is working to create a government body to support research and development of the technology in the U.S.
Dario Amodei, chief executive officer of AI research company Anthropic, who also testified Tuesday, voiced support for measurement and enforcement related to AI in response to Blumenthal’s question.
Amodei said his company has supported funding the National Institute of Standards and Technology to oversee the AI research and measurement process, and creating a National AI Research Resource (NAIRR) — a proposed research body backed by the Biden White House.
“I think this idea of being able to even measure that the risk is there is really the critical thing,” Amodei said in reference to threats posed by AI. Without measurement, he said, regulations would be “a rubber stamp.”