Sen. Chuck Schumer, D-N.Y., once again Tuesday brought together top artificial intelligence scholars, tech evangelists and civil rights leaders to discuss AI regulation and development, this time focusing the conversation on increased federal research and development funding, tech immigration issues, and ways to find common ground on AI safeguards.
In Senate Majority Leader Schumer’s second closed-door bipartisan AI Insight Forum, participants also spotlighted how the federal government in particular can best ensure the U.S. remains a leader in AI innovation while developing better and safer autonomous systems.
“We came to an agreement that the government has to fund — now we say at least $32 billion. There are certain things we have to do, specific things in terms of funding NAIRR [the National Artificial Intelligence Research Resource] with at least $32 billion,” Schumer told reporters halfway through the forum.
“We have to have the government and the private sector collaborate on sharing information, sharing data, and the federal government needs to set up some models and some kind of ecosystem that allows the private sector to do even more. And if we don’t do this, China will get ahead of us,” added Schumer.
The forum was attended by top tech evangelists like Marc Andreessen of venture firm Andreessen Horowitz and Patrick Collison, the CEO of Stripe, as well as key civil rights leaders like Derrick Johnson, the president of the NAACP, and Amanda Ballantyne, the director of the AFL-CIO Technology Institute. It also included former top White House AI officials like Alondra Nelson, the former director of the Office of Science and Technology Policy (OSTP), and Suresh Venkatasubramanian, a former AI specialist within OSTP who is now a computer science professor at Brown University.
South Dakota Sen. Mike Rounds, Schumer’s Republican counterpart in leading the bipartisan AI forums, said he would like to see the federal government have something akin to the American Society for Testing and Materials (ASTM) – an international standards organization that develops and publishes voluntary consensus technical standards – for AI.
Rounds told FedScoop after the forum that such an entity or group within the government could be a “good referee” for AI and “provide technical assistance to a lot of different federal agencies that need to understand it better.”
Rounds added that there was significant consensus in the room Tuesday regarding AI problems and solutions, but said there were some disagreements over how to handle the large language models that underpin most generative AI tools, such as OpenAI’s ChatGPT. There was also disagreement on privacy and on who controls the open-source and private databases on which most AI tools have been trained or built, the senator added.
One of the forum’s attendees, Ylli Bajraktari, CEO of the nonprofit Special Competitive Studies Project (SCSP) and the former executive director of the National Security Commission on AI, told FedScoop that the forum focused on three key ideas on how to boost AI innovation.
In addition to increasing funding for AI research and development, Bajraktari said the meeting also centered around ensuring the U.S. has a strong pipeline of skilled AI workers — both by educating and reskilling American citizens and through increased immigration — and agreeing upon necessary safeguards for the government to put on AI technologies so the technology doesn’t harm society.
“Right now, we have invested less than 1% of our GDP in [AI] R&D. So I think there was a general agreement, we got to put more money there. The issue is, how fast and how much because you cannot dump a lot of money all at once. We need the government to inject money, through our institutions like [the National Institute of Standards and Technology], the National Science Foundation, NASA and others,” Bajraktari told FedScoop.
Bajraktari also said there was agreement during the forum that the immediate impact of AI on jobs and the workforce should be studied further, perhaps through the creation of a national commission on automation and the future of work. This was also a recommendation in one of SCSP’s recent reports.
There was also discussion during the forum, Bajraktari said, about increasing the number of H-1B visas allowed into the U.S. to attract and retain more of the world’s brightest minds.
Max Tegmark, a physics professor at the Massachusetts Institute of Technology who is also president of the Future of Life Institute, told FedScoop that the AI forum was highly productive but that he was disappointed by the lack of discussion of the existential risks posed by artificial general intelligence (AGI): the fear that AGI tools could become so intelligent that they escape human control and harm humans.
“I think there was a very commendable push from Sen. Schumer and others for AI innovation being sustainable. But there was a great unwillingness to discuss large-scale risks, to discuss existential risks, and to discuss AGI at all,” Tegmark told FedScoop during an interview after the forum.
“I’m the only one that really brought up the subject and another one of the invited speakers explicitly said that we shouldn’t talk about these things. Nobody wants to talk about the thing that could transform everything in two or three years,” added Tegmark.
Tegmark’s institute earlier this year led an open letter, signed by Tesla CEO Elon Musk, calling for a pause on the development of powerful AI systems to focus on safe and responsible AI deployment.
Schumer’s AI Insight Forums will continue, with the third scheduled for Nov. 1 and focused on AI in the workforce, FedScoop learned. The Senate majority leader has planned nine “insight forums” covering issues including national security, privacy, high-risk applications, bias, and the implications of AI for the workforce, gathering both those bullish on AI and skeptics and critics of the technology.