Rep. Don Beyer is one of the leading voices on artificial intelligence in the House. The Virginia Democrat is vice chair of both the bipartisan Congressional AI Caucus and the working group on the technology established by the New Democrat Coalition, the party’s largest caucus in the lower chamber. He has proposed legislation meant to rein in the technology, including, most recently, a plan to ensure federal agencies and vendors follow the AI risk management framework created by the National Institute of Standards and Technology.
Oh, and in the congressman’s spare time, he’s getting a master’s degree in machine learning, too.
In a recent interview with FedScoop, Beyer said federal AI legislation could finally be signed by President Joe Biden this year. Of course, there are real reasons to be skeptical. Major legislation focused on the technology hasn’t been finalized yet — and House Speaker Mike Johnson, R-La., hasn’t personally told Beyer that he’s interested in making that goal happen. Still, Beyer says legislative ideas on the table do have traction.
“It will be bipartisan. It will be supported by leadership. And I think it’s important. It’ll be an extraordinary contrast with the laissez-faire approach we’ve taken with social media for the last 24 years,” Beyer said. “We did virtually nothing and are suffering the consequences. Here’s a time where we’re trying to be responsible and get ahead of the curve.”
In a wide-ranging conversation, Beyer outlined the House’s AI plans for this year, funding for NIST, potential existential risks created by the technology, where Congress might have a role, and why he’s optimistic.
Editor’s note: The transcript has been edited for clarity and length.
FedScoop: I know you’re really focused on AI — and I want to ask about the AI legislation you’ve been working on — but maybe to start: how is your AI master’s program going?
Rep. Don Beyer: It’s going well. I’ve got Monday [and] Wednesday classes [and] Thursday morning lab. My whole team is upset that labs are Thursday morning at 9:30 because it interferes with hearings. But the coursework is very fun. This semester is object-oriented programming. Don’t ask me what that means. You can ask me in a couple of months.
FS: I’ll come back with a follow-up on that one. Let’s start by talking about the Federal Artificial Intelligence Risk Management Act, which you proposed earlier this month, alongside Reps. Ted Lieu, D-Calif., Zach Nunn, R-Iowa, and Marcus Molinaro, R-N.Y. Why did you propose it?
DB: This actually is an idea that we stumbled across maybe nine months ago. The simple notion [is] that to try to impose new standards on the entire private sector would be very difficult and would take a long time. But we had an easy trigger in how all the federal contracting work is done. We talked about it for months, and then it showed up in the President’s Executive Order that, for federal agencies’ contracts involving AI, the AI had to follow the NIST risk management framework.
Then we decided: the executive order can be reversed at any time by the next president, so we should put this in legislation. Cheerfully, it’s very bipartisan. … It just requires all government agencies to follow the NIST risk management framework. Hopefully, what that will do is not only make sure that the government’s using AI well, but it will be a signal to the private sector that this is a responsible way to go.
FS: I know the Office of Management and Budget has to finalize its own guidelines for federal agencies working with AI. How do you see that interacting with some of the other rules for AI and federal agencies?
DB: I think most people still agree that NIST is the gold standard. … What we hope is that there will be a convergence around the set of standards that really works. Because NIST for more than a century has been the official caretaker of how long an inch is and how long a second takes and how much a gram weighs, all that stuff. They are probably the best people, we think, to determine what the standard should actually be.
Now one of the challenges is they don’t have a big budget for it. There are only two-and-a-half people assigned to it. So among our responsibilities will be to make sure that they have the intellectual and labor resources to keep it up to date and improve and evolve and learn from everyone else.
FS: Given that there’s a pretty complicated supply chain for the creation of an AI system, does that present potential challenges for implementing this with companies that might be building these tools?
DB: Yes, it does, but because that’s also the way the real world works, it’s good that we address it sooner rather than later. … I can tell you this now as a computer science student — one of the interesting ideas in computer science is something called inheritance, that you don’t have to recreate a whole set of code if it already exists. You can inherit the class structure, the code structure, files, all that from previous stuff. You’re going to have inheritance everywhere in the industry, but best to realize that and get on top of it early on.
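For readers unfamiliar with the concept Beyer mentions: inheritance lets one class reuse another’s code rather than recreating it. A minimal sketch in Python, with hypothetical class names chosen purely for illustration:

```python
# Inheritance: a subclass reuses the parent class's code
# ("you don't have to recreate a whole set of code if it
# already exists"), overriding only what differs.

class Model:
    """Hypothetical base class for an AI model."""
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"{self.name}: generic model"


class VisionModel(Model):
    """Inherits __init__ from Model; overrides only describe()."""
    def describe(self):
        return f"{self.name}: vision model"


m = VisionModel("demo")
print(m.describe())  # the subclass used Model's __init__ unchanged
```

The subclass gets the parent’s structure for free, which is the “inheritance everywhere in the industry” dynamic Beyer is pointing at: reused code carries its properties (and its risks) downstream.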
FS: Where does the idea of setting up an entirely new agency to regulate AI stand?
DB: I’m going to give you an ambivalent answer. It makes more sense on an international level, in that maybe through the United Nations or something like the World Trade Organization or the World Health Organization, ultimately we need to coordinate this among more than 200 different countries, including major players like China, India, us, the U.K., Russia. … You’re not gonna be able to deal with that one at a time.
On the other hand, I’m skeptical about doing another federal agency to do it. I think NASA’s need for AI and AI oversight is going to be very different from what the Department of Defense needs, which will be very different from what Fish and Wildlife needs within the Department of the Interior. I’m also reluctant to bless the creation of yet another federal bureaucracy. … All the agencies already have been studying this and trying to get ready for it. So I’m perfectly content to let the Department of Defense within the NIST AI framework try to manage its own vendor relations.
FS: Do you think that risk management framework is sufficient for thinking about civil rights and AI, bias, trust and safety issues and things and leaving it to agencies to apply that? I can imagine a critic saying this is not rigorous enough.
DB: I do want to start that way. Obviously, some agencies will do it better than others, based on the individuals they put in the leadership position or what the secretary or directors committed to. I’d rather have 20 or 30 different efforts out there. Some of them thrive, some of them will fail. Then we will apply the lessons learned.
FS: Going into 2024, what are the priorities for the Congressional AI Caucus right now?
DB: I’m just a humble vice chair. But the conversations that I’ve had with [California Democrat] Anna Eshoo and [California’s] Jay Obernolte, who is the other vice chair on the Republican side, are all around picking out a handful of the 100 bills that have already been introduced. … The number one priority would be, if we could get three to five AI bills signed by President Biden this year, that really creates an excellent platform for us to build on to the years to come as we get more real-life experience with the AI.
FS: You’re working on the AI working group within the New Democrat Coalition. How would you parse the difference in perspective that the New Democrat Coalition has on this, versus maybe other members of the Democratic party who are not members of the New Democrat Coalition, including some of the more progressive or farther left members?
DB: I can’t give you the Progressive Caucus insight because it has hardly come up there at all. I am a member of both the New Dems and the Progressive Caucus. In terms of your thoughtful question about the differences between the New Dems’ approach and the bipartisan Congressional AI Caucus, I hardly see any difference. Maybe the difference would be in ambition.
FS: Do you have your eye on the question of how generative AI tools could sort of be deployed in maybe nefarious ways during the upcoming elections? How worried are you?
DB: I think we’re all worried about that and all expecting it to happen. We’re certainly seeing it happen in other countries already. And we will see. It’s possible that people will use it and it backfires on them. … Because it’s easy enough to create the horror scenarios where I’m standing there with — who’s the worst bad guy — the leader of Hamas, having dinner talking about our children or something terrible. What deepfakes could accomplish.
On the other hand, one of our objectives is to educate people enough to be skeptical about anything like that. And, already, some private companies have policies to disclose ads. That is not gonna be a big step up.
FS: With AI right now, what seems most exciting to you and what scares you the most about this technology?
DB: The most exciting part by far is the science applications, specifically the medical applications. … I had dinner last week … with the AI scientists who developed AlphaFold. … They know how every protein ever discovered is folded, to about 80% accuracy. That’s enough to be able to really, really stimulate drug development, make things go 1,000 times faster.
… The short-term biggest downside by far is gonna be job elimination. As we’ve seen in industry after industry over the years. You go back 100 years, 150 years, most of us were in agriculture, now it’s 1%. We’re gonna have the same kinds of things. Coping with the dislocations will be a great public policy challenge and cultural challenge. The long-term [downside] is still trying to dig deep enough to figure out how real are the existential risks.
FS: I was gonna ask that.
DB: I don’t know. I’m trying to learn as much as I can about them — and only because I think it’s really irresponsible not to learn as much as we can about the existential risk. Many in the industry say, Blah. That’s not real. We’re very far from artificial general intelligence. … Or we can always unplug it.
But I don’t want to be calmed down by people who don’t take the risk seriously. A lot of people still don’t think climate change is a serious risk. I’m always annoyed by the people who don’t take 1,600 nuclear weapons that we have aimed at other people or vice versa [seriously]. … A lot of people don’t think about that at all, but that could be the end of humanity.