Advertisement

Bill Foster, a particle physicist-turned-congressman, on why he’s worried about artificial general intelligence

In a Q&A with FedScoop, the Illinois Democrat and House AI task force member discusses his optimism and pessimism on potential AI legislation and his fears about AGI.
Rep. Bill Foster, D-Ill., speaks during a House Select Subcommittee on the Coronavirus Crisis hearing on Capitol Hill on Oct. 2, 2020, in Washington, D.C. (Photo by J. Scott Applewhite/Getty Images)

Congress is just starting to ramp up its efforts to regulate artificial intelligence, but one member says he first encountered the technology in the 1990s, when he used neural networks to study physics. Now, Rep. Bill Foster, D-Ill., is returning to AI as a member of the new bipartisan task force on artificial intelligence, led by Reps. Ted Lieu, D-Calif., and Jay Obernolte, R-Calif., which was announced by House leadership earlier this week

In a chat with FedScoop, the congressman outlined his concerns with artificial intelligence. The threat of deepfakes, he warned, can’t necessarily be solved with detection and may require some kind of digital authentication platform. At the same time, Foster said he’s also worried that the setup of committees — and the varying levels of expertise within Congress — aren’t well situated to deal with the technology. 

“There are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true,” he told FedScoop. “It’s much harder for … the average member of Congress to push back on claims about AI. That’s the difference. We’re not as well defended against statements that may or may not be factual from lobbying organizations.” 

Compared to some other members of Congress, Foster appears particularly concerned about artificial general intelligence, a theoretical form of AI that, some argue, could end up rivaling human abilities. This technology doesn’t exist yet, but some executives, including OpenAI CEO Sam Altman, have warned that this type of AI could raise massive safety issues. In particular, Foster argues that there will be a survival advantage to algorithmic systems that are opaque and deceptive. 

Advertisement

(Critics, meanwhile, argue that discussion of AGI has distracted from opportunities to address the risks of AI systems that already exist today, like bias issues raised by facial recognition software.) 

Foster’s comments come in the nascent days of the AI task force, but help elucidate how varied perspectives on artificial intelligence are, even within the Democratic party. Unlike other areas, the technology is still relatively new to Congress and positions on how to rein in AI, and potential partisan divides, are only still forming. 

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: With this new AI task force, to what extent do you think you’re going to be focusing on chips and focusing on hardware, given both the recent chips legislation and OpenAI’s Sam Altman’s calls for more focus on chip infrastructure, too? 

Rep. Bill Foster: It’s an interesting tradeoff. I doubt that this committee is going to be in a position to micromanage the [integrated circuit] industry. … I first met Sam Altman about six years ago when I visited OpenAI [to talk about] universal basic income, which is one of the things that a lot of people point to having to do with the disruption to the labor market that [AI] is likely to cost. 

Advertisement

… When I started making noise about this inside the caucus, people expected the first jobs to fall would be factory assembly line workers, long haul truck drivers, taxi drivers. That’s taken longer than people guessed right then. But the other thing that’s happened that’s surprised people is how quickly the creative arts have come under assault from AI. … There’s a lot of nervousness among teachers about what exactly are the careers of the future that we’re actually training people for.

… I think one of the most important responses — something that the government can actually deliver and even deliver this session of Congress — is to provide people some way of defending themselves against deepfakes. … There’s two approaches to this. The first thing is to try to imagine that you can detect fraudulent media and to develop software to detect deepfake material. I’m not optimistic that that’s going to work. It’s going to be a cat-and-mouse game forever. … Another approach is to provide citizens with a means of proving they are who they say they are online and they are not a deepfake. 

FS: An authentication service? 

BF: A mobile ID. A digital driver’s license or a secure digital identity. This is a way for someone to use their cell phone and their government-provided credential, like a passport or Real ID-compliant driver’s license, and associate it with your cell phones [This could] take advantage of your cell phone’s ability through AI to recognize its owner — and also the modern cell phone’s ability to be used like a security dongle. It has what’s called a secure enclave, or a secure compute facility, that allows it to hold private encryption keys that makes the device essentially a unique device in the world that can be associated with a unique person and their credential. 

FS: How optimistic are you that this new AI task force is actually going to produce legislation?

Advertisement

BF: One reason I’m optimistic is the Republican’s choice of a chair: Jay Obernolte. … He’s another guy who keeps up the effort to maintain his technical currency. He and I can geek out about the actual state of the art, which is rather rare in the U.S. Congress. … One of the missions, certainly for this task force, will be to try to educate members about at least the capabilities of AI. 

FS: How worried are you that companies might try to influence what legislation is crafted to sort of benefit their own finances?

BF: I served on the Financial Services Committee for all my time in Congress, so I’m very familiar with industry trying to influence policy. It would shock me if that didn’t happen. One of the dangers here is that there are many members of Congress who understand about finance and banking and can push back on technical statements about financial services that might not be true. It’s much harder for …  the average member of Congress to push back on claims about AI. That’s the difference. We’re not as well defended against statements that may or may not be factual from lobbying organizations. 

FS: To what extent should the government itself be trying to build its own models or creating data sources for training those models? 

BF: There is a real role for the national labs in curating datasets. This is already done at Argonne National Lab and others. For example, with datasets where privacy is a concern, like electronic medical records — where you really need to analyze them, but you need a gatekeeper on privacy — that’s something where a national laboratory that deals with very high-security data has the right culture to protect that. … Even when they’re not developing the algorithms, they can allow third parties to come in and apply those algorithms for the datasets and give them the results without turning over all the private information. 

Advertisement

FS: You’ve proposed legislation related to technology modernization and Congress. To what extent are members exposed to ChatGPT and similar technologies?

BF: The first response is to have Congress organize itself in a way that reflects today’s economy. Information technology just passed financial services as a fraction of the economy. … That puts it pretty much on a par with, for example, health care, which is also a little under 20%. If you look at the structure of Congress, it looks like a snapshot of our economy 100 years ago.

… The AI disruption might be an opportunity for Congress to organize itself to match the modern economy. That’s one of the big issues that I’d say. Obviously, that’s the work of a decade at least. There’s going to be a number of economic responses to the disruption of the workforce. I think the thing we just have to understand and appreciate [is] that we’re all in this together. It used to be 10 or 15 years ago that people say, those poor, long-haul truck drivers or taxi drivers or factory workers that lose their jobs. But no, it’s everybody. … With that realization, it will be easier to get a consensus that we’ve got to expand the safety net for those who have seen their skills and everything that defines their identity and their economic productivity put at risk from AI. 

FS: How worried are you about artificial general intelligence?

BF: Over the last five years, I’ve become much more worried than I previously was. And the reason for that is there’s this analogy between the evolution of AI algorithms and the evolution in living organisms. And what if you look at living organisms and the strategies that have evolved, many of them are deceptive. 

Advertisement

… This happens in the natural kingdom. It will also happen and it’s already happening in the evolution of artificial intelligence. If you imagine there are two AI algorithms: one of them is completely transparent and you understand how it thinks [and] the other one is a black box. … Then you ask yourself, which of those is more likely to be shut down and the research abandoned on it? The answer is it is the transparent one that is more likely to be shut down, because you will see it, you will understand that [it has] evil thought processes and stop working on it. There will be a survival advantage to being opaque. 

You are already seeing in some of these large language models behavior that looks like deceptive behavior. Certainly to the extent that it just models what’s on the internet, there will be lots of deceptive behavior, documented on the internet, for it to model and to try out in its behavior. It will be a huge survival advantage for AI algorithms to be deceptive. It’s similar to the whole scandal with Volkswagen and the smog emission software. … When you have opaque algorithms, the companies might not even know that their algorithm is behaving this way. Because they will put it under observation, they will test it. … The difficulty is that [they’re going to] start knowing they’re under observation and then behave very nicely, and they’ll do everything that you wish they would. Then, when it’s out in the wild, they will just try to be as profitable as they can for their company. Those are the algorithms that will survive and displace other algorithms.

Rebecca Heilweil

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.

Latest Podcasts