
As House task force work begins, Rep. Bonamici is ‘very worried’ about AI — ‘and we all should be’

In a Q&A with FedScoop, the Oregon Democrat discusses her legislative priorities with the task force, as well as her focus on the need to address bias, lack of consent, discrimination and privacy issues with the technology.

Rep. Suzanne Bonamici is no stranger to high-level, bipartisan tech discussions on Capitol Hill, having assisted in the negotiation and passage of the CHIPS and Science Act and co-founded the Science, Technology, Engineering, Arts and Mathematics Caucus. The Oregon Democrat’s next assignment, as a member of the new House AI task force, could be her most consequential.

Bonamici, one of 12 Democrats appointed to the 24-member House AI task force announced last week by Speaker Mike Johnson, R-La., and Minority Leader Hakeem Jeffries, D-N.Y., said in an interview with FedScoop that her focus will be on the ethical use of AI, pointing to a need to address bias, lack of consent, discrimination and privacy issues.

The congresswoman also revealed that she is working on a piece of legislation that mirrors the Senate’s “No Robot Bosses Act of 2024.” The House version, which is set to be introduced in a matter of weeks, addresses the risks of job displacement as AI is implemented in practical applications across industry, according to an email shared with FedScoop. 

Bonamici spoke with FedScoop about her legislative priorities with the task force, why it’s important that AI regulation is a bipartisan effort and her concerns about the technology.


Editor’s note: The transcript has been edited for clarity and length.

FedScoop: AI has so far remained a largely nonpartisan issue, and I wanted to know: Where do you see the biggest differences between Democrats and Republicans on AI regulation and AI work?

Rep. Suzanne Bonamici: I have, in my dozen years in Congress, always tried to find common ground, and I’m convinced that we’ll be able to do that on the task force. … I think we are there as a bipartisan task force to actually address the issues that our constituents are asking about, [such as] responsible use and ethical development of artificial intelligence, which has been something that I’ve been talking about for years, and then let’s figure out the regulatory structure. Those are two issues that I’ve been asking about in more informal settings, and I look forward to working on them in the task force.

FS: How worried are you about AI-generated content this election season?

SB: Oh, I’m very worried — and we all should be. We have already seen examples of problems, whether it be deepfakes or the theft of someone’s identity to make it sound like a candidate is sending a message. So I’m hopeful that there are enough people out there looking at this, monitoring it and calling it out. I just don’t [know] what the remedy is going to be. And in the long term, I am very supportive of education and media literacy to help people recognize when content is AI-generated. I just heard a story the other day about how even young kids think there’s a little person in an [Amazon] Alexa; they draw it with the face of a person. [We need] early, age-appropriate education so people know what to look for.


FS: Is that the AI issue that worries you most? The ethical dilemmas and the threats they pose?

SB: It depends on how you’re defining ethical dilemmas, which is what I’ve been asking about. Sometimes in these briefings, you come out with more questions than answers, but people have different definitions of what ethical AI means. I know that there’s tremendous potential, but I also know there are risks. I mentioned the privacy concerns, algorithmic bias and job displacement, which raise lots of questions, and then, of course, all the nefarious uses we’re seeing with deepfakes. I’m also interested in the vast amount of energy that this takes. In fact, I had a meeting recently about some work that’s being done to make running these models more energy-efficient.

FS: What are some lessons learned, perhaps from social media and data privacy, that you are keeping in mind when it comes to placing guardrails for artificial intelligence?

SB: I want to start by saying that whenever we regulate around technology, we have to do it in a way that provides the needed protections but does not stifle innovation, because we don’t want to hold back the good potential, which is challenging. But I think that’s one of the reasons why the bipartisan task force was set up by the Leader and the Speaker: they realized that this is urgent. And we know with social media, people are now saying [that] it is dangerous to some, and we really have to look to the experts and what is best, especially for young people, and keep that in mind. … I don’t think we can have 50 different systems; I think it needs to be done at the federal level.

FS: We’ve seen a lot of voluntary commitments from companies where AI is concerned, and I know that you are advocating for fair competition across the private sector. How can Congress ensure that smaller tech companies can compete with big tech companies?


SB: Obviously, [we] look at anti-competitive behavior, and [we] have [the Federal Trade Commission] and [the Department of Justice] to do that; they are working on those cases, and anti-competitive behavior that falls under existing antitrust laws is important to look at. Expanding opportunities for smaller companies is going to be really important. I’m out here in Northwest Oregon, which is known as the Silicon Forest, where we have a lot of big but also small semiconductor companies and all the ancillary businesses that go with them. So we are really working hard out here to develop a workforce. I was just hearing recently about a partnership with one of the tech hubs and some of our research universities, but also smaller colleges and universities and workforce boards. I think there are many ways that we can look to not only increase opportunities but also increase diversity, which is really critical in the workforce.

FS: Of course, as you’re well aware, digital literacy and the broadband gap remain major problems. How worried are you about AI and AI-generated content exacerbating those digital inequities?

SB: It’s a possibility, but we’ve been working hard to fill the gaps. It was exacerbated … by the pandemic, when all of a sudden students had to learn online and there were a lot of places [that] didn’t have connectivity or devices. So we’ve been working on closing those gaps, and I think we’ve made some progress with that and with expanding connectivity.

Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.
