
Rep. Scott Franklin eyeing public-private partnerships as his House AI task force work kicks into gear

The Florida Republican said in a Q&A with FedScoop that he’s also interested in AI’s potential in agriculture and labor.

Proposed applications of artificial intelligence considered by Congress have ranged from cybersecurity to streamlining processes. For Rep. Scott Franklin, R-Fla., the technology also presents a more hyperlocal function: enhancing weather forecasting for agricultural purposes.

“I represent an area that’s a long way from Silicon Valley. I’ve got the largest agricultural district east of the Mississippi [River], and we have a lot of challenges that are facing farmers today. Technology can be at least a partial solution,” Franklin said in an interview with FedScoop. “If AI can help us do better weather prediction, that’s going to have massive implications for agriculture. Even better hurricane forecasts that allow farmers and growers to respond in a more timely manner to [an] impending storm.”

A member of the new House AI task force, Franklin completed 26 years of military service as a Naval aviator before joining Congress three years ago. Now, the Florida Republican serves on the House Science, Space and Technology Committee and its Research and Technology Subcommittee.

Franklin has introduced legislation to support American businesses’ participation in the establishment of global standards for AI, as well as the Land Grant Research Prioritization Act, which would provide land-grant universities with dedicated access to the Department of Agriculture’s grant funding to enhance AI research and mechanization. He spoke with FedScoop recently about his plans for the AI task force, AI-related cybersecurity concerns, the role of the private sector in Congress’s AI work and more.


Editor’s note: The transcript has been edited for clarity and length.

FedScoop: AI policy has remained largely bipartisan. What are you anticipating as some differences between Democrats and Republicans as the task force members work to produce a report?

Rep. Scott Franklin: It’s going to be interesting to see — I can’t anticipate all of that yet. I know we all have First Amendment concerns, and there are concerns about either purposeful or unintentional bias built into algorithms that produce outputs that can cut both ways, and I think both sides of the aisle will be concerned about that. … I’ve been hearing … there was a lot of concern that the algorithms are only going to spit out results that are as good as the data that’s fed into them, and is there purposeful or unintentional bias in some of that information?

… We’re all obviously concerned about election integrity and the implications for AI in nefarious hands to influence the elections one way or another. I think we’re all rightfully worried about that. We’ve got a tremendous advantage, I think, now over maybe the rest of the world; we want to make sure we preserve that. Obviously, it’s in everyone’s best interest, so that’s not a political thing either. … We’ll all come at it from different perspectives, and I’m actually interested to hear how my counterparts think differently on it.

FS: How can the U.S. lead on that global stage for AI governance?


SF: I think establishing those guardrails is gonna be important, but where those are is something I still haven’t landed on. I’m interested to hear from the people that we’re going to have come in to speak to us and go through our deliberations. We want to be careful not to squelch innovation and be … overly burdensome with regulation. So where’s that fine Goldilocks spot on this? I think it’s not one of those things that we’re going to just nail from the beginning. I think we’re going to need to probably look at it as a work in progress and try things as the technology evolves, [and] we may realize that we need to make tweaks along the way.

… A concern that I have is that if we don’t lead … and if there’s a void, states are gonna create their own regulations. I think if we’re not careful, we’ll end up with a patchwork of state regulations that are just going to muddy the water and make things a lot more confusing. … Time is of the essence to get busy on this.

FS: Are you worried about AI being used in cybersecurity threats? Can you share anything that worries you regarding defense and AI?

SF: I think AI is an area that’s going to help us with cybersecurity. I think when there’s so much out there that we need to protect and so many vulnerabilities, I think AI is going to help. But there’s also the other side of that coin: AI is going to enable our adversaries to be much more pervasive in their efforts to try to hack into our systems. … Defense-wise, a question I get a lot is, “do we envision a future where we’re going to have these autonomous machines where a human’s not in the loop making decisions?” I don’t see us getting to that level of AI, at least in a generation. I don’t see where it’s ever going to be turned over to just kill boxes where the machines are on autopilot and the human’s no longer in the loop. 

FS: Do you think that the private sector and industry leaders should be informing Congress, especially as these task force meetings happen?


SF: Yeah, and I think that was something that [former House Speaker] Kevin McCarthy had recognized early on, that whether we want this role or not, that’s coming our way as Congress. … So he was trying to bring people like [OpenAI CEO Sam Altman] and others in to speak to us and just start building up a base level of knowledge. But I think in the old days, when you go back to the moon program, the ’60s and the Apollo program, so much of that cutting-edge research came from within the government and then went out to the private sector. It’s completely reversed now. I think if we try to be the guardian of it all — ‘We’ve got all the answers as a government, we’re going to tell you how to do things without a collaborative partnership with private enterprise.’ — it’s going to be a mistake. Because they’re the ones that have invested massively more money into this than the federal government has, and they’re the ones that are innovating. I think there needs to be a voice represented there from the private side, too.

FS: Is there anything else about the new task force or artificial intelligence that you want to share?

SF: Labor is difficult, and that’s one of our biggest challenges. It’s hard work, [and] we don’t have enough labor to pick our crops and do things like that. So we’re trying to automate that. But, if you’re talking about machines that can go into a field, pick strawberries and decide, ‘is this one ready to be picked?’ … There’s a lot of artificial intelligence that will need to be applied to that, and we may ultimately be able to fix our labor issues through the use of AI. [There’s] a lot of concern that AI is gonna put people out of work, and it is going to cause some disruptions and shifts, depending on the area, but I think there are going to be plenty of areas where we need labor, where we have shortages of labor. AI is going to be able to help fix that, and we can retrain.

There will be other big initiatives that are going to be necessary to retrain folks, and changes in the workforce to redeploy people into different areas. But I see far more upside and potential for AI than downside.

Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.