
Mozilla president on AI executive order: Government can help promote privacy-enhancing tech

The federal government can help advance privacy and open-source goals through procurement, Mark Surman says in a Q&A with FedScoop.
Mark Surman, president of the Mozilla Foundation, delivers a speech at Campus Party Sao Paulo Brazil on Jan. 28, 2013, in Sao Paulo, Brazil. (Photo by Mauricio Santana/Getty Images)

The Biden administration’s executive order on artificial intelligence aims to regulate the technology with a whole-of-government approach. Critically, the policy strategy, which was revealed late Monday, isn’t limited to creating new rules for the private sector; it also aims to leverage the government’s role as both a purchaser and a regulator of technology to shape how AI is ultimately deployed throughout the United States. 

That strategy makes sense to Mark Surman, president of the Mozilla Foundation, the nonprofit that advocates for a safe and healthy internet and owns the company behind the Firefox web browser. 

In an interview Tuesday with FedScoop, Surman said that through procurement, the federal government can help create a market for technologies that advance privacy objectives. At the same time, he argued that the government could invest in more open-source systems itself, promoting the benefits of open-source technology.

Along with some criticisms of the executive order, Surman pointed to several challenges ahead. The biggest, he said, is wooing people with AI expertise to work for the federal government.


“There’s so many places that it can stall out or go wrong. And an executive order is not a lasting policy mechanism,” Surman said. “I think the biggest need — and then on the flip side, the biggest barrier — is getting the AI Talent Surge happening in a really fast way, even if they’re not people who are staying in government forever.”

Notably, Mozilla recently released an AI Guide that developers can contribute to. This year, the Foundation announced it would spend $30 million to develop an AI-focused startup and also hosted a Responsible AI Challenge. 

Editor’s note: The transcript has been edited for clarity and length.

FedScoop: What does this executive order do right? Obviously, it’s very, very long. 

Mark Surman: Given how long it is, it’s hard not to find great things and some weaker things and some missing things. What it gets right is that we urgently need to dig into how we balance the potential of AI to really help us solve big problems for humanity and help us grow our economy … with some of the risks. And it calls those things out in really specific, concrete ways, which is why it’s so long. That’s a good starting point for how we tackle the problem of AI governance. 


You gotta start somewhere. It’s way more concrete than the Blueprint for the AI Bill of Rights and it gives us a lot of springboards to jump from. 

FS: Any particular springboards you’re most excited about? 

MS: We like that it makes the connection between AI, privacy and data. Data is what fuels AI, and [the order is about] making sure that the federal government takes a lot more care in how it uses data and treats privacy in relationship to AI.

It’s particularly exciting to see the focus on the very nerdy privacy-enhancing technologies piece. We’re only going to get to the point that we can do AI in privacy-respecting ways if we create a market for privacy-enhancing technologies. The government as a buyer of AI has a chance to create a market for privacy-enhancing technologies that feed AI. 

… We’re at a spot where we want to make sure we learn from the past and we don’t just hand over the AI economy to a few companies to control, as in some ways we have with the Web 2.0 era. It calls that out and it calls out the need for competition and the need for small business to be a player right from the beginning. 


The EO is silent on the role of open source in AI, and I think it’s skirting the topic and skirting the potential. Open source has been a tremendous force for ensuring and enabling competition in the rest of the internet era. The whole of the internet [was] built on top of Linux and other open source software that allowed companies to innovate faster and cheaper. … We would like to see that as a research priority and a funding priority.

The third piece is really open source itself. … The executive order talks about safety and security, and [says] that we need to address the security risks, and on the other hand, we have to deal with AI’s opacity and complexity. We know from history that you can have security issues in technology and software, whether that software is open or closed, but open source approaches [and] public scrutiny in things like cybersecurity are critical to making sure that we actually address the safety and security issues. 

FS: If you were the head of procurement of AI in the U.S. government, what are the ways agencies could be smarter about buying AI? There is a lot of encouragement for federal agencies to not just regulate artificial intelligence, but to actually use it themselves. 

MS: If I was the head of procurement for all the U.S. government? One is setting market standards for how evaluation, auditing and safety are handled. Often in a private-sector setting, people are going to try to rush past those things… If those are mandated at a federal level, then you’re creating a market and you’re pushing companies to develop things around safety, auditing and that whole evaluation piece. One is to look at setting standards for that through [the National Institute of Standards and Technology] and through purchasing. 

The other is really advancing both the protection of citizens and the field of privacy through purchasing cutting-edge privacy-enhancing technology for managing the data that these AI systems interact with. 


It’s a field where there’s a lot of technical advancement, but it’s been hard for it to find its market niche. There’s a real chance for the federal government to create a market in privacy-enhancing technology, which I think would benefit everybody.

The third piece is in open source. … Huge parts of the AI ecosystem are open source [and] mandating open source in procurement means that federal dollars are used in ways that accrue benefit back to the public and to the industry, and also means that the government doesn’t need to pay twice for stuff. 

FS: Maybe at Mozilla you have an interesting view into this, but there’s a real draw to the private sector for people with AI expertise, and I’m curious how the government can compete for those kinds of skills. As agencies try to hire these people, what challenges are ahead? What should the government be thinking about in terms of AI experts when it can’t necessarily offer the same salary?

MS: There are two or three things to think about in terms of how you make the AI Talent Surge work … People are in these careers because they’re intellectually challenging. So it sounds kind of trite, but it really is true. And we find that at Mozilla. We have to compete for talent. … But because we’re a nonprofit and because of the public interest, people were like, ‘Oh, I can work on this in a really interesting way.’

[One is] to use the fact that the government is genuinely trying to answer these big, hairy and interesting generational problems around AI as an attraction for talent. In the short term, [we could] use the kind of models that came up under the Obama administration in the U.S. Digital Service, which is more talent rotation from industry. That is something that the U.S. really pioneered and can work again here.


And then the longer term [idea] is to invest in education of what people often call public interest technologists … More federal education funding going into responsible tech education is the long-term solution.

FS: What do you have your eyes on? Is it sort of just looking to Congress now? I’m curious what you see as the next steps in AI policy. 

MS: It’s not just looking to Congress. Certainly, [we’re] looking to Congress on comprehensive consumer privacy legislation as a key complement to anything on AI. I think that actually needs to be at the front of the line and is a bedrock on which any legislative work on AI needs to be based.

On the agency work — both on the procurement front and on civil rights in things like hiring, education and housing — we’re interested to see if there’s action there. And then similarly, on competition on open source AI. The call on the [Federal Trade Commission] to act and the call on Congress to act; where does that play out?


Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
