When philanthropy mandates AI solutions, taxpayers pay the price

Believe it or not, AI isn’t the answer to every civic tech problem, a co-founder of the U.S. Digital Service argues.

When I started as the first employee of the U.S. Digital Service and later served as chief technologist of two federal agencies, I had to deal with extensive, nonsensical requirements placed on our technical work. Now, the nonsense isn’t just coming from people we elected. Today, well-meaning civic tech reformers are being quietly pushed by philanthropic funding with strings attached toward a single answer regardless of the question: AI.

An exploding backlog of veterans’ applications for health care was one of our first challenges after standing up USDS at the VA. The team was able to do usability testing — watching veterans try to apply for health care — and figured out that they were mashing the “submit” button repeatedly because they couldn’t tell that the form had gone through. The solution was simple and cheap: a line of code that made clear the form had been submitted and allowed only one submission. It stopped a flood of duplicate forms.

For teams trying to fix things like government backlogs today, it’s not so simple. Steve Ballmer, Bill Gates, Eric Schmidt and other tech billionaires are funding “tech for good” groups on the condition that they use AI to tackle government projects. Have a simple fix, a cheap test, or even just want to listen to veterans? You can’t even apply for a grant unless you somehow cram in AI.

In tech, we have to ask, “Who is the user, and what problem are they trying to solve?” before we know what to build. In this case, it’s fair to ask whether the user is a tech billionaire and the problem they’re trying to solve is how to get a return on their eye-popping speculative investments in AI.

To do that, they need customers who have a lot of money, are so bad at technology they can’t ask even basic questions, and are slow-moving enough that they can’t pivot back if the tech simply doesn’t work. Who has huge budgets, is bad at tech, and is slow to fix things?

The government.

The philanthropic arms of Ballmer and Gates are funding efforts to “help government,” as long as they are “AI-enabled solutions.” Google.org will give groups $30 million to work on “critical public service challenges,” but only if they use generative or agentic AI. 

In fact, Washington, D.C., now mandates that every single city employee take a multi-hour training that covers the “transformative potential of Generative AI (GenAI) in the public sector,” starting with Google’s Gemini, courtesy of Schmidt and Google.org — including every procurement officer and, apparently, every kindergarten teacher. It’s not just about how to avoid pitfalls or privacy failures; it’s a captive audience for AI hype.

(Screencap of the Google-funded mandatory training’s first suggested tool, Google’s Gemini.)

Requiring the government to use any specific technology is a bad investment of taxpayer money, whether it’s blockchain, COBOL, or AI. What if instead of using AI to streamline a bad regulation, we just nuked the regulation entirely? Why not, instead of using AI to help people navigate complex forms, get rid of needless paperwork altogether? We should solve the actual problem.

This is not a condemnation of the tech-for-good groups doing their best under these weird conditions. On a noisy listserv of technologist friends and colleagues, I recently asked:

“I’d love to have a heart-to-heart about why we care if an advancement to [government tech] uses a specific technology. Blockchain or AI or paper… what happens if we get in there and we could just get rid of the application process entirely?”

I won’t quote from private emails, but the answer in the thread and in conversations afterward was clear: The (tech company and billionaire) funders care, and we have to keep them happy. Not a single technical person I’ve met has been willing to defend the funders’ AI-or-bust position, but they’re also not in a position to say so.

Baking in complexity doesn’t help people who use government services, and it certainly doesn’t help the people who pay for them. It does help someone, though: the people whose next billion dollars depend on the adoption of AI.

Erie Meyer is a senior fellow at Columbia Law School’s Center for Law and the Economy. Previously, she was chief technologist at the Consumer Financial Protection Bureau, and before that, chief technologist at the Federal Trade Commission. She is also a co-founder of the U.S. Digital Service.