Dozens of lawmakers question DOGE’s use of AI

In a letter to OMB Director Russell Vought, 48 lawmakers highlighted concerns about DOGE's use of unauthorized AI to process government data.
Elon Musk, Tesla and SpaceX CEO and Senior Advisor to the President, attends a Cabinet meeting at the White House on April 10, 2025 in Washington, DC. President Trump convened a Cabinet meeting a day after announcing a 90-day pause on ‘reciprocal’ tariffs, with the exception of China. (Photo by Anna Moneymaker/Getty Images)

Dozens of Democrats wrote a letter to Office of Management and Budget Director Russell Vought on Wednesday demanding information on the Department of Government Efficiency’s unauthorized use of artificial intelligence systems.

The letter, led by Reps. Don Beyer, D-Va., Mike Levin, D-Calif., and Melanie Stansbury, D-N.M., and signed by 45 other lawmakers, expressed concerns about privacy and security risks associated with the group's use of federal data in unapproved AI systems, as well as potential conflicts of interest involving Elon Musk, who leads an AI firm called xAI.

Specifically, the lawmakers flagged reports of DOGE affiliates inputting data into unapproved AI systems and the risk that sensitive federal data could be used to train future commercial models.

“Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data,” the lawmakers wrote. “Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place.”

They pointed to a few specific reports, including an instance where Education Department data was used in an AI system and a plan for the Office of Personnel Management to scan through federal workers’ emails. The lawmakers also expressed concern about GSAi, the General Services Administration’s in-house chatbot built on commercial large language models from Meta and Anthropic, as well as an AI assistant built by a SpaceX and DOGE employee hosted on an external site, which has since been taken offline.

The lawmakers are demanding more information on the extent to which the Trump administration is using technology from xAI, procurement processes that might have been used to license the xAI model Grok and other commercial LLMs for tools such as GSAi, and the extent to which officials have entered federal data into systems that have not gone through the FedRAMP process or that don’t meet standards under the Federal Information Security Modernization Act of 2014. 

A source within the General Services Administration, which houses the FedRAMP program, noted that the lawmakers are highlighting two separate issues: how AI is being used to make decisions in government, and the problem of putting federal data on insecure, unapproved systems.

All federal IT systems must have an authority to operate, and FedRAMP applies only to cloud services agreements, the source said. The larger problem is data being sent to external cloud platforms that lack authorization, they added; people who send data to those systems are violating agency rules of behavior.

Musk’s xAI is not currently FedRAMP-authorized, but other generative AI companies are ramping up their interest in the cloud security authorization program, as FedScoop reported earlier this month. OpenAI continues to expand its work through Microsoft, and Anthropic, Palantir, and Google recently announced an agreement to expand the use of Claude in government. 

This story was supported by the Tarbell Center for AI Journalism.
