Advocacy groups urge OMB to bar Grok from federal government 

More than two dozen advocacy groups point to the Trump administration’s fight against “ideological bias” in AI models as a reason to block the xAI chatbot.
Microsoft CEO Satya Nadella is silhouetted as a pre-recorded interview with Elon Musk is played during the Microsoft Build 2025 conference in Seattle on May 19, 2025. Nadella announced that Grok AI, by Musk's artificial intelligence startup xAI, will be available on Microsoft's Foundry Models. (Photo by JASON REDMOND/AFP via Getty Images)

A group of more than 30 advocacy organizations is calling on the Office of Management and Budget to prohibit the use of Elon Musk’s xAI Grok chatbot across the federal government, warning deployment of the artificial intelligence model will “likely invite chaos” and “controversy.” 

In a letter sent Thursday to OMB Director Russell Vought, the organizations cited concerns over Grok’s “recurring patterns of ideological bias, erratic behavior, and tolerance for hate speech” and questioned the AI tool’s “suitability” for government.

“Until robust federal regulatory legislation is established by Congress, no [large language model], including Grok, should be trusted for use by the federal government,” the letter stated. “The risks to public trust, institutional integrity, and democratic governance are too high.” 

FedScoop first reported that the General Services Administration was exploring the use of Grok in government last month. Days later, xAI officially unveiled its “Grok for Government” tool, which is included in the GSA’s Multiple Award Schedule. 

The letter was led by Public Citizen, a progressive consumer rights advocacy group, and Color of Change, an online racial justice organization. More than two dozen other groups signed on to the letter. 

The groups argued that the use of Grok would run counter to the White House’s AI Action Plan, which calls for updated federal procurement guidelines mandating that the government contract only with LLM developers “who ensure their systems are objective and free from top-down ideological bias.”

J.B. Branch, Public Citizen’s Big Tech Accountability Advocate, told FedScoop in an interview that the government’s exploration of Grok “smacks of hypocrisy.” 

“On one end, the Trump administration is saying that there needs to be a viewpoint-neutral aspect to AI, that it needs to be truth-seeking and objective, and then they’re approving of a large language model that is neither truth-seeking nor objective,” Branch said. 

President Donald Trump issued an executive order last month alongside the plan, focused on “preventing woke AI in the federal government.”

The executive order, the advocacy groups said, “clearly requires agencies to procure only those LLMs that are truth-seeking (LLMs shall prioritize historical accuracy, scientific inquiry and objectivity) and ideologically neutral.” 

“Grok’s record falls short of these fundamental requirements,” the letter continued. “Grok’s reported instances of generating inaccurate and biased responses are in direct contradiction with these principles.” 

Grok became embroiled in controversy earlier this year after the chatbot espoused antisemitic and pro-Hitler content when responding to inquiries on Musk’s social media platform, X. This occurred after an instruction was apparently added to Grok’s system prompt, directing it to “not shy away” from certain claims. The instructions were later removed. 

Democrats on the House Oversight Committee also voiced concerns to GSA about the use of Grok in government following FedScoop’s report last month. 

The letter further raised questions about Grok’s safety and suitability for government application. 

The organizations pointed to a CyberScoop report last month, in which cybersecurity researchers said they found Grok was “easy to jailbreak” and generated “harmful content with very descriptive and detailed responses.” 

The GSA, which primarily oversees government procurement of AI models, told FedScoop earlier this month that the agency’s coders are working on red-teaming AI models, including Grok, to study their ability to withstand attacks and capacity to spread hate speech. 

Branch emphasized that xAI has not released safety reports for its latest Grok model. When asked what agencies or AI companies can do to gain trust in the models, Branch said there needs to be third-party safety audits that are entirely independent of the AI firms. 

“A lot of these audits that occur happen from companies, but the companies are captured by the large tech companies because they require those contracts to exist,” Branch said. “So they’re not really independent.”

While Grok for Government is on the GSA’s Multiple Award Schedule, it was notably not included in the agency’s rollout of a governmentwide AI testing platform called USAi earlier this month. 

xAI also has not announced a partnership with the GSA like competitors Anthropic and OpenAI, which are offering access to their models for $1 per agency for one year. As of now, GSA says Grok could still be considered for future agreements. 

Dozens of House Democrats also wrote to Vought in April, demanding more information on the extent to which the Trump administration is using technology from xAI. At the time, the lawmakers cited potential conflicts of interest involving Musk, who was then one of Trump’s closest advisers and led Department of Government Efficiency efforts. 

FedScoop reached out to xAI and OMB for further comment. 

Written by Miranda Nazzaro

Miranda Nazzaro is a reporter for FedScoop in Washington, D.C., covering government technology. Prior to joining FedScoop, Miranda was a reporter at The Hill, where she covered technology and politics. She was also a part of the digital team at WJAR-TV in Rhode Island, near her hometown in Connecticut. She is a graduate of the George Washington University School of Media and Public Affairs. You can reach her via email at miranda.nazzaro@fedscoop.com or on Signal at miranda.952.