Vetting of ‘ideological bias’ in AI models in new Trump plan stirs confusion

The Trump administration’s push to expand artificial intelligence use in the government is now being coupled with a fight against “ideological bias” in AI models, raising new questions about who will decide which technology federal workers use, and on what basis.
In its highly anticipated AI Action Plan released Wednesday, the Trump administration outlined various action items related to the federal procurement process for AI models, including new limitations on technology the government approves for contracts.
The 28-page plan placed heavy emphasis on ensuring AI systems are “built from the ground up with freedom of speech and expression in mind” and that AI used by the government “objectively reflects truth rather than social engineering agendas.”
In its listed policy recommendations, the plan called for updated federal procurement guidelines to “ensure that the government only contracts with frontier large language model developers who ensure that their systems are objective and free from top-down ideological bias.”
The Trump administration has made fighting what it views as anti-conservative bias a key policy tenet, but Wednesday’s announcement marks the first time that push has been tied to the AI technology used in government.
President Donald Trump later signed an executive order Wednesday aimed at “preventing woke AI in the federal government.”
“While the federal government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas,” the order states.
It is not immediately clear how the administration hopes procurement offices will vet for ideological biases, though some in the technology space are already sounding alarms about the murkiness of the move.
Kit Walsh, director of AI and access-to-knowledge legal projects at the Electronic Frontier Foundation, suggested the initiative could be rooted in “a desire to control what information is available through AI tools.”
“The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes,” Walsh said in a statement.
Some experts warned that the requirement gives the government too much discretion over which models may be used, both inside and outside of government.
Ryan Hauser, a research fellow at George Mason University’s Mercatus Center, said the procurement requirement forces the government’s technology partners to comply with “an impossible standard.”
“Anthropic, Google, OpenAI, and xAI are already working with the Pentagon and lending their LLMs to national security work,” Hauser told FedScoop on Wednesday. “That kind of innovation is badly needed in our overly rigid bureaucracy.”
“But now these same frontier labs will have to commit more resources to auditing their models and making sure they don’t run afoul of these new bias requirements,” he added.
Kristian Stout, director of innovation policy at the International Center for Law and Economics, noted that federal procurement can exert “significant downstream pressure” on product design, especially for smaller firms that rely more heavily on government buyers.
“If objectivity becomes a procurement criterion, we should expect companies to be more explicit about how they audit or validate their models for neutrality,” Stout told FedScoop.
As part of the plan, the Trump administration recommended that the National Institute of Standards and Technology revise its AI Risk Management Framework to remove references to diversity, equity, and inclusion (DEI); climate change; and misinformation.
Under this change, AI companies — especially those with federal contracts — would not be required to manage the risks associated with those issues.
Topics related to DEI are the administration’s main concern when it comes to potential biases, a senior White House official told reporters on a call Wednesday morning.
“We expect GSA to put together some procurement language that would be contractual language, requiring that, again, LLMs procured by the federal government would abide by a standard of truthfulness, of seeking accuracy and truthfulness, and not sacrificing those things due to ideological bias,” the official said.
Cato Institute research fellow Matthew Mittelsteadt called the move the “biggest error” of the order and suggested it could have ripple effects on foreign competition.
“Not only is ‘objectivity’ elusive philosophically, but efforts to technically contain perceived bias have yet to work,” he said in a statement. “If this policy successfully shapes American models, we will lose international customers who won’t want models shaped by a foreign government’s whims.”
The White House’s move against “ideological bias” in AI models comes as the General Services Administration promotes its own AI chatbot — GSAi — for federal workers and increasingly explores tools from external firms.
The GSAi platform already gives federal workers access to commercial models from companies like Anthropic and Meta. And last week, xAI announced Grok was available to purchase through GSA, just days after xAI faced backlash for the chatbot’s recent antisemitic responses.
This story was updated July 23 with information on Trump’s executive order on “woke AI.”