
OMB lays out requirements for agencies to prevent ‘woke AI’

The seven-page directive comes nearly five months after the White House called for the prevention of “biased” AI models.
President Donald Trump speaks during the "Winning the AI Race" summit hosted by All-In Podcast and Hill Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

The Office of Management and Budget released long-awaited guidance Thursday that outlined how federal agencies are expected to ensure that artificial intelligence models are “unbiased” when procured and deployed by the government. 

The memo from OMB Director Russell Vought addresses some questions that arose after President Donald Trump signed an executive order last July to prevent “woke AI” in the federal government. The order, signed alongside the release of the White House AI Action Plan, stated the federal government has an obligation not to procure models “that sacrificed truthfulness and accuracy to ideological agendas.” 

The order did not provide details on how agencies should evaluate models and directed OMB to issue guidance. The seven-page memo fulfills this directive by outlining how agencies must approach contractual requirements for new partnerships, modify existing contracts, and update their procurement policies. 

Under the directive, agencies are required during the procurement process to obtain “sufficient information” from AI vendors to ensure that large language models comply with the White House’s two “unbiased AI principles.” Those principles, included in the executive order, state that AI must be “truth-seeking,” meaning grounded in historical accuracy, objectivity and scientific inquiry, and must maintain “ideological neutrality,” meaning models are not built around partisan beliefs. 


OMB acknowledged that AI products are often sold to agencies through government resellers, and that the underlying AI developers will need to be willing to collaborate with those resellers on information sharing and potential “direct product interventions.” 

Agencies were told to avoid requesting sensitive technical data, such as specific model weights, and to seek only “enough information” to assess risk management. This includes the vendor’s acceptable use policy, along with information on the model, system or data that may come in the form of training process summaries or model evaluation scores. Vendors can also provide product resources or developer guides, and must create a channel for end-user feedback. 

In some cases, an agency can also request other information for “enhanced LLM transparency,” the memo said. This includes information on the vendor’s pre- and post-training activities, model-bias evaluations, and third-party modifications to LLMs or specific governance tools. 

Agencies must “explicitly identify” these factors for both contract eligibility and termination, the memo stated. When it comes to procuring other types of generative AI, agencies are required to use OMB’s guidance to “inform the documentation requirements imposed for the procurement.” 

If an agency develops an LLM or small language model itself, similar documentation is required, OMB said. 


The memo follows multiple listening sessions OMB held this fall to hear from industry about its approaches to AI transparency and risk management. 

Researchers in the space have raised questions in recent months over whether preventing biased AI is possible. A paper from Stanford’s Institute for Human-Centered AI, published in September, stated “true political neutrality in AI” is “theoretically and practically impossible.” 

Other tech experts suggested Trump’s directive is a way for the administration to control the information available through AI tools. 

Cato Institute research fellow Matthew Mittelsteadt told FedScoop in July that the ideological bias measure was the “biggest error” of the order, and suggested it could have ripple effects on foreign competition. 

“Not only is ‘objectivity’ elusive philosophically, but efforts to technically contain perceived bias have yet to work,” he said in a statement. “If this policy successfully shapes American models, we will lose international customers who won’t want models shaped by a foreign government’s whims.” 
