Here’s how federal agencies say they’re tackling AI use under Trump

Federal agencies’ latest status updates on how they’re using artificial intelligence reveal persistent barriers and variability in where agencies stand on “high-impact” use cases.
The release of the 2025 AI compliance plans offers one of the first in-depth glimpses at how federal agencies are addressing issues of AI risk management, technical capacity and workforce readiness under the second Trump administration.
Those documents, which were required under the Trump administration’s AI governance memo to agencies, were supposed to be released publicly by Sept. 30. As of publication time, FedScoop had located roughly 20 plans and 14 strategies across 22 agencies.
For nine of the roughly two dozen Chief Financial Officers Act agencies, FedScoop was unable to find either a plan or a strategy. The U.S. Department of Agriculture and the Nuclear Regulatory Commission, meanwhile, produced only strategies.
FedScoop and DefenseScoop attempted to contact the CFO Act agencies that didn’t produce both documents, but those agencies either didn’t respond or didn’t provide the documents. Two of them, NASA and the Justice Department, noted the government shutdown in their responses, and both the DOJ and the Department of Defense indicated they were working to post their documents at a later date.
Agencies were also required to submit AI strategies for the first time this year. Those documents contain some of the same information as the compliance documents, including plans to train the workforce, examples of use cases, and systems for governance. The compliance plans, meanwhile, which are in their second year, have changed only slightly from their previous iterations, with some agencies showing progress on their implementation of the technology and risk management practices.
Here are five takeaways from those plans:
AI barriers remain mostly unchanged
Agencies’ barriers to AI implementation appear to track closely with the obstacles they mentioned in compliance plans last year, suggesting persistent issues. Those barriers include data access and quality, IT infrastructure challenges, lack of talent with specific AI skills, and resource constraints.
Data and IT infrastructure issues were again among the most commonly cited barriers in the 2025 plans, with the Departments of Energy, Homeland Security, and Transportation, as well as the General Services Administration and Social Security Administration, citing both issues in their plans and even more agencies citing one or the other.
Specifically, DOT said its assessment of AI maturity identified “the primary barriers to responsible AI use not as a lack of innovative ideas, but as structural impediments to execution.”
It listed access to computing tools, lack of AI-ready data, and processes for security and compliance as the issues that “create friction” and redundant efforts within the department. That marked some progress; last year, DOT hadn’t yet identified specific barriers.
DHS, which similarly hadn’t listed barriers last year, now says it’s focused on IT infrastructure and data governance to improve its deployment of the technology. In particular, it pointed to efforts to implement common and consolidated IT tools across the department and said it’s moving “to continuous authorization of IT systems.”
Agencies like Interior and Labor also noted that they’re working to improve data quality through things like enhanced standards or practices. To improve its access to development environments, the Office of Personnel Management specifically pointed to its use of Microsoft Azure to support the “full AI lifecycle — from sandbox experimentation to production deployment.”
Many agencies also cited workforce capacity and a lack of AI knowledge as issues, and said they were addressing those gaps with some kind of training. Interior said that the various meanings of “AI” have caused “skepticism and confusion,” a trend other federal officials have also noted recently, and that it will create an educational program showing workers, with department-specific examples, how the technology can be used.
Some agencies, meanwhile, cited the exact same barriers as the previous year.
Energy reiterated nearly identical data and IT infrastructure issues to those in its 2024 compliance plan, using almost the same wording. That included citing obstacles with high-quality data and doubling down on complaints that staff can’t access the latest AI tools because services are awaiting authorization under FedRAMP.
The GSA, too, left largely unchanged its wording about barriers related to high-quality, scalable data products and pilot projects to assess the viability of AI uses, as did the Department of Veterans Affairs, which cited nearly identical issues with accessing authoritative data sources.
Meanwhile, a couple of agencies, the Commodity Futures Trading Commission and the Court Services and Offender Supervision Agency, said there were no specific barriers to their use of AI. CSOSA did cite funding and staffing as “pressures” that extend beyond its AI work, but it also pointed to advantages in its use of the technology, including that most of its data is of moderate sensitivity, open source, or commercial.
Agencies vary in approach to high-impact AI
The concept of “high-impact AI” was established this year with the release of the Office of Management and Budget’s AI memo in April, replacing a similar designation from Biden-era guidance. Most agencies said they followed that definition, with chief AI officers designated to lead tracking efforts. However, agencies differed in the complexity of the steps they take to determine a use case’s impact level and in their adherence to risk management practices.
The OMB describes high-impact AI as models that could “have significant impacts when deployed,” including for “decisions or actions that have a legal, material, binding or significant effect on rights or safety.” Agencies were instructed to assess the AI’s output and its potential risks, regardless of whether human oversight was involved in the decision-making process or the action taken.
Some agencies, like the CFTC, the Export-Import Bank of the U.S., and the Federal Housing Finance Agency, said they do not have any high-impact AI use cases but briefly described how they would assess future ones. EXIM also provided details on the process it currently uses to identify such instances.
Other agencies made changes to their processes for evaluating the impact of the AI they are using. Labor, for example, pointed to its Impact Assessment Form, which it said “meticulously” outlines the agency’s adherence to high-impact practices. The form, which was not mentioned in last year’s compliance plan, is stored in a “secure repository” and reviewed on an ongoing basis.
DHS, meanwhile, has fulfilled a pledge from its plan last year to create a risk management framework, which also applies to high-impact use cases. The department emphasized that it takes a “risk-based and mission-essential approach to developing and evaluating AI tailored to the specific context and potential impacts of an AI system, model, and AI use case.”
Energy established a High-Impact AI Working Group, comprising “AI equities” from across the agency. The group developed a checklist guide to determine if an AI use case is high-impact.
DOT outlined an in-depth, though not new, process for its use case evaluations. That includes a unique Safety, Rights, and Security Review (SR2) Committee, which consults with the agency’s chief data and AI officer to provide “expert advice” on any potential for legal, material, or significant effects on rights or safety, and reports its findings during the agency’s AI Governance Board meetings.
DOT also uses a mandatory, specialized system called the Transportation Use Case Knowledge Repository (TruCKR), which tracks high-impact AI determinations, waivers, and risk mitigation plans and maintains a comprehensive AI inventory.
“This practice ensures that risk is not an afterthought but a continuous consideration throughout development, testing and operation,” the DOT said.
Most agencies established waiver processes, but few disclosed using them
All but three of the 20 publicly available compliance plans detailed agencies’ processes for granting waivers for AI use cases. However, not a single agency affirmatively stated that it has approved a waiver so far. The Consumer Financial Protection Bureau, the CFTC, and the FHFA did not address waivers, though they did address high-impact AI to an extent.
Among those that did mention waivers, the Securities and Exchange Commission did not detail a current process, stating only that it “plans to develop a process.”
Inconsistent public information about waivers has been an issue for more than a year and a half. OMB, under the Biden administration, first instructed agencies to disclose waivers publicly in a March 2024 memo.
Under that policy, agency CAIOs can waive one or more of the risk management practices if the requirement would “increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.” The policy charges CAIOs with tracking these waivers, recertifying or revoking them annually, and reporting waiver changes to OMB within 30 days.
Agencies are expected to publicly release a summary of each determination and waiver. But out of the 21 agencies, only DOL, the VA, EXIM, and CSOSA confirmed they have not identified any use cases requiring a waiver.
Both DOL and CSOSA also specified that they do not anticipate any waivers but would update their AI policies as the technology continues to develop.
The Department of Homeland Security, meanwhile, described a detailed waiver process in which the CAIO coordinates with the Component-designated senior AI official and other agency officials before documenting the decision with a “system-specific and context-specific risk assessment.” As with many agencies, DHS said it will track waived risk management practices in its AI use case inventory and reevaluate waivers annually, but it did not clarify whether any have been issued.
At the Federal Reserve, a dedicated AI Program Team maintains records of all determinations and waivers and reports them to OMB, the agency stated, though the plan did not specify whether any waivers have been issued.
GSA’s AI partnerships come up in some plans
The GSA, which manages the online federal marketplace, has played a crucial role in the Trump administration’s efforts to increase the use of AI across the government.
At least seven of the 20 available compliance plans mention GSA, its AI evaluation suite USAi, or its recent OneGov deals with AI companies. GSA in recent months has signed numerous partnerships with leading AI companies, including OpenAI, Anthropic, Google, Meta, and xAI, which are offering their models to agencies for $1 or less over the next year.
Amid these deals, the GSA launched USAi.gov in August, a governmentwide tool that enables agencies to test major AI models. The multi-agency tool is the next iteration of GSAi, the agency’s internal tool that allows GSA workers to test models.
DOL said it plans to “leverage OneGov deals and shared solutions” and to seek external consultation through interagency workgroups established by GSA. The CFPB’s compliance plan stated it intends to leverage existing partnerships with GSA, along with USAi.gov, to “align our efforts with federal guidance.” The CFTC similarly stated that it will leverage the “vetted sources developed by GSA for federal consumers” and create testing plans for evaluating AI.
“GSA’s launch of USAi.Gov marks a significant step in providing Federal agencies the tools and infrastructure needed to experiment with and adopt generative AI technologies,” GSA wrote in its compliance plan. “This initiative is part of GSA’s broader role as a Federal AI enabler, offering secure, scalable, and shared services that facilitate responsible AI deployment across Government.”
The National Science Foundation and the SEC both indicated they will participate in GSA’s AI trainings, while the VA said for the second year in a row that it will be involved in the GSA’s AI Community of Practice, a cross-agency forum.
Window into current AI leadership
The documents also provide a window into current AI leadership at agencies, including some that hadn’t previously confirmed the identities of their officials.
Per the new documents, for example, DHS CIO Antoine McCord and Department of Housing and Urban Development CIO Eric Sidle are also serving as the AI chiefs at their agencies. Interior similarly shared that its CAIO is Jay McMaster; it’s unclear whether he has an additional title.
At OPM, meanwhile, Perryn Ashmore is listed as acting CAIO after Greg Hogan’s departure as AI and IT chief, and various small agencies also included the names of their CAIOs. While CAIOs used to be listed publicly on AI.gov, the website no longer has such a list.
DefenseScoop’s Brandi Vincent contributed to this article.