AI risks can’t be avoided, must be managed, NIST official says

Deploying artificial intelligence requires taking on the right amount of risk to achieve a desired end result, a National Institute of Standards and Technology official who worked on its risk management framework for the technology said on a panel this week. 

While federal agencies, and particularly IT functions, are generally risk averse, risks can’t entirely be avoided with AI, Martin Stanley, an AI and cybersecurity researcher at the Commerce Department standards agency, said during a Wednesday FedInsider panel on “Intelligent Government.” 

“You have to manage risks, number one,” Stanley said, adding that the benefits from the technology are compelling enough that “you have to go looking to achieve those.”

Stanley’s comments came in response to a question about how the federal government compares to other sectors that have been doing risk management for longer, such as financial services. On that point specifically, he said the NIST AI Risk Management Framework “shares a lot of DNA” with Federal Reserve guidance on algorithmic models in financial services.

He said NIST attempted to leverage those approaches and the same plain, simple language.

“We talk about risks, we talk about likelihoods, and we talk about impacts, both positive and negative, so that you can build this trade space where you are taking on the right amount of risk to achieve a benefit,” Stanley said.

His comments come as many agencies across the government have publicly disclosed how they’re governing their use of the growing technology under the Trump administration. 

Under an Office of Management and Budget memo, which preserved many aspects of the Biden administration’s approach, agencies were required by the end of September to publish both plans to comply with that guidance as well as strategies on how to deploy the technology. 

Those documents included agency approaches to risk management, such as processes for designating use cases as “high-impact” — a designation under the memo for certain deployments that impact rights and safety, and, as a result, require specific risk management practices.

Stanley discussed the government’s approach to governance during the panel, noting that one of the biggest challenges, because of the widespread adoption of AI, is “not to have too heavy [a] hand from a governance perspective — don’t have a whole ton of paperwork to fill out and a six-month approval process.”

But he also praised the government’s approach to risk management under that OMB memo (M-25-21). 

“The federal government has actually done a nice job of this with OMB 25-21, where there’s an identification of what are the high-impact uses of AI that require … more diligence around their implementation and the potential risks,” Stanley said.

There are other areas in which agencies might want to handle AI differently, such as lab experiments where the bar might be lower, he said. But if it’s a high-impact use, “then of course, we want to take a close look at what the potential impacts of that might be.” 

NDAA calls for Treasury-led report on AI to fight money laundering

A handful of federal financial regulatory agencies may have some new artificial intelligence work to do soon, courtesy of an amendment tucked into the Senate’s National Defense Authorization Act.

The $925 billion must-pass defense bill, which cleared the upper chamber Thursday night, included an amendment from Sen. Ruben Gallego, D-Ariz., that calls for a report on the implementation of AI in certain anti-money laundering investigations.

The amendment requires the director of the Treasury Department’s Financial Crimes Enforcement Network to spearhead work on the report, pulling in the heads of the Federal Deposit Insurance Corp., the Federal Reserve, the Office of the Comptroller of the Currency and the National Credit Union Administration for consultation.

That report, which would be submitted to the Senate Banking and House Financial Services committees within 180 days of the NDAA’s passage, would assess the feasibility of leveraging AI in money-laundering probes, specifically those tied to foreign terrorist groups, drug cartels and other transnational criminal organizations.

According to the amendment text, the report would detail the types of investigations where AI may be useful, the types of AI tools that could be effective in those probes, the types of schemes AI would be best positioned to detect, and any possible challenges that could arise when using AI for that kind of work.

The NDAA, which cleared the Senate by a 77-20 tally after votes on the amendments, is now poised for what could be extensive negotiations with the House, setting up a final vote in the weeks ahead. 

Gallego’s office did not respond to a request for comment by the time of publication. The Arizona Democrat has shown increasing interest in AI over this congressional term, introducing a bill in July to protect U.S. call center jobs and consumers from the technology and pushing back on Republicans’ ill-fated attempts over the summer to bar states from regulating AI for a decade. Gallego has also backed legislation aimed at thwarting fentanyl-related money laundering.

The Financial Crimes Enforcement Network has done some exploration on artificial intelligence in the past. A Treasury report released last December noted that FinCEN teamed with federal banking agencies on a 2018 statement on combating money laundering and terrorist financing with “innovative efforts” including AI. 

The Anti-Money Laundering Act of 2020, meanwhile, charged FinCEN with issuing regulations on testing technologies to aid financial firms’ compliance with the Bank Secrecy Act, with a particular focus on “using innovative approaches such as machine learning.”

Here’s how federal agencies say they’re tackling AI use under Trump

Federal agencies’ latest status updates on how they’re using artificial intelligence reveal persistent barriers and variability in where agencies stand with “high-impact” use cases.

The release of the 2025 AI compliance plans offers one of the first in-depth glimpses at how federal agencies are addressing issues of AI risk management, technical capacity and workforce readiness under the second Trump administration. 

Those documents, which were required under the Trump administration’s AI governance memo to agencies, were supposed to be released publicly by Sept. 30. As of publication time, FedScoop located roughly 20 plans and 14 strategies across 22 agencies. 

For nine of the roughly two dozen Chief Financial Officers Act agencies, FedScoop was unable to find either a plan or a strategy. The U.S. Department of Agriculture and the Nuclear Regulatory Commission, meanwhile, produced only strategies. 

FedScoop and DefenseScoop attempted to contact the CFO Act agencies that didn’t produce both documents, but the agencies either didn’t respond or didn’t provide the documents. Two of those agencies, NASA and the Justice Department, noted the government shutdown in their responses, and both the DOJ and Department of Defense indicated they were working to post at a later date.

Agencies were also required to submit AI strategies for the first time this year. Those documents contain some of the same information as the compliance documents, including plans to train the workforce, examples of use cases, and systems for governance. The compliance plans, meanwhile, which are in their second year, have changed only slightly from their previous iterations, with some agencies showing progress on their implementation of the technology and risk management practices.

Here are five takeaways from those plans: 

AI barriers remain mostly unchanged

Agencies’ barriers to AI implementation appear to track closely with the obstacles they mentioned in compliance plans last year, suggesting persistent issues. Those barriers include data access and quality, IT infrastructure challenges, lack of talent with specific AI skills, and resource constraints. 

Data and IT infrastructure issues were again among the most commonly cited barriers in the 2025 plans, with the Departments of Energy, Homeland Security, and Transportation, as well as the General Services Administration and Social Security Administration, citing both issues in their plans and even more agencies citing one or the other. 

Specifically, DOT said its assessment of AI maturity found “the primary barriers to responsible AI use not as a lack of innovative ideas, but as structural impediments to execution.” 

It listed access to computing tools, lack of AI-ready data, and processes for security and compliance as the issues that “create friction” and redundant efforts within the department. That showed some progress, as last year, DOT hadn’t yet identified specific barriers.

DHS, which similarly hadn’t listed barriers last year, now says it’s focused on IT infrastructure and data governance to improve its deployment of the technology. In particular, it pointed to efforts to implement common and consolidated IT tools across the department and said it’s moving “to continuous authorization of IT systems.” 

Agencies like Interior and Labor also noted that they’re working to improve data quality through things like enhanced standards or practices. To improve its access to development environments, the Office of Personnel Management specifically pointed to its use of Microsoft Azure to support “full AI lifecycle — from sandbox experimentation to production deployment.”

Many agencies also cited workforce capacity and lack of AI knowledge as issues, and said they were addressing those gaps with some kind of training. Interior said that the various meanings for “AI” have caused “skepticism and confusion” — which is a trend other federal officials have also noted recently — and that it will create an educational program showing workers how the technology can be used with department-specific examples.

Some agencies, meanwhile, cited the exact same barriers as the previous year.

Energy reiterated nearly identical data and IT infrastructure issues to those in its 2024 compliance plan, using almost the same wording. That included citing obstacles with high-quality data and doubling down on complaints that staff aren’t able to access the latest AI tools because services are awaiting authorization under FedRAMP.

The GSA, too, left unchanged its wording about barriers to high-quality, scalable data products and pilot projects to assess the viability of AI uses — as did the Department of Veterans Affairs, which cited nearly identical issues with accessing authoritative data sources.

Meanwhile, a couple of agencies — Commodity Futures Trading Commission and Court Services and Offender Supervision Agency — said there were no specific barriers to their use of AI. CSOSA did cite funding and staffing as “pressures” that extend beyond just its AI work, but it also pointed to the agency’s advantages in use of the technology, including having mostly moderate sensitivity, open-source, and commercial data.

Agencies vary in approach to high-impact AI

The concept of “high-impact AI” was established this year with the release of the Office of Management and Budget AI memo in April, replacing a similar designation from Biden-era guidance. Most agencies said they followed this definition, with chief AI officers leading tracking efforts. However, agencies differed in the complexity of the steps they take to determine whether a use case is high-impact and in their adherence to risk management practices.

The OMB describes high-impact AI as models that could “have significant impacts when deployed,” including for “decisions or actions that have a legal, material, binding or significant effect on rights or safety.” Agencies were instructed to assess the AI’s output and its potential risks, regardless of whether human oversight was involved in the decision-making process or the action taken. 

Some agencies, like the CFTC, the Export-Import Bank of the U.S., and the Federal Housing Finance Agency, said they do not have any high-impact AI use cases but briefly described how they would assess future ones. EXIM, for example, provided details on the current process it uses to identify such instances.

Other agencies made changes to the process for evaluating the impact of the AI they are using. Labor pointed to its Impact Assessment Form, which “meticulously” outlines the agency’s adherence to high-impact practices. The form, which was not mentioned in last year’s compliance plan, is stored in a “secure repository” and reviewed on an ongoing basis.

At DHS, the agency has fulfilled a pledge from its plan last year to create a risk management framework, which also applies to high-impact levels. DHS emphasized it takes a “risk-based and mission-essential approach to developing and evaluating AI tailored to the specific context and potential impacts of an AI system, model, and AI use case.”

Energy established a High-Impact AI Working Group, comprising “AI equities” from across the agency. The group developed a checklist guide to determine if an AI use case is high-impact. 

DOT, meanwhile, outlined an in-depth, though not new, process for its use case evaluations. This includes a unique Safety, Rights, and Security Review (SR2) Committee, which consults with the agency’s chief data and AI officer to give them “expert advice” on any potential for legal, material, or significant effects on rights or safety, and also reports its findings during the agency’s AI Governance Board meetings.

The DOT also utilizes a mandatory, specialized system called the Transportation Use Case Knowledge Repository (TruCKR), which tracks high-impact AI determinations, waivers, and risk mitigation plans, and maintains a comprehensive AI inventory.

“This practice ensures that risk is not an afterthought but a continuous consideration throughout development, testing and operation,” the DOT said. 

Most agencies established waiver processes, but few disclosed using them 

All but three of the 20 publicly available compliance plans detailed agencies’ processes for granting waivers for AI use cases. However, not a single agency has affirmatively stated that it has approved a waiver so far. The Consumer Financial Protection Bureau, CFTC, and the FHFA did not include waiver processes, but addressed high-impact AI to an extent.

Aside from those that do not mention waivers, the Securities and Exchange Commission did not detail a current plan, stating it “plans to develop a process.” 

Inconsistent public information about waivers has been an issue for more than a year and a half. OMB, under the Biden administration, first instructed agencies to disclose waivers publicly in a March 2024 memo.

Under that policy, agency CAIOs can waive one or more of the risk management practices if the requirement would “increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.” The policy charges CAIOs with tracking these waivers, recertifying or revoking them annually, and reporting waiver changes to OMB within 30 days. 

Agencies are expected to release a summary of each determination and waiver publicly. But out of the 21 agencies, only the DOL, VA, EXIM, and the CSOSA confirmed they have not identified any use cases requiring a waiver. 

Both the DOL and CSOSA also specified that they do not anticipate any waivers but would update AI policy as the technology continues to develop. 

The Department of Homeland Security, meanwhile, submitted a detailed process for waivers in which the CAIO coordinates with the Component-designated senior AI official and other agency officials before documenting the decision with a “system-specific and context-specific risk assessment.” As with many agencies, DHS states that it will track waived risk management practices in its AI use case inventory and annually reevaluate the waivers, but does not clarify whether any have been issued. 

At the Federal Reserve, a dedicated AI Program Team maintains records of all determinations and waivers, and reports to OMB, the agency stated, but did not specify in the plan whether any waivers have been issued.

GSA’s AI partnerships come up in some plans 

The GSA, which manages the online federal marketplace, has played a crucial role in the Trump administration’s efforts to increase the use of AI across the government. 

At least seven of the 20 agencies with available compliance plans mention GSA, its AI evaluation suite USAi, or its recent OneGov deals with AI companies. GSA in recent months has signed numerous partnerships with leading AI companies including OpenAI, Anthropic, Google, Meta, and xAI, which are offering their models to agencies for $1 or less over the next year.

Amid these deals, the GSA launched USAi.gov in August, a governmentwide tool that enables agencies to test major AI models. The multi-agency tool is the next iteration of GSAi, the agency’s internal tool that allows GSA workers to test models. 

DOL said it plans to “leverage OneGov deals and shared solutions” and bring in external consultations with interagency workgroups established by GSA. The CFPB’s compliance plan stated it intends to leverage existing partnerships with GSA, along with USAi.gov, to “align our efforts with federal guidance.” The CFTC similarly stated that it will leverage the “vetted sources developed by GSA for federal consumers” and create testing plans for evaluating AI. 

“GSA’s launch of USAi.Gov marks a significant step in providing Federal agencies the tools and infrastructure needed to experiment with and adopt generative AI technologies,” GSA wrote in its compliance plan. “This initiative is part of GSA’s broader role as a Federal AI enabler, offering secure, scalable, and shared services that facilitate responsible AI deployment across Government.”

The National Science Foundation and the SEC both indicated they will participate in GSA’s AI trainings, while the VA said for the second year in a row that it will be involved in the GSA’s AI Community of Practice, a cross-agency forum.

Window into current AI leadership

The documents provide the latest window into current AI leadership at agencies, including agencies that hadn’t previously confirmed the identities of their officials.

Per the new documents, for example, DHS CIO Antoine McCord and Department of Housing and Urban Development CIO Eric Sidle are also serving as the AI chiefs at their agencies. Interior similarly shared that its CAIO was Jay McMaster. It’s unclear if he has an additional title.  

At OPM, meanwhile, Perryn Ashmore is listed as acting CAIO after Greg Hogan’s departure as AI and IT chief, and various small agencies also included the names of their CAIOs. While CAIOs used to be listed publicly on AI.gov, the website no longer has such a list. 

DefenseScoop’s Brandi Vincent contributed to this article.

The time to update the federal data strategy is now

The Office of Management and Budget in 2020 and 2021 teamed with agency chief information officers and chief data officers to issue the federal data strategy and action plans, laying out a 10-year vision for the government to accelerate its use of data to better deliver on its mission, serve the public, and steward resources. 

We are now halfway through that trajectory. Not only has the operational and technology landscape fundamentally shifted, but in many cases progress as envisioned by the Foundations for Evidence-Based Policymaking Act has stalled. OMB has stopped providing annual guidance that integrates its assessment of progress. Further, the current administration’s executive orders put into stark relief the gap between the potential of public-sector data sharing to improve performance and prevent fraud, and the reality.

However, CDOs and their agency counterparts have accomplished much and learned from initial efforts. The challenge is that their efforts are not aligned, integrated, or leveraged against governmentwide priorities or agency resource allocation and performance management activities. Authorities, accountability, and appropriations for responsible and effective use of information resources are blurred between the CDO and CIO roles and not effectively integrated into agency investment review processes.

This must change, with an aligned and resourced focus on responsibly harmonizing and automating data collection, access, sharing, linking, and use policy; on federated data management and integration; and on data workforce capacity building. These capabilities in turn will more than pay for themselves through the ability to align federal, state, and local programs on common, high-value, and high-integrity data and outcome measures. They will enable the use of interoperable common data as a way to collapse redundant and outdated systems and empower artificial intelligence dominance. Finally, they will support improved cybersecurity and continuous audit of data access and use through acceleration of interoperable zero-trust implementations.

For example, consider laudable proposals for reforming the Paperwork Reduction Act. What is missing from the discussion is the real opportunity to make progress on implementing the once-only principle. This would be done by leveraging agency progress with data governance and data catalogs to accelerate reuse of systematically linked and entity-resolved common data to collect information only once, and reuse it broadly to reduce burden and improve experience with government services. This provides a new pathway for meaningful reform and improvement within and between OMB and agencies on this topic, to be developed in an updated federal data strategy. It also goes a long way — maybe all the way — toward preparing agency data to be AI ready.

Indeed, most recognize the importance of customer experience. But the Evidence Act and modern approaches to data management open the aperture to the entire citizen’s journey, looking at improving experience and impact over their lifetime. How is this possible without a strategic focus on the underlying data and responsibly reusing it at the individual and population level? It’s not! What is needed is to work back from all the moments that matter along the citizen’s journey with the government, to identify opportunities to reuse data and automate harmonized policy guardrails, and then to build and use data products that improve both experience and program outcomes.

OMB has the pen on updating the federal data strategy and accompanying action plans, and ensuring placement in an integrated approach to allocating resources, improving information resources management, and improving performance of agencies. OMB should ask the federal CDO Council to develop substantive input toward an updated data strategy, building on agency open-data efforts that engage the public, including with industry, state and local CDOs, and policy advocates. This approach will align the strategy update with agency missions under administration guidance. It can also tighten focus on sharing, linking, and using agency data to instrument OMB’s resource and performance management of agencies and their programs. 

Implementation of the updated federal data strategy will set incentives toward learning and ongoing improvement, including with AI. It will do this by authoritatively instrumenting the common data and key measures that agency and program leaders need to operate. The same data and measures then will be used by OMB and White House policy councils to drive a virtuous cycle of learning, improvement, and accountability, amplified by open data and transparency imperatives in the Evidence Act. It also readies high-quality, integrated, and curated common data to support responsible integration of AI into federal efforts to drive efficiencies and improve performance.

Kshemendra Paul served in a variety of federal agencies and the White House in roles such as assistant inspector general, governmentwide lead for information sharing, federal chief architect, program manager, and chief data officer.

EPA nominee says chemical reviews won’t be compromised for AI data centers

The Environmental Protection Agency’s pledge to “get out of the way” on chemical reviews to accelerate the buildout of artificial intelligence data centers doesn’t mean those reviews would be any less “robust,” a top EPA nominee told lawmakers Wednesday.

Appearing before the Senate Environment & Public Works Committee, Douglas Troutman — President Donald Trump’s pick for assistant administrator for toxic substances — was pressed by Sen. Ed Markey, D-Mass., about comments made by EPA Administrator Lee Zeldin last month following a White House roundtable with AI and data center leaders.

In a Sept. 18 press release, Zeldin announced that the EPA would begin prioritizing the review of new chemicals — under the Toxic Substances Control Act (TSCA) — that would be used in data center projects. 

“We inherited a massive backlog of new chemical reviews from the Biden Administration which is getting in the way of projects as it pertains to data center and artificial intelligence projects,” Zeldin said in a statement. “The Trump EPA wants to get out of the way and help speed up progress on these critical developments, as opposed to gumming up the works. We are taking every step possible to make America the artificial intelligence capital of the world.”

Markey asked Troutman, a former chemical industry lobbyist, what provisions in federal toxic safety laws indicate the EPA can “get out of the way of reviewing chemicals for safety.”

Troutman responded that “nothing will change with regard to the robust review based on the risk-based statute enacted under Section Five of TSCA.” 

Markey appeared unconvinced, telling Troutman that if he’s confirmed, he will “be under orders from Administrator Zeldin to get out of the way.” The Massachusetts Democrat made the case that “big tech bosses” with ties to the administration could lean on the agency to bypass regular review protocols. 

“Are you going to guarantee that there will never be a compromise of safety, of toxics,” Markey asked, “even though the EPA administrator is saying that’s what he wants you to do?”

“Senator, I commit to following the statutory requirements and the regulations, the rules that Congress has enacted, and to follow the science and what the statutes require with regard to the review of chemicals, either new or existing,” Troutman replied. 

Markey has railed for months against the environmental concessions the Trump administration has said it will make as part of the president’s AI Action Plan. The energy agenda for that plan emphasizes the rapid buildout of data centers across the country — a goal the administration said will be accomplished by cutting clean air and water regulations and expediting permitting approvals.

Markey said during a virtual event in July that the Trump administration “wants to create loopholes so big in our federal environmental laws that you could actually build a hyperscale data center inside of them.”

“It doesn’t have to be this way,” the senator added. “Our environment doesn’t have to be a sacrificial lamb on the altar of innovation.” 

The proliferation of AI data centers across the country has had a massive impact on Americans’ energy bills, driving skyrocketing costs; global energy use by such facilities is projected to more than double by 2030. A bipartisan House bill introduced last month targets rising utility prices in rural areas brought on by AI data centers.

As for the EPA’s role in quickening data-center expansion, Sen. Cynthia Lummis, R-Wyo., said during Wednesday’s hearing that her read on Zeldin’s comments was that he “just plans to prioritize reviews of new chemicals, not to just get out of the way and turn a blind eye.” Troutman seemed to agree with that interpretation.

“If confirmed, I commit to working with the subject matter experts, the program staff, which is very capable, to make sure I understand the rules and the law as stated in the statute,” he said. “And that’s my commitment to you today.”

GOP lawmakers engaged with SEC watchdog on IT shop’s deletion of texts

House Republicans have had initial discussions with the Securities and Exchange Commission’s watchdog about its findings that nearly a year’s worth of texts from the former chair’s mobile device were erased.

In a letter sent last week to SEC Chair Paul Atkins, four GOP members of the House Financial Services Committee called for “further investigation” into a series of missteps by the agency’s IT shop that led to the deletion of Gary Gensler’s messages.

The SEC’s Office of Inspector General detailed in a report last month how text messages sent and received by the head of the SEC during the Biden administration between October 2022 and September 2023 were accidentally expunged from his government-issued device.

A GOP aide with knowledge of the matter told FedScoop on Tuesday that the House Financial Services Committee Republicans behind the letter have had “preliminary discussions with SEC OIG” and anticipate “further engagement when the government reopens.”

The letter was sent by committee Chair French Hill of Arkansas and Reps. Ann Wagner of Missouri, Dan Meuser of Pennsylvania and Bryan Steil of Wisconsin, who lead the Capital Markets, Oversight and Investigations, and Digital Assets, Financial Technology, and Artificial Intelligence subcommittees, respectively.

The HFSC chairs raised concerns in their letter about how the SEC handles IT, “particularly as it relates to its most senior officials.” The lawmakers focused specifically on the 62 days that Gensler’s phone was deemed “inactive” after losing connection with the agency’s mobile device management system.

An SEC policy enacted around that time called for the remote wiping of any agency-issued mobile device that hadn’t connected with the device management system for at least 45 days. Gensler arrived at work one day to find all SEC apps missing from his phone and quickly reached out to the regulator’s Office of Information Technology.

According to the OIG, the SEC’s IT personnel “hastily performed a factory reset of the smartphone, which resulted in the permanent deletion of the device’s data, including nearly a year’s worth of text messages.”

In their letter to the SEC, the committee Republicans questioned why OIT “made no attempt to investigate or address” the inactivity issue and why the office implemented a policy “that was ‘poorly understood’” and appeared to afford Gensler “special treatment.” His device, the lawmakers contend, was wiped “more than two weeks after the wipe should have occurred.”

“Notably, even though the smartphone was wiped, it was still possible to retain former Chair Gensler’s information on the phone,” the lawmakers wrote. “However, OIT staff factory reset the smartphone resulting ‘in the permanent deletion of the device’s data, including nearly a year’s worth of text messages.’ This data loss was due to the fact that OIT had not backed-up former Chair Gensler’s device since October 18, 2022.”

The House Republicans aimed to connect the regulator’s IT slip-ups with the SEC’s lawsuits under Gensler that chided several financial firms for “widespread record keeping failures.” The lawmakers also noted a 2013 report from the Commodity Futures Trading Commission OIG, which found that Gensler, then the agency’s chair, had used his personal email for official agency communications. 

“Collectively, these incidents, along with the OIG’s findings, raise serious concerns about former Chair Gensler’s and OIT’s compliance with federal recordkeeping laws, transparency obligations, and the integrity of agency oversight,” the letter stated. “The Committee is engaging with the OIG to learn more about their report, seek clarity on outstanding questions, and discuss additional areas that require further oversight and investigation. The Committee looks forward to the Commission’s engagement and transparency during this process.”

An SEC spokesperson told FedScoop on Tuesday that due to the government shutdown, the agency’s public affairs office “is not able to respond to many inquiries from the press.”

Identity insights are critical in preventing provider enrollment fraud for government programs

Provider fraud in Medicare and Medicaid is a persistent, complex, and costly challenge, threatening the integrity of the nation’s most vital healthcare programs. Fraudulent activities, from billing for services never rendered to kickbacks, identity theft, and identity misuse, cost taxpayers billions of dollars each year, draining critical resources designed to serve the nation’s most vulnerable populations. Proactive fraud prevention not only protects taxpayer resources but also reduces unnecessary friction for honest providers, allowing them to focus on delivering care to beneficiaries without being burdened by excessive oversight or delays.

Deepika Sud is a Market Strategist for Government Healthcare Solutions at LexisNexis Risk Solutions.

In this piece, I will focus on one specific aspect of provider fraud that has become increasingly common: provider enrollment fraud. Provider enrollment fraud occurs when individuals or entities falsify information to gain access to Medicare or Medicaid billing privileges. This can include misrepresenting credentials, concealing ownership, or using stolen identities. Once enrolled, these providers can submit fraudulent claims and divert funds intended for legitimate care.

Recent headlines highlight the scale of this challenge. A Justice Department takedown earlier this year charged 324 defendants, including 96 healthcare providers across the United States, in connection with over $14.6 billion in fraudulent healthcare schemes. A nationwide investigation known as “Operation Gold Rush” exposed how criminal networks and transnational groups exploited online portals to buy durable medical equipment companies across the United States, then submitted $10.6 billion in fraudulent claims to Medicare using the stolen identities of over one million Americans.

These cases highlight how identity theft and straw ownership converge and continue to exploit weaknesses in the provider enrollment process. The Government Accountability Office has emphasized the importance of enhanced provider screening and enrollment monitoring, especially after COVID-era waivers.

These findings underscore a core problem: it is becoming increasingly difficult for Medicare and Medicaid to “know their providers” due to fragmented data and complex ownership structures that make schemes like straw ownership more prevalent. While existing systems such as the National Plan and Provider Enumeration System (NPPES), the Provider Enrollment, Chain, and Ownership System (PECOS), and the Automated Provider Screening (APS) system play a critical role in the enrollment and monitoring process, gaps between these systems, such as mismatched data, reliance on self-reporting, or delayed updates, can create vulnerabilities that fraudsters find and quickly exploit.
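The cross-system mismatches described above can be thought of as a simple consistency check: compare the same provider’s record in two registries and flag fields that disagree. The sketch below is purely illustrative; the field names and records are invented, and real NPPES/PECOS schemas are far more complex.

```python
# Illustrative sketch: flag mismatched fields between two provider
# registries. Records and field names are hypothetical examples.

def normalize(record):
    """Lower-case and strip values so cosmetic differences don't flag."""
    return {k: str(v).strip().lower() for k, v in record.items()}

def find_mismatches(rec_a, rec_b, fields):
    """Return the fields whose values disagree between the two records."""
    a, b = normalize(rec_a), normalize(rec_b)
    return [f for f in fields if a.get(f) != b.get(f)]

# Hypothetical records keyed by the same National Provider Identifier (NPI):
nppes = {"npi": "1234567890", "owner": "Jane Smith", "address": "12 Main St"}
pecos = {"npi": "1234567890", "owner": "Acme Holdings LLC", "address": "12 Main St"}

flags = find_mismatches(nppes, pecos, ["owner", "address"])
# An owner listed differently across systems is exactly the kind of
# unreported ownership change the article describes.
assert flags == ["owner"]
```

In practice this comparison would run over millions of records and feed a review queue, but the core idea — reconcile self-reported data across systems before trusting it — is the same.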

By purchasing existing businesses already approved to bill Medicare, fraudsters exploit self-reported information and verification processes that, while diligent, are limited. They hold stolen identities to bill against but lack the enrollment needed to submit claims and receive payment; acquiring an already-enrolled business supplies that missing piece. As illustrated in Operation Gold Rush, criminals buy existing businesses, fail to notify Medicare or Medicaid of the acquisition, and then bill the government billions of dollars. Fraudsters have found loopholes and tactics to avoid or delay reporting changes in ownership, despite the legal requirement to report such changes within 30 days.

The Centers for Medicare & Medicaid Services’ (CMS) commitment to “crush fraud, waste and abuse” is a step in the right direction, along with the provision in Public Law 119-21 requiring additional, mandatory checks for providers enrolled in Medicaid. However, relying heavily on claims and death-record data is reactive and does not tell the whole story of a provider, their networks, and other underlying risk factors. Claims capture only a snapshot in time; understanding the behavioral drivers and risks of the provider submitting them is key to preventing fraudulent enrollment before any billing can occur. To shift toward more holistic and proactive fraud prevention, government agencies should leverage comprehensive identity insights that can reveal the relationships, behaviors, and affiliations behind the claims.

Proactively looking for professional risks such as unreported changes in ownership, criminal history, and misrepresented credentials allows Medicare and Medicaid to intervene more effectively, not only at the time of initial enrollment but also during ongoing monitoring.

It is important to look beyond professional risk alone: fraudulent behavior among healthcare providers often stems from personal behavioral drivers that signal deeper vulnerabilities. Indicators such as bankruptcy, criminal history, or financial distress can reveal a provider’s susceptibility to unethical practices, especially when under pressure to maintain operations or income.

The integration of AI and automation offers unprecedented efficiency gains but relies on quality data for effective outcomes. With agencies increasingly asked to “do more with less,” these technologies can provide the robust, detailed analysis required to uncover hidden relationships and networks that manual investigations cannot keep pace with. For identity intelligence to be effective, it must be built on referential data with comprehensive coverage that reflects real-world relationships, behavioral motivations, and personal and professional risks.
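One concrete way automation surfaces the “hidden relationships and networks” mentioned above is by linking providers that share identifying attributes, such as a phone number or address. The sketch below uses invented data and a deliberately simple grouping; real identity-resolution systems use far richer referential data and probabilistic matching.

```python
# Hedged sketch: group provider NPIs that share an identifying attribute.
# All records below are invented for illustration.
from collections import defaultdict

providers = [
    {"npi": "1111111111", "phone": "555-0100", "address": "9 Elm St"},
    {"npi": "2222222222", "phone": "555-0100", "address": "40 Oak Ave"},
    {"npi": "3333333333", "phone": "555-0199", "address": "9 Elm St"},
]

def shared_attribute_clusters(records, keys):
    """Map each (attribute, value) pair to the set of NPIs carrying it,
    keeping only values shared by more than one provider."""
    clusters = defaultdict(set)
    for rec in records:
        for key in keys:
            clusters[(key, rec[key])].add(rec["npi"])
    return {k: v for k, v in clusters.items() if len(v) > 1}

links = shared_attribute_clusters(providers, ["phone", "address"])
# Two providers share a phone number and two share an address --
# overlaps a manual investigation could easily miss at scale.
assert links[("phone", "555-0100")] == {"1111111111", "2222222222"}
assert links[("address", "9 Elm St")] == {"1111111111", "3333333333"}
```

Chaining such shared-attribute links transitively is how a handful of records can reveal a single network operating behind many nominally independent entities.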

An identity-centric approach to risk detection grounded in decades of curated healthcare data enables agencies to screen, verify, and monitor provider entities. By resolving disparate data sets to a unique identity, uncovering hidden relationships, identifying personal risks, and validating data, LexisNexis can partner with government agencies to thwart provider enrollment fraud through real-time data analysis, industry-leading referential insights, and superior linking.

Bringing together professional and personal insights about providers, their networks, and their associations will enable government agencies to protect taxpayer resources, support ethical providers, reduce administrative burden, and ensure beneficiaries receive uninterrupted, high-quality care, building trust in the entire healthcare ecosystem.

For more information on LexisNexis Risk Solutions, visit here.

A new way forward: SaaS as a zero-trust enabler

For too long, many federal agencies have relied on outdated HR and financial management systems stitched together over decades. While these systems may feel familiar, their complexity creates serious vulnerabilities and operational strain. As the White House mandates the implementation of zero-trust architectures, it’s clear that clinging to these legacy models only increases risk and cost.

Fortunately, a new approach is emerging that treats security not as an add-on but as a foundational principle. By adopting software-as-a-service (SaaS) platforms like Workday, agencies can build zero trust into the core of their operations, rather than layering controls over fragile, siloed systems.

From fragmented to foundational security

Download the full 2-page infographic.

The old model of government IT is defined by fragmentation. Systems are often a mix of custom-built and on-premises applications with complex integrations that break with every upgrade. Security is usually layered afterwards, with controls that add more complexity than protection. This approach contradicts the zero-trust principle of “never trust, always verify.” The result is a high-cost, high-risk environment that leaves agencies constantly reacting to threats instead of focusing on their mission.

Adopting a modern SaaS platform flips this equation. Security is not an add-on; it’s a foundational principle. These platforms unify HR and financial management under a single, role-based security model. Access is tied to an employee’s organizational role, automatically shifting as they move within the agency. This granular, automated control is the essence of zero trust, dramatically reducing the risk of over-privileged accounts and insider threats.
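The role-based model described above can be reduced to a small sketch: permissions attach to roles, not to individual accounts, so changing an employee’s role automatically changes their access. The role names and permissions below are hypothetical, not Workday’s actual security model.

```python
# Minimal RBAC sketch, assuming hypothetical roles and permissions.
# Access derives solely from the current role, so no per-account
# privilege cleanup is needed when an employee moves.

ROLE_PERMISSIONS = {
    "hr_specialist": {"view_personnel_file", "edit_personnel_file"},
    "payroll_clerk": {"view_payroll", "run_payroll"},
    "auditor": {"view_personnel_file", "view_payroll"},
}

class Employee:
    def __init__(self, name, role):
        self.name = name
        self.role = role

    def can(self, permission):
        return permission in ROLE_PERMISSIONS.get(self.role, set())

alice = Employee("Alice", "payroll_clerk")
assert alice.can("run_payroll")
assert not alice.can("edit_personnel_file")

# Reassigning Alice instantly shifts her access with her role.
alice.role = "auditor"
assert not alice.can("run_payroll")
assert alice.can("view_payroll")
```

Because access is a function of role rather than accumulated per-user grants, this structure is what makes the “never trust, always verify” posture enforceable: there is no stale privilege left behind to verify against.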

Lower costs, greater agility

Beyond security, SaaS delivers significant operational advantages. Hosting, maintenance and security become the vendor’s responsibility, allowing agencies to reallocate scarce IT staff to higher-value work. This shift from a capital-intensive, custom-built model to an operational-expense SaaS model reduces long-term costs by eliminating redundant licenses and expensive upgrades.

Agencies also gain the ability to innovate securely and continuously. Unlike legacy systems that require disruptive upgrades, a modern SaaS provider automatically delivers new features, security patches and regulatory updates. Agencies benefit from innovations delivered to the entire customer base — from improved reporting to AI-driven insights — without the delays of manual coding or disruptive overhauls. This ensures agencies can respond to evolving mission demands while remaining compliant with federal mandates.

A call to action

The path to zero trust requires more than policy; it demands modern infrastructure. Federal leaders should reevaluate their HR and financial management systems through the lens of total cost, security and mission readiness. By adopting SaaS models that support those elements, agencies gain the agility to respond to workforce changes, evolving compliance requirements and emerging threats.

Equally important, leaders must recognize that SaaS is not about relinquishing control. Agencies remain the stewards of their data — they are the “data controller” — while vendors act as the processor, facilitating secure transactions within the platform. This shared responsibility model allows agencies to maintain ownership while benefiting from the vendor’s scale and expertise.

Additionally, transitioning to SaaS doesn’t have to be overwhelming. Partners like Workday, whose SaaS products carry FedRAMP Moderate authorization and are continuously updated with leading security practices, provide agencies with a secure, unified HR and financial management foundation. By shouldering the operational load, these partners accelerate time to value and fill critical staffing gaps that agencies would otherwise face alone.

With the federal government’s deadline for zero-trust implementation approaching, the stakes are high. Agencies cannot afford to remain locked into outdated systems that drain budgets and expand attack surfaces. Embracing SaaS is not simply a technology upgrade; it’s a strategic move that strengthens security, improves agility and ensures taxpayer dollars are spent on mission outcomes rather than system maintenance.

The choice is clear: continue struggling with the risks and costs of the old way, or embrace the modern, secure, and efficient SaaS model that enables zero trust by design.

James Herubin is Senior Enterprise Architect, Federal Market, at Workday.

Learn more about how Workday Government Cloud helps agencies enhance the employee experience for government workers while adhering to strict security and compliance standards.

IRS taps familiar face as new CEO: SSA’s Frank Bisignano

Treasury Secretary Scott Bessent, still moonlighting as acting IRS commissioner following Billy Long’s ouster in August, is adding another Trump official to the tax agency’s leadership chart: Social Security Administration Commissioner Frank Bisignano.

Bessent said in a press release Monday that Bisignano will serve as chief executive officer of the IRS, a newly created role that tasks the SSA leader with overseeing all day-to-day operations at the tax agency. 

“Frank is a businessman with an exceptional track record of driving growth and efficiency in the private and now public sector,” Bessent said in a statement. “Under his leadership at the SSA, he has already made important and substantial progress, and we are pleased that he will bring this expertise to the IRS as we sharpen our focus on collections, privacy, and customer service in order to deliver better outcomes for hardworking Americans.”

The press release made the case that Bisignano is “a natural choice” for the position given the shared “technological and customer service goals” of the IRS and SSA. During his Senate confirmation hearings, Bisignano touted his experience as chairman and CEO of the payments and fintech company Fiserv, saying that work made him an ideal choice to guide SSA through its myriad technology challenges.

Mike Kaercher, deputy director of the Tax Law Center at New York University, said in a statement that “managing the IRS is a full-time job” and having one person fill both roles “makes it even harder for the IRS to prepare for the next filing season and implement recent changes in tax law.”

“Putting the same person in charge of both the IRS and SSA creates a conflict of interest when SSA wants access to legally protected taxpayer data,” he added.

Since Bisignano’s confirmation, the Social Security Administration has taken victory laps on a variety of tech and customer service-related initiatives — though there has been pushback on some of those supposed wins. Leading up to his confirmation, Democratic lawmakers expressed concerns that he might consider privatizing Social Security — a charge he denied.

“I’ve never thought about privatizing,” Bisignano testified. “It’s not a word that anybody has ever talked to me about, and I don’t see this institution as anything other than a government agency that gets run for the benefit of the American public.”

It remains to be seen whether similar concerns will be levied as Bisignano steps into the CEO position at the IRS, which has cycled through six commissioners during the second Trump term before settling — for the time being — on Bessent. Long, who lasted two months as IRS commissioner, had told lawmakers during his confirmation hearings that he intended to take “clues from [the] private sector” on IT modernization. 

It’s been a tumultuous stretch for the tax agency, which was targeted early and often by the so-called Department of Government Efficiency. At least a quarter of the IRS’s IT staff has been cut, and an August watchdog report found that nearly half of the probationary staffers swept up in reduction-in-force orders had “fully successful” or better reviews.

This story was updated Oct. 7, 2025, with comments from Mike Kaercher.

Federal workers union sues Education Department over altered shutdown emails 

A federal workers’ union is suing the Education Department after agency employees on furlough or administrative leave discovered that their automatic email replies had been changed to a message blaming Democratic lawmakers for the ongoing government shutdown. 

The complaint, filed by the American Federation of Government Employees on Friday, asks a court to prohibit the Education Department’s alleged efforts to “put political speech in federal employees’ mouths.” 

“Forcing civil servants to speak on behalf of the political leadership’s partisan agenda is a blatant violation of federal employees’ First Amendment rights,” the suit stated, adding that “employees are now forced to involuntarily parrot the Trump Administration’s talking points with emails sent out in their names.” 

The suit came one day after some furloughed workers discovered that their automatic out-of-office email replies were changed without their knowledge, from neutral language to partisan messaging that blamed Democrats for the shutdown, which began last Wednesday. 

Prior to the shutdown, the agency sent workers suggested language for automatic email replies, but the messaging was “neutral” at the time, two furloughed employees told FedScoop. Agency workers mostly cut and pasted from that suggested text when setting their replies, one of the employees said. But later in the week, the message changed to include partisan language mentioning Democrats.

According to multiple screenshots obtained by FedScoop, the altered emails read: “Thank you for contacting me. On September 19, 2025, the House of Representatives passed H.R. 5371, a clean continuing resolution. Unfortunately, Democratic Senators are blocking passage of H.R. 5371 in the Senate, which has led to a lapse in appropriations. Due to the lapse in appropriations, I am currently in furlough status. I will respond to emails once the government functions resume.” 

The suit alleges that some department employees, who still had access to their government equipment, attempted to change their replies back to nonpartisan language, but the replies were later changed again to partisan language. 

“As part of their employment duties, many Department employees regularly use their government email accounts to communicate with school district representatives, college administrators, parents, students, vendors, and other external stakeholders,” the suit stated. “So long as the out-of-office messages remain up, members of the public who try to reach a Department of Education employee will receive as an auto-reply a partisan message blaming ‘Democrat Senators’ for their inability to respond.” 

“Making public statements with such partisan language is not an ordinary part of the job responsibilities of federal civil servants,” the suit added. 

The suit argued that the partisan messaging violates the Hatch Act, which restricts most federal employees from engaging in partisan political activity. Messaging around the shutdown has primarily centered on who is to blame for it, with both Democrats and Republicans pointing fingers at one another.

Numerous agencies have promoted this partisan messaging on their official websites. A banner on the Department of Housing and Urban Development’s website, for example, states that the “Radical Left in Congress shut down the government.” An image of the banner is cited in AFGE’s suit, along with screenshots of partisan messaging on the websites for the Small Business Administration and the Departments of Treasury, Justice, Agriculture, State and Health and Human Services.

According to the suit, many employees on administrative leave had already set their out-of-office replies to state they were on leave. Two agency staffers who have been on administrative leave since March told FedScoop their automatic replies were switched to partisan messaging without their knowledge. 

“I believe that this is a gross misuse of the administration’s power and an obvious violation of the Hatch Act and First Amendment in which the department compelled speech,” one of the employees said Monday.

The Education Department did not immediately respond to a request for comment Monday. Madi Biedermann, the deputy assistant secretary of communications, sent FedScoop a statement last week: “The email reminds those who reach out to Department of Education employees that we cannot respond because Senate Democrats are refusing to vote for a clean CR and fund the government. Where’s the lie?”