
Government needs more agency buy-in to fight fraud with tech, officials say

Data experts with Treasury, the GAO and the Pandemic Response Accountability Committee say AI and data analytics can be used more effectively to detect fraud in federal programs.
House Oversight and Government Reform Committee Chairman James Comer, R-Ky., arrives for a hearing about fraud in Minnesota at the U.S. Capitol on Jan. 7, 2026 in Washington, D.C. (Photo by Chip Somodevilla/Getty Images)

The federal government has tools at its disposal to more effectively detect and prevent fraud, three data-focused officials told lawmakers Tuesday, but there needs to be more agency investment and support to truly leverage those capabilities.

Appearing before the House Oversight Committee's Government Operations Subcommittee, officials from the Treasury Department, the Government Accountability Office and the Pandemic Response Accountability Committee touted artificial intelligence and data analytics as ways to flag fraud in federal programs — while cautioning that there are barriers to full-scale adoption.

One of the driving forces behind the hearing was the ongoing fraud case in Minnesota involving Paycheck Protection Program and Economic Injury Disaster loans doled out to the state. Rep. James Comer, R-Ky., who chairs the full Oversight Committee, asked Kenneth Dieffenbach, executive director of the Pandemic Response Accountability Committee, how PRAC could’ve stopped pandemic-era fraud in the North Star State.

“The hallmark of most fraud schemes is that people hide information,” Dieffenbach replied. “So leveraging data analytics allows us to see patterns, trends, anomalies, hidden connections, to shine a bright light on what’s actually happening. That is the path forward. So we have to assemble the right data, the right team, the right tools, which we already have at the PRAC thanks to your support. We just need to think more about the jurisdiction of how we’re employing those tools.”
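The "hidden connections" Dieffenbach describes are often surfaced by grouping applications on attributes that legitimate, unrelated applicants rarely share. A minimal sketch of that idea — the field names and records are invented for illustration, not PRAC's actual schema or tooling:

```python
from collections import defaultdict

# Hypothetical loan applications; fields and values are illustrative only.
applications = [
    {"id": "A1", "applicant": "Acme LLC",  "bank_account": "111", "address": "1 Main St"},
    {"id": "A2", "applicant": "Beta Inc",  "bank_account": "111", "address": "2 Oak Ave"},
    {"id": "A3", "applicant": "Gamma Co",  "bank_account": "222", "address": "1 Main St"},
    {"id": "A4", "applicant": "Delta Ltd", "bank_account": "333", "address": "9 Elm Rd"},
]

def shared_attribute_links(apps, field):
    """Group application IDs by a shared attribute value; any group with
    more than one member is a hidden connection worth investigating."""
    groups = defaultdict(list)
    for app in apps:
        groups[app[field]].append(app["id"])
    return {value: ids for value, ids in groups.items() if len(ids) > 1}

print(shared_attribute_links(applications, "bank_account"))  # {'111': ['A1', 'A2']}
print(shared_attribute_links(applications, "address"))       # {'1 Main St': ['A1', 'A3']}
```

Two nominally unrelated companies sharing a bank account or mailing address is exactly the kind of anomaly a human investigator would then examine.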


Jurisdictional issues have popped up from time to time at the Treasury Department, according to Renata Miskell, the agency’s deputy assistant secretary for accounting policy and financial transparency. Treasury has expanded the use of Do Not Pay, a tool that it provides to agencies and states that operate federal programs.

Though the resource is effective at helping frontline workers detect risk when making awards and certifying payments, Miskell said it has been significantly “underutilized” and it lacks “sufficient authority to access key federal databases that could detect the most common drivers of improper payments, namely verifying identity, financial status and death.”

Implementing additional payment verification processes would help, Miskell told Comer, noting that Treasury applies a “trust but verify” approach to agencies before they can certify payments.

“One of the pieces that we are missing is the ability to ping authoritative federal databases to confirm a pay ID, such as a tax identification number or Social Security number,” she continued. “We already received the data, we just can’t verify it. So there are a number of databases that would help.”
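The verification Miskell describes amounts to a cheap local format check followed by a lookup against an authoritative registry — the restricted access being the piece Treasury says it lacks. A hedged sketch, with an invented stand-in registry in place of any real federal database:

```python
import re

# Illustrative stand-in for an authoritative TIN registry; in practice this
# would be a restricted federal database query, not an in-memory set.
KNOWN_TINS = {"12-3456789", "98-7654321"}

EIN_PATTERN = re.compile(r"^\d{2}-\d{7}$")  # employer TIN format NN-NNNNNNN

def confirm_pay_id(tin: str) -> str:
    """Validate the submitted pay ID's format locally, then confirm it
    against the authoritative source."""
    if not EIN_PATTERN.match(tin):
        return "malformed"
    return "confirmed" if tin in KNOWN_TINS else "not found"

print(confirm_pay_id("12-3456789"))   # confirmed
print(confirm_pay_id("1234567"))      # malformed
print(confirm_pay_id("00-0000000"))   # not found
```

The point of Miskell's testimony is the middle step: agencies already receive the identifier, but without database access the "confirmed" branch can never run.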

Sterling Thomas, chief scientist at the GAO, said data silos at agencies “are a problem” when fighting fraud. He said that privacy “is an important concept, particularly in the age of AI,” but Congress may want to explore alterations to the Privacy Act that allow data to be shared “specifically for the purpose of fraud investigations.” 


Other issues preventing full utilization of AI and machine-learning tools, Thomas said, are antiquated networks at some agencies that are incompatible with other systems. Some information, he said, simply can’t be shared via old technology, and those outdated interfaces prevent effective technical solutions from being deployed.

“Data science algorithms, inclusive of machine learning and AI, are going to produce indicators of fraud,” Thomas said. “It’s critically important … [that agencies have] a fraud investigator, an analyst who’s an expert in the tools, techniques and technologies that fraudsters use to look at the data coming out. 

“So the types of things you’re looking for,” he continued, “are … patterns of behavior that don’t fit the expected patterns of behavior of someone who’s using the money for the intended purpose or for the intended program.”

AI tools need to be trained by in-house agency experts to spot fraud. GAO reports on fraud risk management detail how fraud indicators would be fed into “algorithms, machine learning, AI, other data science methods,” Thomas said, and “that could then be used to track and monitor potential fraud while the program’s in execution. That’s the purpose of it, as you design the tool to find the behaviors that you want to get rid of.”
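The pipeline Thomas outlines — indicators fed into an algorithm that flags behavior outside the expected pattern while a program is executing — can be sketched with even the simplest statistical method. This z-score example is an assumption-laden illustration, not GAO's or any agency's actual model; real systems would combine many features (timing, geography, network links), not just payment amounts:

```python
import statistics

# Illustrative payment amounts for one program; values are invented.
payments = [1200, 1150, 1300, 1250, 1180, 9800, 1220, 1210, 1275, 10250]

def flag_outliers(values, threshold=1.5):
    """Flag values whose z-score exceeds the threshold. Flags are fraud
    *indicators* only — a human investigator reviews every one."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

print(flag_outliers(payments))  # [9800, 10250]
```

Designing the tool, as Thomas puts it, means choosing features and thresholds that capture "the behaviors that you want to get rid of" — the algorithm only surfaces candidates for review.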

Dieffenbach, who has served in various data analytics and investigative roles at the departments of Energy and Justice as well as the Air Force, said it’s important that there be a human in the loop, though the model is what flags possible fraud and is “much more efficient” at surfacing issues. The single biggest challenge he’s seen over his nearly three decades as a fraud investigator is agencies that don’t have a top-to-bottom commitment to identifying and addressing risk.


“We do see that in some places in the government, but it’s not across the board. So a mandatory requirement would help us get to where we want to be,” Dieffenbach said, referring to a GAO recommendation that all agencies build an analytics-based fraud risk analysis team.

Other issues raised during Tuesday’s hearing included an acknowledgement from Thomas that “there’s quite a bit of competition with the private sector” to hire technically competent agency staffers who can guide AI and data fraud work, and concerns from Rep. Maxwell Frost, D-Fla., about how agencies — specifically Treasury — are protecting people’s personally identifiable information in the age of DOGE.

“We take privacy and security very seriously at the Treasury Department, specifically within Do Not Pay,” Miskell said. “Privacy is built in by design. We follow the principle of ‘least privilege,’ meaning that a person can only receive a response back on information that they provided. We operate in a FISMA high environment, which means that it’s the highest standard in terms of federal cyber, we apply continuous monitoring, and we are also transparent to the public.”

Chair Pete Sessions, R-Texas, closed the hearing by applauding the witnesses and Republican and Democratic subcommittee members for their work on fraud. Sessions noted that he and ranking member Kweisi Mfume, D-Md., “have confidence that we can move to a brighter, better world and work with AI” on fighting fraud. Mfume said the issue is “begging for some sort of national response that would be so strong that it would set the course over the next five or 10 years.”

“And at the rate AI is moving,” he added, “we’re already behind, in my opinion.”
