VA leader eyes ‘aggressive deployment’ of AI as watchdog warns of challenges to get there

A key technology leader at the Department of Veterans Affairs told lawmakers Monday that the agency intends to “capitalize” on artificial intelligence to help overcome its persistent difficulties in providing timely care and maintaining cost-effective operations.
At the same time, a federal watchdog warned the same lawmakers that the VA could face obstacles before it can effectively do so.
Lawmakers on the House VA subcommittee on technology modernization pressed Charles Worthington, the VA’s chief data officer and chief technology officer, over the agency’s plans to deploy AI across its dozens of facilities as the federal government increasingly turns to automation technology.
“I’m pleased to report that all VA employees now have access to a secure, generative AI tool to assist them with their work,” Worthington told the subcommittee. “In surveys, users of this tool are reporting that it’s saving them over two hours per week.”
Worthington outlined how the agency is using machine learning in its workflows and in clinical care, including for earlier disease detection and ambient listening tools expected to roll out at some facilities later this year. The technology can also be used to identify veterans who may be at high risk of overdose and suicide, Worthington added.
“Despite our progress, adopting AI tools does present challenges,” Worthington acknowledged in his opening remarks. “Integrating new AI solutions with a complex system architecture and balancing innovation with stringent security compliance is crucial.”
Carol Harris, the Government Accountability Office’s director of information technology and cybersecurity, later revealed during the hearing that VA officials told the watchdog that “existing federal AI policy could present obstacles to the adoption of generative AI, including in the areas of cybersecurity, data privacy and IT acquisitions.”
Harris noted that generative AI can require infrastructure with significant computational and technical resources, which the VA has reported difficulty accessing and securing funding for. The GAO outlined an “AI accountability framework” in a full report to address some of these issues.
Lawmakers also raised questions about the VA’s preparedness to deploy the technology to the agency’s more than 170 facilities.
“We have such an issue with the VA because it’s a big machine, and we’re trying to compound or we’re trying to bring in artificial intelligence to streamline the process, and you have 172 different VA facilities, plus satellite campuses, and that’s 172 different silos, and they don’t work together,” said Rep. Morgan Luttrell, R-Texas. “They don’t communicate very well with each other.”
Worthington said he believes AI is being used at facilities nationwide. Luttrell pushed back, stating he’s heard from multiple sites that don’t have AI functions because “their sites aren’t ready.”
“Or they don’t have the infrastructure in place to do that because we keep compounding software on top of software, and some sites can’t function at all with [the] new software they’re trying to implement,” Luttrell added.
Worthington responded, referring to the VA’s AI use case inventory: “I would agree that having standardized systems is a challenge at the VA, and so there is a bit of a difference in different facilities. Although I do think many of them are starting to use AI-assisted medical devices, for example, and a number of those are covered in this inventory.”
Luttrell then asked if the communication between sites needs to happen before AI can be implemented.
“We can’t wait because AI is here whether we’re ready or not,” said Worthington, who suggested creating a standard template that sites can use, pointing to the VA GPT tool as an example. VA GPT is available to every VA employee, he added.
Worthington told lawmakers that recruiting and retaining AI talent remains difficult, while scaling commercial AI tools brings new costs.
Aside from facility deployment, lawmakers repeatedly raised concerns about data privacy, given the VA’s extensive collection of medical data. Amid these questions, Worthington maintained that all AI systems must meet “rigorous security and privacy standards” before receiving an authority to operate within the agency.
“Before we bring a system into production, we have to review that system for its compliance with those requirements and ensure that the partners that are working with us on those systems attest to and agree with those requirements,” he said.
Members from both sides of the aisle raised concerns about data security after AI models have been implemented in the agency. Subcommittee chair Tom Barrett, R-Mich., said he does not want providers to “leech” off the VA’s extensive repository of medical data “solely for the benefit” of AI, and not the agency.