House Republicans scrutinize VA for lack of AI disclosures, inadequate contractor sanctions

Members of the Veterans Affairs Technology Modernization subcommittee urged the department to offer disclosures for AI use and issue more “severe” consequences for contractors.
U.S. Department of Veterans Affairs Medical Center, VA Medical Center Drive, Ann Arbor, Michigan (Image credit: Wikimedia / Dwight Burdette)

The Department of Veterans Affairs should provide disclosures to patients when the agency uses artificial intelligence to analyze sensitive information, and the VA should also be prepared to levy greater sanctions against contractors who misuse veterans’ data, House lawmakers recommended during a Monday hearing. 

The VA does not currently disclose the use of AI when the technology is used for diagnostic purposes in a health care setting, lawmakers noted during the oversight hearing held by the House Veterans’ Affairs Subcommittee on Technology Modernization. Chairman Matt Rosendale, R-Mont., said there is not a “good, consistent disclosure process that is being utilized and being signed off by our veterans,” a point that Dr. Gil Alterovitz, the director of the agency’s National Artificial Intelligence Institute and its chief AI officer, confirmed. 

Alterovitz said the department is piloting “model cards” that offer patients and providers information about the AI being applied to their care, along with informed consent forms that patients are given when the tools are being researched in health care settings. 

“I would highly recommend that if that disclosure [about AI use] is going out and someone’s information is going to be analyzed by AI, that certainly the patient should be made aware of that,” Rosendale said. “It could present all types of issues going forward. If the groups that are doing all that analysis of what is and what is not acceptable, a disclosure at the very beginning would be a good place to start.”

In the VA’s use case inventory, the department cites the use of a large language model that can help predict risks patients might face before they enter surgery, as well as another tool to “optimize surgical outcomes.” While the department notes that the tool has not been applied to a patient’s pre-surgery period, the VA reports that “large amounts of pharmacogenomic and phenotypic data have been analyzed by machine learning/artificial intelligence and has produced interesting results” in various clinical settings.

Responding to a question from Rosendale about ethical concerns, Alterovitz said the department is focused on “trustworthy AI” and emphasized that it is researching the surgical outcome LLM rather than using it operationally. Rosendale raised further ethical concerns, citing the risk of false positives and restrictions on veterans’ freedoms, including gun ownership. 

In a separate application, the VA is attempting to “extract signals” of suicide risk from clinical notes through the use of natural language processing. The department uses the tool in conjunction with its current risk prediction methods.

Alterovitz reported that veterans sign a consent form when they are involved in a health care-centered process that uses an AI tool, and that for AI tools used in operations, “generally there are tools used that have been publicized” in the VA’s use case inventory. 

“Everything that [Rosendale] said are concerns that need to be looked at,” Alterovitz said. “Where this uses AI is in the natural language processing, looking at those notes and extracting potential meaning out of it. There’s always a human in the loop that looks at the results. So this is a way to help them sift through a large amount of text.”

Rep. Keith Self, R-Texas, said he was “not satisfied” with the VA’s answers to questions about sanctions for contractors. 

The VA’s witnesses said that the current sanction for contractors that accidentally or purposely leak sensitive information from veterans’ medical records is the loss of their contracts. 

“A general contract acquisition answer is not satisfactory because of the importance, the potential devastating consequences of a breach of 1,100 petabytes of data, sensitive data,” Self said. “First, you’ve got to identify some sanctions and they’ve got to be fairly severe sanctions, and they’ve got to be in policy upfront. This is something you have got to settle in policy early. Frankly, in my mind, it is not going to be sufficient to say, ‘we’re going to cancel a contract.’”

Self also scrutinized the VA over the unclear number of its AI use cases, noting that the agency reported 128 use cases to the subcommittee and 300 to the Senate, while stating during the hearing that 21 had “advanced to implementation.”

Charles Worthington, the VA’s chief technology officer, responded by saying that the department’s use case inventory is a “work in progress” and that the inventory has been “at different points in time, created to comply” with memorandums from the White House Office of Management and Budget.

Written by Caroline Nihill

Caroline Nihill is a reporter for FedScoop in Washington, D.C., covering federal IT. Her reporting has included the tracking of artificial intelligence governance from the White House and Congress, as well as modernization efforts across the federal government. Caroline was previously an editorial fellow for Scoop News Group, writing for FedScoop, StateScoop, CyberScoop, EdScoop and DefenseScoop. She earned her bachelor’s in media and journalism from the University of North Carolina at Chapel Hill after transferring from the University of Mississippi.