The Department of Veterans Affairs has established an Artificial Intelligence Institutional Review Board and an AI Oversight Committee charged with evaluating the fairness and transparency of using AI tools within research and clinical operations, according to a senior official.
In an interview with FedScoop, Gil Alterovitz, the VA’s first director of artificial intelligence, explained that responsibility for evaluating the use of AI tools within research and clinical operations sits with the two recently established bodies.
This matters, Alterovitz said, because it allows the technology to be assessed before it is deployed and spares the agency from having to take corrective action later in a research program.
“So there’s a new AI Institutional Review Board pilot that basically creates a module that enables the VA to review a research project to ensure it’s not problematic and to be able to ask the right questions, understand what is going on on the AI front,” he said.
Alterovitz added: “There’s also, even newer, the AI Oversight Committee that was piloted in Long Beach. It’s an active, ongoing thing that looks at things that will go into clinical operations, not research, but things that go into clinical operations.”
Details of the VA’s AI governance structure emerge as the Biden administration increases its focus on regulating the use of the technology across the government and the private sector. Earlier this week, the National Telecommunications and Information Administration issued a wide-ranging request for comment, seeking evidence on how government agencies should audit AI technology.
The VA’s AI IRB and the Oversight Committee adhere closely to the White House’s blueprint for an AI ‘Bill of Rights,’ which last year set out principles intended to shape each federal agency’s approach to the use of the technology.
According to Alterovitz, the IRB is a pilot project that puts the blueprint into action and was developed around the same time the Bill of Rights was announced.
Alterovitz told FedScoop that the VA’s IRB has already intervened on projects where the use of AI may not have been appropriate, bringing transparency to the process for approving AI tools within the agency.
“The AI IRB which I mentioned reviewed an AI research study where there was an issue on transparency. It was a company that was going to work on some VA data and there was a lack of understanding of how that data would be used,” said Alterovitz. “And so the AI IRB was able to enable the finding of that and make sure there was transparency. So that issue was raised and led to action. So that’s an example of where it literally did make a real difference.”
Alterovitz said the next step in the evolution of AI within the VA involves generative AI tools like OpenAI’s ChatGPT, which he said are popular and “very user friendly” but require more guidance and information to be leveraged correctly by the VA and its employees.