
Veterans Affairs experiments with AI ‘to-go’

The department is developing embeddable AI modules that can be easily integrated across systems.
Department of Veterans Affairs flag (Veterans Health / Flickr)

The Department of Veterans Affairs is experimenting with an artificial intelligence “to-go” delivery model to assist its medical centers during the coronavirus pandemic, said Gil Alterovitz, director of AI.

VA, industry and academic researchers are developing embeddable AI modules that efficiently integrate across different systems, Alterovitz said during the ATARC IT Acquisition Virtual Summit on Tuesday. The modules are essentially plugins or add-ons to current software.
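
The article does not describe how these modules are packaged technically. As a rough illustration only, a plugin-style module might expose a single, standardized interface that a host system can call without knowing anything about the model inside; every name below (AIModule, CovidRiskModule, run_module) is hypothetical, not VA's actual design.

from abc import ABC, abstractmethod
from typing import Any, Dict

class AIModule(ABC):
    """Hypothetical interface for a self-contained, embeddable AI component."""

    @abstractmethod
    def predict(self, record: Dict[str, Any]) -> Dict[str, float]:
        """Take one patient record and return named model outputs."""

class CovidRiskModule(AIModule):
    """Toy stand-in for a COVID-19 risk-scoring plugin."""

    def predict(self, record: Dict[str, Any]) -> Dict[str, float]:
        # Placeholder logic; a real module would wrap a trained model.
        age_factor = min(record.get("age", 0) / 100.0, 1.0)
        return {"admission_risk": age_factor}

def run_module(module: AIModule, record: Dict[str, Any]) -> Dict[str, float]:
    # A host system (e.g., a patient dashboard) calls every module the same way.
    return module.predict(record)

print(run_module(CovidRiskModule(), {"age": 72}))

The appeal of the pattern is that the host system depends only on the shared interface, so new modules can be dropped in without reworking the surrounding software.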

Alterovitz likened the change to restaurants packaging food for pickup in a standardized, to-go format during the pandemic, rather than the usual, sit-down experience.

“Similarly, when you think about cutting-edge AI, many times it takes years for it to be developed and to lead to actual care — kind of like that meandering experience within a restaurant,” he said. “And so we’re looking at: How do you cut that time? How do you allow for it to be developed in this kind of scalable, modular model?”


One such AI module has been helping VA medical centers with individual COVID-19 risk prediction since the virus’ initial surge in the U.S. VA is testing basic statistical models, as well as more advanced ones, to compile and analyze data across all its medical centers, Alterovitz said.

If a patient tests positive for the coronavirus, AI can help determine whether they should be admitted to a medical center, whether they belong in the intensive care unit, and even their chance of death.
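
As a concrete, heavily simplified sketch of what such a basic statistical model could look like, the snippet below fits a logistic regression that turns a few patient features into an admission probability. The features, data and coefficients are synthetic and chosen only for illustration; the article does not say which inputs or algorithms VA actually uses.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: columns are [age, oxygen saturation, comorbidity count].
X = np.column_stack([
    rng.integers(20, 95, size=500),   # age in years
    rng.normal(95, 4, size=500),      # SpO2 (%)
    rng.integers(0, 5, size=500),     # number of comorbidities
])
# Synthetic labels: 1 = patient was admitted, 0 = not admitted.
y = (0.04 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 2]
     + rng.normal(0, 1, size=500) > -23).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a newly positive patient; the output is a probability, not a diagnosis.
patient = np.array([[78, 91.0, 2]])
print(f"Estimated admission risk: {model.predict_proba(patient)[0, 1]:.2f}")

Separate models of the same form could be trained for ICU need and mortality, which is how a single positive test result could drive several distinct risk estimates.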

The module was deployed and embedded in VA’s patient dashboard for veterans to use, with more AI to-go modules planned for the future, Alterovitz said.

AI ‘hype cycle’

VA can’t develop modules without the help of industry and academia because of the limited AI talent pool, especially when it comes to disruptive AI, Alterovitz said.


Agencies rely on subject matter experts to make incremental advancements in adopting reliable technology during what has become an “AI hype cycle,” said Anil Chaudhry, director of AI implementations at the General Services Administration.

Chaudhry is one of the leads at GSA’s AI Center of Excellence, which helps agencies plot out a phased approach to AI.

“We end up providing holistic advice, technical support to federal agencies … so that they can modernize their own infrastructure, their own tools in a repeatable, scalable way based on best practices that are a collaboration of industry, academia and also other federal agencies,” Chaudhry said.

At the same time, ATARC’s AI Ethics Working Group, a collaboration among government, industry and academia, developed the AI transparency score.

Rather than giving AI projects a pass-fail grade, the assessment lets agencies objectively determine how much risk they’re willing to take.


Developed by volunteers, the score has since been moved to International Organization for Standardization working groups for consideration globally.

“Hopefully in a couple of years, or sooner, [we] will have actual national and international standards aligned to AI transparency,” Chaudhry said. “And we can all know it started with ATARC.”
