AI risks can’t be avoided, must be managed, NIST official says

Speaking on a panel at a FedInsider event, NIST’s Martin Stanley said AI’s benefits are compelling and praised the government’s approach to risk management.
Deploying artificial intelligence requires taking on the right amount of risk to achieve a desired end result, a National Institute of Standards and Technology official who worked on its risk management framework for the technology said on a panel this week. 

While federal agencies, and particularly IT functions, are generally risk averse, risks can’t entirely be avoided with AI, Martin Stanley, an AI and cybersecurity researcher at the Commerce Department standards agency, said during a Wednesday FedInsider panel on “Intelligent Government.” 

“You have to manage risks, number one,” Stanley said, adding that the benefits from the technology are compelling enough that “you have to go looking to achieve those.”

Stanley’s comments came in response to a question about how the federal government compares to other sectors that have been doing risk management for longer, such as financial services. On that point specifically, he said the NIST AI Risk Management Framework “shares a lot of DNA” with Federal Reserve guidance on algorithmic models in financial services.

He said NIST sought to leverage those approaches and to use the same plain, simple language.

“We talk about risks, we talk about likelihoods, and we talk about impacts, both positive and negative, so that you can build this trade space where you are taking on the right amount of risk to achieve a benefit,” Stanley said.

His comments come as many agencies across the government have publicly disclosed how they’re governing their use of the growing technology under the Trump administration. 

Under an Office of Management and Budget memo, which preserved many aspects of the Biden administration’s approach, agencies were required by the end of September to publish both plans to comply with that guidance and strategies for how to deploy the technology. 

Those documents included agency approaches to risk management, such as processes for designating use cases as “high-impact” — a designation under the memo for certain deployments that impact rights and safety, and, as a result, require specific risk management practices.

Stanley discussed the government’s approach to governance during the panel, noting that one of the biggest challenges, because of the widespread adoption of AI, is “not to have too heavy [a] hand from a governance perspective — don’t have a whole ton of paperwork to fill out and a six-month approval process.”

But he also praised the government’s approach to risk management under that OMB memo (M-25-21). 

“The federal government has actually done a nice job of this with OMB 25-21, where there’s an identification of what are the high-impact uses of AI that require … more diligence around their implementation and the potential risks,” Stanley said.

There are other areas in which agencies might want to handle AI differently, such as lab experiments where the bar might be lower, he said. But if it’s a high-impact use, “then of course, we want to take a close look at what the potential impacts of that might be.” 

Written by Madison Alder

Madison Alder is a reporter for FedScoop in Washington, D.C., covering government technology. Her reporting has included tracking government uses of artificial intelligence and monitoring changes in federal contracting. She’s broadly interested in issues involving health, law, and data. Before joining FedScoop, Madison was a reporter at Bloomberg Law where she covered several beats, including the federal judiciary, health policy, and employee benefits. A west-coaster at heart, Madison is originally from Seattle and is a graduate of the Walter Cronkite School of Journalism and Mass Communication at Arizona State University.