
PNNL and Microsoft talk strategies for responsible AI adoption

Artificial intelligence is rapidly transforming the world, promising a future filled with new possibilities. One of the key challenges in capitalizing on AI is ensuring that it is used responsibly.

Brian Abrahamson, associate lab director and chief digital officer for the Pacific Northwest National Laboratory, and Michael Mattmiller, Microsoft’s senior director for state government affairs, joined FedScoop to explore how leaders think about implementing AI responsibly, including establishing ethical guidelines, data governance, and privacy and security measures.

Abrahamson emphasized the need for leaders to recognize the fallibility of AI. He highlighted the potential for inaccuracies, especially in generative AI, and stressed the importance of keeping a human in the loop for critical decision-making processes.

“This notion of hallucinations that we’ve all heard a lot about can give you a very inaccurate response in a very confident way. And so we need to make sure that we’re asking the questions if we’re using AI models to drive business decisions, to drive field activities, to drive elements that might involve human safety or critical decision making.”

Mattmiller concurred, emphasizing the exciting potential of AI technology while cautioning that it can introduce new risks. “Leaders need to be thinking about what can be done to encourage the innovative use of this technology, to learn from it, to try it out, but also recognize it’s a copilot, that there still needs to be that human element, that we need to be able to assess and mitigate risk,” he said.

To navigate this complex landscape, leaders across all sectors must embrace responsible AI use. That begins with establishing clear principles and policies; as Mattmiller emphasized, “We want to make sure the technology is used in a way that respects fairness, that maintains privacy and security and is ultimately accountable to humans.” Transparency is also key, with institutions like PNNL requiring disclosure of AI involvement in publications and software development.

“One of the most significant that we’ve put in place is you have to disclose the use of artificial intelligence when it contributes to the work product. So, at the National Laboratory, we produce thousands of peer-reviewed research publications every year. Generative AI can be a very useful tool in helping construct abstracts and conduct elements of synthesis of other people’s publications. But to the extent that AI was used, it has to be disclosed,” added Abrahamson.

“And that’s not only for scientific integrity purposes; it’s to make sure that when others are reviewing your work, they know that AI was a part of generating that work, and they can take an appropriate stance in scrutinizing those elements.”

Abrahamson and Mattmiller offered recommendations for leaders looking to implement AI effectively: balance advocacy with skepticism, account for organizational culture, set the tone at the top, and invest in workforce training for responsible adoption.

Ultimately, responsible AI adoption requires a collaborative effort. Open communication, collaboration and a shared commitment to ethical principles will be key to shaping a future where AI serves as a force for good, empowering agencies and citizens to flourish.

Learn more about how Microsoft helps agencies adopt AI responsibly and about leading in the era of AI.

This video panel discussion was produced by Scoop News Group for FedScoop and underwritten by Microsoft.