
Pacific Northwest National Laboratory wants to curb AI’s worst side effects

In an interview with FedScoop, AI experts at the Department of Energy-run lab discuss their center devoted to AI and other work on the technology.
A view of the Pacific Northwest National Laboratory's 3410 Building. (Photo credit: PNNL)

Researchers at the Pacific Northwest National Laboratory, a Department of Energy-managed lab, are racing to study the most concerning risks associated with artificial intelligence, including bias, security vulnerabilities, and the potential to endanger national security.

Just last year, the lab formally announced the creation of a Center for Artificial Intelligence, a move that came a few weeks after the signing of the Biden administration’s executive order on AI. That center now serves as a hub for scientists across the facility with expertise in the technology. The lab is also working with technology companies including Nvidia and Microsoft, as well as on the growing use of GPT-4, an OpenAI model made available through the Azure cloud.

The lab’s work demonstrates many of the challenges the federal government will face as it tries to both study and implement generative AI systems. To use the technology, PNNL has set up a secure sandbox meant to protect data, and is now regularly updating official policy in line with both federal and state guidance.

Lab leaders have also ramped up AI training that’s now reached about 2,000 members of its staff. The national lab’s IT staff, meanwhile, has had to focus on cutting down on the use of riskier endpoints for accessing AI models, which could expose non-public data.


“By and large, we have found that nearly all AI systems, not just foundation models, have inherent biases and inherent security flaws,” said Courtney Corley, who directs the lab’s Center for AI. “The national labs have a specific role in helping to build the foundation models, but also to secure them and evaluate them — and to evaluate their capabilities and their potential to expose information about chem, bio and nuclear risk.”

FedScoop recently chatted with several officials associated with PNNL’s work on artificial intelligence, including Corley, Quentin Kreilmann, capacity lead and technology strategist at PNNL, and Andy Cowell, division director for research computing. 

This interview has been edited for clarity and length. 

FedScoop:  What are the use cases for generative AI that you’re pursuing right now? 

Quentin Kreilmann: That’s a very complex topic that has a lot of different layers for us. Part of our exploration of GenAI has been to try to identify our audiences, because there are hundreds of use cases within each. If you really start at the baseline, [there’s] the use case of using ChatGPT internally for a variety of optimizations, both on the operations and research front. On the software engineering side, we have a set of different use cases in IT. …


We’re also applying a lot of generative AI to domain science, so really utilizing existing toolsets and then putting them into practice, both on the research side and on the operations side. We’re also helping to develop some new kinds of AI techniques.

This past year, we funded 25 different seed projects that touch a variety of different use cases. … We’ve done some [research] related to building out digital twins, for instrumentation signaling [and] for predictive maintenance. We’ve done a variety of projects around grid modernization, resilience, predictive phenomics, chemistry and materials science, climate and Earth science, autonomous experimentation, discovery and national security.

Courtney Corley: One of our primary goals is to advance the state of the art in AI and its application. Because we’re a national laboratory, we really support a wide breadth of [work on] scientific discovery, energy, resilience, and national security.

FS: How do you see your relationship to foundation models and the companies building them?

CC: Foundation models are an essential component of modern AI. We have staff that are part of consortiums that are building and training large foundation models. … We are very interested in advancing foundation models for science, energy and security. That may look like fine-tuning, that may look like augmenting them in some other way. That may look like identifying them and putting them in an agent framework, so that we can leverage them across our workflows [and] in laboratories.

The other angle that we have as a priority is in AI assurance, so assuring that AI-enabled systems are safe, secure and trustworthy — that their bias is limited and that they’re ethically used.

FS: What are you finding? Are they safe? Are they fair?


CC: By and large, we have found that nearly all AI systems — not just foundation models — have inherent biases and inherent security flaws. It’s a two-pronged approach to, yes, developing them, but also ensuring that they are safe throughout their development and they’re doing the things that we expect them to. … We don’t want to expose capabilities to others that could uplift certain skillsets that would be detrimental to the world and to the nation. This was called out specifically in the president’s AI executive order on safe, secure, and trustworthy AI last October — where the national labs have a specific role in helping build foundation models, but also to secure them and evaluate them.

FS: Can you talk a little bit about the data firewall you set up? 

QK: We’ve instantiated, essentially, a clone of ChatGPT internally and made that available. That became the start of our AI incubator program, which is meant to be a kind of secure sandbox environment for folks to start to play and operationalize AI into their work. …

On the tech side, on the research computing side, we’ve established that environment, but then also put as much … attention into guidance, AI literacy, and making sure that folks are properly trained. In order to start to get access to our tool set, you need to go through an onboarding program. This is something that we’ve put together and packaged into an accelerator that is now being shared across the complex. …

We’re still really making sure that folks understand data security considerations at the very top. Part of the reason for the program is to reduce our exposure [to] anyone at the lab utilizing externally hosted tools and leaking our data. …


Another unique aspect of our laboratory’s approach is that at the very beginning of all of this, we allowed our entire lab to experiment with a set of guidance at the very top, and we’ve adapted that guidance over time based on other things that have come up, like the executive order, the guidance from the Energy Department, and now some of the state-level guidance that’s coming out. …

We’ve seen the numbers drop in terms of folks accessing the more risky endpoints and we’re really taking this approach of not banning anyone from use [and] really stimulating as much innovation as possible — but doing so safely, securely, and continuing to draw people in with an advanced tool set so that they don’t feel compelled to go elsewhere.

Andy Cowell: There’s generally a rule that we follow: … If you don’t pay for something, your engagement with a particular platform is what’s for sale. Not only does that [ring] true with social media, but also with some of the generative AI applications. With many of the tools that are freely available, your interaction, the data you put into them, are actually used to continue to train the model in some cases. 

[What] we wanted to do was build out this chat within our AI incubator to ensure all that data [that our staff is using] stays secure and we can trust that, you know, it’s not being used to train and pop up in any other interactions that others might have with that same platform.
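For illustration only, and not part of the interview: a minimal sketch of what routing prompts through an organization’s own Azure-hosted OpenAI deployment can look like, so that staff data stays inside the tenant rather than flowing to a public chatbot. The endpoint, deployment name, and environment variables below are assumed placeholders, not details PNNL has disclosed.

```python
# Hypothetical sketch: an internal chat service forwarding a prompt to a
# GPT-4 deployment hosted in the organization's own Azure tenant, so prompts
# and responses stay inside that cloud boundary. Endpoint, deployment name,
# and environment variable names are placeholders.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)


def ask(prompt: str) -> str:
    """Send a single user prompt to the tenant-hosted deployment and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4-internal",  # name of the Azure deployment, not the public OpenAI API
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarize this maintenance log in three bullet points: ..."))
```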

FS: I’m wondering how you’re thinking about the different types of generative AI systems available. I know you’re making use of GPT-4, but what about systems offered by other companies, like Anthropic?


AC: At the core of our chat within the AI incubator … are OpenAI models accessed through Azure AI services, but we’re also doing a lot of investigation around open-source models. Some say there is as much, if not more, innovation going on in the open-source community as there is in some of these big companies. We follow both sides of that because we have some potential use cases in other environments that aren’t necessarily connected to the internet. In terms of Anthropic and Google, we’ve started to do some engagement there, but still primarily it’s OpenAI models.
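Also for illustration: a minimal sketch, using the Hugging Face transformers library, of running an open-weight chat model entirely on local hardware, the kind of offline setup Cowell alludes to for environments that aren’t connected to the internet. The model name is an assumed placeholder, and the weights would need to be downloaded and cached ahead of time.

```python
# Hypothetical sketch: running an open-weight instruct model locally with no
# calls to an external API. Assumes the model weights are already cached on
# disk and that the machine has enough memory (or a GPU) to load them.
from transformers import pipeline  # pip install transformers torch

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder open-weight model
)

# Recent transformers versions apply the model's chat template to message lists.
messages = [{"role": "user", "content": "List three data-handling risks of public chatbots."}]
output = generator(messages, max_new_tokens=200)

# The pipeline returns the conversation with the new assistant turn appended.
print(output[0]["generated_text"][-1]["content"])
```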

QK: We have access to all of their models securely, and that’s kind of a unique positioning that PNNL has had with cloud provisioning. So we have access to all of the different CSPs and all of the associated models, and we have a set of these seed projects that are using a combination of them.

FS: What’s been the hardest to implement on the AI front?

CC: It is really challenging just to keep up with all of the new innovations. It’s a significant effort just to keep up on what the advances are, how we can leverage them for our missions, and then deciding which to bring in and provide to our researchers.

AC: I also think it’s the breadth of applicability of generative AI across so many different areas. … Everything that we do, there’s an element of gen AI that’s now a piece of that.


QK: Going back to that pace of change. You have to constantly scan the environment. … We have a whole set of AI scientists and researchers that are well aware of how the technology works, but we also have a whole set of folks coming into this with a lot of magical thinking. There needs to be a lot of kind of dispelling of what is possible and what is impossible.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
