Nuclear weapons risk analysis, procurement and more: How two national labs are going ‘all in’ on AI

With mentions in the Trump administration’s AI Action Plan and the Energy secretary calling artificial intelligence the Manhattan Project of our time, several of the 17 DOE-run national laboratories appear poised to level up on AI in the years ahead.
Though billions of dollars in proposed cuts to the labs have left many energy and tech observers scratching their heads, the AI work is continuing unabated at some of those facilities, according to a pair of IT leaders at two labs.
Mark Pettit, principal deputy chief information officer at Lawrence Livermore National Laboratory in California, said during a FedInsider virtual event this week that he has to “temper myself in the hype around AI” as his facility goes “all in” on the emerging technology.
Livermore’s proximity to Silicon Valley has allowed the lab to bring in some of the country’s top AI thought leaders to speak to staffers about the tech’s best uses. There’s a lab-wide initiative called “AI edge” to upskill the entire workforce, which includes skill assessments and customized learning tracks. And the lab is getting creative on how to keep workers engaged.
“We use gamification of learning to encourage people to actually upskill themselves,” Pettit said. “We’ve got a lot of advancement and adoption because of that. We have communities that we’ve built online that share best practices and success stories so people can learn from each other’s uses as well as their failures, and then we host a series of special speakers.”
Lawrence Livermore is also running pilots with “all the major” large language models, according to Pettit, who said the lab is taking advantage of General Services Administration deals that offer products from leading AI companies to agencies for as little as $1.
The lab is currently pursuing an initiative “to save a million man-hours,” Pettit said, a push that probably wouldn’t be possible without AI. One example he mentioned is with large safety analysis and risk documents for nuclear weapons. Pulling together those documents can “take years to develop, analyze” and put policies in place “so that you can ensure the safety of … your workers, the community, the lab, everybody,” Pettit said.
But with AI, Lawrence Livermore is trying to accelerate that process and pare down the document-development time from three to five years to 18 months, “which would be a huge win for us,” Pettit said.
“We’re using it in other places, like invoice processing,” he said of the technology. “Not very sexy, but it’s time consuming.”
Over at Oak Ridge National Laboratory in Tennessee, Jay Eckles has found a similarly un-sexy — but useful — way to leverage artificial intelligence: in procurement. Juggling the Federal Acquisition Regulation, the Department of Energy Acquisition Regulation, a prime contract and internal policies amounts to thousands of pages of documents.
“It would take a person months to go through all that and to answer comprehensively a question like, ‘which of our procurement activities directly descend from those controlling documents?’” Eckles, division director for application development at Oak Ridge, said at the FedInsider event. “That kind of analysis would take months for a team of human beings. We were able to do it in an afternoon.”
In Eckles’ view, it’s important to keep in mind that AI itself isn’t coming for people’s jobs; it’s the “people who know how to use AI” that “are coming for your job.” Still, it’s best to view generative artificial intelligence “like a really smart intern, someone that is energetic, someone that is smart, someone that wants to prove themselves, but also someone that wants to please you, [and] someone who is always confident no matter how wrong they are.”
Put simply, the technology “accelerates human performance the same way a forklift does,” Eckles said. “You know, a human being alone cannot lift the kind of mass that a human being operating a forklift can. That’s the kind of scale we’re talking about.”
As the national labs’ responsibilities under the White House’s AI Action Plan come into focus in the months ahead, and the buildout of data centers to power the technology continues across the country, Lawrence Livermore, Oak Ridge and other DOE facilities will undoubtedly keep pushing forward on AI experimentation.
But that push, Eckles said, should always be done with one giant human-in-the-loop caveat in mind.
“The problem with artificial intelligence, which is basically a giant bag of words, is that it is excellent at analyzing and building upon yesterday’s good ideas,” he said. “And so you’ve got to have a place where the humans [are involved], you’ve got to have guidance. … You’ve got to treat it like something that needs verification, that needs a human touch, that needs validation.”