Deputy Energy secretary sees role in counteracting AI industry’s ‘profit motive’

As national labs invest in next-generation AI research, David Turk explains how the government fits into the AI age.
David Turk, deputy secretary of the Department of Energy, testifies during a Senate Energy and Natural Resources Committee hearing on federal electric vehicle incentives on Jan. 11, 2024 in Washington, D.C. (Photo by Anna Rose Layden/Getty Images)

The Energy Department could be a key force in counteracting the “profit motive” driving America’s leading artificial intelligence companies, the agency’s second-in-command said in an interview. 

DOE Deputy Secretary David Turk told FedScoop that top AI firms aren’t motivated to pursue all the use cases most likely to benefit the public, leaving the U.S. government — which maintains a powerful network of national labs now developing artificial intelligence infrastructure of their own — to play an especially critical role.

Turk’s comments come as the Energy Department pushes forward with a series of AI initiatives. One key program is Frontiers in Artificial Intelligence for Science, Security, and Technology, or FASST, an effort meant to advance the use of powerful datasets maintained by the agency to develop science-forward AI models. At Lawrence Livermore National Laboratory in California, federal researchers are building El Capitan, expected to be the world’s fastest supercomputer. The current fastest, Frontier, is based at Oak Ridge National Laboratory in Tennessee, which also falls under the auspices of the federal government.

Through the Energy Department’s data and research staff, Turk says the agency is hoping to focus on areas of AI the private sector isn’t motivated to seek out — while also countering some of the negative consequences spurred by the race to build the technology. 

“These [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out,” Turk said. “It is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right.” 

That’s not to say the Energy Department or the national laboratories have a problem working with Big Tech. At the Pacific Northwest National Laboratory in Washington, there’s already plenty of work being done with ChatGPT, as well as with Microsoft, on battery chemistry research. OpenAI is working with the Los Alamos National Laboratory on bioscience research, too. 

As policymakers wrestle with how to prioritize U.S. competitiveness in artificial intelligence while also curbing some of the worst impacts of the emerging technology — including data security risks, environmental costs, and potential bias — Turk spoke to FedScoop about how the government will try to position itself in the age of AI.

This interview has been edited for clarity and length. 

FedScoop: What is the Department of Energy’s role in this moment of AI? Everyone just watched “Oppenheimer” and has more familiarity with the history of the national labs.

Deputy Secretary David Turk: What’s striking to me is not just the nuclear security [or] Oppenheimer side of the house, which does a lot of AI and supercomputing, but also our Office of Science, where we have a $9 billion annual budget for science. Some of those laboratories have been at the very cutting edge on AI, supercomputing, quantum computing, and have huge, huge datasets that are incredibly helpful as well. For me, the foundation of our AI at the Department of Energy is not as appreciated as it should be. … It’s the supercomputer power, it’s the data, which is the fuel for AI, and then, maybe even most importantly, it’s the people. We’ve got such phenomenal talent, mostly in our national laboratories and some at our federal headquarters. 

FS: A lot of these companies are finding that there’s a ceiling in terms of what you can scrape from the public web, and a lot of the more specific applications of AI rely on more specific data, potentially data with higher security needs. How are you thinking about access to data that the Department of Energy might have? 

DT: If you don’t have good data, you’re not going to have good outputs, no matter how good your AI is. We have just phenomenal datasets that no one else has, including and especially from our national laboratories, which have been doing fundamental science and applied research, and have been really pushing the boundaries in any number of different areas.

… We do have a responsibility to do even more, [including] making sure that we’ve got the funding to be able to put that data out there, where it’s appropriate for public use and where it’s appropriate more for specialized use when we have some security issues. … Part of the FASST proposal is making that data more available for researchers where it’s appropriate to do so, and for others as well in more specialized areas. That’s a big part of our strategy. 

FS: There’s a lot of conversation about building a national AI capability. I’m curious how you would explain that to a member of the public. Is that something like a government version of GPT? Is this a science version of GPT, an LLM?

DT: I don’t think there’s going to be one AI to serve all purposes, right? There may be more generalized ChatGPT-like services, but then there’s going to be AI really trained, from the data perspective and from the algorithm perspective, on physics problems or bio problems or other kinds of science problems. 

The private sector is going to do what the private sector does and they have a profit motive in mind. That’s not to say that there aren’t good people working in companies, but these are companies, and these [companies] are shareholder-driven and they’re looking to turn a profit. But not everything that is valuable to society as a whole has a huge amount of profit behind it, especially in the near term and in the way that we’ve seen history play out. 

And so if we want to have AI benefiting the public as a whole, including use cases that don’t have that profit loaded squarely in the equation, then we need to invest in that and we need to have a place within our government, the Department of Energy, working with other partners to make sure we’re taking advantage of those more public-minded use cases going forward. 

Because these are profit-driven companies with intense competition among themselves, we need to have a democratically elected government with real expertise, and we need to hire up and make sure that we’ve got cutting-edge AI talent in our government to be able to do the red-teaming. [They need] to be able to research, for example, whether a model may get into areas that are really challenging, such that a terrorist could use it to build a chem or bio weapon in a way that’s not good for anybody.

We need to have that expertise within the U.S. government. We need to do the red-teaming and we need to have the regulations in place. All of that depends on having the human capability, the human talent, but also the datasets and the algorithms and other kinds of things that are necessary for the government to play its role on the offense side and the defense side.

FS: I got the chance to see Frontier about a year ago, and that was super interesting. I’m curious, do we have enough supercomputers to meet the DOE goals on AI right now?

DT: We do have many of the world’s fastest supercomputers right now, and there are others in the pipeline that will become the world’s fastest going forward. We need to keep investing. The short answer is, if we want to keep being on the cutting edge, we need to keep that level of investment. We need to keep pushing the boundaries. And we need to make sure that the U.S. government has capabilities, including on the compute power side of things. 

So we need to work with those partners in the private sector and keep pushing the envelope on the compute power, as well. I feel like we’re in a very strong place there. But again, with not only what’s going on in the private sector, but what’s going on in China and other countries, which also want to be the leaders in AI, we’ve got to keep investing, and we’ve got to compete — and we’ve got to out-compete from the U.S. government side of things, too. 

FS: I understand that the national labs have some responsibility not just for developing AI, but also for analyzing potential risks that might come from private-sector models. I’m curious if you could summarize what you’re finding in terms of the biggest risks or biggest concerns with powerful AI models right now?

DT: This is a huge, huge responsibility and we need to invest in this side as well. We’ve got great capabilities. We’ve got great human talent. But if we’re going to keep tabs on what’s happening in the private sector — if we’re going to be able to do the red-teaming and other kinds of things that are necessary to make sure that these AI models are safe going forward — [we should do that] before they’re released more broadly, right? You don’t want the Pandora’s box to open. 

… What’s clear to me in all our discussions internally in the U.S. government is we’ve got a lot of that expertise. So we’re not only doing it ourselves, but with some key partners. We’ve got relationships with Anthropic and many other AI companies on that front. We’re working hand in hand with others, including, especially, the Commerce Department, which is setting up the AI Safety Institute. We’re partnering with them so that we can take advantage of this expertise, this knowledge, this ability to work in the classified space — of course, working with our intel colleagues and our Department of Defense colleagues as well — and making sure that we have a government-wide effort to do all this more defensive work.

That’s something that’s in the interest of companies, but it is going to take the government, without the full profit motive and without the intense competition of the private sector, to say, hold on, let’s kick the tires. Let’s make sure we’re doing this right. Let’s make sure we don’t have any unintended consequences here. And this is only going to become more important with each successive generation of AI, which gets more and more sophisticated, more and more powerful. 

This is why we put together this FASST proposal, why we’re having conversations with Congress about making sure that we have the funding to keep up the talent, keep up the compute power, keep up the ability of the algorithms to make sure that we’re playing this incredibly important role.

Written by Rebecca Heilweil

Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She’s also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you’d like to chat on Signal.
