Energy releases generative AI guidance for employees, contractors 

The Department of Energy’s reference guide is the first publicly released generative AI guidance from the agency.
A view of Department of Energy headquarters on Feb. 9, 2024, in Washington, D.C. (Photo by J. David Ake/Getty Images)

Employees and contractors at the Department of Energy now have a new reference guide to help them navigate use of generative AI tools at the agency, including best practices and a note that ChatGPT is available for use by request.

That 61-page document was published and distributed on DOE’s internal network on June 14, a DOE spokesperson told FedScoop. The detailed reference guide constitutes the first such document on generative AI that the department has shared publicly, and while the guidance isn’t considered a formal policy, it provides a window into how the DOE is thinking about the technology.

“For us, it is a way to educate our agency and all the folks who will use it for many different purposes about what the opportunity space is [and] how to use it responsibly,” Helena Fu, director of Energy’s Office of Critical and Emerging Technologies and its chief AI officer, said during a panel at Scale’s Gov AI Summit on Tuesday.

The document covers best practices, existing government resources related to AI, and ideas for use cases within the department. Those potential use cases include creating voiceovers for educational videos, generating images from descriptions, drafting an initial set of interview questions, and writing up meeting minutes from audio recordings.


“As a best practice to mitigate privacy and security risks, users should not input nonpublic (sensitive) data into a GenAI system unless the appropriate measures have been undertaken to ensure that the rights and potential uses of the data are permitted, or they are using a tool which is appropriately configured and approved for their use case,” the guide states. “This best practice is critical for public or commercial systems where model, inputs, and outputs are not under DOE’s direct control.” 
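One way an organization might operationalize that best practice is a lightweight pre-submission screen that blocks prompts containing obviously sensitive patterns. The sketch below is purely illustrative: the patterns and markings shown are invented examples, not DOE's actual controls, and a real screen would need far broader coverage.

```python
import re

# Illustrative pre-submission screen for obvious sensitive patterns
# (SSN-like numbers, sensitivity markings). These patterns are invented
# for this example and are not DOE's actual configuration.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like number
    re.compile(r"\bOfficial Use Only\b", re.I),  # sensitivity marking
    re.compile(r"\bOUO\b"),                      # common abbreviation
]

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt matches any known sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(safe_to_submit("Summarize this public press release."))  # True
print(safe_to_submit("Employee SSN is 123-45-6789."))          # False
```

A screen like this only catches well-formed patterns; it cannot recognize sensitive content expressed in free prose, which is why the guide still puts the burden on users and on approved, appropriately configured tools.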

The document suggests that those looking to deploy generative artificial intelligence conduct a privacy impact assessment, provide tailored generative AI-focused training for employees with access to protected data, and implement full lifecycle stewardship of data. The guide also encourages prompt engineering training and evaluating generative AI systems for potential bias.

The guide strongly discourages Energy employees from using generative AI to produce “confidential or mission-critical information or data.” It also highlights that contract solicitation responses are confidential and that users need to understand what rights the government “has or doesn’t have” in regard to what data generative AI companies can use to train large language models. 

The guide was assembled by Energy’s GenAI Tiger Team, a group with more than 70 stakeholders and subject matter experts from across the department, the spokesperson said. The document was drafted between October and December, but the department waited to make it final until the Office of Management and Budget published its memo on the technology, they added.

The reference guide was ultimately completed in April and underwent a department-wide review process before it was shared internally and then publicly, the spokesperson said. Notably, the document is the second iteration of the guide. The first version was published for internal release in September 2023, according to a record of changes in the document.


In addition to guidance on how to approach the technology, the reference guide states that the DOE is “in the process of considering which GenAI services will be permitted for use based on comprehensive risk assessments. As decisions on services are made, specific guidelines for usage will be established.” 

So far, one of those permitted services is ChatGPT. 

The DOE spokesperson said that the Office of the Chief Information Officer “reviewed the risks and opted to empower staff to drive the Department’s productivity and innovation safely and securely.” The department’s headquarters is using the public version of the platform and unblocking the URL for employees who “justify a need through a by-request process,” the spokesperson said.

Those additional details about how the department is handling ChatGPT come after DOE previously confirmed it had blocked the tool but was making exceptions based on mission and business needs.

That block came as other agencies took similar actions to prevent employees from accessing generative AI tools on government networks and laptops. President Joe Biden’s executive order on the technology in October discouraged agencies from “imposing broad general bans or blocks on agency use of generative AI” and said they “should instead limit access, as necessary, to specific generative AI services based on specific risk assessments,” in addition to establishing guidance and several other measures.


Notably, the guide states that when generative AI plays a role in forming an idea, approach or invention at either Energy or its national laboratory network, the technology’s deployment must be acknowledged and cited. It also cautions against using generative AI in scenarios that might implicate regulations focused on fighting discrimination, like the Civil Rights Act. 

“Train, validate, and test GenAI models using representative datasets that are as ‘fair’ as possible, as defined by the context of the use case at the beginning of the design phase of the project,” the guide recommends. “After attempting to address fairness in the dataset, including filtering if needed, recheck the model to see if any new artificial bias has been created.”
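The recheck step the guide describes can be made concrete with a toy disparity metric: compare positive-label rates across groups before and after a filtering pass, since filtering can itself introduce new imbalance. The records, group names, and metric below are invented for illustration and are not drawn from the guide.

```python
# Illustrative sketch of rechecking a dataset for group-level imbalance
# after filtering, in the spirit of the guide's fairness recommendation.
# The records and groups here are invented for this example.

def positive_rate(records, group):
    """Fraction of records in `group` with a positive label."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in rows) / len(rows)

def disparity(records):
    """Absolute gap in positive-label rates between the two groups."""
    return abs(positive_rate(records, "A") - positive_rate(records, "B"))

dataset = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
print(f"gap before filtering: {disparity(dataset):.2f}")  # 0.25

# A naive filtering step (e.g. dropping rows judged low-quality) can
# itself create new imbalance, so the gap must be rechecked afterward.
filtered = [r for r in dataset if not (r["group"] == "B" and r["label"] == 1)]
print(f"gap after filtering: {disparity(filtered):.2f}")  # 0.50
```

Here the filter happens to remove only group B's positive examples, so the disparity doubles, which is exactly the kind of "new artificial bias" the guide says to check for after filtering.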

The guidance suggests having a “human in the loop,” developing AI literacy and using “grounding” as a method to mitigate hallucinations from generative AI tools.

“Use external information from trusted sources to prompt the GenAI model to generate a response based upon the retrieved, factual information within the given context,” the guide states. It points to Retrieval Augmented Generation as the most common method for grounding with large language models.
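The grounding pattern the guide describes can be sketched in a few lines: retrieve the most relevant trusted documents for a query, then build a prompt that instructs the model to answer only from that context. The word-overlap scoring, document store, and prompt template below are simplified stand-ins (real RAG systems typically use vector-similarity search), not DOE's implementation.

```python
# Minimal sketch of grounding via Retrieval Augmented Generation (RAG).
# The scoring function and prompt template are illustrative stand-ins.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (a stand-in
    for a real vector-similarity search) and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the
    retrieved, trusted context, which mitigates hallucination."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the trusted context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

trusted_docs = [
    "DOE headquarters unblocked ChatGPT through a by-request process.",
    "The reference guide was completed in April after an OMB memo.",
    "Unrelated note about facility parking policies.",
]
prompt = build_grounded_prompt("When was the guide completed?", trusted_docs)
print(prompt)
```

The assembled prompt, not the raw question, is what gets sent to the model, so its answer is anchored to the retrieved trusted sources rather than to whatever its training data happens to contain.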

When asked about plans to work with other AI providers, the spokesperson said that the agency’s “AI Hub” is “actively looking to pursue additional AI technologies.”


Over time, the document is expected to change. According to the spokesperson, the guide “is a living document that will be regularly reviewed and updated to reflect the most current knowledge, trends, risks, and best practices around the responsible use of GenAI as the technology, market, and regulatory landscape continue to evolve.”
