Generative AI isn’t quite cleared for takeoff at NASA

Federal agencies are racing to develop policies for tools like ChatGPT.
A person sits in the White Flight Control Room at the Johnson Space Center's Mission Control Center in Houston, Texas. (Photo by MARK FELIX/AFP via Getty Images)

Back in May, NASA’s chief information officer Jeff Seaton emailed the space agency’s staff to make clear that tools like ChatGPT, Google Bard, and Meta’s Llama had not been cleared for any widespread use with “sensitive NASA data.” The email, which does not seem to have been publicized until now, also noted that a community of “potential early adopters” across the agency was working to investigate “certain” AI technologies.

The notice, which FedScoop obtained after it was included in a solicitation that NASA posted online, comes as federal agencies begin to outline policies related to the use of new AI tools, and particularly, text-generating software like ChatGPT.

The email also serves as a reminder that large federal agencies are wrestling with both the risks and the opportunities that come with generative AI tools. Privacy researchers have warned that sensitive information entered into AI models such as ChatGPT may end up in the public domain. In the private sector, these concerns have led companies like JPMorgan to clamp down on staff use of the technology.

“OCIO is coordinating closely with leading industry partners, fellow government organizations including the Federal CIO, Chief Data Officer, and Chief Information Security Officer Councils to understand the significant amount of policy guidance emerging around Generative AI as well as how other organizations are adopting generative AI capabilities,” said Seaton in the May 22 email. “OCIO is also connecting with commercial providers to understand how Generative AI will be integrated into widely available tools, such as the Microsoft 365 suite, visual tools like Adobe Illustrator, and the often-used Google search.”

Seaton also pointed to a range of issues raised by popular generative AI tools. Some of these programs are hosted in the cloud on systems that store information outside the United States, which means NASA data could be exposed to unauthorized and non-US individuals. He warned that these tools aren’t necessarily accurate, and that they raise ethical and intellectual property questions, too.

In a statement, a NASA spokesperson said: “NASA provided written guidance to employees on generative artificial intelligence technologies in May 2023. While use of AI technologies on NASA systems is not authorized at this time, the agency’s Office of the Chief Information Officer is still evaluating use of some technologies in a secure online environment.”

They added: “We also are evaluating AI tech in collaboration with others within the agency. This investigation is ongoing, and NASA will provide employees an update later, and codify any guidance in a future NASA Policy Directive. Finally, NASA also is closely working with other federal agencies and staying informed on evolving federal guidance and all policy related to AI.”

Notably, the space agency is still developing a unified approach to artificial intelligence. A report from NASA’s inspector general published in early May noted that the agency is struggling to track its own usage of the technology. NASA does not operate with a single standard definition of AI, and, the report outlined, “does not have a singular designation or classification mechanism to accurately classify and track AI or to identify AI expenditures within the [a]gency’s financial system.” 

NASA hasn’t completely eschewed the idea of working with this kind of AI, though. The Guardian reported in June that NASA engineers were developing a tool akin to ChatGPT that would facilitate information-sharing between astronauts and spacecraft. The space agency has also worked with a type of digital engineering called “evolved structures,” which uses design software that incorporates AI-generated designs.

Nor is NASA the only agency attempting to rein in the use of generative AI tools. Across the federal government, agencies are grappling with how employees and contractors should and shouldn’t use these programs, and are developing preliminary policies.

An instructional letter the General Services Administration distributed to staff in June, for example, outlined “controlled access” to generative AI large language model tools on the agency network and equipment. 

Under the “interim policy,” GSA said it would block access to third-party generative AI large language model endpoints from the GSA network and government equipment, but would make exceptions for research. The policy provided guidance on “responsible use” of those tools, such as not inputting non-public information.

At the Environmental Protection Agency, technology leaders took a similarly cautious approach, blocking use of the tools on an “interim basis” in a May internal memo. The EPA said it may reconsider that decision, however, and “allow use of such tools at a future time after further analysis and the implementation of appropriate guidance,” a spokesperson said in an email.

The Administration for Children and Families took a more permissive approach in its “interim policy” for staff and contractors in May. That memo didn’t block the tools, but similarly advised employees not to input non-public, personally identifiable, or protected health information.

In a LinkedIn post about the memo, Kevin Duvall, ACF’s chief technology officer and acting chief information officer, described the agency’s approach as “balancing risk, while still exploring this technology and its potential to empower federal government employees to serve citizens even better.”

The Department of Health and Human Services, of which ACF is a sub-agency, is taking a similar approach to the tools.

“HHS is reminding its employees that they should always follow HHS existing policies regarding personal identifiable information and data protection, data storage, transmission, and sharing, and that these tools fall under existing policies and guidance of the HHS IT Rules of Behavior,” the department’s chief information officer, Karl S. Mathias, told FedScoop in a June written statement.

Mathias said at the time that HHS operating division chief information security officers were advised not to put sensitive information into tools like ChatGPT.

Relatedly, in June, the National Institutes of Health published a notice clarifying that NIH peer reviewers are prohibited from using generative AI tools to develop critiques of grant applications and contract proposals.

Editor’s note, 7/19/23 at 3:33 p.m. ET: This story was updated to include comment from NASA. The piece has also been updated to note that NASA confirmed that guidance was sent in May, and to include details of the EPA’s current approach to using AI tools.
