Inside NASA’s deliberations over ChatGPT

More than 300 pages of documents provide insight into how the space agency thought about generative AI, just as ChatGPT entered the public lexicon.
The NASA logo is seen at its headquarters in Washington, D.C., on June 7, 2022. (Photo by Stefani Reynolds / AFP via Getty Images)

In the months after ChatGPT’s public release, leaders inside NASA debated the merits and flaws of generative AI tools, according to more than 300 pages of emails obtained by FedScoop, revealing both excitement and concerns within an agency known for its cautious approach to emergent technologies. 

NASA has so far taken a relatively proactive approach to generative AI, which the agency is considering for tasks like summarization and code-writing. Staff are currently working with the OpenAI tools built into Microsoft’s Azure service to analyze use cases. NASA is also weighing generative AI capabilities from its other cloud providers — and it’s in discussions with Google Cloud on plans to test Gemini, the competitor AI tool formerly known as Bard. 

Though NASA policy prohibits the use of sensitive data on generative AI systems, that won’t be the case forever. Jennifer Dooren, the deputy news chief of NASA, told FedScoop that the agency is now working with “leading vendors to approve generative AI systems” for use on sensitive data and anticipates those capabilities will be available soon. While the agency’s most recent AI inventory only includes one explicit reference to OpenAI technology, an updated list with more references to generative AI could be released publicly as soon as October. 

In the first weeks of 2023, as ChatGPT entered the public lexicon, the agency's internal discussions surrounding generative AI appeared to focus on two core values: researching and investing in technological advances, and encouraging extreme caution on safety. Those conversations also show how the agency had to factor in myriad authorities and research interests to coordinate its use.

“NASA was like anyone else during the time that ChatGPT was rolled out: trying to understand services like these, their capabilities and competencies, and their limitations, like any of us tried to do,” said Namrata Goswami, an independent space policy expert who reviewed the emails, which were obtained via a public records request. 

She continued: “NASA did not seem to have a prior understanding of generative AI, as well as how these may be different from a platform like Google Search. NASA also had limited knowledge of the tools and source structure of AI. Neither did it have the safety, security, and protocols in place to take advantage of generative AI. Instead, like any other institution [or] individual, its policy appeared to be reactive.” 

NASA’s response

Emails show early enthusiasm and demand internally for OpenAI technology — and confusion about how and when agency staffers could use it. In one January 2023 email, Brandon Ruffridge, from the Office of the Chief Information Officer at NASA’s Glenn Research Center, expressed frustration that without access to the tool, interns would have to spend time on “less important tasks” and that engineers and scientists’ research would be held back. In another email that month, Martin Garcia Jr., an enterprise data science operations lead in the OCIO at the Johnson Space Center, wrote that there was extensive interest in getting access to the tech.

By mid-February, Ed McLarney, the agency’s AI lead, had sent a message noting that, at least informally, he’d been telling people that ChatGPT had not been approved for IT use and that NASA data should only be used on NASA-approved systems. He also raised the idea of sending a workforce-wide message, which ended up going out in May. In those opening weeks, the emails seem to show growing pressure on the agency to establish permissions for the tool. 

“We have demand and user interest through the roof for this. If we slow roll it, we run [a] high risk of our customers going around us, doing it themselves in [an] unauthorized, non-secure manner, and having to clean up the mess later,” McLarney warned in a March email to other staff focused on the technology. Another email, from David Kelldorf, chief technology officer of the Johnson Space Center, noted that “many are chomping at the bits to try it out.”

But while some members of the space agency expressed optimism, others urged caution about the technology's potential pitfalls. In one email, Martin Steele, a member of the data stewardship and strategy team at NASA's Information, Data, and Analytics Services division, warned against assuming that ChatGPT had "intelligence" and stressed the importance of "The Human Element." In a separate email, Steven Crawford, senior program executive for scientific data and computing with the agency's Science Mission Directorate, expressed concerns about the tool's potential to spread misinformation. (Crawford later told FedScoop that he's now satisfied by NASA's guardrails and has joined some generative AI efforts at the agency.)

Email from Steven Crawford, April 10, 2023.

In those first weeks and months of 2023, there were also tensions surrounding security and existing IT procedures. Karen Fallon, the director of Information, Data, and Analytics Services for NASA’s Chief Information Office operations, cautioned in March that enthusiasm for the technology shouldn’t trump agency leaders’ need to follow existing IT practices. (When asked for comment, NASA called Fallon’s concerns “valid and relevant.”)

Email from Karen Fallon, March 16, 2023.

In another instance, before NASA’s official policy was publicized in May, an AI researcher at the Goddard Space Flight Center asked if it would be acceptable for their team to use their own GPT instances with code that was already in the public domain. In response, McLarney explained that researchers should not use NASA emails for personal OpenAI accounts, be conscious about data and code leaks, and make sure both the data and code were public and non-sensitive. 

NASA later told FedScoop that the conversation presented "a preview of pre-decisional, pending CIO guidance" and that it aligned with NASA IT policy — though the agency noted that it doesn't encourage employees to spend their own funds on IT services for space agency work.

Email from Martin Garcia, Jr., April 7, 2023.

“As NASA continues to work to onboard generative AI systems it is working through those concerns and is mitigating risks appropriately,” Dooren, the agency’s deputy news chief, said. 

Of course, NASA’s debate comes as other federal agencies and companies continue to evaluate generative AI. Organizations are still learning how to approach the technology and its impact on daily work, said Sean Costigan, managing director of resilience strategy at the cybersecurity company Red Sift. NASA is no exception, he argued, and must consider potential risks, including misinformation, data privacy and security, and reduced human oversight. 

“It is critical that NASA maintains vigilance when adopting AI in space or on earth — wherever it may be — after all, the mission depends on humans understanding and accounting for risk,” he told FedScoop. “There should be no rush to adopt new technologies without fully understanding the opportunities and risks.”

Greg Falco, a systems engineering professor at Cornell University who has focused on space infrastructure, noted that NASA tends to play catchup on new computing technologies and can fall behind the startup ecosystem. Generative AI wouldn’t necessarily be used for the most high-stakes aspects of the space agency’s work, but could help improve efficiency, he added.

NASA generative AI campaign.

“NASA is and was always successful due to [its] extremely cautious nature and extensive risk management practices. Especially these days, NASA is very risk [averse] when it comes to truly emergent computing capabilities,” he said. “However, they will not be solved anytime soon. There is a cost/benefit scale that needs to be tilted towards the benefits given the transformative change that will come in the next [three-to-five] years with Gen AI efficiency.”

He continued: “If NASA and other similar [government] agencies fail to hop on the generative AI train, they will quickly be outpaced not just by industry but by [nation-state] competitors. China has made fantastic government supported advancements in this domain which we see publicly through their [government] funded academic publications.”

Meanwhile, NASA continues to work on its broader AI policy. The space agency published an initial framework for ethical AI in 2021 that was meant to be a “conversation-starter,” but emails obtained by FedScoop show that the framework drew criticism — and agency leaders were told to hold off. The agency has since paused co-development of practitioners’ guidance on AI to focus instead on federal AI work, but plans to return to that effort “in the road ahead,” according to Dooren.

The space agency also drafted an AI policy in 2023, but ultimately decided to delay it to wait for federal directives. NASA now plans to refine and publish the policy this year. 
