The National Science Foundation is starting to experiment internally with appropriate use cases for popular generative AI chatbots like ChatGPT while also building safe guardrails for government use of such technology.
The Foundation’s Chief Information Officer, Dorothy Aronson, said Wednesday that the independent agency, which supports and funds major science and engineering research across universities and institutions in the U.S., has started considering the role ChatGPT and other such AI tools could play within the agency.
“We are building a set of use cases for our appropriate use of ChatGPT so that we can have pros and cons in our guardrails,” Aronson said during FedScoop’s ITModTalks on Wednesday.
“So the tool is amazing. But right now, for example, we’re very careful about the way we ask questions, because we don’t want to release privileged information into the wild without really understanding where it’s going,” she added.
Major AI developer OpenAI released its ChatGPT tool in November, an artificial intelligence chatbot that has astounded users by writing short college essays, cover letters, unique poetry, and a weirdly passable Seinfeld scene in which Jerry needs to learn the bubble sort algorithm.
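For readers unfamiliar with the algorithm from that scene: bubble sort works by repeatedly sweeping through a list and swapping adjacent elements that are out of order, stopping once a full pass makes no swaps. A minimal Python sketch:

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the largest i+1 elements are in their final positions,
        # so each pass can scan a shorter prefix.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a full pass with no swaps means the list is sorted
            break
    return items
```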
OpenAI yesterday released a powerful new image- and text-understanding AI model, GPT-4, which the company calls “the latest milestone in its effort in scaling up deep learning.”
ChatGPT does not represent a revolution in machine learning as such, but it is significant in how users interact with it. Previous versions of OpenAI's large language models required users to prompt the model with a single input. ChatGPT, which relies on a tuned version of GPT-3.5, OpenAI's flagship large language model, makes it far easier to interact with that model by making it possible to carry on a fluid conversation with a highly trained AI.
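The interaction difference can be illustrated with a toy sketch (the structure below is an assumption for illustration, not OpenAI's actual API): a one-shot completion model sees only the latest prompt, while a chat interface accumulates a running message history so every turn carries the context of the conversation.

```python
# Hypothetical sketch of conversational state; not OpenAI's real API.
class Conversation:
    def __init__(self):
        self.messages = []  # running history: one dict per turn

    def add(self, role, text):
        # Each turn records who spoke ("user" or "assistant") and what was said.
        self.messages.append({"role": role, "content": text})

    def prompt_for_model(self):
        # A chat model is fed the whole history, not just the latest input,
        # which is what lets follow-up questions refer back to earlier turns.
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)

conv = Conversation()
conv.add("user", "Explain bubble sort.")
conv.add("assistant", "It repeatedly swaps adjacent out-of-order items.")
conv.add("user", "Now explain it as a Seinfeld scene.")
```

A single-prompt model handed only the last message would have no idea what "it" refers to; the accumulated history is what makes the follow-up coherent.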
The National Science Foundation is excited about ChatGPT’s potential use within the agency, Aronson said, but highlighted that federal employees and citizens who use it for government services need to be careful about what information they feed highly sophisticated AI tools.
“So our main concerns about ChatGPT are what data you provide it in questions. And in general, we would prefer people be conservative in their use of it, so we’ve got a few guardrails set up, like you can’t determine an NSF grant award winner using ChatGPT,” Aronson said.
Prior to her time at NSF, Aronson served as the Director of the Office of Management Operations at the Defense Advanced Research Projects Agency (DARPA), the agency where the internet and AI first made major breakthroughs.
Aronson was speaking at ITModTalks, which was hosted in Washington, D.C., by FedScoop.