The National Science Foundation released a memo Thursday establishing guidelines for the use of generative artificial intelligence in its merit review process, the procedure through which the agency ensures that submitted proposals are reviewed fairly and competitively.
NSF’s guidelines bar agency reviewers from uploading any proposal content, related records or review information to non-approved generative AI tools, according to a news release. Additionally, those submitting proposals for funding awards are “encouraged to indicate in the project description” whether generative AI technology was used to develop their proposal, as well as the extent of usage and methodology.
So far, NSF has approved only publicly accessible, commercial generative AI that is “explicitly for the use of public information,” according to an agency spokesperson. NSF publicly disclosed its use cases earlier this month, and they do not include any use of generative AI that would interact with proposal content.
The new memo states: “While NSF will continue to support advances in this new technology, the agency must also consider the potential risks posed by it. The agency cannot protect non-public information disclosed to the third-party GAI from being recorded and shared. To safeguard the integrity of development and evaluation of proposals in the merit review process, this memo establishes guidelines for its use by reviewers and proposers.”
Regarding its broader approach to generative AI, NSF has said it will approve use cases that apply the technology safely to agency activities, subject to proper use guidance.
“NSF is exploring options for safely implementing GAI technologies within NSF’s data ecosystem,” an agency spokesperson told FedScoop. “NSF is developing a set of approved applications for the use of generative AI. When approved, they will be published on NSF’s public inventory of AI applications.”
The agency added that it is “still exploring how generative AI might be responsibly applied towards its business processes,” but has not yet determined whether any such tool should be a commercial or in-house product.
“NSF expects to develop an AI strategy for identifying and removing barriers to the responsible use of AI and achieving enterprise-wide advances in AI maturity to meet cross-agency guidance defined in [Office of Management and Budget’s] proposed memorandum,” an NSF spokesperson told FedScoop on Friday.
The agency’s merit review process memo comes shortly after Dr. Chaitanya Baru, a senior adviser at NSF, teased the document’s release last Friday during a Digital Trade & Data Governance Hub event at George Washington University.
“The notion of a proposal, it’s original ideas that you have come up with. If you’re using generative AI, then that generative AI has used data from somewhere else,” Baru said. “It’s not your original ideas; it’s data that the gen AI program picked up from somewhere, and now you’re trying to pass it off.”
Baru emphasized that plagiarism is a potentially serious problem in creating content for proposals, stating that generating a proposal entirely with generative AI “sounds like a very bad idea.”
“I’m worried about the current [generative AI models] because, in some ways, they are so powerful,” Baru said, noting that so-called AI hallucinations are essentially “lying. They just lie, they just make up stuff. That’s not a good thing. … It’s not clear that everyone is fully aware of all of that. Checking all of that is hard.”
Madison Alder contributed to this article.