The Department of Commerce’s National Institute of Standards and Technology is seeking collaborators for its newly announced AI Safety Institute Consortium, following the release of the Biden administration’s executive order on the technology.
In a post to the Federal Register and a corresponding press release Thursday, NIST invited interested organizations to write letters describing their expertise in developing or deploying trustworthy AI, and/or creating models or products that support trustworthy AI.
The agency called the consortium a “core element of the new NIST-led U.S. AI Safety Institute,” which was announced Wednesday at the U.K. AI Safety Summit 2023. NIST said the group would be essential to its efforts to work with stakeholders to carry out its new responsibilities under the administration’s AI executive order (EO 14110).
The order, among other things, requires that NIST develop a companion resource to its AI Risk Management Framework that’s focused on generative AI, create guidance on differentiating between human and AI-generated content, and establish benchmarks for AI evaluation and auditing.
“The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, companies and impacted communities to help ensure that AI systems are safe and trustworthy,” NIST Director and Under Secretary of Commerce for Standards and Technology Laurie E. Locascio said in a release.
The consortium, NIST said in a frequently asked questions page, will help establish “a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.”
NIST said the consortium’s activities will begin once enough organizations have completed and signed letters of interest that meet all the requirements, but no earlier than Dec. 4. The agency will also hold a workshop on Nov. 17 for organizations interested in participating.