A new bipartisan bill would require federal agencies to use the risk management framework outlined by the National Institute of Standards and Technology when using artificial intelligence.
The legislation from Sens. Jerry Moran, R-Kan., and Mark Warner, D-Va., would require the Office of Management and Budget to issue guardrails for federal entities that align with NIST’s AI Risk Management Framework, according to a release shared with FedScoop on Thursday. The bill would also require the administrator of Federal Procurement Policy and the Federal Acquisition Regulatory Council to ensure that agencies procure AI systems that incorporate the framework, and NIST would have to develop capabilities for evaluating AI acquisitions.
In the bill’s summary, shared with FedScoop, the lawmakers outlined potential risks of federal AI use involving data security, misinformation and a lack of accountability. Rep. Ted Lieu, D-Calif., meanwhile, is planning to introduce a companion bill in the House soon, according to the release.
Warner said in a statement that lawmakers “have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
Per the bill’s text, NIST would have only one year after the statute’s enactment to issue guidance for agencies’ AI risk management. That guidance would have to incorporate the framework’s standards and practices, provide adequate information on implementing cybersecurity protections, and include training on the framework.
OMB, meanwhile, would have just 180 days to issue guidance requiring agencies to incorporate the framework into their AI risk management protocols, consistent with NIST’s guidelines. The office has already released draft guidance for federal agencies using AI, which was announced by Vice President Kamala Harris during her visit to the United Kingdom for a tech-focused international summit.
“You can anticipate in coming months there will be focus on procurement, [that] will be a big piece of this,” Federal CISO Chris DeRusha said in a previous interview with FedScoop. “We get a lot of questions about clear guidance on what agencies can use, how do we vet generative AI and new technology coming on.”
Under the legislation’s procurement requirements, AI providers would have to adhere to specific actions outlined in the framework and provide “appropriate access” so that the head of each agency could adequately evaluate and verify data, models and parameters.
OMB would also have just 180 days after the bill’s enactment to provide agencies with guidance on recruiting AI experts who could assist in the development, procurement, deployment and assessment of the technology.
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” Moran said in the release. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data.”