
Klobuchar and Thune introduce foundational AI legislation

New bipartisan Senate bill would require both NIST and the Commerce Department to establish guidelines and a certification process for critical-impact AI systems.
Sen. John Thune, R-S.D., speaks at the U.S. Capitol on March 28, 2023, in Washington, D.C. (Photo by Kevin Dietsch/Getty Images)

New bipartisan legislation in the Senate would set the stage for “light-touch” foundational legislative efforts for the development, identification and deployment of artificial intelligence. 

The AI Research, Innovation and Accountability Act of 2023, introduced Wednesday by Sens. John Thune, R-S.D., Amy Klobuchar, D-Minn., and other members of the Senate Commerce, Science and Transportation Committee, aims to provide clear distinctions on AI-generated content and other identification for AI systems, including those deemed “high impact” and “critical impact.” 

The bill would also require the National Institute of Standards and Technology to develop recommendations for agencies regarding “high-impact” AI systems, which the Office of Management and Budget would then implement. 

Agency use of these AI systems would be subject to a certification framework that complies with Commerce Department standards. The bill would also require the department to establish a working group to offer recommendations for a voluntary, industry-led consumer education effort for AI systems.


AI “comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” Klobuchar said in a statement. “This bipartisan legislation is one important step of many necessary towards addressing potential harms. It will put in place commonsense safeguards for the highest-risk applications of AI — like in our critical infrastructure — and improve transparency for policymakers and consumers.”

The legislation aims to provide clearer distinctions for AI-generated content, calling on NIST to research and develop standards for “providing both authenticity and provenance information for online content.” It would also support NIST’s development of a methodology to mitigate unanticipated behavior from AI systems. 

Companies deploying “critical-impact” AI would have to perform risk assessments consistent with NIST’s existing AI Risk Management Framework, and these evaluations would then have to be submitted to the Commerce Department. 

The same “critical-impact” AI systems subject to those risk assessments would also have to self-certify compliance with the Commerce Department’s standards, following a certification process the department would be required to outline.

“As this technology continues to evolve, we should identify some basic rules of the road that protect Americans and consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention,” Thune said in a statement. “This legislation would bolster the United States’ leadership and innovation in AI while also establishing commonsense safety and security guardrails for the highest-risk AI applications.”
