Senate Minority Whip John Thune, R-S.D., is working to introduce a “light touch” artificial intelligence bill aimed at protecting consumers and entrepreneurs by requiring AI companies to conduct risk and impact assessments of critical-impact AI systems and then certify those systems, according to a draft of the legislation obtained by FedScoop.
The AI Research, Innovation and Accountability Act of 2023, which was first reported in July, has been significantly revised since then, with new emphasis on online content authenticity, the study of AI usage in government, government standards for detecting AI-generated media, generative AI transparency, and enforcement of the bill through monetary penalties and outright bans on violating AI systems and companies.
The bill would require the creation of a 15-person, multifaceted AI Certification Advisory Committee within the Commerce Department to help propose testing, evaluation, validation and verification (TEVV) standards to be used for the certification of critical-impact AI systems. Companies developing or deploying AI systems would then ultimately be responsible for using such standards to assess their impact and self-certify their safety to the Commerce Department.
The Commerce Department would be tasked with enforcing the legislation, either through civil actions against noncompliant AI companies, carrying penalties in the hundreds of thousands of dollars, or through outright bans on the deployment of violating critical-impact AI systems.
“I can confirm that Sen. Thune is working on a light-touch AI bill that would help set some basic rules of the road that both protects consumers and entrepreneurs (doesn’t want to squelch positive innovation in this space),” a Senate source familiar with the bill told FedScoop.
“He’s continuing to have discussions with his colleagues, but he’s determined to have this be a bipartisan product. He’s interested in a substantive result, not a messaging bill,” the source added.
The Senate staffer said there is no deadline for introducing the bill, but it has been a work in progress for several months, and Thune’s goal is “to introduce [it] sooner rather than later.”
Axios Pro reported on the previous version of Thune’s bill in July.
Per a different Senate source and an industry executive familiar with the matter, Sen. Amy Klobuchar, D-Minn., is the lead Democrat working with Thune on the legislation, which is expected to garner more bipartisan support before being formally introduced. Klobuchar declined to comment for this story.
The legislation would affect any AI system found on public-facing websites or applications available to consumers in the U.S., with some exemptions, such as for nonprofit AI research or platforms that don’t employ more than 500 people or collect personal data of more than 1 million people per year.
The bill defines critical-impact AI systems as those deployed for non-defense purposes and intended to make decisions with a legal or similarly significant effect involving: the collection of biometric personal data; the management and operation of critical infrastructure, as defined by the PATRIOT Act; or the criminal justice system, as defined by the Omnibus Crime Control and Safe Streets Act of 1968. The definition also covers any AI system that poses a significant risk to safety or to rights afforded under the U.S. Constitution.
The bill would require the Under Secretary of Commerce for Standards and Technology to tackle AI-generated misinformation and disinformation by carrying out research to facilitate the development and standardization of authenticity and provenance information for content generated by human users and AI systems.
The legislation would also amend the National Institute of Standards and Technology (NIST) Act to require the agency to identify best practices and methods for detecting AI-generated content, including text, audio, images and video, as well as safeguards to mitigate potentially adversarial or compromising AI output.
The bill would require the Comptroller General, within a year of its enactment, to study the statutory, regulatory and other policy barriers that prevent the federal government’s adoption of AI systems, as well as the use of AI systems to improve the functionality of government, and to submit that study to the relevant committees in the House of Representatives.