Legislation to codify NAIRR, authorize safety body among nine AI bills passed by House panel

Several of the bills have counterpart or similar measures moving forward in the Senate.
Reps. Zoe Lofgren, D-Calif., and Frank Lucas, R-Okla., testify during a hearing before the House Committee on Rules in July 2023. (Photo by Alex Wong/Getty Images)

Nine bipartisan bills were passed by the House Science, Space and Technology Committee on Wednesday, including legislation to formally establish the National AI Research Resource and authorize the AI Safety Institute under a new name. 

All nine measures advanced out of the committee via voice votes, and in several cases, identical or similar bills were advanced by a Senate panel earlier this summer. While some differences would still need to be resolved between the proposals in each chamber, moving the legislation forward indicates bipartisan and bicameral momentum on research, education and safety proposals.

“These bills take valuable steps to expand the use of AI, develop a skilled AI workforce, and improve our tools for AI research and development,” said Rep. Frank Lucas, R-Okla., the chairman of the committee. “They don’t impose regulations and burdensome requirements. Instead, they’re designed to help American businesses and workers to keep us at the cutting edge of global competition.”

The CREATE AI Act (H.R. 5077; S. 2714), which was among the bills the committee passed Wednesday, was also favorably reported by the Senate Committee on Commerce, Science, and Transportation in July. That legislation would codify the NAIRR, a shared research infrastructure for AI that’s currently being operated in a pilot format by the National Science Foundation. 

Similarly, the NSF AI Education Act (H.R. 9402; S. 4394) was also voted out of the House committee Wednesday after its counterpart was approved by the Senate Commerce panel. That bill would support NSF education and professional development activities related to AI.

The House also advanced its authorization for what is now the AI Safety Institute at the Department of Commerce’s National Institute of Standards and Technology. While the Senate has a proposal to authorize the institute that the Senate Commerce Committee advanced, the two pieces of legislation are fairly different, per staffers.

The House’s AI Advancement and Reliability Act (H.R. 9497) — which was officially introduced on Monday by Rep. Jay Obernolte, R-Calif. — is more “stripped down” than the Senate’s proposal and would just authorize a center and a stakeholder consortium, according to a spokesperson for the Republican majority on the committee. The Senate bill is the Future of Artificial Intelligence Innovation Act (S. 4769).

“The Senate bill had some extra provisions unrelated to the Center like creating testbeds, a grand challenge prize, and international collaboration. We didn’t include those because they fall under NIST core activities,” the spokesperson said in an email.

Democratic staff for the committee said the biggest differences were with the focus of the mission on AI safety and misuse. “In drafting this, we benefited from the AISI releasing a vision document. As a result, there are other differences, but nothing we can’t address in conference,” the Democratic staff said in an email. 

During the markup, Rep. Zoe Lofgren, D-Calif., ranking member of the House Science Committee and one of the bill’s cosponsors, noted there’s still work to be done.

Lofgren called the legislation “strong” but said that there are still disagreements that need to be addressed to move it forward. Specifically, she said the bill “significantly underfunds the activities” it authorizes at NIST and pointed to the agency’s “severe resource constraints” as it works to address AI safety goals. Lofgren also said she was “disappointed” that under the bill, the institute would be renamed the Center for AI Advancement and Reliability.

“While this change may seem to be cosmetic and the mission of the AI Safety Institute would not change, the name change could create new confusion domestically and internationally,” Lofgren said.

In an effort to address the funding, Rep. Haley Stevens, D-Mich., offered and later withdrew an amendment that would strike the $10 million funding authorization level for the institute, allowing for additional funding to be added.

Stevens said that other countries, such as the U.K. and Canada, have poured millions more in funding into their safety bodies than the U.S. and the “low amount” of funding puts the work of American researchers at risk.

Although the base bill highlights “many critical tasks for the AI Safety Institute,” Stevens said, “we’re missing the mark by authorizing just another $10 million, and we’re setting ourselves up to not succeed — to not be as successful as we’d want to be.” 

She ultimately withdrew the amendment but noted “the point had to be made.” 

In response, Lucas, another one of the bill’s cosponsors, said he believed $10 million is “more than enough” for the body to complete its work currently. Lucas said NIST spent $5 million in fiscal year 2024 on all of its AI work. 

“The center is only a fraction of the work NIST is doing on AI, and we’re authorizing this at double NIST’s total AI spending,” he said. “Removing the authorization entirely amounts to handing over a blank check, and I don’t think that would be responsible, given all the work we need to do.”

Lucas, however, said that going forward, he’s “happy to discuss” the funding level and thanked Stevens for withdrawing the amendment. 

The funding discussion was the latest iteration of a conversation that has been going on since the creation of the AI Safety Institute. The initial $10 million for the institute came as the overall budget for NIST was cut nearly 12%. In July, the agency secured an additional $10 million in funding through a one-time Technology Modernization Fund investment. While Stevens was glad to see that funding, she told FedScoop at the time that “truly investing in AI safety requires consistent and sufficient appropriations.”

The institute was first announced last year to assess the risks of AI and develop guidance to mitigate those issues. Currently, it’s working with a consortium of more than 200 companies and organizations to address actions in President Joe Biden’s AI executive order, including risk management for generative AI and red-teaming guidance.

Other bills that advanced out of the committee Wednesday were the Small Business Artificial Intelligence Advancement Act (H.R. 9197), the Nucleic Acid Screening for Biosecurity Act (H.R. 9194), the LIFT AI Act (H.R. 9211), the Workforce for AI Trust Act (H.R. 9215), the Expanding AI Voices Act (H.R. 9403), and the AI Development Practices Act (H.R. 9466).