NTIA calls for independent audits of AI systems in new accountability report

The Department of Commerce bureau also pushed for federal guardrails for AI systems, including additional disclosures, in the report.

The National Telecommunications and Information Administration on Wednesday called for independent audits of high-risk artificial intelligence systems, part of a new report from the Commerce Department bureau that also included eight recommendations for federal agency use of AI. 

The NTIA’s AI Accountability Policy Report recommends that the federal government take action to establish guidance, support and regulations for AI systems. Within those three categories, NTIA calls for agencies to increase transparency through disclosures, such as AI nutrition labels, encourage research and evaluations on AI tools, require contractors and suppliers to “adopt sound AI governance and assurance practices” and more. 

In addition to its focus on federal involvement in guidelines for AI audits and auditors, NTIA recommends that the government strengthen its capacity to “address risks and practices related to AI across sectors of the economy,” which includes maintaining a registry of “high-risk AI deployments, AI adverse incidents and AI system audits.”

“NTIA’s AI Accountability Policy recommendations will empower businesses, regulators and the public to hold AI developers and deployers accountable for AI risks, while allowing society to harness the benefits that AI tools offer,” NTIA Administrator Alan Davidson said in a statement.
Significantly, the NTIA called for the creation of AI disclosure cards that mimic “nutrition labels,” detailing a product’s name, whether there is a human in the loop, the model type, the data retention frequency, the base model and more. NTIA stressed in the report that standardized, accessible, plain-language labeling could “enhance the comparability and legibility of disclosures.”

The agency noted that the report is just “one element” of its work to meet the Biden administration’s commitment to establishing guardrails and promoting innovation regarding AI. The report follows a request for comment submitted by the agency last year. 

The request sought feedback about policy development for AI mechanisms (such as audits and assessments) meant to encourage trustworthiness. In particular, the NTIA inquired about what data would be necessary to conduct audits and what approaches might be needed in various industry environments. 

Hodan Omaar, senior policy analyst at the Center for Data Innovation, said in a statement that the focus on regulatory frameworks throughout the report “will not help the United States become a leading global adopter of AI.”

“The United States should pursue policies that encourage U.S. businesses to hire more AI developers, integrators and engineers, not divert those resources to hiring more auditors and lawyers,” Omaar added. “Policymakers should instead rely on voluntary frameworks because they are more adaptable, dynamic, and effective at addressing risks in a rapidly evolving AI landscape.”
When asked for comment in response to Omaar’s statement, NTIA directed FedScoop to its press release and fact sheet.