Commerce’s NTIA launches trustworthy AI inquiry 

The National Telecommunications and Information Administration has issued a request for comment on how government agencies should audit AI technology.
United States Department of Commerce Building (Photo by James Leynse/Corbis via Getty Images)

The National Telecommunications and Information Administration has launched an inquiry that will examine how companies and regulators can ensure artificial intelligence tools are trustworthy and work without causing harm.

Assistant Secretary of Commerce Alan Davidson announced the new initiative at an event at the University of Pittsburgh’s Institute of Cyber Law, Policy and Security.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” Davidson said.

He added: “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

As part of the exercise, which is focused on determining how the federal government can effectively regulate the evolving technology, NTIA is seeking evidence on what policies can support the development of AI audits, assessments, certifications and other mechanisms to “create earned trust in AI systems.”

The Department of Commerce agency has issued a request for comment to seek feedback from a range of parties across industry and academia.

According to NTIA, insights collected through the request for comment will inform the Biden administration’s work to establish a joined-up regulatory framework for the technology.

Respondents have 60 days to submit comments following the publication of the request for comment in the Federal Register, which includes instructions for submitting feedback.

The launch of NTIA’s inquiry follows the publication of a voluntary AI Risk Management Framework, which was issued in January by the National Institute of Standards and Technology.

That initial guidance document set out four core functions that NIST says are essential to building responsible AI systems: govern, map, measure and manage.

NIST’s AI framework document followed the Biden administration’s Blueprint for an AI ‘Bill of Rights,’ which was published in October 2022 and sought to address the potential discriminatory effects of certain AI technology.

That blueprint document contained five key principles for the regulation of the technology: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.
