Countries within the Group of Seven political forum have signed a declaration agreeing on the need for “risk-based” AI regulations.
Top technology officials from Britain, Canada, the EU, France, Germany, Italy, Japan and the United States on Sunday signed the joint statement, which seeks to establish parameters for how major countries govern the technology.
The statement said: “We reaffirm that AI policies and regulations should be human centric and based on democratic values, including protection of human rights and fundamental freedoms and the protection of privacy and personal data.”
It added: “We also reassert that AI policies and regulations should be risk-based and forward-looking to preserve an open and enabling environment for AI development and deployment that maximizes the benefit of the technology for people and the planet while mitigating its risks.”
The reference to a risk-based approach to regulating AI in the document follows the publication of NIST’s AI risk management framework in January, which sought to establish some “rules of the road” for the use of the technology by government and the private sector in the United States.
The G7 declaration also comes as the use of AI technology receives increased public attention following the launch of new mainstream tools including OpenAI’s ChatGPT, which the federal government and Congress have started considering for internal use.
In the U.S., Commerce Secretary Gina Raimondo last week called NIST’s AI Risk Management Framework (AIRMF), which was first released in January, the “gold standard” for the regulatory guidance of AI technology.
However, NIST’s AI framework and the G7 agreement contrast in some ways with the foundational rights-based framework laid out in the White House’s October 2022 Blueprint for an AI ‘Bill of Rights,’ which some AI experts have advocated as a model for AI regulation going forward.