‘Avoid a precautionary approach’ when regulating AI applications, OMB tells agencies

A new memo directs agencies to regulate AI products in a way that doesn't stifle innovation.

Federal agencies now have guidance from the White House on how to regulate artificial intelligence applications produced for the U.S. market.

The Trump administration’s goal is to make sure agencies don’t stifle innovation when issuing rules intended to serve consumers or protect national security. As software makers expand their use of AI, the federal government is likely to be called upon to step in to regulate the technology.

“While narrowly tailored and evidence-based regulations that address specific and identifiable risks could provide an enabling environment for U.S. companies to maintain global competitiveness, agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation,” the memo states.

Agencies also might need to address inconsistent, burdensome or duplicative state laws that hurt the national market for technology, the memo says.


The Office of Management and Budget issued the document Tuesday, nearly 21 months after President Trump called for it in an executive order on Feb. 11, 2019. It was supposed to arrive no more than 180 days after the order. The White House’s Office of Science and Technology Policy, Domestic Policy Council and National Economic Council also participated in drawing up the memo.

“Through this memorandum, the United States is taking the lead to set the regulatory rules of the road for artificial intelligence,” said Michael Kratsios, U.S. Chief Technology Officer, in a statement. “The U.S. approach will strengthen the nation’s AI global leadership and promote trustworthy AI technologies that protect the privacy, security, and civil liberties of all Americans.”

OMB lists 10 stewardship principles for federal oversight of AI applications:

  • Public trust in AI, such as regulations that reduce accidents or protect privacy to build support for the technology.
  • Public participation — allowing people to provide feedback and engage in rulemaking, especially when AI uses their information.
  • Scientific integrity and information quality, so agencies can defend the need for regulation.
  • Risk assessment and management, for transparently determining when AI harm is unacceptable.
  • Benefits and costs — not only in using AI, but also in the liabilities for the decisions it makes.
  • Flexibility in development, including making AI technology-agnostic and building it in a way that can compete internationally.
  • Fairness and non-discrimination, to eliminate bias AI may introduce into decision making.
  • Disclosure and transparency — helping non-experts understand how AI works and helping technical experts understand how the technology reached a decision.
  • Safety and security, such as instituting proper controls, protecting data and countering adversarial AI, as well as considering national security ramifications.
  • Interagency coordination — ensuring federal agencies share experiences and keep AI policies predictable.

The memo says agencies should always consider nonregulatory approaches to address AI risks — such as issuing sector-specific policy guidance or frameworks, creating pilot programs and experiments, and instituting voluntary consensus standards or voluntary frameworks.


The development and use of AI will benefit from agencies controlling access to federal data and models for research and development purposes, communicating pros and cons to the public, releasing standards like metrics that industry can but isn’t obligated to use, and consulting international frameworks and coordinating with trade partners, according to the memo.

Agencies that regulate AI are expected to submit their plans to comply with OMB’s memo to the Office of Information and Regulatory Affairs by May 17, 2021. Those agencies must identify their regulatory authorities and the AI-related datasets they collect from entities they regulate.

“The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency’s regulatory authorities,” the memo says. “OMB also requests agencies to list and describe any planned or considered regulatory actions on AI.”

As OMB thinks about AI in the broader economy, federal agencies like the General Services Administration have been testing commercial machine-learning models for their own use. The National Institute of Standards and Technology also has prepared guidance on technical requirements of trustworthy AI.

The Government Accountability Office is working on an AI oversight framework for continuously monitoring agencies’ progress with the technology, and the Department of Defense is exploring methods for combating adversarial AI.
