OMB draft AI guidance defines role of top agency AI official, adds to inventories

Newly released guidance comes amid a series of new Biden administration AI regulatory efforts.
Office of Management and Budget Director Shalanda Young speaks during a daily news briefing at the White House on March 10, 2023, in Washington, D.C. (Photo by Alex Wong/Getty Images)

The Office of Management and Budget on Wednesday released its draft guidance for federal agencies using artificial intelligence. The guidance, announced as Vice President Kamala Harris visited the United Kingdom for an international summit focused on the technology, covers a range of AI applications that are or might be used by the government.

The memo comes as the Biden administration beefs up its AI regulatory effort. On Monday, the president signed a long-awaited executive order on artificial intelligence. During her trip, the vice president has also announced a series of new AI initiatives, including the creation of an AI Safety Institute and a funders program that involves philanthropic organizations focused on the technology.

“It’s pushing for and enabling agencies to really experiment, but also ensuring that if we’re getting into the health use cases and public safety use cases, that we have appropriate guardrails around that before we go too far,” Federal CISO Chris DeRusha said in an interview with FedScoop.

The memo strongly emphasizes AI innovation, instructing agencies to build IT infrastructure to support AI, collect data to train AI and evaluate potential applications of generative AI. At the same time, it spells out AI systems that the government considers to be safety- or rights-impacting, such as automated security systems, risk assessments or emotion detection technology. Under the guidance, those systems would be subject to new requirements.

“In a wide range of contexts including health, education, employment, federal benefits, law enforcement, immigration, transportation and critical infrastructure, the draft policy would create specific safeguards for uses of AI that impact the rights and safety of the public,” the White House said in a fact sheet regarding the OMB draft guidance.

“This includes requiring that federal departments and agencies conduct AI impact assessments, identify, monitor and mitigate AI risks, sufficiently train AI operators, conduct public notice and consultation for the use of AI and offer options to appeal harms caused by AI,” the White House added.

As part of the guidance, each federal agency must designate a chief AI officer responsible for coordinating the use of AI, promoting AI innovation and managing AI risks. The guidance also requires that agencies convene AI governance bodies and develop enterprise AI strategies.

The guidance stipulates that an agency may choose an existing official for its chief AI role, such as a chief technology officer, chief data officer or similar official, “provided they have significant expertise in AI and meet the other requirements” spelled out by OMB.

That official’s responsibilities will include serving as the agency’s senior adviser on AI, developing a plan to comply with the guidance, and creating and maintaining the agency’s annual AI use case inventory.

DeRusha said that OMB has already seen the challenges that AI presents, but acknowledged the technology’s effectiveness in supporting tasks. He emphasized the limited knowledge around AI, specifically generative AI, and the need to categorize use cases to support the office’s future guidelines.

“That’s why we have this inventory of the 700-plus use cases, because it’s really important for us to understand, are those the right things for us to be focused on as pilots or when we’re still experimenting a little bit with the tech,” DeRusha said. “We need to know that those are the right decisions and the right, safe uses. That’s why we break it down by the use cases, break these things down from there by tasks.”

The White House said that the OMB memo will build upon the Biden administration’s Blueprint for an AI ‘Bill of Rights’, which takes a rights-based approach to regulating AI, as well as the National Institute of Standards and Technology’s risk-based AI Risk Management Framework, which experts have compared and contrasted in recent months.

The memo also changes the process for compiling AI use case inventories, which are already required by a 2020 executive order and legislation, including by requiring agencies to share more details on systems that could impact rights and safety. The Department of Defense faces new reporting requirements for its AI use cases as well.

Notably, challenges with AI inventories were the subject of a major Stanford report published in 2022. FedScoop has continued to report on compliance issues within these disclosures, including errors and inconsistencies.

The memo does not apply to AI used as part of a national security system.