The White House’s upcoming guidance for federal agencies using artificial intelligence, expected later this summer, is likely to focus on gathering and sharing information about AI experiments underway across government, along with best practices learned from those pilot programs.
The Office of Management and Budget, in its upcoming draft AI policy guidance, may focus on gathering empirical data on the use of AI tools within federal agencies and then sharing best practices and risks from those AI programs, according to Catherine Sharkey, an NYU law professor and one of the nation’s leading authorities on federal regulatory law.
“I don’t think they’ll lay out any comprehensive policy in the beginning; it first signals OMB’s interest and involvement in having a new explicit focus on regulating AI tools within federal agencies and providing a canvas of AI use in the government, followed by best practices,” Sharkey said.
“Most federal agencies are not fully apprised of AI experimentation happening in other agencies and that knowledge could be fruitful – sharing AI tools and resources and learning from others’ trials and errors,” Sharkey added.
She said OMB would likely look internally to federal agencies that have taken the lead on AI technology and policy, such as the Department of Health and Human Services, rather than to the government AI policies and regulations of European nations and other countries.
Federal agencies that approve or regulate consumer-facing uses of AI are most likely to take the lead in experimenting with AI in government, such as the Defense Department, the Food and Drug Administration, and the General Services Administration.
Major federal agencies like the Department of Veterans Affairs and the National Science Foundation have already begun experimenting internally with appropriate use cases for popular generative AI tools while building guardrails for government use of the technology.
The Biden administration in recent months has worked to hold private organizations and companies accountable for addressing bias that may be embedded in AI systems, while also promoting innovation. In October 2022, it published its Blueprint for an AI ‘Bill of Rights,’ which was followed by NIST’s voluntary AI risk management framework in January.
These recently introduced responsible AI guidelines can be adopted voluntarily by federal agencies, but they were not created explicitly for agency implementation, which is what makes OMB’s AI policy guidance for the federal government, expected later this summer, particularly relevant.
Commerce Secretary Gina Raimondo last week called NIST’s AI Risk Management Framework, first released in January, the “gold standard” for regulatory guidance on AI technology.
However, NIST’s AI framework and the G7 agreement contrast in some ways with the foundational rights-based framework laid out in the White House’s October 2022 Blueprint for an AI ‘Bill of Rights,’ which some AI experts have advocated as a model for future AI regulation.