Stanford report: Despite federal AI progress, barriers to governance persist
Although the federal government has made strides in its work to implement artificial intelligence governance policies over the past several years, agencies are still experiencing challenges, a new Stanford report said.
The report, published Friday by Stanford Human-Centered Artificial Intelligence and the Stanford Regulation, Evaluation, and Governance Lab, reviews agencies’ actions to comply with AI directives, focusing primarily on progress in designating agency chief AI officers, publishing compliance plans for their AI governance efforts, and requesting funding to support that work.
Researchers concluded that an “overwhelming reliance” on “dual-hatted” CAIOs (officials holding two positions at once) reflects AI talent challenges within the government, that the focus on risk compliance and fast deadlines overshadows the “broader purpose of the CAIO role,” and that variation in compliance “underscores the fragmented nature of AI innovation and governance.”
While the researchers said the government’s progress has been “substantial,” effective government leadership on AI “remains hindered by limited transparency, resource constraints, and inconsistencies in meeting mandates.”
“Agencies’ uneven designations of Chief AI Officers (CAIOs), limited public disclosure of Compliance Plans, and insufficient budgetary requests to support AI initiatives all highlight systemic barriers to fully realizing the vision of a cohesive, ‘whole-of-government’ approach to AI,” the report said.
A spokesperson for the Office of Management and Budget didn’t immediately respond to a request for comment on the findings.
The report, which was commissioned by the Administrative Conference of the United States, is the latest assessment of the federal government’s AI work by Stanford researchers who have previously looked into progress on past AI directives. A comprehensive December 2022 report, for example, detailed challenges with the legal and policy framework for AI governance that existed at that time and included findings that agency AI use case inventories were inconsistent.
Since that paper, the Biden administration released its executive order on AI governance and OMB issued corresponding memos that outlined specific steps for agencies to take to effectuate that directive, including its governance memo (M-24-10). The paper presents a similar systematic review of some of those newer requirements.
Generally, researchers recommended more “visibility and conceptualization of the CAIO role,” finding that just 30% of the 266 agencies reviewed had publicly disclosed their CAIO. Disclosure was better among large independent agencies and Chief Financial Officers Act agencies, where 94% of CAIOs were publicly disclosed.
Of the CAIOs that have been publicly announced, 89% are dual-hatted roles and almost all were internal appointments.
In interviews with the researchers, officials justified dual-hatted roles “as pragmatic and practical, given the tight deadlines and limited budgetary resources,” the report said. But while some CAIOs said creating a standalone role might run into feasibility challenges, those in large agencies “acknowledged the benefits of having a fully dedicated role.”
“The complexity and volume of AI-related work, along with responsibilities such as responding to inquiries from Congress, oversight bodies, and the media, can strain dual-hatted appointees and prevent them from focusing on their work broadly,” the report said.
Those officials mostly come from within the government. Just one agency brought in an official from outside of government, and formal AI training, while not a requirement, isn’t common among CAIOs, the report said. While the researchers acknowledged that the IT, cybersecurity, and data experience most CAIOs have is valuable institutional knowledge, the report said that background might limit their exposure to AI industry best practices and developments.
In fact, the report cited one interviewee who “expressed the sense that there was too little actual knowledge about AI within the CAIO Council itself” and said that several others “recognized the lack of understanding about the distinct capabilities and challenges around AI compared to the traditional software that civil service agencies work with.”
Agency compliance plans for OMB’s AI governance memo, meanwhile, showed “significant improvement over prior assessments,” though they still varied in detail, transparency, and focus. Most of the agencies’ published compliance plans referenced establishing an AI governance body, but only 60% reported they’d developed internal guidance and 33% referenced creating safeguards and oversight mechanisms, the report said.
Researchers said they found in interviews that the focus on compliance with risk management and the rapid deadlines for reporting “risks fueling a culture of compliance theater, where formal adherence to rules takes precedence over substantive leadership in advancing AI innovation.”
Funding requests to support the new AI work also varied widely: 65% of agencies did not request funding specifically for their AI work in their fiscal year 2025 budget requests, the researchers found. Among those that did, the average request was $270,000 to support the CAIO office, though the Department of Defense was an outlier with its $435 million request.
With the Biden administration in its final days, AI governance policy is likely to enter a period of change. President-elect Donald Trump is expected to repeal Biden’s executive order and replace it, but it’s not clear what a new policy might look like or how it would affect the areas reviewed. CAIOs, for example, haven’t been controversial positions, and there have been bipartisan proposals to codify those roles.