OMB’s AI guidance falls short on privacy, watchdog says
Governmentwide guidance on artificial intelligence from the Office of Management and Budget falls short of addressing myriad privacy-related risks agencies should take into account, according to a new watchdog report.
The Government Accountability Office reviewed OMB's AI guidance and convened a three-day virtual panel of a dozen privacy experts from various professional fields. The takeaway from the watchdog and those experts was that AI guidance from the White House office “does not fully address all the identified privacy-related risks and challenges.”
“Given the risks and challenges, additional guidance from OMB could help ensure agencies take appropriate steps to protect the privacy of sensitive data when using AI,” the GAO said in a press release accompanying the report. “Without this additional direction, risks are increased that agencies’ use of AI would disclose sensitive data, or compromise privacy in other ways.”
The GAO suggested some simple ways OMB could bolster its privacy guidance, such as enlisting the Chief AI Officers Council or the Federal Privacy Council in “forums for interagency information-sharing about strategies or best practices for addressing AI-related privacy challenges.”
But there's likely more work to be done, given the conclusions reached by the panel. The experts identified 10 privacy-related challenges organizations face when using AI and found that OMB guidance fully addressed only two.
Those two, per the GAO report, were detailing the “lack of skills among federal workforce to implement AI while mitigating privacy risk,” and examining “scalability of implementing AI systems with privacy protections.”
In the experts' assessment, the remaining eight challenges had been only “partially addressed” by OMB in its guidance as of January. Those challenges are:
- Auditing and evaluating AI models with sensitive information
- Difficulty disentangling sensitive data from products
- Lack of best practices/guidance for mitigating privacy-related risks
- Lack of performance metrics and incentives for entities to implement robust/sufficient AI privacy practices
- Lack of public AI literacy
- Lack of technology to implement AI with privacy protections
- Lack of transparency on how sensitive data are used in AI
- Tradeoffs between performance and privacy
The report acknowledges some overarching issues that are out of OMB's control, namely the fact that the country lacks a comprehensive federal privacy law, which leaves “gaps” and possibly “inconsistent levels of protection.”
Nevertheless, improving its AI guidance for agencies is critical given concerns over “potential invasions of privacy from data aggregation” that could affect all Americans, the watchdog said. The emerging technology could disclose personal and private information in raw datasets, it added, while agencies may lack the resources to ensure that data is protected.
The GAO shared its observations with OMB and asked why the eight privacy-related challenges were not fully addressed, but the office didn’t respond with comments or other documentation, according to the report.
“Without additional information or direction on addressing these challenges, agencies will be hindered in protecting privacy when using AI, as well as making the public aware of the associated risks and steps they are taking … to mitigate them,” the report concluded.