GAO suggests policy reform to mitigate generative AI’s human, environmental risks

While leaders in Washington have the option to maintain the status quo around artificial intelligence, the Government Accountability Office suggests that legislators and policymakers take action to address the negative effects AI has on the public and the environment.
In a report released Tuesday on generative AI’s human and environmental effects, the federal watchdog said that Congress, agencies and industry could encourage AI developers to use government frameworks, like those that come from GAO or the National Institute of Standards and Technology, to defend against harmful AI-generated content that compromises safety, security and privacy.
The GAO also said that while generative AI has revolutionary potential, it uses “large amounts of energy and water.” Commercial developers have not yet released information on the water consumed in training generative AI models, and GAO said policymakers could encourage developers and researchers to “create more resource-efficient models and training techniques.”
“The benefits and risks of generative AI are unclear, and estimates of its effects are highly variable because of a lack of available data,” the GAO wrote. “The continued growth of generative AI products and services raises questions about the scale of benefits and risks.”
While developers could take it upon themselves to inform users of their community policies, GAO suggested that the government encourage industry to use publicly available public sector frameworks, like its own AI Accountability Framework or NIST’s AI Risk Management Framework.
GAO pointed directly to issues like unsafe systems, including those with model hallucinations, that introduce risks to users. “Limitations in assessment techniques and the choice of metrics may prevent accurate predictions of system capabilities,” the report states. “Another potential emergent safety risk is a loss of control, in which a system may devolve to threatening users with blackmail, claiming to spy on individuals and conducting other harmful behavior.”
Security and data privacy issues are also subjects of concern for the GAO. Insufficiently secured generative AI, according to the watchdog, could disclose personal information from the “vast amount of data” the systems require. Similarly, attackers can circumvent the security of generative AI systems, facilitating “unsafe and privacy-compromising uses.”
Because generative AI is rapidly evolving, the GAO said, definitive statements about risk are difficult to make, since developers don’t disclose some key technical information regarding the technology’s human and environmental effects.
In recent years, data center water consumption has received attention, but estimates specific to generative AI, as the GAO found, are limited. The watchdog pointed to an academic paper estimating that training a certain generative AI model could consume the equivalent of 25% of the water in an Olympic-sized swimming pool.
GAO suggested that both government and industry leaders could “consider increasing efforts to reduce environmental effects, including use of existing energy infrastructure and reuse of hardware and supporting infrastructure.”
However, the reduction of environmental effects would likely require improving data collection and reporting from industry to better understand AI’s impact on the environment and aid policymakers.