The Government Accountability Office will conduct a review of the potential harms caused by generative AI tools like ChatGPT. The congressional watchdog's plan to assess the technology follows a request that Sens. Ed Markey, D-Mass., and Gary Peters, D-Mich., sent to the comptroller general last month.
“[I]t has already become apparent that generative AI is a double-edged sword, carrying with it a broad range of serious harms. Scammers have begun using generative AI for manipulative voice, text, and image synthesis,” the senators wrote in a June 22 letter. “The output from generative AI can replicate damaging racist and sexist stereotypes. Large language models can also ‘hallucinate,’ generating false content, including potentially defamatory statements.”
FedScoop learned of GAO's plans after obtaining the agency's response letter to the senators, sent at the end of last month. That letter, written by GAO congressional relations managing director A. Nicole Clowers, confirmed that GAO had accepted the work as within the scope of its authority and noted that “staff with the required skills would be available shortly.”
“We have accepted the request and plan to meet with Congressional staffers soon to discuss our approach to the work,” said Charles Young, managing director of public affairs at GAO, in an email to FedScoop on Thursday. “That approach and time frames for issuance will be determined as we get started on the effort.”
GAO is not currently using generative AI in its auditing work, he added.