- The Daily Scoop Podcast
GSA nominee open to reviewing Grok AI selection process
Edward Forst told lawmakers Thursday that he wasn’t privy to the decision-making behind the General Services Administration’s deal with xAI’s Grok — but if confirmed to lead the agency, he signaled openness to examining the process that led to the procurement of the generative AI chatbot known for having an antisemitic meltdown. During a Senate Homeland Security & Governmental Affairs Committee hearing, ranking member Gary Peters, D-Mich., asked the GSA administrator nominee if he shared his concerns about Grok, pointing to the day the tool “produced racist and antisemitic content widely across [Elon] Musk’s social media platform.” Forst, a former private equity and financial services executive, told Peters that he had “not been a part of the decision” by the GSA to contract for the chatbot from the Musk-owned AI firm. With some additional pressing by Peters, Forst acknowledged that procuring a tool with a history of racist and antisemitic posting is “not, I think, the signal we would necessarily want to send to the country.” Peters attempted to get Forst to commit to pausing use of Grok until the committee received “documentation about the details of the procurement, including whether the GSA actually performed a comprehensive risk assessment.” Forst wouldn’t go that far on Grok, which once referred to itself as “MechaHitler.” But he did say his commitment to the lawmakers is that he will “meet with the team, and I’ll understand the process used in selecting them, and I’ll make sure that we have all the facts and if there was incompleteness to the process, that we’ll rectify it.”
A pair of federal judges said staff use of generative artificial intelligence tools and premature docket entry were behind error-ridden orders they issued, according to letters made public by Senate Judiciary Chairman Chuck Grassley on Thursday. Judges Henry T. Wingate and Julien Xavier Neals, who sit on the U.S. District Courts for the Southern District of Mississippi and District of New Jersey, respectively, both stated in letters that their law clerks had used AI tools to draft orders that were then entered into the dockets before they had been reviewed. Both judges also described measures to prevent repeat issues. The letters come after the orders from both judges were riddled with errors — including misquotes and references to parties not in the current cases — and later withdrawn. Speculation swirled as to whether those judges used AI, which is known to hallucinate, in their orders. Earlier this month, Grassley, R-Iowa, sent letters to both jurists asking for an explanation. The communications published Thursday are responsive to those inquiries. In his response, Neals indicated that previous reporting by Reuters that a “temporary assistant” had used ChatGPT was correct. “In doing so, the intern acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy.” Neals said he prohibits generative AI use in legal research and drafting of opinions and orders. While that policy was verbal in the past, he said it is now a “written unequivocal policy that applies to all law clerks and interns, pending definitive guidance from the AO through adoption of formal, universal policies and procedures for appropriate AI usage.”
The Daily Scoop Podcast is available every Monday-Friday afternoon.
If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.