Use of Perplexity, ChatGPT behind error-ridden orders, federal judges say
A pair of federal judges said staff use of generative artificial intelligence tools and premature docket entry were behind error-ridden orders they issued, according to letters made public by Senate Judiciary Chairman Chuck Grassley on Thursday.
Judges Henry T. Wingate and Julien Xavier Neals, who sit on the U.S. District Courts for the Southern District of Mississippi and the District of New Jersey, respectively, stated in their letters that staff in their chambers had used AI tools to draft orders that were then entered on the dockets before they had been reviewed. Both judges also described measures to prevent a recurrence.
The letters come after orders issued by both judges were found to be riddled with errors, including misquotes and references to parties not involved in the cases, and were later withdrawn. Speculation swirled over whether the judges had used AI, which is known to hallucinate, in drafting their orders. Earlier this month, Grassley, R-Iowa, sent letters to both jurists asking for an explanation. The letters published Thursday respond to those inquiries.
In his response, Neals confirmed earlier Reuters reporting that a “temporary assistant” had used ChatGPT. “In doing so, the intern acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy,” he wrote.
Neals said he prohibits the use of generative AI in legal research and in the drafting of opinions and orders. That policy was previously conveyed verbally, he said, but is now in writing.
“I now have a written unequivocal policy that applies to all law clerks and interns, pending definitive guidance from the AO [the Administrative Office of the U.S. Courts] through adoption of formal, universal policies and procedures for appropriate AI usage,” Neals said.
Neals also said the draft appeared on the docket before routine reviews were carried out, a problem Wingate described in his letter as well.
“The standard practice in my chambers is for every draft opinion to undergo several levels of review before becoming final and being docketed, including the use of cite checking tools,” Wingate said. “In this case, however, the opinion that was docketed on July 20, 2025, was an early draft that had not gone through the standard review process.”
According to Wingate, the clerk in his chambers used Perplexity “strictly as a foundational drafting assistant to synthesize publicly available information on the docket” and didn’t input any sensitive or non-public information.
Wingate said he had also taken steps to ensure the same error doesn’t happen again, “including a plan whereby all draft opinions, orders, and memorandum decisions undergo a mandatory, independent review by a second law clerk before submission to me. All cited cases are printed from Westlaw and attached to a final draft.”
As the boom in generative AI tools has put the technology into more hands, errors attributable to its use have risen as well, including in the legal profession.
According to an article published in Cornell’s Journal of Empirical Legal Studies earlier this year, “Legal practice has witnessed a sharp rise in products incorporating artificial intelligence.” The article found that accuracy claims made even by AI tools designed specifically for legal research were overstated.
In response to a request for comment about the incident, a spokesperson for Perplexity said the AI tool “never claims to be 100% accurate, but we do claim to be the only AI company 100% focused on building accurate AI.”
“We aggressively test and measure accuracy in Perplexity daily (today we were 1.84% inaccurate). While we haven’t read the report or the underlying data, we strongly advise all AI users to check the work of their AI assistants. This is why we invented citations in AI in the first place,” the emailed statement said.
OpenAI, the maker of ChatGPT, didn’t immediately respond to a request for comment.
In response to the judges’ letters, Grassley commended their honesty and said he was pleased with the steps they outlined to prevent future issues.
“Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,” Grassley said. “The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy.”