U.S. Senate Judiciary Committee Chairman Chuck Grassley is demanding answers from two federal judges about whether they used artificial intelligence to draft court rulings that contained serious errors. The Republican senator from Iowa sent letters Monday to judges who withdrew flawed orders in July, marking the first high-profile congressional inquiry into potential AI misuse by the federal judiciary itself.
What you should know: Grassley targeted U.S. District Judge Julien Xavier Neals in New Jersey and U.S. District Judge Henry Wingate in Mississippi, both of whom withdrew written rulings after lawyers identified factual inaccuracies and other serious errors.
• The senator asked whether the judges, their law clerks, or court staff used generative AI or automated tools to prepare the problematic orders.
• He also demanded explanations about the “human drafting and review” process, the cause of the errors, and what measures their chambers have taken to prevent similar mistakes.
The big picture: This congressional scrutiny represents a significant escalation in oversight of AI use within the federal court system, extending beyond lawyers to judges themselves.
• Lawyers have increasingly faced sanctions from judges across the country for apparent AI misuse, with dozens of cases resulting in fines or other penalties over the past few years.
• “No less than the attorneys who appear before them, judges must be held to the highest standards of integrity, candor, and factual accuracy,” Grassley wrote.
Key details about the problematic rulings: Both withdrawn orders contained errors serious enough to raise questions about how they were prepared.
• Wingate’s order in a civil rights lawsuit included “incorrect plaintiffs and defendants” and allegations that weren’t in the original complaint, according to lawyers for the state.
• Neals’ ruling in a securities lawsuit contained factual errors and included quotes that defense attorneys said weren’t actually in the cited cases.
• Neither judge explained in court filings how the errors made it into their decisions.
What sources revealed: A person familiar with the New Jersey case told Reuters that AI-generated research was included in a draft decision that was accidentally placed on the public docket before proper review.
• The research was prepared by a temporary assistant, according to the source.
• Neals’ chambers reportedly have “a strict policy against the unauthorized use of AI to support opinions.”
Transparency concerns: Grassley questioned why the original faulty rulings were removed from court dockets and asked whether the judges will restore them “to preserve a transparent history of the court’s actions in this matter.”
• Wingate declined to make his original, flawed ruling available on the public docket after a request from state lawyers.
• He only acknowledged that the original contained “clerical errors referencing improper parties and factual allegations.”