Large Language Poor Role Model: Lawyer dismissed for using ChatGPT’s false citations

The legal profession is confronting the real-world consequences of AI hallucination as recent graduates face career setbacks from overreliance on chatbots. A case in Utah has highlighted the dangerous intersection of legal practice and AI tools, where fake citations in court filings led to sanctions, firing, and a pointed judicial warning about AI’s limitations. This incident demonstrates how professional standards are evolving in response to AI adoption, with courts and firms establishing new guardrails to protect both the justice system and vulnerable professionals.

The big picture: A recent law school graduate lost his job after including AI-hallucinated legal citations in a court filing, marking the first fake citation case discovered in Utah’s legal system.

  • Judge Mark Kouris ordered sanctions after finding multiple mis-cited cases and at least one completely fictional legal precedent generated by ChatGPT.
  • The incident highlights the growing tension between convenient AI tools and professional responsibility in highly regulated fields like law.

Key details: The law firm claimed the graduate was working as an unlicensed law clerk who failed to disclose his ChatGPT use when drafting the document.

  • Attorneys Douglas Durbano and Richard Bednar faced judicial scrutiny for submitting the filing without proper verification of its accuracy.
  • The law firm had no AI policy in place at the time but quickly established one after the incident.

What the court said: Judge Kouris emphasized that “every attorney has an ongoing duty to review and ensure the accuracy of their court filings.”

  • The court noted that the attorneys “fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT.”
  • Kouris warned that “the legal profession must be cautious of AI due to its tendency to hallucinate information.”

The consequences: Attorney Bednar was ordered to pay the opposition’s attorneys’ fees and donate $1,000 to “And Justice for All,” a legal aid organization.

  • The law clerk who used ChatGPT was fired despite the absence of formal policies against such AI use.
  • The sanctions were relatively mild because the attorneys quickly accepted responsibility, unlike other lawyers who have denied AI use when caught.

Why this matters: Fake legal citations cause significant harm by wasting court resources, increasing costs for opposing parties, and potentially depriving clients of proper legal representation.

  • The case represents a cautionary tale as professional industries grapple with integrating AI tools while maintaining ethical standards and quality control.

Behind the numbers: The fictional case, “Royer v. Nelson, 2007 UT App 74, 156 P.3d 789,” was easily identifiable as fake. When prompted for details, ChatGPT provided only vague information that should have raised red flags.

The broader context: This incident reflects growing concerns about students and recent graduates becoming overly dependent on AI tools without understanding their limitations.

  • Law firms are now facing the challenge of educating new hires about responsible AI use in professional contexts where accuracy is paramount.
  • Even legal non-profits acknowledge they are “incorporating AI in their services” while emphasizing that “every attorney has a legal and professional responsibility” to ensure accuracy.
Source: Law clerk fired over ChatGPT use after firm’s filing used AI hallucinations
