UPDATED 18:46 EDT / MAY 28 2023


Lawyer’s reliance on ChatGPT leads to false case citations in airline lawsuit

A New York lawyer has found himself in trouble in a lawsuit against the airline Avianca Holdings S.A. after submitting a brief containing nonexistent case citations generated by ChatGPT.

The case involved a man named Roberto Mata, who sued Avianca, claiming he was injured when a metal service cart struck his knee during a flight. An injury claim of this kind would ordinarily be unremarkable, aside from broader cultural observations about how litigious the U.S. is, but the case took an interesting turn after the airline moved to have it dismissed.

The New York Times reported Saturday that in response to the filing, lawyers representing Mata submitted a 10-page brief citing more than a half-dozen relevant court cases, arguing that the cases show the “tolling effect of the automatic stay on a statute of limitations.”

There was one huge problem, however: none of the cases was genuine. The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, had used OpenAI LP’s ChatGPT to write it.

Schwartz, who is said to have practiced law for three decades, defended himself, claiming that he wasn’t aware of the AI’s potential to generate false content. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline and vowed not to use ChatGPT again without thorough verification. The unusual situation prompted the judge to call a hearing on potential sanctions against Schwartz, describing the incident as an “unprecedented circumstance” filled with “bogus judicial decisions.”

The incident has sparked discussion in the legal community about the value and risks of AI. Stephen Gillers, a legal ethics professor at New York University School of Law, told the Times that the case highlights that legal professionals can’t simply take output from an AI and incorporate it into court filings. “The discussion now among the bar is how to avoid exactly what this case describes,” Gillers added.

The case sets a precedent for the role AI plays in legal research and argument construction, raising serious concerns about the reliability of AI tools in the legal profession. It also underscores the hazards of trusting AI outputs without independent verification, not only in court filings but in general use.

Photo: Pixabay
