New York Lawyer Faces Court Hearing Over AI Tool’s Inaccurate Legal Research

A New York lawyer is under scrutiny after his law firm used the AI tool ChatGPT for legal research, leaving the court in an unprecedented situation when a filing cited legal cases that do not exist. The lawyer says he did not know the tool could produce false information.

The incident raises questions about the accuracy and reliability of AI-generated content and the risks of relying on it for legal research.

Lawyer Faces Consequences for AI-Generated Legal Research

The underlying case involved a man suing an airline over an alleged personal injury. To support the claim, the plaintiff’s legal team submitted a brief citing several previous court cases as precedent for why the suit should be allowed to proceed. The airline’s lawyers, however, later told the judge that they could not find several of the cases cited in the brief.

In an order demanding an explanation from the plaintiff’s legal team, Judge P. Kevin Castel stated: “Six of the submitted cases appear to be fictitious judicial decisions, complete with fabricated quotes and false internal citations.”

Subsequent filings revealed that the research had not been conducted by Peter LoDuca, the lawyer representing the plaintiff, but by a colleague at the same firm, Steven A. Schwartz, an attorney with more than 30 years of experience. Mr. Schwartz had used ChatGPT to find prior cases comparable to the one at hand.

In a written statement, Mr. Schwartz said that Mr. LoDuca had no part in the research and no knowledge of how it had been carried out. Mr. Schwartz expressed deep regret at having relied on the chatbot, saying he had never used it for legal research before and had been unaware that its output could be false. He pledged never again to use AI for legal research without thoroughly verifying its output.

Screenshots attached to the filing show a conversation between Mr. Schwartz and ChatGPT. In one message, Mr. Schwartz asks, “Is Varghese a real case?”, referring to Varghese v. China Southern Airlines Co Ltd, one of the cases no other lawyer could locate. ChatGPT answers yes, and when asked for its source insists, after “double checking”, that the case is genuine and can be found on legal reference databases such as LexisNexis and Westlaw. It likewise asserts that the other cases it supplied to Mr. Schwartz are authentic.

Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to explain why they should not face disciplinary action at a hearing on June 8.

ChatGPT, launched in November 2022, is now used by millions of people for its ability to answer questions in natural, human-like language and to mimic different writing styles. It was trained on text from the internet as it existed in 2021; it does not look up information in a verified database, which is why it can produce convincing but false answers.

The episode has heightened concerns about the risks of artificial intelligence, including the spread of misinformation and the introduction of bias into legal research.
