A New York attorney is set to appear in court over his law firm’s use of the artificial intelligence tool ChatGPT for legal research, after a legal brief was found to cite cases that do not exist.
Judge Castel called the situation an “unprecedented circumstance.” The attorney who used ChatGPT admitted to the court that he had not known the tool could generate false content. While ChatGPT can produce original text, it carries disclaimers warning users that it may provide inaccurate information.
The lawyer, Steven A. Schwartz, was working on a personal injury case against an airline. His legal team’s brief cited several earlier cases as precedent supporting their client’s claim. However, the airline’s lawyers alerted the judge that they could not find several of the cases referenced in the brief.
In an order demanding an explanation from the plaintiff’s legal team, Judge Castel wrote: “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
It came to light that Peter LoDuca, the lawyer for the plaintiff, had not prepared the research. Instead, his colleague, Schwartz, used ChatGPT to look for similar past cases. In a written statement, Schwartz affirmed that LoDuca was not involved in the research and was unaware of the method used.
Schwartz expressed deep regret for relying on the AI chatbot, saying he had not known that its content could be false, and pledged never again to use AI for legal research without thoroughly verifying its output.
The filing included screenshots of a conversation between Schwartz and ChatGPT, wherein ChatGPT incorrectly confirmed the existence of a case that no other lawyer could locate.
Both Schwartz and LoDuca, who work for the law firm Levidow, Levidow & Oberman, have been ordered to explain at a hearing on June 8 why they should not face disciplinary action.