AI is good for a lot of things—namely cheating on stuff and pretending like you’re more productive than you actually are. Recently, this affliction has spread to a number of professions where you would have thought the work ethic is slightly better than it apparently is.
Case in point: lawyers. Lawyers apparently love chatbots like ChatGPT because they can help them power through the drudgery of writing legal briefs. Unfortunately, as most of us know, chatbots are also prone to making stuff up and, more and more, this is leading to legal blunders with serious implications for everybody involved.
The New York Times has a new story out on this unfortunate trend, noting that, more and more, punishments are being doled out to lawyers who are caught sloppily using AI (these punishments can involve a fine or some other minor inconvenience). Apparently, due to the stance of the American Bar Association, it’s okay for lawyers to use AI in the course of their legal work. They’re just supposed to make sure that the text that the chatbot spits out is, you know, correct, and not full of fabricated legal cases—which is something that seems to keep happening. Indeed, the Times notes:
…according to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for A.I. blunders. Some of those stem from people’s use of chatbots in lieu of hiring a lawyer. Chatbots, for all their pitfalls, can help those representing themselves “speak in a language that judges will understand,” said Jesse Schaefer, a North Carolina-based lawyer…But an increasing number of cases originate among legal professionals, and courts are starting to map out punishments of small fines and other discipline.
Now, some lawyers are apparently calling out other lawyers for their blunders, and are trying to create a tracking system that can compile information on cases involving AI misuse. The Times notes the work of Damien Charlotin, a French attorney who started an online database to track legal blunders involving AI. Scrolling through Charlotin’s website is definitely sorta terrifying, since there are currently 11 pages’ worth of cases involving this numbskullery (the researchers say they’ve identified 509 cases so far).
The newspaper notes that there is a “growing network of lawyers who track down A.I. abuses committed by their peers” and post them online, in an apparent effort to shame the behavior and alert people to the fact that it’s happening. So far, however, it’s not clear that the effort is having the impact it needs to. “These cases are damaging the reputation of the bar,” Stephen Gillers, an ethics professor at New York University School of Law, told the newspaper. “Lawyers everywhere should be ashamed of what members of their profession are doing.”