Expert witness used Copilot to make up fake damages, irking judge


Judge calls for a swift end to experts secretly using AI to sway cases.

A New York judge recently called out an expert witness for using Microsoft's Copilot chatbot to inaccurately estimate damages in a real estate dispute whose outcome partly hinged on an accurate assessment of those damages.

In an order Thursday, Judge Jonathan Schopf warned that, "due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues," any use of AI should be disclosed before testimony or evidence is admitted in court. Admitting that the court "has no objective understanding as to how Copilot works," Schopf suggested that the legal system could be disrupted if experts started overly relying on chatbots en masse.

His warning came after an expert witness, Charles Ranson, dubiously used Copilot to cross-check calculations in a dispute over a $485,000 rental property in the Bahamas that had been included in a trust for a deceased man's son. The court was being asked to assess if the executrix and trustee—the deceased man's sister—breached her fiduciary duties by delaying the sale of the property while admittedly using it for personal vacations.

To win, the surviving son had to prove that his aunt breached her duties by retaining the property, that her vacations there were a form of self-dealing, and that he suffered damages from her alleged misuse of the property.

It was up to Ranson to calculate what the son would have been owed had the aunt sold the property in 2008, as compared to the proceeds of the actual sale in 2022. But Ranson, an expert in trust and estate litigation, "had no relevant real estate expertise," Schopf said, finding that Ranson's testimony was "entirely speculative" and failed to consider obvious facts, such as the pandemic's impact on rental prices or trust expenses like real estate taxes.

Seemingly because Ranson lacked relevant real estate experience, he turned to Copilot to fill in the blanks and crunch the numbers. The move surprised Internet law expert Eric Goldman, who told Ars that "lawyers retain expert witnesses for their specialized expertise, and it doesn't make any sense for an expert witness to essentially outsource that expertise to generative AI."

"If the expert witness is simply asking a chatbot for a computation, then the lawyers could make that same request directly without relying on the expert witness (and paying the expert's substantial fees)," Goldman suggested.

Perhaps the son's legal team wasn't aware of how big a role Copilot played. Schopf noted that Ranson couldn't recall what prompts he used to arrive at his damages estimate. The expert witness also couldn't recall any sources for the information he took from the chatbot and admitted that he lacked a basic understanding of how Copilot "works or how it arrives at a given output."

Ars could not immediately reach Ranson for comment. But in Schopf's order, the judge wrote that Ranson defended using Copilot as a common practice for expert witnesses like him today.

"Ranson was adamant in his testimony that the use of Copilot or other artificial intelligence tools, for drafting expert reports is generally accepted in the field of fiduciary services and represents the future of analysis of fiduciary decisions; however, he could not name any publications regarding its use or any other sources to confirm that it is a generally accepted methodology," Schopf wrote.

Goldman noted that Ranson relying on Copilot for "what was essentially a numerical computation was especially puzzling because of generative AI's known hallucinatory tendencies, which makes numerical computations untrustworthy."

Because Ranson was so bad at explaining how Copilot works, Schopf took the extra step of using Copilot himself to try to reproduce the estimates that Ranson got. He could not.

Each time, the court entered the same query into Copilot—"Can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004 through January 31, 2021?"—and each time Copilot generated a slightly different answer.

This "calls into question the reliability and accuracy of Copilot to generate evidence to be relied upon in a court proceeding," Schopf wrote.
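For context, the question the court posed has exactly one right answer once the fund's price history is fixed: the value is just the principal scaled by the ratio of adjusted closing prices (which account for reinvested dividends). The sketch below is not from the court record and uses placeholder prices rather than real VBINX quotes; it only illustrates why a deterministic computation should never vary between runs.

```python
def investment_value(principal: float, start_price: float, end_price: float) -> float:
    """Value of `principal` bought at `start_price` and held until `end_price`."""
    return principal * (end_price / start_price)

# Placeholder adjusted closes for illustration only -- not real VBINX quotes.
start_adj_close = 18.00   # 2004-12-31 (hypothetical)
end_adj_close = 45.00     # 2021-01-29, last trading day of January (hypothetical)

print(f"${investment_value(250_000, start_adj_close, end_adj_close):,.2f}")
# Same inputs always produce the same output: $625,000.00
```

Run the same inputs a thousand times and the answer never changes, which is precisely what made Copilot's shifting outputs a red flag for the court.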

Chatbot not to blame, judge says

While experimenting with Copilot, the court also probed the chatbot for answers to a more big-picture legal question: Are Copilot's responses accurate enough to be cited in court?

The court found that Copilot had less faith in its outputs than Ranson seemingly did. When asked "are you accurate" or "reliable," Copilot responded that "my accuracy is only as good as my sources, so for critical matters, it's always wise to verify." When more specifically asked, "Are your calculations reliable enough for use in court," Copilot similarly recommended that outputs "should always be verified by experts and accompanied by professional evaluations before being used in court."

Although it seemed clear that Ranson did not verify outputs before using them in court, Schopf noted that at least "developers of the Copilot program recognize the need for its supervision by a trained human operator to verify the accuracy of the submitted information as well as the output."

Microsoft declined Ars' request to comment.

Until a bright-line rule exists telling courts when to accept AI-generated testimony, Schopf suggested that courts should require lawyers to disclose any AI use, to stop inadmissible chatbot-generated testimony from disrupting the legal system.

"The use of artificial intelligence is a rapidly growing reality across many industries," Schopf wrote. "The mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court."

Ultimately, Schopf found that there was no breach of fiduciary duty, negating the need for Ranson's Copilot-cribbed testimony on damages in the Bahamas property case. Schopf denied all of the son's objections in their entirety (as well as any future claims) after calling out Ranson's misuse of the chatbot at length.

But in his order, the judge suggested that Ranson seemed to get it all wrong before involving the chatbot.

"Whether or not he was retained and/ or qualified as a damages expert in areas other than fiduciary duties, his testimony shows that he admittedly did not perform a full analysis of the problem, utilized an incorrect time period for damages, and failed to consider obvious elements into his calculations, all of which go against the weight and credibility of his opinion," Schopf wrote.

Schopf noted that the evidence showed that, rather than the son losing money from his aunt's management of the trust (as the chatbot outputs Ranson cited supposedly indicated), the sale of the property in 2022 led to "no attributable loss of capital" and "in fact, it generated an overall profit to the Trust."

Goldman suggested that Ranson saved little effort by employing Copilot, while the shortcut seemingly damaged his credibility in court.

"It would not have been difficult for the expert to pull the necessary data directly from primary sources, so the process didn't even save much time—but that shortcut came at the cost of the expert's credibility," Goldman told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
