When AI Goes on Trial

Attorneys and expert witnesses are increasingly turning to AI tools to streamline their work — and for good reason. The efficiency gains are real. But a recent court decision out of New York is sending a clear message to the legal profession: AI can support expert analysis, but it cannot substitute for it.

In an article published in Today’s General Counsel, iDS Managing Director of Damages Theodore (Teddy) Brown breaks down the implications of the Matter of Weber decision — a case in which a financial expert’s use of Microsoft Copilot contributed to the disqualification of their testimony — and what it means for legal professionals navigating the growing role of AI in litigation.

What Happened in Matter of Weber

The case involved allegations of breach of fiduciary duty by the trustee of a trust. The objecting party presented a financial expert to evaluate anticipated economic damages — an expert who used Microsoft Copilot as a secondary check on his calculations. That decision did not go unnoticed by the court.

The judge’s decision to disqualify the expert rested on several findings: the expert failed to use industry-standard calculation methods, lacked adequate qualifications, and could not clearly distinguish between lost profits and capital losses. While reliance on AI was not the sole basis for disqualification, the court specifically flagged the expert’s use of a general-purpose AI tool — noting that such tools’ outputs are not easily verifiable and are therefore unreliable as support for high-stakes expert testimony.

Open vs. Closed AI Tools: A Critical Distinction

Brown’s analysis draws an important line that legal teams should take note of. The problem in Matter of Weber was not AI itself — it was the wrong kind of AI, used in the wrong way.

Closed AI tools — purpose-built systems operating in controlled environments — can, he argues, meaningfully enhance an expert’s workflow when applied to foundational tasks: sorting and indexing large document sets, high-level summarization, deep searches across difficult formats like PDFs and images, and timeline construction. These are legitimate productivity multipliers.

Open, publicly available tools like Microsoft Copilot, however, present a different proposition. Their outputs are not easily auditable, their methodologies are not transparent, and — as the Weber decision illustrates — they cannot withstand courtroom scrutiny when used for complex, specialized calculations.

What Courts Still Require

Brown is clear on the bottom line: no AI tool can compensate for gaps in an expert’s qualifications. Courts continue to assess expert testimony against the Federal Rules of Evidence — evaluating foundational knowledge, training, education, skill, and experience. AI cannot fill those gaps, and relying on it to do so invites exactly the outcome seen in Weber.

His recommendations for legal teams and expert witnesses are straightforward: use closed, purpose-built AI for preliminary and organizational tasks; be fully transparent about any AI usage and its limitations; never rely solely on AI for complex, domain-specific calculations; and maintain the qualification standards that courts expect.

At iDS, our Damages & Forensic Accounting and Testimony practices are built on exactly this philosophy — combining advanced analytical tools with the deep human expertise and qualification standards that courts demand.

To connect with an iDS expert about your next investigation, visit idsinc.com.


iDS provides consultative data solutions to corporations and law firms around the world, giving them a decisive advantage – both in and out of the courtroom. iDS’s subject matter experts and data strategists specialize in finding solutions to complex data problems, ensuring data can be leveraged as an asset, not a liability. To learn more, visit idsinc.com.

