Artificial Intelligence Did (Not) Help Me Write This Expert Report

Introduction

AI tools are steadily improving in both reliability and effectiveness. Tools like ChatGPT, Microsoft Copilot, and various open-source AI models are increasingly used by attorneys and experts alike, yet their limitations are tested with every use. A recent matter in Minnesota surrounding new legislation intended to ban “deep fake” technology used to influence elections offers critical insight into these limitations: an expert known for his research and teaching on technology used for deception has been accused of citing fabricated academic works in a declaration filed under penalty of perjury.

This white paper discusses AI hallucinations in expert reports in order to further the conversation, raise awareness of how to spot them, and show how experts can avoid the pitfall of citing non-existent sources.


The Case Study: Kohls and Franson v. Ellison and State of Minnesota (“Minnesota Deep Fake Litigation”)

Background

In the Minnesota Deep Fake Litigation, Minnesota Attorney General (“AG”) Keith Ellison engaged Jeff Hancock, the founding director of Stanford University’s Social Media Lab, to provide an expert declaration in support of the State’s new legislation banning the use of “deep fake” technology to influence elections. However, a recent Daubert motion filed by the Plaintiffs alleges that multiple academic studies cited by Hancock in support of his declaration are, in fact, hallucinations created by large language model (“LLM”) AI tools.

Key Facts in the Daubert Motion

Hancock’s declaration has been challenged on the following grounds:

  1. AI Hallucination: The Daubert motion cites a suspected AI hallucination, a study relied upon by Hancock that appears not to exist, as grounds to bring the entire declaration under scrutiny.
  2. Reputation In Question: The Daubert motion requests that sanctions be imposed on the Defendant and Hancock for “false representation.”
  3. Previous Work Complicates Current Report: As part of most expert reporting requirements, an expert must provide their qualifications, including a listing of recent testimony and publications (under the federal rules, testimony from the previous four years and publications from the previous ten). Hancock’s publications identify the potential harms of “deep fakes,” yet he fell victim to mistakes nearly identical to the ones he warned about in his own work.

Analysis

The Daubert motion illustrates that litigants and their counsel are increasingly alert to the use of AI in court filings and to the risks that come with it. Time pressure is a likely culprit when experts fall victim to hallucinations and other errors; however, when the time constraint is of the expert’s own making rather than court-imposed, there is little excuse for such mistakes.

As discussed further below, there is a place for AI tools in an expert’s work, but that assistance is best suited to ideation and initial outlines of expert work product. Relying on AI tools for significant portions, or the whole, of a final expert report is still viewed as problematic by the larger legal community and carries inherent risks.


AI Hallucinations and Where They Come From

“Deep fakes,” the subject of the underlying dispute and legislation, are fabricated, misleading media that humans create using AI; AI does not generate or spread “deep fakes” on its own, without a human directing it. Hallucinations, by contrast, are self-generated: an LLM produces the most statistically plausible response to a query rather than retrieving verified facts, so it can confidently output citations, quotes, and “facts” that do not exist, and training on inaccurate or fabricated material compounds the problem. In an almost vicious, self-feeding cycle, the more misleading “deep fakes” that circulate, the more of them are ingested by LLMs, which in turn produce more hallucinations.

How To Spot and Hopefully Prevent Hallucinations

  • Check Once, Twice, Every Time: Have a colleague perform a second review and verify every source cited by the AI tool. At a minimum, this should be done as part of creating a documents-considered list and building a working file that contains every source document. An automated existence check against a bibliographic database can flag suspect citations early (see the first sketch after this list).
  • Customize Your Settings: Some AI tools, such as the current version of ChatGPT (ChatGPT 4o), allow you to customize your interactions with the LLM: you can tell the tool who you are and what you do, and you can guide how you want it to respond. You can therefore configure your workspace in, say, ChatGPT so that any factual output comes with an identified source and, where possible, a link to that source.
  • Use AI Tools to Check the Work: As redundant as it may sound, feeding the data you want to verify back into the AI tool and asking a question such as, “Can this be verified by multiple sources?” can work as a quick first check (see the second sketch after this list). Additionally, tools such as Undetectable AI can review text and assess the likelihood that it was AI-generated, and explain why. These tools should not be the last line of defense, but they are an easy first step that can be built into the standard process every time.
  • Don’t Use AI For Citations: The easiest way to avoid citing non-existent sources is not to rely on AI tools for citations at all. AI tools can be very helpful for report structure, grammar, and non-substantive writing review; limiting their use to elements other than content generation helps prevent reliance on hallucinations.
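
To make the “check every source” step concrete, the first sketch below shows a minimal automated existence check, assuming Python with the requests package. It queries the public Crossref API, which indexes DOIs for scholarly works; the file name, function name, and example title are illustrative assumptions. A missing match flags a citation for manual review rather than proving fabrication, since legitimate books or preprints may not be indexed.

```python
# citation_check.py -- a minimal sketch, not a vetted tool.
# Looks up a cited title against the public Crossref index; a lack of
# any close match flags the citation for human review.
import requests

CROSSREF_URL = "https://api.crossref.org/works"

def candidate_matches(cited_title: str, rows: int = 5) -> list[dict]:
    """Return the closest-matching indexed works for a cited title."""
    resp = requests.get(
        CROSSREF_URL,
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["<untitled>"])[0],
            "doi": item.get("DOI"),
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
        }
        for item in items
    ]

if __name__ == "__main__":
    # Hypothetical citation copied from a draft report.
    for match in candidate_matches("Example study title as cited in the draft"):
        print(match)
```

Compare the returned titles, authors, and years against the citation in the draft; anything without a plausible match goes on the list for manual verification.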
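
The customization and self-check steps can likewise be scripted. The second sketch assumes the openai Python package (v1 client) and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative assumptions, not endorsements. A standing system prompt mirrors ChatGPT’s custom-instruction settings by demanding an identified source for every claim, and the query repeats the quick verification question from the list above. As noted, this is only a first pass, never a substitute for checking primary sources yourself.

```python
# llm_self_check.py -- a minimal sketch, assuming the `openai` package
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standing instructions, analogous to ChatGPT's custom-instruction settings:
# demand an identified source (with a link where possible) for factual output.
SYSTEM_PROMPT = (
    "For every factual claim or citation in your answers, identify a real, "
    "verifiable source and include a link where possible. If you cannot "
    "verify a claim, say 'I cannot verify this' instead of guessing."
)

def first_pass_check(claim: str) -> str:
    """Run the quick verification question from the checklist above."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whatever model you use
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Can this be verified by multiple sources?\n\n{claim}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(first_pass_check("Hypothetical citation or claim from a draft report."))
```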

What May Result from the Hancock Report and Litigation

  1. Legal Precedent May Be Created: However the Daubert motion and the rest of the litigation play out, any decisions made are likely to be looked to as precedent by other courts and jurisdictions facing similar issues.
  2. AI Tools May Suffer Implementation Setbacks: At least as far as experts in the legal world are concerned, a negative result for Hancock could create trepidation among other experts about relying on any AI tools in their work.
  3. Counsel and Courts May Gain a Roadmap: The Daubert motion itself, as well as any responses filed in defense of Hancock, will serve as a roadmap for counsel on cross-examination and for triers of fact in future decisions.

Conclusion

The Minnesota Deep Fake Litigation underscores that AI, while beneficial, cannot replace fundamental expert judgment and human verification. Filings such as the Daubert motion in this matter, as well as other decisions to come in the near future, will be defining “roadmaps” for experts, counsel, and the courts.

As AI technologies evolve, it is essential for legal professionals and experts to stay informed of best practices, ensuring that AI remains a tool that complements, rather than compromises, their expertise.

iDS provides consultative data solutions to corporations and law firms around the world, giving them a decisive advantage – both in and out of the courtroom. iDS’s subject matter experts and data strategists specialize in finding solutions to complex data problems, ensuring data can be leveraged as an asset, not a liability.