By Robert Kehr and Rachelle H. Cohen
Artificial intelligence tools such as ChatGPT, Claude, Gemini, and other generative AI systems are rapidly becoming part of how people prepare for disputes, communicate with lawyers, and even develop litigation strategies.
But a recent federal court ruling suggests that clients’ use of AI may create an unexpected risk: conversations with AI may be discoverable by the opposing side in a lawsuit.
This is an important development for litigants and lawyers. It means that a client's use of AI to think through legal issues, or even to draft documents, has the potential to expose the client's strategy, facts, or thinking to an adversary.
Clients should speak with their attorneys before communicating with AI tools about information they would prefer to keep private from the opposing party in litigation.
What “Discoverable” Means
In litigation, each side is generally entitled to obtain relevant information from the other side through a process called discovery. Material subject to discovery can include documents, emails, text messages, notes, electronic data, and records of communications with third parties.
Some communications, however, are not discoverable. Communications between attorney and client are protected against discovery. Also shielded is “attorney work product,” which includes materials, documents, research, and other tangible items prepared by or for an attorney, and which could reveal the attorney’s legal theories, conclusions, or opinions.
These protections exist because the legal system wants clients to be able to speak candidly with their lawyers and wants lawyers to be able to prepare their cases without fear that their opponent will see everything.
But those protections can be lost if confidentiality is not maintained. And that is where AI enters the picture.
Recent Federal Case Finds No Attorney-Client Protection for AI Platform Communications
In United States v. Heppner, a federal judge in New York ruled that a defendant’s communications with an AI system were not protected, making them subject to discovery by the opposing party.
Bradley Heppner, after learning that he might be indicted on criminal fraud charges, had used an AI platform to outline legal arguments and strategies. He later shared these AI-generated materials with his attorney. Heppner argued that these AI-generated documents should be protected against discovery under the attorney-client privilege and the work-product doctrine.
The court disagreed. It held that the attorney-client privilege did not apply because Heppner's communications with AI were not with an attorney. This alone was a sufficient basis for the court's ruling.
In addition, the court held that Heppner could not have had a reasonable expectation of privacy in his AI communications. The AI platform's privacy policy stated that user inputs and outputs could be used for training the platform's AI model and could be disclosed to third parties, including government regulators.
Finally, the court held that Heppner did not communicate with the AI platform for the purpose of obtaining legal advice. The AI platform expressly disclaims that it provides legal advice, and the fact that Heppner later shared the results with his lawyer did not change the fact that the initial interaction with AI was not for the purpose of obtaining legal advice.
The court also rejected the argument that Heppner’s AI communications were subject to the work product doctrine, because they were not created by or at the request of Heppner’s lawyers and did not reflect the lawyers’ strategy.
What Does the Heppner Holding Mean for California Clients and Lawyers?
The Heppner case is a federal case in New York and is not binding on California judges, whether in federal or state court. However, clients should be aware that their use of AI could be discoverable in California litigation. In addition, if AI systems are considered third parties, clients who share their attorney's communications with AI risk waiving attorney-client privilege.
Using the reasoning of the Heppner holding, it might be possible for attorneys and clients to work together so that a client’s AI use is at the direction of or in coordination with the lawyer and is through a platform that maintains confidentiality and is not open to third party disclosure. Clients should speak with their lawyers about how they intend to use AI in litigation so that a strategy can be implemented to minimize the risk that the AI communications will become discoverable.
Why This Matters Beyond Litigation
The implications of the evolving treatment of AI-generated material extend beyond lawsuits. If courts treat AI use as disclosure to a third party, this could affect AI usage involving trade secrets, confidential business information, regulatory matters, and internal investigations.
