13 Mar 2026

AI chatbots, LLMs and the risk of losing privilege: Lessons from the US

AI chatbots and large language models (LLMs) are now routinely used by clients and lawyers to draft, summarise and analyse documents. But their convenience comes with real privilege risks.

Recent regulatory and judicial warnings emphasise that misuse of public AI tools can breach confidentiality, waive privilege and expose legal professionals to regulatory investigation of their conduct.

The English High Court hasn't yet ruled on whether interactions with AI platforms can waive privilege. The recent US decision in US v Heppner has raised understandable concerns that England might follow suit. However, as this article explains, the English courts are likely to take a different, more nuanced approach and there are strong reasons to conclude that uploading privileged material to a public LLM does not automatically amount to a waiver.

The US case: Why US v Heppner is relevant

In February 2026, the US District Court for the Southern District of New York held that documents a defendant generated using the public AI platform Claude weren't protected by attorney‑client privilege or the work product doctrine. The defendant had input confidential information and a draft defence strategy into the platform during a federal investigation, later sharing the AI‑generated material with his lawyers. The court ruled that none of the materials were privileged, primarily because:

  • Claude is not a lawyer
  • The communications were not confidential, as the platform’s privacy terms made clear that inputs and outputs could be collected, reused for training, and disclosed to third parties including regulators
  • The purpose of the exchanges was not to obtain legal advice from counsel

The court rejected the notion that sharing non‑privileged AI‑generated documents with a lawyer could subsequently cloak those documents with privilege. 

Although US privilege law differs from that in England and Wales, this case has shaped commentary on LLMs and privilege. However, the English legal position is likely to be fundamentally different.

Privilege in England and Wales: The applicable tests

Legal advice privilege

Legal advice privilege protects confidential communications between a lawyer and their client for the purpose of giving or receiving legal advice. The definition of “client” is narrowly construed, particularly for corporates following Three Rivers (No. 5) [2003] EWCA Civ 474. Crucially, privilege doesn't extend to communications with non‑lawyers, even if the advice sought is legal.

Litigation privilege

Litigation privilege is broader and covers confidential communications between lawyers, clients, and third parties, provided litigation is reasonably contemplated and the dominant purpose of the communication is the conduct of that litigation. However, confidentiality remains essential. Voluntary disclosure to a third party outside the privileged relationship will ordinarily waive privilege.

Confidentiality is therefore essential under both heads of privilege, but English courts assess it in a nuanced, fact-specific way.

The likely approach of English courts

English courts approach confidentiality and privilege through a different doctrinal lens, and several factors strongly indicate that they would not find an automatic waiver of privilege when privileged information is input into a public LLM. Their approach is likely to be protective and balanced rather than punitive: simply interacting with a public LLM is unlikely, without more, to constitute waiver.

The first factor is confidentiality and intention. English law places significant weight on the privilege‑holder's intention and whether the information was ever genuinely publicly accessible.

The 2017 Singapore Court of Appeal decision in Wee Shuo Woon v HT S.R.L. [2017] SGCA 23 (which applies English authorities) held that confidentiality isn't lost simply because information is theoretically accessible online. The court emphasised that “potential, abstract accessibility is vastly different from access in fact”: privileged emails uploaded to WikiLeaks had not entered the public domain because discovering them required extensive and unlikely effort.

This reasoning applies directly to public LLMs. If a client doesn't intend to make their privileged information public and accessing it would require improper or technically dubious behaviour (for example adversarial prompting or prompt‑injection attacks), English courts are unlikely to treat confidentiality as lost. Even if a waiver were found, the court could limit any such waiver solely to the LLM service provider, not to the world at large.

The second factor is public policy. The mechanism of “discovery” matters. In Heppner, the police seized devices under criminal powers. Civil litigants in England have no such powers. How would they obtain the material? Extraction techniques such as targeted attacks, model‑inversion, or prompt manipulation raise concerns about potential breaches of the Computer Misuse Act 1990 and the Data Protection Act 2018. English courts are highly unlikely to endorse or reward such conduct by finding a waiver of privilege.

The third factor is the innate disapproval of fishing expeditions by the English courts. A party cannot demand disclosure of another’s LLM prompts or logs merely because they speculate that privileged information might be found. Applying Wee Shuo Woon, privilege is preserved where the information is only theoretically accessible. Without evidence that privileged content has actually escaped into the public domain, a court is unlikely to order disclosure or find waiver.

A fourth factor is access to justice. If uploading privileged and confidential information into a public LLM automatically waived privilege and confidentiality, the rule would disproportionately harm litigants in person and those unable to afford full legal representation. Many rely on free AI tools to help prepare documents, and English courts are acutely aware of these realities. Upholding a broad waiver rule would undermine access to justice, an outcome the English courts are highly unlikely to favour.

The final factor is the English judiciary’s constructive approach to AI. The Artificial Intelligence (AI) Guidance for Judicial Office Holders (October 2025) encourages responsible AI use by judges. It warns that public AI tools should be treated as potentially publishing anything entered into them, but it also recognises that litigants in person increasingly depend on these tools.

Recent UK tribunal decisions and judicial guidance warn lawyers not to upload confidential material into public AI tools and not to rely on these tools for legal research. The Upper Tribunal (Immigration and Asylum Chamber) decision in R (on the application of Munir) v Secretary of State for the Home Department [2026] UKUT 00081 (IAC) concerned a lawyer who had uploaded confidential client correspondence, as well as Home Office correspondence, into public AI tools. Upper Tribunal Judge Lindsley held that any regulated lawyer who does this must report themselves to their regulator and may need to consult the Information Commissioner’s Office. This reinforces good practice but doesn't establish an automatic waiver of the client’s privilege, despite comments in the judgment to that effect. When this issue arises in future cases, the courts are likely also to consider intention, accessibility, public policy and fairness before finding that privilege has been waived.

The Civil Justice Council’s consultation

The Civil Justice Council is currently consulting on whether procedural rules should govern how AI is used in preparing court documents. The proposals include mandatory declarations confirming whether AI was used in witness statements, expert reports and other key court documents. This is a significant development and reflects a systemic shift: the consultation is not intended to penalise parties for responsible AI use but is concerned with procedural transparency. 

This might impact privilege if parties become subject to a requirement to disclose that AI was used in generating or summarising material. Such a declaration could encourage an opponent to seek disclosure of prompts, drafts and logs to test accuracy, authorship or integrity.

Risks of using AI chatbots and LLMs in legal contexts

The use of AI chatbots and LLMs in legal practice presents several distinct risks. These risks arise both from the nature of public AI systems and from the ways in which adversaries may attempt to exploit them in litigation. Understanding these risks is essential to preserving confidentiality, avoiding inadvertent waiver of privilege, and complying with regulatory obligations.

Public AI chatbots: Loss of confidentiality and privilege

Because public AI platforms routinely collect, store and reuse user inputs, uploading confidential information to them could constitute disclosure to a third party. This is true even if the platform offers optional privacy settings, as the service provider may still retain copies for operational and compliance purposes. This creates a real risk that privilege may be challenged on the basis that confidentiality was not preserved, and the material could become accessible through regulatory requests, cybersecurity incidents or internal handling by the AI provider.

Private or closed AI systems

Many organisations now use enterprise AI systems that operate within secure, segregated environments. These tools can reduce risk if configured so that the provider cannot use inputs for model training and cannot access or disclose client data.

However, the mere use of a closed system doesn't by itself confer privilege. The communication must still satisfy the tests for legal advice or litigation privilege. For example, if an employee uses an internal AI assistant to analyse legal issues unconnected to a dispute without involving authorised client personnel or lawyers, the output is unlikely to be privileged.

Even if the underlying privileged document never escapes the LLM, logs, screenshots, cached outputs and local copies might still surface in investigations or disclosure exercises. Secondary evidence such as this may be admissible and privilege could be lost if confidentiality has already been compromised.

Practical takeaways

While Heppner demonstrates a strict US approach, English courts are likely to take a materially different stance. Given the pace of AI adoption and the likelihood of court scrutiny, this is an area where the first English High Court authority will matter enormously. The principles of confidentiality, equitable protection of privileged material, public policy, and access to justice all point towards a reluctance to find waiver merely because privileged and confidential information is input into a public LLM. However, until the courts pronounce definitively, the safest course is clear: keep privileged and confidential material out of public AI chatbots entirely.  

These are some practical takeaways for clients:

  • Don't input any confidential or privileged material into public AI tools: Treat public AI platforms as insecure publication channels. This is consistent with current judicial guidance and with the Upper Tribunal’s recent decision.
  • Establish strict internal AI policies: Organisations should adopt clear policies that prohibit employees from using public AI tools for any client‑related material. Training should emphasise confidentiality, privilege, and verification.
  • Use enterprise AI solutions only with robust safeguards: Before deploying any internal AI tool, ensure: 
    • Contractual guarantees prevent the provider from retaining or using data for training
    • Data is stored solely within your environment
    • Access controls, retention policies and audit logs are in place
  • Maintain close lawyer involvement: Where AI is used in preparing communications with lawyers, only the final communication sent to counsel is likely to attract privilege. Underlying drafts created through AI systems may not be privileged and may be disclosable.
  • Anticipate disclosure and accuracy challenges: Parties may increasingly seek disclosure of prompts, logs, and AI usage records. Maintain documentation recording how tools are used and ensure that any outputs relied upon are independently verified.

Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.