16 Apr 2026

AI on trial: Developments professionals need to know

Over a year has passed since our inaugural FutureProof conference was held in London. As predicted, the integration of AI into professional life continues at a rapid pace.

This article provides a round-up of the key developments in relation to AI over the past year, including significant case law and updates to regulatory guidance for a variety of professionals, including solicitors, barristers and surveyors. We consider how the courts and regulators are responding to the growing prevalence of AI in legal proceedings and what this means in practice for professionals.

These developments highlight an emerging tension between technological advancement and the core professional responsibilities owed by professionals to their clients, the court and regulators. As AI use accelerates, the focus is no longer on whether professionals will use AI, but on how they do so in a way that remains compliant with ethical standards.

Case law updates: Key themes

Impact of the use of public AI tools

In UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC), the Upper Tribunal addressed two separate instances involving false citations produced by AI. The Upper Tribunal issued the warning that uploading confidential documents to an open/public AI platform places such documents in the public domain, thereby breaching confidentiality and waiving privilege. It also suggested that such conduct may warrant self-referral to the relevant regulator and notification to the Information Commissioner's Office.

Across the Atlantic, United States v Heppner held that a defendant’s documents created via a public AI platform (Anthropic’s Claude) and later shared with counsel were neither privileged nor protected from disclosure. Whilst clearly stating that not all AI usage waives privilege, the decision serves as an important reminder of the risks the adoption of AI may pose in litigation if used incorrectly.

The warnings contained in these judgments are echoed by the Judicial Guidance in England and Wales dated October 2025, which states:

“Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world”.

The growing impact of hallucinations

It has recently been estimated that there are now over 1,000 suspected or confirmed AI hallucinated cases internationally. Many lawyers are already facing investigations by regulators and potential sanctions as a result of citing hallucinated cases during proceedings. 

In The King on the application of Frederick Ayinde v The London Borough of Haringey [2025] EWHC 1383 (Admin), the legal team were found to have cited five hallucinated cases. The High Court emphasised the need for caution at both an individual and firmwide level and highlighted the serious implications of AI misuse. Mr Justice Johnson called for practical and effective measures to be implemented by those with individual leadership responsibilities (including heads of chambers and managing partners) and regulators to ensure that legal professionals understand their professional and ethical obligations and their duty to the court when using AI. 

In Al-Haroun v Qatar National Bank QPSC & Anor [2025] EWHC 1383 (Admin), which was considered at the same time as Ayinde, 18 of the 45 cases cited in a claim for £89.4m in damages were found to be hallucinations. Here, research had been carried out by a lay client using publicly available AI tools, and this research was relied upon by solicitors. 

In Ndaryiyumvire v Birmingham City University [2025] 10 WLUK 719, the County Court made a wasted costs order against a solicitor for citing two hallucinated authorities. The court reiterated the principles from Ayinde and highlighted firm level management failures.

In Re A, B, C, D (Extension of assessment; Use of AI: hallucinations) [2026] EWFC 71 (B), the Recorder ruled that it was in the public interest to publicly name an individual who presented four hallucinated case citations to the court within a skeleton argument. The individual, who works as a therapist and was acting as a lay advocate for a friend during Children Act proceedings, was found to “hold herself out as a lawyer, and is an unregistered barrister” and to have completed paid legal work as recently as November 2025. Although it was concluded that the individual did not intend to mislead the court, this case further demonstrates the judiciary taking a tough stance in relation to hallucinated cases and the use of AI by professionals in legal proceedings.

The above cases consider the impact and consequences of misuse of hallucinated cases by legal professionals. But it is clear that the courts are also prepared to draw a firm line for other court users. In D (A Child) (Recusal) [2025] EWCA Civ 1570, the Court of Appeal considered the use of AI by a litigant in person during proceedings. Whilst the bench expressed sympathy for litigants in person, stating at paragraph 83 of the judgment that it is “entirely understandable that they should resort to artificial intelligence for help”, the judgment went on to stress that all parties, including litigants in person, owe a duty to the court and must ensure that cases cited in legal documentation are genuine.

Supervision

Another key theme arising over the past year is the need to ensure that professionals are aware of the risks associated with using AI platforms and that adequate supervision is in place so that such platforms are used in a reasonable and proper manner.

This was particularly evident in UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC), where the Upper Tribunal concluded that a lawyer who delegates to junior fee earners remains responsible for accuracy and for ensuring such juniors understand the dangers of non-specialist AI for legal work, noting at paragraph 58 of the judgment:

“Supervisors must ensure that fee earners under their supervision are aware of the dangers of using non-specialist AI for legal research and drafting. (…) A supervisor who fails to ensure that the work of a more junior fee earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations””.

Interference during court proceedings

In the recent case of UAB Business Enterprise v Oneta Ltd [2026] EWHC 543 (Ch), the judge found that a witness was being coached by an individual through smart glasses connected to their phone. The witness's evidence was rejected by the High Court and indemnity costs followed. While this case involved human coaching, it demonstrates how connected tech (and, potentially, live AI assistance) could compromise evidence and courtroom integrity.

Guidance published for professionals

Alongside the growing body of case law, the past year has seen an increase in the guidance available to professionals on the topic of AI use. Whilst we are not aware of any guidance prohibiting the use of AI outright, the increased commentary reflects a clear shift away from general caution towards defined expectations, particularly around responsibility, supervision, confidentiality and accuracy.

Recent guidance

Judicial guidance

The Courts and Tribunals Judiciary has issued two versions of guidance within the past year, culminating in the refreshed Judicial guidance on AI published in October 2025. Although directed to judicial office holders, it serves as an important reference point to legal professionals, setting out the standards the courts expect to be upheld in proceedings and reinforcing several core principles.

The consistent message is that AI may be used as a supporting tool, but it cannot displace direct engagement with evidence, legal sources or professional judgment. The guidance also reflects growing judicial awareness of more sophisticated risks, including bias in training data and the manipulation of AI assisted processes.

Separately, the Civil Justice Council (CJC) has recently launched a consultation on the need for procedural rules to govern the use of AI in the preparation of court documents. An interim report from February 2026 proposes targeted transparency regarding the use of AI, rather than blanket restrictions. The interim report includes proposals for:

  • Witness statements (trial, including Part 32 and PD57AC) - a declaration that AI has not been used to generate the content.
  • Witness statements (non-trial) - a declaration if AI has been used.
  • Expert reports – a declaration explaining any AI use (beyond transcription and administration) including the tools used.
  • Statements of case and skeleton arguments – either no additional rules at this stage or a specific declaration to confirm whether AI has been used.
  • Disclosure - no additional rules required at this stage as AI assisted disclosure is well established.

The consultation closed in mid-April 2026. We predict a transparency-based framework (including the possibility of specific declarations for certain evidence) rather than blanket bans, and continuing judicial insistence on verification, supervision and confidentiality. For insured firms and counsel, the aim is not to avoid AI, but to use it competently, safely and accountably within existing professional duties.

Finally, the UK Jurisdiction Taskforce has published its draft Legal Statement on Liability for AI harms under the private law of England and Wales, providing a thorough analysis of the application of the law of negligence to the physical and economic harms caused by AI. The report is a must-read for anyone practising in the area of professional negligence and gives a valuable insight into how the courts may look to tackle claims arising from the use of AI by professionals.

Bar Council guidance

In November 2025, the Bar Council published updated guidance on the use of generative AI, reflecting developments in both technology and case law. The guidance recognises the inevitability of AI becoming embedded in legal practice and makes clear that there is nothing inherently improper about its use when utilised responsibly.

Law Society guidance 

The Law Society has continued to develop its “Generative AI: the essentials” guidance, most recently updated in September 2025. This guidance is deliberately practical and risk focused, aimed at firms and in house teams considering whether and how to deploy generative AI tools.

Royal Institution of Chartered Surveyors (RICS) guidance

RICS has recently updated its “Responsible use of artificial intelligence in surveying practice” guidance, which came into effect on 9 March 2026. The guidance focusses on new professional standards for AI usage for surveyors and recognises that, although AI developments are likely to drive the profession forward, they also carry high levels of professional and commercial risk.

FCA and ICO

The ICO has already published extensive guidance on AI. To support good practice into the future, the ICO has announced that it will be developing a statutory code of practice for organisations developing or deploying AI and automated decision-making – enabling innovation while safeguarding privacy. The FCA will also be helping firms to develop, test, and evaluate AI as part of the FCA’s AI Lab.

Key themes

Although issued by different bodies and for different audiences, the guidance is largely consistent. Four key themes emerge:

Responsibility rests with professionals to check AI outputs

Across all guidance, there is a firm rejection of the idea that AI errors are somehow excusable because they originate from technology. Instead, responsibility for checking the accuracy of AI outputs rests with the professional. The RICS guidance also acknowledges that the present standard of knowledge across the profession is “uneven” and that individual members must develop and maintain sufficient knowledge to support their responsible use of such systems.

The importance of proper supervision and practice management

Recent guidance and case law have focussed on supervision as a central risk area. Where junior lawyers or support staff use AI tools, supervisors are expected to ensure that those tools are used appropriately and that outputs are checked before being deployed.

Issues surrounding data governance, confidentiality and privilege

There are clear and repeated warnings against uploading confidential material into open or public AI systems and a recognition that professionals must safeguard private and confidential data. The guidance reflects a shared assumption that doing so may place information into the public domain, with consequences for confidentiality, privilege and data protection obligations.

AI use as a tool not an authority

Perhaps the most consistent theme is the rejection of AI as an authoritative source of law or fact. AI tools are described as predictive and non-authoritative. They may assist, but they cannot replace primary legal sources, professional judgment or ethical responsibility. Professionals therefore have a responsibility to conduct their own due diligence to confirm the accuracy of any output produced by AI.

The past year has seen a marked tightening of expectations around AI use in professional practice. The message from regulators and the judiciary is not that AI should be avoided, but that it must be used deliberately, transparently and within existing professional frameworks. The challenge is no longer whether AI can be used, but whether it is being used in a way that withstands judicial, regulatory and professional scrutiny.

Key takeaways

The developments highlighted above are by no means exhaustive and AI technology is constantly evolving. With new forms of AI emerging at rapid pace, and adoption increasing, the best way for professionals to protect themselves is to stay abreast of developments and regulatory guidance.

For now, our key takeaways are that:

  1. Professionals need to proceed with caution when using AI for tasks such as legal research and be alert to the risk of hallucinations. Professionals need to verify AI outputs against authoritative sources.
  2. Protecting client confidential information and privilege is critical. With that in mind, extreme caution is needed when using publicly available tools such as ChatGPT. Uploading confidential information into a publicly available large language model could put that information in the public domain.
  3. There is an ever-evolving body of guidance for professionals which needs to be followed. Further guidance and procedural requirements are expected. 
  4. It is vital to ensure that appropriate supervision is in place for junior team members using AI and that all work is checked appropriately.

Interested in learning more? To sign up to our FutureProof campaign, please follow this link: FutureProof

Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.