25 Nov 2025

FutureProof: Unlawfully intelligent – when AI crosses the line in legal practice

Generative AI (GenAI) is rapidly changing how professionals work, bringing immense opportunities, such as increased productivity, alongside new challenges and risks for professionals. Law firms are very much early adopters of GenAI.

In this article, we examine how solicitors are currently using GenAI, as well as the potential regulatory and professional indemnity risks.

Rate of GenAI adoption

Law firms are adopting GenAI at an astonishing rate. In its report of 22 September 2025, Thomson Reuters concluded:

  • Almost 80% of the top 40 firms advertise use of GenAI (an increase of 20% in 12 months).
  • Large firms almost all have in-house teams now charged with the development/roll-out of GenAI and corresponding staff training.
  • Almost 50% of the top 20 firms now have a “head of AI” – a non-existent role ten years ago.
  • Worryingly though, only 25% of firms have an AI adoption strategy in place, and almost half are using GenAI without any strategy.

Use of AI

AI in its more basic form has been used by solicitors for a while. For example, Technology Assisted Review (TAR) uses machine learning to tag and analyse documents for disclosure reviews, even recognising synonyms such as “terminate” and “end” a contract. It’s like Netflix for disclosure.
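To make the synonym point concrete, here is a toy sketch in Python of how a document-ranking step might normalise synonyms before scoring documents for review. This is purely illustrative: real TAR engines learn these term associations statistically from reviewers’ decisions, whereas the hand-written synonym map and scoring function below are simplifying assumptions of our own.

```python
# Toy illustration of synonym-aware document ranking for a disclosure review.
# Real TAR tools learn term associations with machine learning; the hand-written
# synonym map here is a stand-in for those learned associations.

SYNONYMS = {"end": "terminate", "ends": "terminate", "cancel": "terminate"}

def normalise(text: str) -> list:
    """Lowercase, strip basic punctuation and map synonyms to one canonical term."""
    tokens = text.lower().replace(",", " ").replace(".", " ").split()
    return [SYNONYMS.get(token, token) for token in tokens]

def relevance(doc: str, query_terms: set) -> float:
    """Fraction of a document's tokens that match the (normalised) query terms."""
    tokens = normalise(doc)
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in query_terms) / len(tokens)

docs = [
    "Either party may terminate the contract on 30 days notice.",
    "The parties agree to end the agreement by mutual consent.",
    "Lunch will be provided at the quarterly review meeting.",
]
query = {"terminate", "contract", "agreement"}

# Documents about ending a contract rank above the irrelevant one, even when
# they say "end" rather than "terminate".
ranked = sorted(docs, key=lambda d: relevance(d, query), reverse=True)
```

Even this crude version surfaces the “end the agreement” document, which a plain keyword search for “terminate” would miss; commercial tools achieve the same effect at scale with trained models rather than a fixed synonym list.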

GenAI is fundamentally different and a game-changer for lawyers. This is because it can mimic human intelligence by actually creating new content. Commercial lawyers, for example, use software such as KIRA, enhanced by GenAI, to analyse contracts: drop files into it and it identifies key clauses, summarises them and generates charts.

A number of firms are already trying to establish themselves as pioneers in the GenAI space, advertising their use of GenAI to automate elements of the litigation process and prepare correspondence, pleadings, etc.

At the same time, litigants in person are embracing GenAI (especially ChatGPT) to assist them with legal research and with drafting letters of claim and statements of case, with mixed success. In doing so, they can deprive themselves of sensible legal advice and acquire a misjudged sense of the merits of their claim and of their ability to progress it.

At the more advanced end of the scale, there are examples of GenAI being used in the US, with a degree of success, to predict trial outcomes and to prepare and deliver submissions in oral advocacy (without human involvement). That may take time to filter across to the UK, but it is a sign of how the technology has developed and an indication of the capabilities we might see, which could include live GenAI analysis of prospects of success during trials and the use of AI judges for paper determinations, to name but two.

All of these developments, though, create a number of new, or potentially increased, risks for solicitors. Let us turn to these.

Copyright issues

Many GenAI systems are trained on vast datasets, including things like legal texts, without express permission from rights holders. That means that lawyers using those tools may unknowingly use or reproduce copyrighted content, exposing them to the risk of infringement claims.

GenAI can also generate content which closely mimics existing copyrighted works. Part of the difficulty for professionals is that many publicly available AI tools do not disclose their training data sources, making checks for copyrighted material all but impossible.

Data and confidentiality breaches

The most obvious risk here is that professionals may inadvertently breach client confidentiality by uploading commercially sensitive data onto public or unapproved AI platforms, such as ChatGPT, where it may then be absorbed into the AI’s training data set and become publicly available.

Imagine a scenario where a solicitor uses ChatGPT to produce a summary of the legal documents from a corporate merger or a property transaction, and in doing so inadvertently discloses highly confidential client information into the public domain. Or a solicitor who uses an open AI tool to summarise material which is legally privileged, raising difficult questions over whether confidentiality or privilege has been waived.

These breaches of confidentiality and/or privilege can give rise to professional indemnity claims, as well as regulatory action by the Solicitors Regulation Authority (SRA).

Cyber security

According to a report published earlier this year by the National Cyber Security Centre (NCSC), the use of AI by malicious actors will almost certainly make cyber and ransomware attacks more effective and frequent. AI models can craft highly convincing emails to employees that mimic the style and tone of legitimate messages. The SRA has published a warning to lawyers who rely on video calls to identify clients that they could be interacting with deepfake AI. Sophisticated deepfake audio and video (including face-swapping AI) has been used to trick employees into sharing sensitive information or transferring money.

This is a major concern for solicitors and their insurers. Firms will need to be extremely vigilant, especially in respect of conveyancing transactions, which have been the traditional vehicles for fraud against lawyers. That said, this is also driving the development of AI tools that can spot potential fraud, alongside practical steps lawyers can take to verify individuals’ identities.

Inaccuracy and hallucinations

Finally, perhaps the biggest risk to lawyers right now is a lack of understanding of the limitations of GenAI and an over-reliance on its outputs without proper guard-rails in place. There needs to be better training and supervision of work product, as well as a much deeper understanding by professionals of how GenAI works and where it falls short.

Probably the most high-profile examples come from a widely publicised judgment handed down by the High Court in June this year in relation to two cases – Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank and QNB Capital – in which fake case citations had been generated using publicly available GenAI tools.

In the first of those cases, the barrister instructed by Mr Ayinde drafted grounds for judicial review based on what looked to be five cases which supported her position. However, it transpired that none of those cases existed. The barrister denied having deliberately used GenAI to conduct legal research and then covering up that use, although she did accept that she might have inadvertently relied on GenAI-generated responses to Google searches. That led the judge to give a very stark warning: those who use AI to conduct legal research owe a professional duty to check the accuracy of that research before using it to advise clients or placing it before the court.

Conclusions

In short, GenAI can be an incredibly useful tool for lawyers, enabling them to provide clients with advice more efficiently and quickly. However, firms which fail to understand the technology and to implement proper guard-rails are at serious risk of professional indemnity claims and regulatory breaches.

In simple terms, lawyers must accept responsibility for the work product rather than sub-contracting that to ChatGPT. As aptly put by an academic from NYU’s Alliance for Public Interest Technology: “The problem starts when people think AI is smarter than it is”.

Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.