The integration of AI into the healthcare sector is well underway, reflecting the UK’s long-term commitment to embedding AI in everyday life. As part of the AI Opportunities Action Plan:
- The UK’s Medicines and Healthcare products Regulatory Agency (MHRA) has expanded its AI Airlock programme, a controlled environment for testing AI-powered medical technologies. Pilot companies include SmartGuideline, which uses a verified knowledge base with a specifically trained LLM that enables clinicians to “smart-search national guidelines with normal questions”.
- In February 2025, the UK Government announced a further £82.6m in funding for AI-driven research projects focussed on diagnosing and treating cancer.
- The MHRA and the BMA are actively supporting efforts to determine the most effective and safe ways to introduce these advancements to the sector.
It is clear that AI is here to stay, and as important as the positives are (and there are many), there are also risks to be managed.
How is AI being used?
Before looking at how a claim might arise, it’s important to understand how AI may be used in the healthcare setting.
Cancer diagnosis
Arguably the most talked-about example is cancer diagnosis.
The ‘traditional’ diagnosis and staging of cancers typically follows a pathway of initial physical examination, scans (x-rays or ultrasounds) and finally biopsies. AI is playing an increasingly significant role in each of those steps across the UK.
Triaging / initial assessment
Deep Ensemble for Recognition of Malignancy (DERM) is one of the most advanced examples in terms of approval for widespread use. The tool, which has now been approved and recommended for conditional use by NICE in the NHS, sees healthcare professionals use a smartphone with a dermoscopic lens to capture high-resolution images of ‘suspicious’ skin lesions. These images are then uploaded for an AI algorithm to review against its own bank of known skin conditions and in turn determine whether the lesion is suspicious.
Importantly, from a liability perspective, humans aren’t taken entirely out of the picture. If a lesion is identified by DERM as pre-cancerous or cancerous, a dermatologist is then required to review the case virtually (part of which inevitably means checking whether DERM’s conclusion is correct) and put in place the treatment plan.
Equally, if a patient falls into a group DERM is known to be less effective at assessing, an additional healthcare professional will need to review the patient.
On that basis, the only time there will be limited human input is if DERM decides there’s nothing to worry about, in which case the patient is discharged from the urgent care pathway and receives follow-up written advice from a healthcare professional. The risk of a failure to diagnose remains but arguably no more so than when the entire triaging/initial assessment process is human-led.
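By way of illustration only, the short sketch below sets out the kind of routing logic described above. The names, categories and review steps are our assumptions for the purposes of illustration, not DERM’s actual implementation; the point is simply to show where the ‘AI-only’ branch sits in the pathway.

```python
from dataclasses import dataclass

# Hypothetical assessment categories an AI triage tool might return.
SUSPICIOUS = "suspicious"   # pre-cancerous or cancerous
BENIGN = "benign"           # nothing of concern found


@dataclass
class TriageResult:
    lesion_id: str
    ai_assessment: str                      # e.g. SUSPICIOUS or BENIGN
    patient_in_lower_accuracy_group: bool   # a cohort the tool is known to assess less well


def route_case(result: TriageResult) -> str:
    """Illustrative routing: who sees the case next after the AI's assessment?"""
    if result.ai_assessment == SUSPICIOUS:
        # A dermatologist reviews the case virtually and sets the treatment plan,
        # which in practice includes checking the AI's conclusion.
        return "virtual dermatologist review"
    if result.patient_in_lower_accuracy_group:
        # Known limitation of the tool: an additional clinician must review.
        return "additional healthcare professional review"
    # Only this branch has limited human input: discharge from the urgent
    # pathway with follow-up written advice from a healthcare professional.
    return "discharge with written follow-up advice"
```

Set out this way, the liability-sensitive branch is the final one: the residual risk of a missed diagnosis sits where the AI alone has concluded there is nothing to worry about.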
Secondary investigations (scanning)
There are multiple ongoing trials of AI analysing imaging, such as the use of the Early Detection using Information Technology in Health (EDITH) system to review mammograms.
As with DERM, EDITH steps into the shoes of the clinician (here typically a radiologist) and analyses the imaging against its own knowledge bank to determine whether there are any changes from the norm indicative of breast cancer. The intention is that rather than having two specialists per mammogram, only one will be needed to safely interpret the findings, given they will have the support of the AI. That not only frees up clinician time but also results in significant cost savings.
Mental health
In relation to mental health treatment, software is being developed and, in some instances, is already in place to supplement therapies, deliver guided self-help, monitor moods and behaviours, and provide cognitive behavioural therapy interventions.
Brain computer interfaces (BCIs)
The last, but perhaps most ‘sci-fi’, example worth noting is BCIs. It seems fair to say they are one of the most potentially revolutionary milestones AI has enabled.
While the notion of BCIs has been around for some time, the improvements in AI have allowed much more effective variations to be developed. For example, the ability of AI to interpret vast amounts of data can be integrated with BCIs to interpret a patient’s brain signals more effectively. That in turn would allow patients to better control connected devices and even communicate when they otherwise may never have been able to.
Where are the liability risks?
Of key interest to the professional indemnity market is how this will all fit with the assessment of a clinical negligence claim under our existing common law framework.
Informed consent
AI-powered reviews, as with any traditional clinical process, carry the risk of an adverse outcome. It is therefore arguable that any clinical process undertaken with the assistance of AI ought to be expressly drawn to the patient’s attention and consent gained. Failure to do so may mean exposure to a consent-based allegation.
Balancing those additional risks are the potential defences that the integration of AI-assisted technology may afford clinicians facing consent-based allegations. For example, we are already seeing the rollout of AI-assisted scribing. The additional detail that scribing will record is likely to prove invaluable when it comes to trying to resolve informed consent claims early and at lower cost.
Bolam and Bolitho
In England and Wales the Bolam and Bolitho tests remain the foundation of assessing whether treatment has been performed negligently. Put simply, a clinician is not negligent if they acted in accordance with a practice accepted as proper by a responsible body of medical opinion, provided that practice withstands logical analysis.
If a clinician follows the findings of an AI tool that incorrectly stages a cancer diagnosis, will their reliance on those AI findings be Bolam and Bolitho compliant? In such circumstances the court will now need to ask whether relying on or rejecting AI was itself a logical decision that would be supported by a responsible body of medical opinion. As with all things AI this is a developing area, but in those circumstances it seems likely liability would lie with the clinician if they did not take reasonable steps to ensure the advice given by the AI was correct.
That is, of course, assuming there is not a wider legislative change to push for a strict liability regime where the use of AI is concerned. Or the approach may swing the other way, finding that as more data about the competency of AI emerges over the years, the extent to which clinicians are required to ‘second guess’ their AI tools will decrease.
Equally, as the availability of AI in our healthcare sector grows, clinicians will have to make decisions about which tool to use for which treatment/care pathway and what inputs to provide those tools with. This seems to be a particular stress point for potential claims, with bodies including the BMA noting the need for those skills to become a foundation in training, along with a general need to increase the digital literacy of the healthcare workforce in order to ensure the safe implementation of AI across the healthcare space.
It is therefore crucial that healthcare employers provide their staff with appropriate opportunities to upskill and use AI safely, and that clinicians’ awareness of the issues arising is adequately recorded. Otherwise, we could well start to see claims for incorrect use of AI healthcare tools in a similar vein to how claims for incorrectly performed treatment (e.g. scans) are brought.
Causation
It almost goes without saying that the legal principles of causation remain the same (ie whether the breach caused the harm or loss suffered). However, just as with Bolam and Bolitho, there is likely to be a shift in the way these tests have to be applied by lawyers, experts and the courts, for example (see the sketch after this list):
- Was the correct tool used?
- Was the correct information input into the tool?
- Is it possible to determine precisely what assessment the AI undertook or is the system black boxed?
- Was an incorrect output the result of a software failure?
- What would the correct output have been?
- What conclusion would a reasonable assessment of the tool’s output have reached?
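For illustration only, the sketch below (in Python, with entirely hypothetical field names) shows the kind of structured audit record a provider or trust might keep to capture the facts those questions turn on; nothing in the pathways described above prescribes such a format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class AIDecisionRecord:
    """Hypothetical audit record preserving the facts the causation questions turn on."""
    tool_name: str                       # which tool was used (and was it the right one)?
    tool_version: str                    # software failures are often version-specific
    inputs_summary: dict                 # what information was fed into the tool
    output: str                          # what the tool actually concluded
    explanation_available: bool          # or is the assessment effectively black-boxed?
    reviewing_clinician: Optional[str]   # who assessed the output, if anyone
    clinician_conclusion: Optional[str]  # what the human reviewer decided
    timestamp: datetime = field(default_factory=datetime.now)
```

A record of this kind would not answer the counterfactual questions (what the correct output would have been, or what a reasonable assessment of it would have concluded), but it would at least preserve the factual foundation against which those questions are tested.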
Medical malpractice claims are already heavily reliant on experts given the clinical complexities, but these sorts of issues look set to expand that reliance. In all likelihood, they will require a ‘new’ kind of medicolegal expert who specialises in the creation and/or use of healthcare AI software, and existing experts will need to update their skillsets to become familiar with how AI is used in their sub-specialisms.
Quantum
Last but by no means least, it is worth having a think about what AI may do for the scale of claims we might see. Fundamentally, the injuries a claimant suffers are at the core of what claims are worth; all the wider losses (eg lost earnings) stem from this.
The injuries sustained due to medical malpractice are finite, and AI does not seem likely to add to that list; rather, it will act as a new potential cause. Where AI does seem likely to have a significant impact is mitigation.
First, it may be a useful tool for medicolegal experts to assess what a claimant’s prognosis is likely to be early on, given the vast data sets it can pull together. In claims where prognosis is otherwise incredibly uncertain, this may provide a route to earlier and therefore more cost-effective and proportionate resolution.
Equally, if implemented effectively and safely, AI looks set to enable quicker, more effective and new treatments – the starkest example being BCIs. These have the potential to allow those with very significant injuries to regain some independence and functionality they would otherwise have lost entirely. Presuming these kinds of treatment continue to develop and are adopted, it seems likely we will start to see claims with higher upfront treatment costs, given the relatively ‘new’ nature of such treatments, but a fall in overall claim value due to reduced future losses (which are frequently the lion’s share of such claims). However, quite how far off that point we are is difficult to tell.
Hardware and software failures
What if the tool itself gets things wrong? That may, for example, be down to the tool being trained on poor quality data. Issues of product liability, procurement and contractual indemnities are likely to feature even more prominently. The next phase of the AI Airlock programme will explore more broadly the regulatory challenges and standards for AI as a medical device, and will no doubt inform current debates about how and when liability shifts from provider to user.
As with all professions, indemnity providers will need to take care to consider exactly which risks they intend to cover when drafting policy wordings, and the contractual arrangements in place between policyholders and any suppliers of AI.
Bias, discrimination and ethical concerns
An LLM is only as good as the material it is trained on. Directed properly, a more automated approach should allow opportunities for inherent bias and preconceptions to be removed from the clinical decision-making process. While the technology is in its infancy, however, particular attention will need to be paid to the risk of AI exacerbating existing inequalities, and to the very real risk of ethical issues and discrimination claims. Caution will also need to be exercised to ensure the sensitive personal data of patients is protected.
Conclusions
There is certainly a lot to think about when it comes to how our common law might start to adapt to tackle the use of AI and its role in clinical negligence claims, assuming new legislation does not step in to take the reins. For the time being, the key takeaways as we see them are as follows:
- AI is already being used to assist treatments and looks set for increased adoption.
- There remains a significant role and responsibility for clinicians in AI-assisted treatment pathways, including ensuring correct use of AI tools and critically assessing their outputs.
- While AI tools do pose new risks that might result in professional indemnity claims against clinicians (e.g. misuse of AI or a failure to critically assess outputs), they can also act as a means of protecting against the risk of claims.
- Lawyers, experts and the courts will need to approach AI-related medical malpractice claims in some entirely new ways, including but not limited to somewhat novel applications of the Bolam and Bolitho standards and new approaches to investigating causation.
This article is part of our FutureProof series. If you are interested in finding out more about how AI is impacting professional life, you can sign up to follow the series here.