17 Jul 2025

FutureProof: Intelligent indemnity – insuring AI risks

Talk of AI in the insurance market tends to focus on the potential practical impact, from being able to assist with underwriting decisions to allowing for more efficient claims handling and adjusting.

Although the practical impact is clearly relevant, this article focuses on the effect adopting AI may have on the indemnity cover afforded to professionals, and whether there is a need for new, bespoke AI professional indemnity cover.

Does AI pose an increased risk?

Does adoption of AI by a policyholder automatically increase that policyholder’s risk profile? While there are clearly certain risks posed, and insurers are right to be cautious, it would be wrong to assume that a more concerted adoption of AI (in particular, Generative AI) in business automatically increases risk. Much will depend on whether the AI is being used in a responsible way with appropriate checks, balances and internal policies in place.

In fact, used responsibly, and particularly as the technology develops, Generative AI may end up reducing risk profiles. After all, as things stand, most (if not all) of the claims we deal with come down to one factor: human error. Where AI can increase efficiency and accuracy, there is real potential for improved outcomes. One such example was covered in our recent article, looking at the benefits to the accountancy profession of adopting AI. Indeed, within the legal profession, the Master of the Rolls has spoken of the potential for law firms to be negligent if they fail to use Gen AI in future.

What's covered?

The emerging problem for insurers and policyholders alike is the real uncertainty over whether risks arising from the use of AI fall within the scope of traditional PII policies. In this FutureProof mini-series of articles, we’ve identified a host of potential risks beyond standard “negligence” claims, covering both first- and third-party losses. These include:

  • Data protection/breach of confidentiality/copyright issues
  • Cyber attacks
  • Reputation damage/crisis management
  • Defamation and discrimination
  • Regulatory issues
  • Physical damage
  • Terrorism

While some of those risks fall within the scope of traditional cover offered to professionals, in particular those whose policies must comply with “minimum terms”, others certainly start to blur the lines. For larger organisations that already hold separate cyber and D&O cover in addition to their PII cover, that may not cause an issue. But for the smaller, high street organisations that operate on limited budgets and rely on the protection thought to come with minimum terms cover, it is far more concerning.

It may of course be that there is no call for the traditional professional indemnity policy to change. After all, if, say, a solicitor provides negligent advice or acts negligently due to their use of technology and this gives rise to civil damages, barring any applicable exclusions, one would typically expect the policy to respond. Whether there would be a defence to the claim and/or a right of recovery against any third parties would be a separate consideration (and one explored more fully in our earlier article). It would not, however, affect the solicitor’s right to indemnity.

If that is the case, then the issue is in fact perhaps the more nuanced question of what additional risks professionals are exposing themselves to through the adoption of AI and what cover is available in those circumstances.

Will AI represent a new line of business?

There are a number of tailored AI products available on the market already. However, these tend to be focused on the provider of the AI service, or to tackle the issues arising when a company takes AI development in-house. As uptake increases, it’s only a matter of time until attention starts to turn to the cover required for the professional implementing the AI for the benefit of a different end-user (ie, the client).

For now, insurance market events and public discussion demonstrate that underwriters are actively engaging with this issue, exploring the potential risks that AI represents to insureds and therefore to PI insurers’ books. They are considering how underwriting practices might need to be adapted. This may mean a wider adoption of more comprehensive policies, blurring the existing product lines and allowing for “no gap” in cover. Or, instead, it may mean closely reviewing those existing policies and adopting specific AI exclusions to prevent any such blurring. The interplay with minimum terms will be important, and no doubt a discussion requiring the input of regulators, as well as the parties to the insurance policy themselves, to ensure the overall protection of consumers is not undermined by the uptick in the use of Gen AI.

Policyholders are also likely to be more alive to the risks and may well start reviewing their policies to check that they are not paying over the odds for cover that is not necessary if, for example, the organisation is so small it has no intention of using AI in the immediate future. No doubt this is a discussion that brokers will be having going forward with their insured clients. There’s also likely to be an increasing focus within proposal forms on AI usage, with insureds needing to give clear and comprehensive responses to ensure that a fair presentation of the risk has been provided.

Our own conversations with stakeholders across the market (be it broker, underwriter, adjuster or policyholder) suggest that, although there is an awareness of the “standard” risks, the broader picture, and particularly issues such as the risk of satellite litigation if stricter liability regimes are introduced, is not always at the forefront of discussions.

For now, it is in all parties’ interests to be alive to the potential risks and ensure good mitigation practices are in place to provide as much protection as possible while we’re all learning the benefits and risks that a more widespread adoption of AI into professional life will bring.

This article is a part of our FutureProof series. If you're interested in finding out more about how AI is impacting professional life, you can follow that here.


Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.