02 Oct 2025

FutureProof: Empowering insurance brokers and intermediaries - how AI is shaping a smarter future for professional indemnity

As artificial intelligence (AI) continues to drive revolutionary change across many industries, the broking sector finds itself at a pivotal crossroads. It’s difficult to state accurately the extent of AI uptake by brokers, since various surveys have yielded different outcomes. For example, research by Open GI in 2025 indicates that only 20% of brokers are currently working with AI. Among national brokers, 45% have implemented some AI initiatives, but just 2% of those have a mature strategy. In contrast, 57% of regional brokers have yet to make a start. By comparison, in a survey conducted by RSA Insurance Group in 2023, 80% of the 200 brokers surveyed were using some form of AI. While these research pieces paint a varied picture, what’s clear is that the disparity in uptake risks creating a ‘digital divide’ in the broking community, where smaller firms fall behind due to lack of resources or strategic clarity.

It seems to be widely accepted that AI offers the ability to maximise efficiency, insight and innovation. However, it also brings a host of challenges, from regulatory compliance and the risk-management implications of overreliance to ethical dilemmas. This goes some way towards explaining the differences in adoption rates. In the latest article in our FutureProof series, we explore the exciting opportunities presented by AI for brokers and intermediaries, the key considerations for those looking to implement the tools into their business model, and the implications for their professional indemnity insurers.

Opportunities

Innovation, efficiency and profitability

During the ‘Future of Insurance’ presentation at the BIBA Conference 2025, we heard from leading brokers at Marsh, Lockton, Konsileo and the Clear Group (the BIBA Panel), who discussed some of the interesting ways brokers are already using AI in their everyday practice. The majority spoke about AI’s ability to drive efficiencies and innovation. At Marsh, an internal version of ChatGPT is being used to summarise meetings, compare policies and assess coverage gaps. Lockton is exploring voice recognition tools that analyse tone to better interpret customer experience and satisfaction, with the longer-term objective of using the technology to help identify vulnerable customers; an issue on which the Financial Conduct Authority (FCA) continues to shine a spotlight. Using AI to streamline those processes which can be easily and appropriately automated frees brokers to spend more time engaging with their customers and providing the tailored advice they require.

Improved risk assessment and underwriting

AI’s ability to process vast datasets in real time is revolutionising underwriting, and that change is happening now. Risk-scoring models now analyse factors such as geolocation, weather patterns and behavioural data to assess individual risk profiles. For brokers, this means more accurate pricing, bespoke advice and tailored coverage, especially valuable in complex or emerging risk areas such as geopolitical and cyber liability. Equally, AI poses a challenge to policyholders, since it may force underwriters to adopt a more selective, one-size-fits-all attitude to risk. This is a threat to customers but an opportunity for brokers, who may be able to break past “blind reliance” on AI to explain why the data is wrong.

Fraud

Fraud detection isn’t just an operational necessity for the insurance industry; it’s a fundamental requirement for maintaining a trustworthy and profitable insurance market. With rapid technological advancement comes increasingly sophisticated fraud schemes which undermine the credibility of the insurance industry. In a member survey conducted by the British Insurance Brokers’ Association (BIBA) and published in BIBA’s Guide to AI, 41% of members saw fraud detection and prevention as the most important area in which AI could help. Generative AI tools are capable of continuously adapting and are thus able to identify new fraud patterns. This provides an invaluable opportunity to identify suspicious activity during the quoting, binding and claims stages.

Enhanced customer service and engagement

AI-driven chatbots and voice recognition tools are streamlining customer interactions. These systems can guide customers through quote generation, claims filing and policy queries with minimal human intervention. More advanced systems even suggest “next best actions” and use predictive analytics to empower brokers to anticipate customer needs, such as other suitable policies. However, the ability to deliver a more personalised customer experience may come at the cost of the subtlety and nuance that human interaction provides.

Potential pitfalls: Complexity, compliance, and culture

Data quality and integration

AI is only as good as the data it consumes, and many broker firms struggle with fragmented, inconsistent datasets. Without robust data governance, AI outputs have the potential to be misleading or even biased. As with underwriting, significant care must be taken over “blind reliance” on the technology. Integrating AI with legacy systems is another hurdle, often requiring significant financial investment. The FCA encourages firms to adopt robust governance frameworks and conduct bias audits; the European Data Protection Supervisor has gone one step further, suggesting the use of synthetic datasets (adjusted to better represent the world) in order to mitigate data bias. This emphasises the need for brokers and intermediaries to balance the desire for innovation with their regulatory responsibilities.

Regulatory and ethical considerations

The integration of AI in insurance broking presents a complex array of regulatory and ethical challenges. Under the FCA’s principles-based and outcomes-focused regulatory framework, insurance brokers must ensure that AI deployment aligns with existing obligations, particularly those concerning consumer protection, fairness and transparency. A key concern is the potential for statistical, ethical or social bias, where AI systems trained on historical or incomplete data may inadvertently discriminate against protected or vulnerable groups, leading to unfair outcomes. The FCA has acknowledged that such outcomes may breach its Principles for Businesses and the Consumer Duty, even if not explicitly unlawful under the Equality Act 2010. Moreover, the use of AI in underwriting and risk assessment raises ethical questions around explainability and accountability, especially when decisions are made without human oversight.

Data protection

Brokers must also navigate the risks to data privacy, ensuring compliance with the existing UK General Data Protection Regulation (UK GDPR) alongside careful consideration of the European Union AI Act (EU AI Act). The GDPR and the EU AI Act are distinct frameworks: the former focuses on individual rights and data protection, while the latter is a product safety law regulating the design, development and deployment of AI systems. However, both share common principles, including transparency, fairness, accountability, human oversight and risk management. Practically speaking, any broker whose AI-derived output (advice or services) affects EU citizens will need to ensure compliance with the EU AI Act by considering measures such as implementing (a) adequate AI training amongst staff and (b) human oversight mechanisms, registering ‘high-risk’ AI systems (those intended to be used for emotion recognition, recruitment and HR) in the EU database, and conducting Fundamental Rights Impact Assessments (FRIAs) in addition to the DPIAs required under the GDPR.

The broker as a professional

The technology sitting behind AI tools is complex and constantly evolving, and those using it need to understand what AI can and cannot do, to ensure they use it effectively and responsibly. The consensus amongst the BIBA Panel was clear: AI should not replace professional advice but rather increase the demand for it, enabling brokers to become more of a business adviser. Brokers must remain technical advisers first and foremost, maintaining the skills to manually compare insurance contracts. However, AI tools which enable the fast comparison of policies, for example, arguably remove one of the traditional roles of the junior broker. If the broker starting out in their career is no longer required to read and compare the pros and cons of particular policies, how will they develop the knowledge and understanding necessary to be the customer adviser of the future? Ultimately, AI should simplify and innovate processes and service delivery without compromising the broker’s skill set.

The evolving risk profile of a broker

Some of the potential hazards caused by the integration of AI will inevitably raise questions about how this changes the risk landscape for brokers and their professional indemnity insurers. AI has the clear potential to reduce the risk of human error. Equally, however, if a broker blindly relies on AI-generated insights, such as risk scores, policy comparisons or customer recommendations, and those outputs are flawed, incomplete or biased, customers may find themselves underinsured or entirely uninsured and look to their brokers to recover consequential losses which would otherwise have been covered by their insurer. In such cases, the broker could still be held liable for breach of their professional duty, especially if they failed to exercise appropriate oversight or conduct due diligence in accordance with their civil and regulatory duties. The courts are unlikely to accept an argument that “the AI made the decision” as a defence if the broker failed to maintain appropriate oversight of the output. While we don’t expect a court to decide that a broker’s use of AI is, in itself, negligent, we do think that judges will continue to emphasise the importance of maintaining human judgement and documenting both decision-making processes and the instructions received from customers, as well as ensuring transparency with customers as to how brokers intend to use AI to deliver their services.

Conclusion

For UK insurance brokers, AI represents both a powerful ally and a complex challenge. Any broker not adopting some form of AI is likely to find itself at a competitive disadvantage. The key lies in thoughtful adoption and management of the technology: intelligent investment in data solutions, upskilling teams and ensuring regulatory compliance, whilst using AI as a means of maintaining the ever-important human touch. Brokers who strike this balance will not only enhance their service but also futureproof their business in an increasingly digital and data-driven market.

This article is part of our FutureProof series. If you're interested in finding out more about how AI is impacting professional life, you can follow the series here.

Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.