The critical AI window

A report to help leaders navigate compliance, protect their organisations, and unlock AI’s full potential safely and confidently.

Mills & Reeve’s research report, The critical AI window: Moving from panic to play with confidence, is based on the views of senior executives, particularly general counsel and in-house lawyers, within public and private organisations across the UK.

To shape the research, roundtable events were held with senior legal leaders in collaboration with The Lawyer. This qualitative work was followed by an in-depth quantitative survey of 321 respondents in 2025.

Here are some key points from the report:

93% of businesses will likely use AI over the coming years, but 90% of senior leaders believe it will increase inaccuracy

87% of senior leaders are concerned about data breaches

83% fear not meeting regulatory compliance, yet only 31% of organisations have an AI risk mitigation strategy

How advanced is the current AI adoption within organisations?

Nearly all organisations (93%) are using GenAI, taking action to implement it or at least considering its adoption. Two-thirds (68%) of the senior leaders we spoke to are familiar with GenAI, so are they using it every day to create value within their organisations? Not yet. You might not think it from the constant hype, but not even a third (31%) of organisations are currently using GenAI. 26% are actively developing a strategy for it, 30% are in the researching and sourcing stage, and 13% either don’t know or haven’t yet considered it.

However, while far from everyone has fully embraced AI, we are well beyond the starting point of adoption. If we plot today’s situation against the standard innovation adoption curve, we can see a large leading pack of innovators, suggesting we are some way into the GenAI organisational adoption journey. 

After a spike in innovators, it makes sense that there are fewer businesses in the early and late majority groups. Organisations are developing their strategies and researching their approaches, considering the best use cases in the wake of the innovators. More sit in the late majority group than the early majority, suggesting there is still some caution around using the technology. The proportion of laggards is consistent with where it would be expected to be.

How transformative is AI proving to business leaders?

With initial adoption well underway, is GenAI being used to its full potential? 29% of senior leaders say they already use GenAI in a significant way, 31% will do so within 12 months, and 12% within the next two years. That’s almost three quarters (72%) of organisations using GenAI in a significant way, mostly within the next two years. Perhaps the remaining quarter are waiting until regulations are settled, or perhaps they do not yet see value being realised from AI.

What is holding back leaders?

90% of senior leaders in our research are concerned about the risk of inaccurate AI results. This could go some way towards explaining why many see only minimal value in GenAI so far. 88% of leaders worry about the impact of bias, and more than 8 in 10 are concerned that GenAI risks data breaches, cyber attacks, liability action, IP infringements and failure to meet regulatory compliance. Despite all these concerns, only 31% of organisations have a risk mitigation strategy in place for GenAI.

Major GenAI concerns of senior leaders

We categorised our research into four key themes:

1. Bias and inaccuracy

All organisations can be exposed to inaccuracies through current GenAI options. Take Microsoft, for example, which was criticised for an AI-generated article listing a food bank as a tourist destination and encouraging people to visit “on an empty stomach”. The risk can be higher for trusted brands such as this, or where accuracy is vital (such as for law firms), with potential repercussions including reputational damage, service disruption and litigation.

2. Security

Security risks caused by GenAI are significant and interlinked. Cyber attacks are becoming more sophisticated and automated. AI is enabling hackers to create more credible and convincing phishing and quishing (fake QR code) entry points, making it increasingly difficult for employees to distinguish between legitimate and malicious content. With employees also able to share sensitive and confidential information unwittingly through AI, the situation becomes a double-edged sword.

Even organisations that have governance processes in place face significant security risks from AI, warns Helen Tringham, partner, Mills & Reeve: “Employees, driven by the excitement of leveraging AI for innovation, may unintentionally bypass established protocols designed to safeguard data and systems. The scale of larger organisations amplifies this risk, making it harder to enforce compliance and embed a culture of security.”

3. Regulatory and compliance

The explosion of GenAI has seen a raft of regulations introduced across the world, with more likely to follow, all of them changing as understanding of AI develops. In the UK, the regulation of AI relies on existing legal frameworks such as intellectual property, data protection and contract law, highlighting the growing need for regulators and legal practitioners to adapt these frameworks to address the novel risks and complexities introduced by AI technologies.

All this means that there will be no steady state for regulation for some time. Yet the risk of not complying is significant, both reputationally and financially. Within the EU, for example, violations of the EU AI Act can attract administrative fines of up to €35 million or 7% of total global annual turnover, whichever is greater.
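To illustrate the “whichever is greater” mechanic, the short sketch below works through the maximum-fine calculation for a hypothetical business. The €2 billion turnover figure is purely illustrative and is not drawn from the report or from any real case.

    # Illustrative sketch of the "whichever is greater" fine ceiling described above:
    # up to EUR 35 million or 7% of total global annual turnover, whichever is higher.
    # The turnover figure below is hypothetical, for illustration only.

    FIXED_CAP_EUR = 35_000_000      # EUR 35 million
    TURNOVER_SHARE = 0.07           # 7% of total global annual turnover

    def max_fine(annual_turnover_eur: float) -> float:
        """Return the higher of the fixed cap and the turnover-based cap."""
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

    if __name__ == "__main__":
        turnover = 2_000_000_000    # hypothetical EUR 2bn global turnover
        print(f"Maximum fine: EUR {max_fine(turnover):,.0f}")   # EUR 140,000,000

For a business of that hypothetical size, the turnover-based figure dwarfs the fixed cap, which is why larger organisations tend to treat the percentage limit as the relevant exposure.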

Helen warns that employees need clear guidelines to prevent this: “If they don’t fully understand the legal and ethical boundaries – whether around data protection, intellectual property, or equality law – the consequences can be profound. A single misjudgment could expose the organisation to group litigation, reputational damage, and costly legal disputes. In such a fast-evolving area, any organisation could inadvertently become the test case that defines how these issues are handled in law. That kind of scrutiny brings not only financial risk but also a lasting impact on public trust and brand integrity.”

4. Employment

GenAI is reshaping every element of employment so it’s no surprise this is a major concern. With some recruitment tools seen to show gender bias, potential discrimination is front of mind for leaders. But GenAI is also reshaping existing jobs and generating demand for new skills, causing concerns for many employees and risking culture and productivity.

Ten steps that organisations should keep in mind to stay on track with AI

Take a fresh look at current policies with today’s challenges in mind

Map collaborative processes to align teams and stop checks falling through the gaps between silos

Create a decision-making blueprint so AI meets organisational goals and ethics

Record why decisions are made, ensuring accuracy for regulators, customers and stakeholders

Develop an AI solutions matrix, providing an overarching understanding of AI use internally

Get into the details of contracts for AI implementation and usage to spot red flags

Continue to prioritise culture, as great people and their creative thinking are as valuable as ever

Embed safe AI practices with training

Inspire with leadership, by providing clarity on use cases and by upskilling with new solutions

Treat every element of fast-moving GenAI – from compliance to accuracy to effectiveness – as a continuous, measured cycle 

Read the full report

AI is reshaping the way we work, but without clear guardrails, the risks are real. With only 31% of organisations having a risk mitigation strategy in place, it's time to act.

This report helps leaders navigate compliance, protect their organisations, and unlock AI’s full potential safely and confidently. 

Meet our technology lawyers

Our specialist technology lawyers combine a wealth of legal experience to meet the evolving needs of the technology sector.

Talk to our legal experts

Our lawyers share one vision – achieve more together. It’s a state of mind in every client relationship we start and every choice we make. And it’s what clients consistently say distinguishes us from your average law firm.

FutureProof

Our lawyers put together a series of reports exploring how AI is reshaping the landscape of professional indemnity insurance, examining emerging risks and the evolving role of insurers and clients.

AI articles across the firm