Innovation is a word that’s often used but rarely unpacked in a way that resonates across an organisation. Having spent years advising clients on technology adoption and leading innovation initiatives at Mills & Reeve, I’ve seen first-hand how a thoughtful approach can transform not just what we do, but how we do it.
What does innovation really mean?
At Mills & Reeve, we define innovation as “anything new to you that adds value.” It’s not just about those rare “lightbulb moments”; it’s about doing something better or differently, often by borrowing ideas from other sectors. Internally, our Innovation Champions, Innovation Hub, and Innovation Handbook have helped embed a culture where everyone’s voice matters.
How to establish a more innovative culture
- Make innovation part of your strategy: At Mills & Reeve, innovation became a strategic priority in 2015. We recognised that client expectations were changing, and so must we.
- Empower champions and create forums: Identify innovation champions across teams, set up hubs for idea sharing, and make it clear that no idea is too small.
- Provide tools and methodologies: Use handbooks and structured techniques like the “Five Whys” or Edward de Bono’s “Six Thinking Hats” to help teams frame challenges and explore solutions.
- Celebrate and publicise success: Innovation Weeks and hackathons can focus attention, encourage collaboration, and showcase achievements. Our “Volume Control” solution, for example, won a Legal Week Innovation Award and was born from such an event.
- Professionalise and resource innovation: Dedicated teams for legal operations, project management, and technology consulting ensure that innovation is not just a buzzword but a deliverable outcome.
The generative AI opportunity, and its risks
Our recent report, “The Critical AI Window: Moving from Panic to Play with Confidence”, found that while only 31% of organisations currently use Generative AI for business, 72% expect to by 2027. The risks most frequently cited include:
- Inaccurate or biased outputs
- Data breaches and cyber attacks
- Liability and intellectual property (IP) risks
These align closely with the legal risks we see in practice: IP infringement, breach of confidentiality, data protection failures, and cybersecurity vulnerabilities.
Case in point: Getty Images v Stability AI
The recent UK ruling in Getty Images v Stability AI highlights the legal uncertainties around AI training and output. While the court found no jurisdiction over the primary copyright claims (since training didn’t occur in the UK) and dismissed the secondary copyright infringement claims, it did find that reproducing watermarks could infringe trade marks – a narrow finding, but a telling one. The case is likely to spur copyright law reform, and it highlights gaps in UK law where AI and copyright meet. The lesson is clear: the legal landscape is evolving, and robust risk assessment is essential.
Move fast and break things
Mark Zuckerberg set out his approach to innovation in an open letter to prospective Facebook shareholders in 2012. It included the following:
- Move fast: As most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly. We have a saying: “Move fast and break things.”
- Be bold: We have another saying: “The riskiest thing is to take no risks.”
- Focus on impact: Always focus on solving the most important problems.
While moving fast and breaking things may have worked for Mr Zuckerberg, is it the right credo for everyone? Probably not: an in-house legal team, for example, is expected to fix problems, not create new ones. Still, there is something in that entrepreneurial attitude that many, including lawyers, would do well to emulate. In the long term, choosing not to explore how AI might help solve important problems for your organisation, out of an excessive fear of making a mistake, may be the riskiest approach of all. Better to experiment, within appropriate guardrails.
Practical guardrails for safe innovation
Drawing on both legal developments and practical experience, here are some essential guardrails:
- Use only approved AI tools: Maintain a list of vetted Generative AI products. Require explicit consent for any others.
- Never share confidential or personal data with public AI models: Ensure that only enterprise or private versions are used, with contract terms prohibiting data sharing or access by the provider.
- Encrypt sensitive data: Technical measures must ensure that claims data or confidential information is never visible “in the clear” to providers or their support teams.
- Critically assess all outputs: Treat AI-generated content as a draft requiring human oversight. Never replicate outputs in public documents without review and adaptation.
- Check for client restrictions: Always verify whether a client prohibits the use of AI tools for their matters.
- Document human supervision: Evidence your review process for clients, insurers, and auditors.
- Stay alert to IP and data protection risks: Be aware of the potential for copyright or trade mark infringement, as well as data protection obligations.
Final thoughts
Innovation isn’t just about technology. It’s about culture, process, and mindset. Generative AI offers immense potential, but only if we put the right guardrails in place. By learning from recent case law, adopting robust internal policies, and fostering a culture where everyone can contribute, teams can lead the way in safe, effective innovation.
If you’d like to discuss any of these points further or see examples of our internal policies and frameworks, please get in touch.