Is the growth of AI being held back by uncertainty about how it may lawfully be deployed? If you think so, now is your chance to tell the government about it. In October, the Department for Science, Innovation and Technology issued an open call for evidence about its proposal to create an AI Growth Lab.
Perhaps the two most interesting questions posed in the call for evidence are:
- What, if any, specific regulatory barriers (particularly provisions of law) are there that should be addressed through the AI Growth Lab? If there are, why are these barriers to innovation? Please provide evidence where possible.
- What types of regulation (particularly legislative provisions), if any, should be eligible for temporary modification or disapplication within the Lab? Could you give specific examples and why these should be eligible?
What's going on here? In short, the government is indicating a willingness to consider changing the law to accelerate AI innovation.
Initially this would happen on a temporary basis within the context of a “regulatory sandbox”, an environment in which some of the usual rules would be suspended in order to allow businesses to trial novel AI products and generate real-world evidence of their impact. “Successful experiments would give regulators and public the confidence they need to make reforms permanent”, the call for evidence says.
The concept of a regulatory sandbox for AI is not unique to the UK. Article 57 of the EU AI Act requires EU Member States to establish an AI regulatory sandbox, one of the functions of which is to “contribute to evidence based regulatory learning”. Nor would the Growth Lab be the first AI regulatory sandbox in the UK: the Medicines and Healthcare products Regulatory Agency already runs a scheme like this for the development of AI as a medical device, the AI Airlock. The idea may not be new, but conceptually an AI regulatory sandbox makes a lot of sense (although as a Brit I can't help but feel it ought to be called an AI regulatory sandpit).
At its heart, the concept of a regulatory sandbox responds to the difficulty of making laws that work, particularly in a fast-moving industry such as AI. It recognises that a little trial and error can help deliver a legal framework that is fit for purpose and solves real problems rather than imaginary ones. It takes a long time to pass an Act of Parliament, so when you do, it's important to get it right.
How does this work in practice? Take this example from the AI Airlock: a company called Tortus has created a note-taking, AI-powered clinical assistant for doctors. There is regulatory uncertainty over the boundary between documentation support (regulated as a “Class I” medical device) and diagnostic or decision-support functionality (regulated as a “Class IIa” medical device). When the legislation was originally drafted, it did not have to cater for this type of technology, so, unsurprisingly, there is uncertainty over how it should be applied.
The AI Airlock allows Tortus to work with the regulator in a mutually beneficial way. The company can test its product without fear of falling foul of the regulations. The regulator gets a closer look at the product, which allows it to form a clearer idea of whether the existing regulations need to be updated (and, if so, how). At least, that's the theory.
Comment
The call for evidence is a real chance to influence the future of AI regulation in the UK. If you’re taking an AI product to market and secretly wish you could (a) tear up parts of the statute book or (b) write yourself some new rules, now is the time to make your voice heard.