A testbed for medical device AI: the MHRA’s AI Airlock

The role of AI in medical technology is important and growing. AI-enabled solutions that improve diagnostic scan interpretation, for example, or provide tailored patient support, can offer rapid improvements in functionality. There are clear risks, however, and regulators must grapple with the need to ensure adequate levels of safety and performance.

International commitments to safe AI

At the recent AI Seoul Summit some of the world’s leading AI developers signed up to a new suite of safety commitments. These Frontier AI Safety Commitments are built on three core principles:

  • Outcome 1. Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems.
  • Outcome 2. Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Outcome 3. Organisations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The safety commitments envisage innovators working in collaboration with others, notably government organisations, to evaluate and mitigate risk.

Why collaborate?

A collaborative approach between innovators and regulators can be fruitful for both. The innovator may benefit from swifter adoption of a new technology. The regulator gains an opportunity to address risks at an early stage, as well as a valuable way to develop the detailed knowledge and experience needed to improve policies and standard setting. One structure that can be used to promote collaboration is the sandbox model, where innovations can be shielded from full regulatory assessment while undergoing testing.

The AI Airlock

The UK regulator, the Medicines and Healthcare products Regulatory Agency (MHRA), is developing a regulatory sandbox within which AI as a Medical Device (AIaMD) can be tested and assessed. This AI Airlock will soon be open for applications.

A particular challenge in this context, as compared with other regulatory sandboxes (such as that run by the Information Commissioner's Office), is the need for in-depth coordination between different organisations. The AI Airlock will bring in the expertise of Approved Bodies, the Department of Health and Social Care (DHSC), and the NHS AI Lab.

Areas of focus include:

  • Detecting and reporting product performance errors (including drift) and failure modes in post market surveillance data.
  • Managing increased automation and decision-making responsibility within clinical workflows, and producing pre-market evidence of safety.
  • Breaking down the complexities of generative AI-based medical devices.

How to find out more

The MHRA will be holding a webinar for innovators and developers on 23 July to provide further information and answer questions relating to the project.


Learn more about our life sciences practice.

Our content explained

Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.
