AI in life sciences – legal evolution

Artificial intelligence (AI) is now firmly on the agenda within the life sciences sector, whether in mining data for new drug/target interactions or in improving the diagnostic capability of healthtech solutions. This is a fast-moving area, and in this article we review the current legal landscape and highlight what’s changing.

Pressure to increase the productivity of the drug discovery process, alongside rapidly increasing pools of data, is driving the innovative pharmaceuticals industry towards AI and machine learning. Often this involves partnerships with technology specialists from outside the sector, or with newer organisations set up to focus on a particular dataset or method. Exscientia and BenevolentAI, for example, are working with global pharma and biotech partners to bring drug candidates into development at pace.

Health technology and diagnostics businesses are also deploying AI in various ways. From wellness and health monitoring software, through improved diagnostics, to enhanced treatment capabilities, AI and machine learning have a great deal to offer both clinician and patient.

But while the technology continues to advance, legal uncertainty remains an issue. We’ll focus on three key areas where we’re seeing significant evolution and change.

1. Ownership and rights to control the fruits of innovation

Patent protection provides a key set of assets for life science businesses. The patents protecting a high value drug, for example, help developers to maintain their competitive advantage for a substantial period and recoup the heavy investment involved in bringing a product to market. Where an AI is used in drug discovery, difficult questions arise around who actually made the relevant invention.

Patent laws normally recognise creative activity on the part of human inventors. Where developments are generated by an AI algorithm, however, the law does not work well. Most legal systems do not recognise a non-human entity as an inventor.

A group of academics is currently challenging the requirement for a human inventor with a series of patent applications filed internationally in the name of an AI machine named DABUS. The project has received numerous knock-backs, although a patent has been issued in South Africa. We discuss this in more detail in our article AI inventorship, the DABUS case and who will control the future.  

AI innovation also relies on large collections of data, and here too the fit with existing forms of legal protection can be awkward. Protection of databases varies considerably around the world and is generally weak and inconsistent. Enforcement of rights often requires proof of copying – a difficult hurdle to overcome. As a result, we see owners of important database assets turning to encryption and security measures, alongside legal non-disclosure obligations.

Intellectual property offices around the world recognise the problems. The World Intellectual Property Organization (WIPO) has convened a series of international meetings to share and shape thinking and work towards consensus – although this will take time. Meanwhile, countries are mapping out their own approaches.

The UK Intellectual Property Office (IPO) is running an in-depth series of consultations to generate a policy response. The latest consultation (Artificial Intelligence and IP: copyright and patents) is currently being analysed. The right answer is not necessarily greater protection. A proliferation of patents filed in the name of AI inventors, or a new class of rights to protect data assets, could do more harm than good. The IPO is clear that finding the right balance while keeping human creativity at the heart of the system will be essential:

“Patents and copyright must provide the right incentives to AI development and innovation, while continuing to promote human creativity and innovation.”

The current level of uncertainty is unhelpful, although innovators can improve their position substantially through careful contracting.

2. Regulatory frameworks

In many cases AI-based technologies will fall within existing regulatory structures. Any product that is classed as a medical device, for example, will need to meet the stringent rules for that group of products and services. Our blog Good Machine Learning Practice for medical devices considers a recent group of initiatives in the medical devices arena.

But efforts are under way to introduce a cross-cutting approach that will apply to AI in general rather than to a specific field of application.

Leading the charge, the EU has proposed a sector-blind regulatory structure for AI. This would introduce a risk categorisation, with lighter- or heavier-touch requirements for different categories of product. Some technologies would be ruled out altogether as unacceptable. Within this category are harmful practices that could manipulate people through subliminal techniques, or that exploit the vulnerabilities of groups such as children or disabled people.

Products and services that are deemed high risk would be subject to a conformity assessment process. This would include many AI systems that are themselves medical devices, or form components of medical devices. These would have to comply with additional requirements to identify and manage risk throughout the life of the product.  

Lower risk products would have to meet transparency obligations.

The UK is adopting a different approach with a new initiative aiming to shape global technical standards. This envisages a growing market for external assurance providers alongside existing regulators, including sector-specific regulators like the MHRA.

These differences in approach will not be easy to navigate, but staying on top of developments and seeking to adhere to best practice will help to minimise regulatory risk.

Regulation of data privacy is also having to adapt to accommodate advances in AI. Features of AI systems such as lack of transparency and bias in datasets, as well as the sheer processing power now being achieved, are raising widespread concerns that will need to be addressed. Privacy regulators like the UK’s Information Commissioner’s Office offer detailed and regularly updated guidance where AI is deployed (see the ICO’s Guidance on AI and data protection and AI and data protection risk toolkit).

Personal information relating to health and medical care is treated as particularly sensitive by data privacy laws. Collection and use of personal data – and particularly sensitive health-related data – requires well-planned and careful management to meet best practice and achieve compliance.

3. Liability for injury or damage

The flipside of regulatory compliance is liability – who takes responsibility if an AI-based product or service causes harm? AI-related risk may be more obvious in fields like autonomous transport, but problems can arise in any application. If an AI-supported diagnostic tool fails to identify a cancerous lesion, leaving a patient untreated, where would responsibility lie? How much review and double-checking of the AI’s output can we expect? What kind of failure rate would be acceptable? After all, everyone makes mistakes.

The drug discovery process may be less exposed to concerns like these. The journey from identification of a drug target to a marketed medicine involves multiple stages of pre-clinical and clinical assessment, which serve to eliminate unsafe or ineffective products. But new molecules that fall well outside the usual repertoire could have unexpected harmful properties.

Particular problems arise in relation to:

  • allocating responsibility in a complex interactive system with several different partners involved
  • identifying how a problem has arisen when the “thought processes” of an AI system are not clearly defined and transparent
  • eliminating bias and inaccuracy that arise from the nature of the databases used. For example, a dataset reflecting only one subgroup of the population could produce skewed or partial results.

Efforts to address these questions are in train. Once again, the EU is active here, having recently consulted on proposals to update liability rules for digital products. These proposals seek to ensure that anyone harmed will be able to obtain compensation, for example by reversing the burden of proof for claimants to overcome the lack of transparency in an AI system.

For developers, ensuring that liability is addressed in agreements with partners, backed by appropriate insurance, will help to deal with these risks. While the worst is unlikely to happen in any particular project, planning for it in advance is always better than clearing up afterwards.

This overview highlights a few of the areas to watch for innovators bringing AI solutions to life sciences. We are advising regularly on the impact of AI systems within the sector so please do get in touch with our life sciences, intellectual property and technology specialists to discuss your specific requirements.
