EU's bid to control the future of AI

This week the EU Commission published its legislative proposals for regulating artificial intelligence.

Broad application

The planned regulation is broadly cast, with a clear intention to influence the development of AI technology globally. It will affect both public and private sector entities, wherever they are based, if the AI system is put on the EU market, or its use affects individuals in the EU. Both developers and users of AI systems are covered.

Extraterritorial effect

The rules will apply, with some exceptions, to:

  • providers offering AI systems within the EU, wherever they are based
  • users of AI systems located in the EU
  • providers and users of AI systems outside the EU, if the output produced by the system is used in the EU.

Tough penalties

We have seen the EU's willingness to introduce eye-watering penalties for non-compliance with the GDPR. This proposal takes a similar approach. It includes possible fines of up to €30m or 6% of worldwide annual turnover (whichever is higher) for the most serious breaches.

A risk ladder

The plans divide AI systems into four categories.

  • Unacceptable risk

These are applications classed as particularly harmful. Examples given include social scoring by governments, exploitation of vulnerabilities of children and the use of subliminal techniques.

Live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes also fall into this category. This is subject to narrow exceptions, such as the targeted search for a victim of crime (for example a missing child), the response to an imminent terrorist attack, or the detection and identification of perpetrators of serious criminal offences.

The intention is to ban these altogether.

  • High risk

This category covers a broad range of possible applications for AI systems. These include the operation of critical infrastructure such as road traffic management and electricity supply, the assessment of educational attainment, recruitment, credit scoring, criminal profiling and the assessment of asylum claims.

The list is kept separate from the main legislation, with the expectation that it will be updated as technology develops.

Stringent obligations must be met for these systems.

  • Limited risk

This class of systems is subject to specific transparency obligations. An example is the use of chatbots – users should be made aware that they are speaking to a machine.

  • Minimal risk

Any AI system not caught by the other categories falls into the minimal risk category. These are subject to existing law, without additional obligations. The Commission expects most applications to fall into this category. Notably, providers of minimal risk systems are invited to apply requirements for trustworthy AI and adhere to voluntary codes.

Machinery regulation

Alongside the main AI proposals sits a new regulation for machinery, to replace the 2006 Machinery Directive. This aims to update and harmonise EU law on many kinds of consumer and professional machinery, taking account of the increasing role of software and connectivity in all product types.

International coordination

The press release accompanying the proposals highlights the different approaches being taken around the world, as we discuss in our report A Sense of Direction for AI. The EU, it says, will promote an international approach through bilateral and multilateral collaborations and alliances, and through organisations such as the OECD.
