This week the EU Commission published its legislative proposals for regulating artificial intelligence.
The planned regulation is broadly cast, with a clear intention to influence the development of AI technology globally. It will affect both public and private sector entities, wherever they are based, if the AI system is put on the EU market, or its use affects individuals in the EU. Both developers and users of AI systems are covered.
The rules will apply, with some exceptions, to
- providers offering AI systems within the EU, wherever they are based
- users of AI systems located in the EU
- providers and users of AI systems outside the EU, if the output produced by the system is used in the EU.
We have seen the EU's willingness to introduce eye-watering penalties for non-compliance with the GDPR. This proposal takes a similar approach. It includes possible fines of up to €30m or 6% of worldwide annual turnover (whichever is higher) for the most serious breaches.
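To put that cap in concrete terms, here is a minimal sketch of how the ceiling would be computed for a given company. The helper name and structure are ours, not the regulation's:

```python
def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on fines for the most serious breaches
    under the proposal: EUR 30m or 6% of worldwide annual turnover,
    whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 1bn turnover: 6% (EUR 60m) exceeds the EUR 30m floor.
print(max_fine(1_000_000_000))  # 60000000.0
# A company with EUR 100m turnover: 6% is only EUR 6m, so the floor applies.
print(max_fine(100_000_000))    # 30000000
```

As with the GDPR, the turnover-based limb means the ceiling scales with the size of the business rather than being a fixed sum.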
A risk ladder
The plans divide AI systems into four categories.
Unacceptable risk

These are applications classed as particularly harmful. Examples given include social scoring by governments, the exploitation of vulnerabilities of children and the use of subliminal techniques.
Live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes are included in this category. This is subject to narrow exceptions, such as the targeted search for a victim of crime (for example a missing child), the response to an imminent terror attack, or the detection and identification of perpetrators of serious crimes.
The intention is to ban these altogether.
High risk

This category covers a broad range of possible applications for AI systems, including the operation of critical infrastructure (such as road traffic management and electricity supply), the assessment of educational attainment, recruitment, credit scoring, criminal profiling and the assessment of asylum claims.
The list is kept separate from the main legislation, with the expectation that it will be updated as technology develops.
Stringent obligations must be met for these systems.
Limited risk

This class of systems will be subject to specific transparency obligations. An example is the use of chatbots: users should be made aware that they are speaking to a machine.
Minimal risk

Any AI system not caught by the other categories falls into the minimal risk category. These systems are subject to existing law, without additional obligations. The Commission expects most applications to fall into this category. Notably, providers of minimal risk systems are invited to apply the requirements for trustworthy AI and to adhere to voluntary codes of conduct.
Alongside the main AI proposals sits a new regulation for machinery, to replace the 2006 Machinery Directive. This aims to update and harmonise EU law on many kinds of consumer and professional machinery, taking account of the increasing role of software and connectivity in all product types.
The press release accompanying the proposals highlights the different approaches being taken around the world, as we discuss in our report A Sense of Direction for AI. The EU, it says, will promote an international approach through bilateral and multilateral collaborations and alliances, and through organisations such as the OECD.