Legalising autonomous vehicles: why it’s a drug-related question

Autonomous vehicles are already a reality but the law still lags way behind. We present a potential approach to the legislative environment that would facilitate innovation and encourage development of this technology to bring it a step closer to being part of our everyday life.

Autonomy in driving is here already. Arguably, road traffic mortality is akin to a disease that kills 1.2 million people every year. Autonomous vehicles may not eradicate the deaths completely, but their use is likely to lead to a massive reduction. We should therefore settle the legal framework as quickly as possible so as to maximise public safety while ensuring the technology has an adequate framework within which to develop.

Towards the end of the 19th century, someone took a horse and cart, released the horse and bolted an engine to the cart. The car was born. Some people said it wouldn’t catch on. Even the law was wary of the car, stipulating initially that a man had to walk in front of the horseless carriage, waving a red flag, just in case the car did anything dangerous.

Fast forward 100 years and engineers are now taking the driver out of that horseless carriage. The self-driving vehicle has been born. Some people are saying it won’t catch on. People have garages and driveways to make use of. People like driving too much to give it up. The livelihoods of taxi drivers, lorry drivers and car dealers depend on cars. Most of all, people don’t like the idea of handing control to a computer or a machine (although the same people seem happy using cruise control or ABS, or riding in lifts or on automated trains).

Will the driverless car catch on?

Many drivers view autonomous vehicles with suspicion. Many enjoy driving, think they are good at it and are reluctant to let a computer take over. Others are understandably concerned about encountering a driverless car on the road. “What will it do? How will it respond in an emergency?”

However, 90 per cent of accidents are caused by human error, errors that the self-driving vehicle would eradicate, thereby saving thousands of lives. In addition, significant time is spent in vehicles where the driving is not enjoyable, time that could be better spent on work, rest or play. Some drivers were sceptical of airbags and seatbelts when they were first introduced. Once the technology is sufficiently robust to demonstrate clearly the benefits of autonomous vehicle use, public acceptance levels will inevitably rise.

But enabling the technology for public use depends critically on getting the law right, especially on testing and approval.

Is the current law relevant?

The law is similarly taking a suspicious approach to the autonomous vehicle. The speed of Google’s autonomous car is limited to 25 km/h, in case it does something dangerous (remember the man with the red flag?). In addition, the Google car (which initially lacked any form of traditional driver control mechanism) must now be fitted with a steering wheel and brake pedal before venturing onto public roads under new California state government rules. Testing in California will require a driver “actively monitoring the vehicle’s operations and capable of taking over immediate physical control”. Apparently the driver must be able to control the vehicle, even though the computer processing power of one of Google’s vehicles means the vehicle can “think” many, many times faster than a human being.

The current law in many parts of the world derives from the Vienna Convention on Road Traffic of 1968 and its 1949 predecessor, the Geneva Convention. Several important countries are missing from the list of Vienna Convention members – notably the US, China and the UK. But the US and the UK did sign and ratify the 1949 Geneva version, and so this earlier version binds them.

Importantly, these treaties share the concept of a “driver” being in charge of vehicles (and animals) that use the road, and that driver must at all times be able to “control their vehicles (or guide their animals)”. (The references to “animals” demonstrate how antiquated the current law is.) So the law in countries that signed up to the 1949 Geneva version or the 1968 Vienna version will follow many of the same basic rules. In contrast, those countries, like China, that have not signed up to either version are not bound by them and could, at least in theory, follow their own path.

A Conventional approach?

The Geneva Convention is less restrictive. Although it too appears to prevent full autonomy, some interpret it as allowing the “driver” to be remote from the vehicle, and to be a company rather than a human. So countries that are parties only to the Geneva Convention may have more scope to develop their laws independently, especially if this leads to greater safety – the key objective underlying both conventions.

A March 2014 amendment to the Vienna Convention, if fully adopted, would permit the use of some automated systems, but would still require the driver to be able to override or switch off the system. This amendment is unhelpful because it fails to address whether the driver remains responsible, or in control, while the system is operative. Overriding or switching off the system is one thing; taking control in the split second of an emergency is quite another.

A recent proposed amendment by the Belgian and Swedish governments would see the definition of “driver” replaced with a reference to “any person who drives or a vehicle system which has the full control over the vehicle … and is in conformity with… international legal [standards]”. This clearly envisages a situation where a driver no longer has to be present, although the challenge of developing international legal standards is not addressed.

Clearly when faced with the autonomous vehicle that deliberately takes control away from the driver, the current law is outdated and needs a rethink. Just as the rule book had to be re-written when horses stopped pulling the cart, in our view a new legislative framework needs to be developed to deal with and facilitate the move towards the autonomous vehicle.

So what should the legal framework look like?

Current approaches to the regulation of autonomous driving seem to assume a seamless slide from testing autonomous cars with an expert driver at the ready to ever greater autonomy. We don’t see it that way. We think that the starting point for any legal and regulatory regime should be to distinguish between two distinct aspects of the control of vehicles.

First, navigational control.

Can the vehicle start, steer and stop and get to its destination? This is the control that is required for the majority of the time a vehicle is on the roads. In fact, unless an emergency arises, this is the only type of control a vehicle would ever require. Generally, it is the testing of navigational control systems that is currently under way, and planned, around the world. This can be seen from the fact that testing these vehicles on the open road requires an alert driver who will take control in emergency situations.

In our view, the testing and use of autonomous navigational control systems is straightforward and to be encouraged. There is no additional danger to the public provided that the rules relating to such testing continue to stipulate clearly that an alert driver must be able, at all times, to take control in emergency situations.

However, we think that the current focus on testing navigational control systems leaves a gaping hole in the testing environment. That hole is the unanswered question: “What happens when something goes wrong?”

At the moment in the testing environments, this gaping hole is filled by the action of an alert driver. But we think that using an alert driver in this way is an easy way out for legislators: it dodges the question and therefore generates uncertainty among developers of this technology. What happens when a navigational control system is sold to the general public? Will members of the public ensure they are alert at all times and ready to take control? Is it realistic to expect a normal driver to maintain concentration when the system is doing all the work? If the system facilitates the loss of concentration by the driver, is the manufacturer not partially responsible? If, on the other hand, the system tries to keep the driver alert, focussed on the road ahead and ready to react to an emergency, what is the point of the autonomous car?

We are of the view that the failure of legislators to adequately address the above question is hindering innovation and the advancement of autonomous vehicle technology. We think it is this gaping hole that leads automotive manufacturers and commentators in the autonomous vehicle space to state that a key impediment to widespread autonomous vehicle development and adoption is the lack of an adequate legal and regulatory framework.

In our view, if a government can fill this gap with an appropriate legal and regulatory regime then they will help lay the foundation for the advancement of technology and innovation in this area.

We think that filling this gap requires a focus on critical event control: the second distinct type of control.

Second, critical event control.

Critical event control is the ability to take decisive, evasive and/or precautionary action on the occurrence of an event that may lead to injury to people or animals, or damage to property. It is separate and distinct from navigational control because, on the occurrence of a critical event, getting to your destination (the focus of navigational control) becomes of secondary importance. At this point, the vehicle needs to take action so as to prevent or minimise injury or damage that may occur as a result of the critical event. This action is currently assumed to be the responsibility of the driver but in due course, for autonomous vehicle use to be widespread, it will need to become the responsibility of the system.
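
To make the distinction concrete, the split might look something like the sketch below. This is an illustrative outline only: the class and method names are our own invention and do not describe any manufacturer’s actual system.

```python
# Illustrative sketch of the two types of control described above.
# All names here are hypothetical; a real vehicle platform would look
# very different. The point is the separation of responsibilities and
# the priority given to critical event control.

class NavigationalControl:
    """Routine driving: start, steer, stop and follow the planned route."""
    def plan_next_action(self, route, sensors):
        return {"action": "follow_route", "route": route}


class CriticalEventControl:
    """Evasive or precautionary action when a hazard appears. Reaching the
    destination becomes secondary; the aim is to minimise injury or damage."""
    def plan_next_action(self, hazard, sensors):
        return {"action": "evade_or_brake", "hazard": hazard}


class VehicleSupervisor:
    """Gives critical event control priority over navigational control."""
    def __init__(self):
        self.navigation = NavigationalControl()
        self.critical = CriticalEventControl()

    def step(self, route, sensors):
        hazard = sensors.get("hazard")  # None during routine driving
        if hazard is not None:
            return self.critical.plan_next_action(hazard, sensors)
        return self.navigation.plan_next_action(route, sensors)
```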

Can the vehicle react appropriately when faced with an unexpected situation? Will it respond correctly to a child running into the road or a vehicle swerving into its path? Falling branches or patches of ice or oil on the road? A burst tyre? Will it make the correct decision when it is faced with those instantaneous choices that we all hope we will never have to make, such as whether to collide with another vehicle rather than run over a pet dog?

At the moment, governments and legislators are dodging the question of critical event control by requiring a driver to be in overall control of the vehicle at all times. The current testing regime fails to distinguish between navigational control and critical event control and so is, in our view, flawed.

We do not believe that critical event control can be adequately tested on the public roads. Where a skilled driver is constantly on the alert, no critical event will ever be left to the car to manage. Indeed that is the reason for the driver being there.

How should we test critical event control systems?

In our view, the golden key to unlocking the route to market for autonomous vehicles is drawing up appropriate rules and regulations for the testing and public use of autonomous critical event control systems. Our suggestion for providing this golden key is as follows. (The approach to testing autonomous navigational control systems can continue in its current form, as we noted above.)

What is needed most of all is a rethink of how critical event control can be tested and approved. Looking back to the medical misadventures of the twentieth century, we can see how disastrous experiments on an unsuspecting public led to the current phased procedure for testing a new drug.

We think that phased testing of critical event control systems in vehicles would avoid a similar scenario with autonomous vehicle technology. In the early stages, this could be similar to current vehicle crash testing and the tests for seatbelt and airbag systems. Testing would start in a virtual environment and, once the system was perfected there, move on to randomised artificial critical event scenarios. These scenarios could be generated under test conditions and the autonomous critical event control system tested against them. Systems that passed these tests satisfactorily could then be approved for the next phase of testing. Only after sufficient data had been collected in these early phases would vehicles be permitted to engage in controlled public use, and eventually be approved for marketing. Ongoing data collection once vehicles were in public use would be needed to record any adverse events, just as with marketed medicines.
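
By analogy with clinical trial phases, the pipeline described above might be modelled as a simple gated sequence. The sketch below is ours alone: the phase names and the pass threshold are illustrative assumptions, not drawn from any existing regulation.

```python
# Hypothetical sketch of a phased approval pipeline for critical event
# control systems. Phase names and the pass criterion are assumptions.

PHASES = [
    "virtual_testing",           # simulated critical events only
    "artificial_scenarios",      # randomised staged events under test conditions
    "controlled_public_use",     # limited deployment with data collection
    "marketing_approval",        # general sale
    "post_market_surveillance",  # ongoing adverse event reporting
]


def advance(current_phase, results, pass_threshold=0.95):
    """Move to the next phase only if the success rate in the current phase
    meets the (illustrative) threshold; otherwise stay and keep collecting data."""
    success_rate = results["handled_events"] / results["total_events"]
    index = PHASES.index(current_phase)
    if success_rate >= pass_threshold and index + 1 < len(PHASES):
        return PHASES[index + 1]
    return current_phase


# Example: a system that handled 980 of 1,000 simulated events moves on.
print(advance("virtual_testing", {"handled_events": 980, "total_events": 1000}))
# -> "artificial_scenarios"
```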

Statistical data suggests that 90 per cent of road traffic mortality is caused by driver error. If autonomous critical event control systems could be shown to make errors half as often as humans, this would save over 15,000 lives in the US alone each year. The required performance level does not therefore need to be 100 per cent: no human driver ever reaches that standard. But there would need to be public acceptance of the level required. Would performing better than 75 per cent of human drivers be good enough? 80 per cent?
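
The arithmetic behind that figure runs roughly as follows. The annual baseline of US road deaths is our assumption (in the region of 34,000 a year around the time of writing); the 90 per cent driver-error share and the halving of errors come from the text above.

```python
# Rough arithmetic behind the "over 15,000 lives" estimate.
us_road_deaths_per_year = 34_000   # assumed baseline, mid-2010s order of magnitude
driver_error_share = 0.90          # share of deaths caused by driver error
error_reduction = 0.50             # system makes errors half as often as humans

lives_saved = us_road_deaths_per_year * driver_error_share * error_reduction
print(f"Estimated lives saved per year: {lives_saved:,.0f}")  # ~15,300
```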

Although it will take considerable thought to establish a testing system of this type, the alternative seems far worse: gradually increasing levels of vehicle autonomy in ever more “real world” situations, until no driver is present at all, without ever having really tested critical event control.

A new system for the testing of critical event control systems would give developers the kind of certainty they need. This would set out the stages of testing, and the level of performance that would be expected at each stage.

Autonomous critical event control

Manufacturers are already taking steps towards autonomous critical event control. The introduction of Autonomous Emergency Braking (AEB) has led to significant reductions in forward collisions, and we understand that AEB is likely to become a mandatory fitting on all new vehicles registered from 2017. Manufacturers are also developing collision avoidance, lane keeping assistance and other autonomous safety systems. One question that will need to be answered is how these systems interact with one another.

Our suggested approach is to view autonomous cars as trains on virtual rails, but with the added benefit that, when it is safe to do so, they can swerve off their virtual rails to avoid hitting the obstruction ahead, provided that they can rejoin the rails without colliding with another obstruction in the process. This proposal does not need to involve an ethical choice (like the old theoretical dilemma known as “the trolley problem”). The only question for the vehicle is: “can it steer around the primary obstruction without colliding with another obstruction?” If yes, it takes evasive steering action; if no, it brakes as hard as it can and then impacts the primary obstruction (much as it is usually safest for a human driver to do). Obstructions that the vehicle would ordinarily drive over (crisp packets, leaves, small branches and so on) are ignored when analysing the potential evasive steering manoeuvre.
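
A minimal sketch of that decision rule is set out below. It is illustrative only: the helper functions stand in for perception and path-planning components that are assumed to exist elsewhere, and the object classifications are invented for the example.

```python
# Minimal sketch of the "virtual rails" decision rule described above.
# The helpers are placeholders for an assumed perception/path-planning stack.

def is_ignorable(obstruction):
    """Objects the vehicle would ordinarily drive over."""
    return obstruction.get("type") in {"litter", "leaves", "small_debris"}


def clear_evasive_path_exists(obstruction, surroundings):
    """Can the vehicle leave and rejoin its virtual rails without colliding
    with anything else? (Answered by the path planner in a real system.)"""
    return all(not other.get("blocks_evasive_path") for other in surroundings)


def critical_event_response(obstruction, surroundings):
    if is_ignorable(obstruction):
        return "continue"           # stay on the virtual rails
    if clear_evasive_path_exists(obstruction, surroundings):
        return "swerve_and_rejoin"  # leave the rails, then rejoin them
    return "brake_hard"             # brake and accept the impact


# Example: a blocked lane with no safe gap to swerve into.
print(critical_event_response(
    {"type": "vehicle"},
    [{"blocks_evasive_path": True}],
))  # -> "brake_hard"
```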

Next steps. A new international treaty?

Ideally the rules for autonomous vehicles would be agreed at international level, with a treaty moving on from Geneva and Vienna taking shape. But making any kind of progress in a sensible timeframe seems unlikely. We think that lawmakers should take a lead and set up a new framework. Others might then follow.
