Uber's reaction to a serious accident in Arizona was, entirely appropriately, to temporarily ground its entire self-driving fleet in Arizona, Pittsburgh and San Francisco.
Not grounding its fleet, regardless of the cause, would have been a PR disaster at a time when public acceptance of autonomous vehicles is one of the principal hurdles to their widespread adoption.
Reports of the incident indicate that the crash resulted from a human driver's failure to yield to one of Uber's self-driving Volvo XC90 SUVs. It seems that the vehicle was in self-driving mode, with two test drivers seated in the front of the vehicle. There were no passengers in the Uber car, and no serious injuries resulted. Since then, the fleet has resumed operation. Local police say that the Uber vehicle was not at fault.
The accident has attracted global media attention. Once again, the tendency has been to question whether this incident should derail the self-driving project. We say it should not, but it does demonstrate why a clear framework of rules for self-driving vehicles to follow in emergency situations is needed.
Accidents will happen
Accidents involving autonomous vehicles will happen. Although they offer enhanced safety, they do not promise an accident-free future. A human driver will run a red light in front of an autonomous vehicle, a lorry will shed its load as an autonomous vehicle passes, and so on. But ultimately, the enhanced safety of autonomous vehicles will mean that, in the vast majority of cases, the impact of any accident will be less severe. The sooner we accept that some collisions will still happen, the quicker we can realise the benefits of the technology. The number of serious incidents involving driverless cars is, and will remain, small, while, of course, human drivers cause collisions every day.
This is an opportunity to gather data
The Uber vehicle will have been collecting a variety of data immediately prior to the accident. This data will be hugely valuable in explaining why the incident happened and whether anything might have been done to prevent it. Making detailed data available to the research community would be a valuable contribution to the promotion of safety in the driverless field. It may even show that the vehicle took mitigating action to minimise the damage caused, in a way that a human driver might not have done. California requires certain information about accidents to be reported under its testing rules. This means that at least high-level data on incidents involving Google/Waymo's vehicles (which are tested in California) are accessible to researchers. Arizona's regime does not require the same level of public disclosure, although a police report is expected soon. While businesses are often understandably reluctant to share information, anything that Uber does make public will contribute to the wider safety debate, and so we would encourage Uber to be as open as it can be.
Autonomous Critical Event Control standards would help here
We have argued in favour of setting a standard applicable to all highly autonomous vehicles. We have called this Autonomous Critical Event Control (ACEC), as it is the system which takes action when presented with an emergency situation on the road, a critical event scenario. We feel that the accident involving the Uber vehicle illustrates the need to define ACEC through the regulatory regime. If the producer or operator of an autonomous vehicle could demonstrate that it was behaving in accordance with set standards in response to an emergency event, there would be much greater clarity around responsibility for the accident, and greater acceptance that the reaction of the vehicle was as good as, or better than, the reaction of a human driver would have been.
We have put forward our solution to the action that an autonomous vehicle needs to take when faced with a critical event scenario. (You can read our proposals in the conclusion to our article titled “Why we need to get used to the idea that self-driving cars will sometimes crash”.) We invite debate in this area with a view to setting regulations in the near future. Trials of vehicles are taking place in locations around the world, with piecemeal regulation and a “try it and see” approach. We feel that clear standards that a self-driving vehicle must meet in an emergency would greatly facilitate development of this technology.
Our content explained
Every piece of content we create is correct on the date it’s published but please don’t rely on it as legal advice. If you’d like to speak to us about your own legal requirements, please contact one of our expert lawyers.