Daniel Cole
Partner
Head, Toronto Intellectual Property Department
Driverless vehicles continue to raise difficult legal and moral questions around safety. What are the regulatory implications for this fast-moving industry?
Autonomous vehicles (AVs) that require no input from human occupants are currently being tested on public roads. Experimental prototypes, still closely supervised by people, are already mixing with ordinary traffic in parts of the US, Canada, the UK, Sweden, Germany and Japan.
Technology giant Google alone has clocked up more than 2.2 million miles of autonomous testing[1] since it began development work in 2009, and has now launched a new company, Waymo, to commercialise the technology. Other participants - including manufacturers like Volvo, parts suppliers such as Bosch and service providers like Uber - are pursuing their own ambitious development projects.
The arrival of autonomous vehicles as either purchasable products or hireable services now seems inevitable. However, in addition to the obvious technological challenges, driverless vehicles also raise a host of legal and moral questions. Our roads, our laws and our expectations have all been shaped by more than a century of vehicles controlled by human beings, with all their foibles and failings. Adding robotic cars, buses and trucks to the mix is not going to be trivial.
"There are certain areas of the law that are well equipped to deal with new technology, such as the patent system," notes Daniel Cole, an intellectual property partner at Gowling WLG. "But the archaic language of traffic laws that talk about a vehicle being under a person's control - that's all going to have to be completely revamped. And if you've ever watched anything move through a legislature, you'll know that's not happening in a month. That's years and years of work."
Legal questions run from relatively minor issues, such as who pays for speeding fines, to deep moral questions about putting one life ahead of another in an accident.
One potentially tricky area is how to deal with rules that sometimes need to be broken. "Imagine an AV sitting at a red traffic light while an ambulance is trying to get through, refusing to move because it's been told it can't run through a red light. Meanwhile a patient is dying," says Cole. "There has to be a way to say it's OK to have that technical violation in these circumstances. But that's tricky because there are endless possibilities."
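A minimal sketch in Python makes the difficulty concrete. Everything below is a hypothetical assumption used purely for illustration (the situation fields, the condition names, the logic); no real AV system reduces to a checklist like this, which is precisely the point:

    from dataclasses import dataclass

    @dataclass
    class Situation:
        light_is_red: bool
        emergency_vehicle_behind: bool
        intersection_clear: bool

    def may_proceed(s: Situation) -> bool:
        """Decide whether the vehicle may enter the intersection."""
        if not s.light_is_red:
            return True
        # Exception: pull forward for an emergency vehicle, but only when safe.
        if s.emergency_vehicle_behind and s.intersection_clear:
            return True
        # Every further exception (police direction, faulty signals, a funeral
        # procession...) must be anticipated and written in ahead of time:
        # the "endless possibilities" problem.
        return False

    # The technical violation is permitted only in the encoded circumstances.
    print(may_proceed(Situation(True, True, True)))   # True: safe to yield
    print(may_proceed(Situation(True, True, False)))  # False: not safe to move

Each new edge case added to logic like this multiplies the combinations that regulators and manufacturers would have to anticipate and agree on in advance.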
Liability when things go wrong is another area that is expected to create challenges. "There's going to be a shift in liability from the driver to the manufacturer or the people who market these products," observes André Rivest, Gowling WLG partner and head of its automotive group in Canada. Especially in the early days of adoption, when AVs and human drivers interact, it may be difficult to establish exactly who is liable for what, he cautions.
Putting members of the public in driverless vehicles will also require crossing a Rubicon that manufacturers - and their lawyers and insurers - may find unnerving. "If you look at today's features, like lane departure warning, they all come with disclaimers warning that they don't replace the driver's responsibility," notes Cole. "At some point we're going to flip that on its head and say that manufacturers are in control of the car. That's a huge mind-shift."
Rivest agrees. "The transition from lower level autonomy to full autonomy is where it's really delicate, and that's what we are beginning to address," he notes. "How should an AV react if a small child runs out after a ball and the car can't stop in time, but if it veers to the side it will run down an elderly couple? Who will make these decisions?"
People are fallible, and human error accounts for an estimated 94% of crashes, according to figures published in the US. To limit the danger, we expect drivers to exercise good judgement and behave as responsibly as possible. Highway patrols, traffic cameras, fines and the threat of imprisonment back up that requirement, but we also acknowledge that human skill is variable. We simply live with the risk that some drivers will make fatal mistakes behind the wheel.
Yet we tend to be less willing to accept risks, even on a much smaller scale, when they are posed by machines. We expect dangers in equipment to be spotted and removed, preferably before anyone is hurt.
At the same time, the knowledge that computerised systems can react more quickly than human drivers in an emergency has led to hopes that AVs might dramatically reduce the overall frequency of accidents. But this potential has also fuelled speculation that driverless vehicles will need a "moral algorithm" to determine how they should react when human life is at stake: an AV may, for example, need to decide whether to protect its occupants at the expense of bystanders.
"When cars crash today, people act instinctively - they don't make conscious decisions," points out Stuart Young, head of automotive at Gowling WLG in the UK. "But when you program a car, you are sitting at a computer writing the code, and you have every opportunity to make a calculated decision about what the car should do in given circumstances. I think there will be a moral judgement on someone who's been able to contemplate and come to a conclusion."
However, the situation may not be so clear cut. Autonomous vehicles are likely to rely on complex software techniques, such as neural networks or genetic algorithms, which can acquire expertise without human reasoning. For example, a software system might "learn" to recognise a cyclist by being shown many thousands of example images, rather than by being given a formal definition composed by a programmer. Internally, the software will build up a complex mathematical model that allows it to recognise new images of cyclists successfully, but there will be no step-by-step reasoning that can be unravelled and understood.
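A minimal sketch of that learning process, in Python with the scikit-learn library, illustrates the point. The data here is a synthetic stand-in assumed for illustration (random numbers in place of extracted image features); real perception systems train far deeper networks on vast sets of camera images:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for image features: 64 numbers per "image".
    cyclists = rng.normal(0.5, 1.0, size=(500, 64))
    others = rng.normal(-0.5, 1.0, size=(500, 64))
    X = np.vstack([cyclists, others])
    y = np.array([1] * 500 + [0] * 500)  # 1 = cyclist, 0 = not a cyclist

    # The network is given labelled examples, never a definition of "cyclist".
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X, y)

    # What it has "learned" is nothing but matrices of numeric weights.
    for i, weights in enumerate(model.coefs_):
        print(f"layer {i} weight matrix: {weights.shape}")

    new_image = rng.normal(0.5, 1.0, size=(1, 64))
    print("prediction:", model.predict(new_image))  # expected: [1], "cyclist"

The trained model classifies new examples successfully, yet inspecting it yields only arrays of numbers; there is no rule along the lines of "two wheels plus a rider" to point to, unravel or audit.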
Similar machine learning techniques are likely to be employed extensively within AV development, ultimately dictating how the vehicle reacts to unfolding circumstances. A software model built up over millions of miles of testing will shape how the AV interprets and applies any consciously coded rules.
What results is a mire of moral questions covering not just which decisions ought to be made, but how they are reached. Even the choice of programming technique may become a subject of debate.
"Regulation needs to get on top of this," says Young. "It needs to get ahead of it. Because at the moment there's nothing giving a clear steer as to who's going to take responsibility for what, or whether all decisions are going to be left to manufacturers."
That path, as Cole notes, means waiting for things to go wrong to establish legal precedents that might provide a measure of clarity.
Gowling WLG is calling for an alternative approach, one that recognises the need for proactive action by governments around the world. Pre-emptive regulation of autonomous vehicles need not hold back their development, argues Young. Instead, clarity over expectations and responsibilities would likely resolve some of the hard-to-quantify business risks that might otherwise stand as stumbling blocks.
"What we've been looking at is asking government to set up an independent agency to regulate the technology," says Young. "In the UK, we have the HFEA (Human Fertilisation and Embryology Authority), which may seem like an odd analogy, but it has been successful. There's a lot of ethics involved in embryology and development, but it was set up as an independent government agency with the right representation. It's broadly seen as having done a very good job of allowing development whilst tracking and reflecting ethical concerns in society. And that's what we need for the moral aspects of the algorithms that are going to be developed."
It is also vital to recognise that the vehicle industry is a global one, where international agreements make more sense than local regulations. Given that vehicles can drive across national borders, useful models for regulation may also be found in the air transport industry, where international pacts govern corporate behaviour and limit liability for carriers.
Vehicles are already more heavily regulated than other consumer products, with type approval to ensure compliance with national and international regulations, and compulsory safety recalls to correct serious errors, so any move to regulate the programming of AVs would not be without precedent.
Today, most countries with a significant automotive manufacturing base have started to grapple with the issues raised by AVs, with varying levels of ambition. In the UK, for example, the Department for Transport recently carried out a consultation[2] to examine what changes might be needed to insurance, type approval regulations and the national Highway Code.
"The most comprehensive exercise I've seen is in the US," says Young. "The National Highway Traffic Safety Administration (NHTSA) has done a pretty thorough job with the Federal Automated Vehicles Policy[3], issued in September. It's a root and branch review of what needs to be done to create the right legal framework in the US (including a model state-by-state code), what should be retained at a federal level, and what needs to be set down in terms of vehicle safety. Of course, there have been critics of the policy, particularly around the data sharing aspects, and with the new Trump administration there is some doubt over whether it will get any further Federal support."
As technology advances, society is likely to recognise that AVs - even those without a verifiable moral algorithm - can save lives simply by reacting more swiftly, more decisively and more accurately to sudden unforeseen danger. The question that then arises is: how much safer than human drivers do AVs need to become before we are morally obliged to adopt them?
Footnotes
[1] https://waymo.com/
[2] https://www.gov.uk/government/consultations/advanced-driver-assistance-systems-and-automated-vehicle-technologies-supporting-their-use-in-the-uk
[3] http://www.nhtsa.gov/nhtsa/av/av-policy.html
THIS DOES NOT CONSTITUTE LEGAL ADVICE. The information presented on this website, in whatever form, is provided for information purposes only. It does not constitute legal advice and should not be interpreted as such. No user should take, or refrain from taking, decisions based solely on this information, nor disregard the advice of a legal professional or delay consulting one on the basis of something read on this website. Gowling WLG professionals will be pleased to discuss possible options regarding specific legal matters with users.