Risks and liability: the road ahead for autonomous vehicles and cybersecurity

16 minute read
December 18, 2020

The arrival of connected and autonomous vehicles (CAVs) feels both imminent and far away. Even as we increasingly rely on automation in our daily lives, the idea of self-driving cars that can navigate the chaos of the roads still sounds as fanciful as an episode of The Jetsons or Knight Rider to most people. But there have been significant developments in autonomous vehicle technology, particularly in the last few years, and more clarity on the various degrees of "autonomy" a vehicle may achieve. After years of theoretical discussion, the picture of a world in which CAVs are the dominant presence on the roads is coming into focus.

Questions about liability when something goes wrong are less settled, especially where accidents are caused by neither faulty programming nor human error inside or outside of the car. In this article, we consider the current wisdom on this issue.

Understanding CAV Technology

Before wading into the myriad legal issues CAVs raise, it helps to begin with a baseline understanding of CAV technology. Happily, there is already a common framework for understanding and classifying CAV technology, at least at the level of industry and government policy. For instance, the U.S. National Highway Traffic Safety Administration (NHTSA) defines CAVs as vehicles for which "at least some aspects of a safety-critical control function (e.g., steering, acceleration, or braking) occur without direct driver input." NHTSA, Automated Vehicles for Safety. Similarly, Transport Canada (the Canadian federal ministry in charge of transportation policies and programs) defines an autonomous vehicle as one that "uses a combination of sensors, controllers and onboard computers, along with sophisticated software, allowing the vehicle to control at least some driving functions, instead of a human driver (for example, steering, braking and acceleration, and checking and monitoring the driving environment)." Transport Canada, Automated Connected Vehicles 101, July 18, 2019.

Current definitions distinguish automated vehicles from "connected" vehicles. Connected vehicles are defined as ones that "use different types of wireless communications technologies to communicate with their surroundings." Id. However, automation technology, as it continues to mature, will incorporate increasingly sophisticated "connected" solutions, including vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and eventually vehicle-to-everything (V2X) platforms that will allow the vehicle to obtain information including traffic and weather conditions, nearby construction, and accidents. Verizon Connect, Connected Vehicle Technology, February 2, 2020.

CAV definitions typically acknowledge that vehicles can exhibit various levels of autonomy, from low levels (such as automated braking if the sensors register a proximity alert) to high levels (the vehicle executes all functions associated with driving). There is industry consensus on the theoretical "tiers" of automation, set out as six levels defined by the Society of Automotive Engineers (SAE) and adopted by NHTSA, Transport Canada, and other government agencies:

  • Level Zero ("No [Driver] Automation")―the human driver carries out all functions associated with driving;
  • Level One ("Driver Assistance")―the human driver carries out most functions, while the vehicle assists with either steering or speed control in certain conditions;
  • Level Two ("Partial Automation")―the human driver carries out most functions, while the vehicle controls both steering and speed simultaneously in certain conditions;
  • Level Three ("Conditional Automation")―the vehicle's automated system monitors and responds to the driving environment under specific conditions, but the human driver must be prepared to override and take control when required;
  • Level Four ("High Automation")―the vehicle's automated system monitors and responds to the driving environment under specific conditions, and is designed not to require any human intervention once activated;
  • Level Five ("Full Automation")―the vehicle's automated system monitors and controls all driving tasks in all conditions.

See, e.g., NHTSA, Automated Vehicles for Safety; Transport Canada, Automated Connected Vehicles 101. Most publicly available vehicles are level two or below. However, jurisdictions are acknowledging and preparing for the inevitable arrival of level three and higher vehicles (a schematic rendering of the SAE tiers appears after the list below):

  • Level five vehicles were being tested (without passengers aboard) in parts of the U.S. as early as 2018, and the U.S. SELF DRIVE Act, H.R. 3388, passed by the House of Representatives in 2017 (before stalling and dying in the Senate), would have seen self-driving cars on U.S. roads by 2020 or 2021.
  • As of early 2020, twenty-nine American states had enacted legislation concerning CAVs, and eleven state governors had issued CAV-related executive orders. National Conference of State Legislatures (NCSL), Autonomous Vehicles / Self-Driving Vehicles Enacted Legislation, February 18, 2020.
  • Ontario (Canada) announced in 2019 that vehicles with level three platforms would be eligible to be driven on Ontario roads. Ministry of Transportation, Changes to Ontario's Automated Vehicle Pilot, January 22, 2019.
  • Japan passed laws in 2019 to allow level three cars on Japanese roads this past spring; Honda had planned to deliver level three vehicles by the summer of 2020, and Toyota (prior to the cancellation of the Olympic Games due to COVID-19) was scheduled to provide level four loop-line vehicles to transport athletes at Tokyo's 2020 Olympic and Paralympic Games. "Transformation of auto industry by AI: (1) self-driving technology and AI," IIoT Times, March 31, 2020.
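
For readers who think in code, the SAE taxonomy can be rendered schematically as follows. This is a minimal illustrative sketch of our own (the class and function names are ours, not part of any standard or statute); the human_fallback_required flag captures the level three inflection point discussed later in this article.

    from enum import IntEnum

    class SAELevel(IntEnum):
        """The six SAE automation tiers, as summarized above."""
        NO_AUTOMATION = 0           # human driver performs all driving functions
        DRIVER_ASSISTANCE = 1       # vehicle assists with steering or speed control
        PARTIAL_AUTOMATION = 2      # vehicle controls steering and speed together
        CONDITIONAL_AUTOMATION = 3  # vehicle drives, but human must stand ready
        HIGH_AUTOMATION = 4         # no human intervention needed once activated
        FULL_AUTOMATION = 5         # vehicle handles all tasks in all conditions

    def human_fallback_required(level: SAELevel) -> bool:
        # At level three and below, a human remains part of the safety case;
        # at levels four and five, the system is designed to operate without one.
        return level <= SAELevel.CONDITIONAL_AUTOMATION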

CAVs and Cyber Risk

The danger posed by CAVs has been illustrated by several high-profile technical failures. By way of brief example, there have been at least four documented automated vehicle-related deaths:

  • May 2016 - Joshua Brown was killed in a car crash while driving a CAV;
  • March 2018 - Wei Huang was killed in a car crash while driving a CAV;
  • March 2018 - Elaine Herzberg, a pedestrian, was struck and killed by an autonomous vehicle being tested by a rideshare service;
  • March 2019 - Jeremy Banner was killed in a car crash involving a CAV inappropriately operating on autopilot.

None of these deaths was caused by a malicious third party, but the risk of a fatal cyberattack on a CAV was demonstrated in a controlled environment when a well-publicized hack of a Jeep Cherokee in 2015 by two researchers left a reporter stranded on a highway overpass, unable to control the vehicle. Andy Greenberg, Hackers Remotely Kill a Jeep on the Highway—With Me in It, WIRED, July 21, 2015.

Researchers have identified a number of cyber risks associated with CAVs, including:

  • Connectivity Risks―Insecure network connections allow hackers access to CAV computers. Hardening these points is complicated because CAV software contains millions of lines of code, much of it based on legacy software and open source code. This makes vulnerabilities in CAV programs hard to find.
  • Automation Risks―Vehicle sensors are possible attack vectors. Lidar sensor technology (used in most CAVs) is vulnerable to spoofing, and GPS can be jammed. Entire fleets of CAVs could be affected by malicious code embedded in mapping data or machine learning systems.

UK Autodrive and Gowling WLG, Connected and Autonomous Vehicles: A Hacker's Delight?

Others have noted that even mostly unconnected vehicles are vulnerable to attacks through seemingly innocuous vectors such as periodic software upgrades and aftermarket devices. Peripherals plugged in via USB can be used to introduce viruses, and vehicle owners who are tech "enthusiasts" can be expected to look for ways to jailbreak their cars in misguided and potentially dangerous attempts to gain more control over their own CAVs. Rand Corporation, Autonomous Vehicle Technology: A Guide for Policymakers (2016).
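
The software update vector deserves particular attention, because it admits a concrete mitigation. Below is a minimal sketch, in Python, of the kind of cryptographic gate an update channel can impose before new code is installed: the vehicle refuses any firmware image that was not signed by a key it trusts. The key value, function name, and overall design here are hypothetical illustrations of the technique, not a description of any actual vehicle platform.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # A signing key the vehicle would ship with (hypothetical value; this is a
    # published Ed25519 test-vector key, not a real manufacturer's key).
    MANUFACTURER_PUBLIC_KEY = bytes.fromhex(
        "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
    )

    def firmware_is_authentic(image: bytes, signature: bytes) -> bool:
        """Return True only if `signature` over `image` matches the trusted key."""
        key = Ed25519PublicKey.from_public_bytes(MANUFACTURER_PUBLIC_KEY)
        try:
            key.verify(signature, image)  # raises InvalidSignature on any mismatch
            return True
        except InvalidSignature:
            return False

A check of this kind blunts the upgrade and USB vectors described above, but only if the trusted key itself is stored in tamper-resistant hardware; otherwise the gate can simply be re-keyed by the attacker.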

Liability for Cyberattacks on CAVs

As is clear from the SAE taxonomy, the vehicle assumes more control as the levels climb, and level three is pivotal: it marks the inflection point at which the vehicle is expected to perform most driving functions, with the human driver serving as a redundancy. The issue of who (or what) is liable when things go wrong has been the subject of much debate but little in the way of specific guidance. More confounding still is the question of who or what is liable when a third party intentionally causes the vehicle to malfunction or deviate from its programming in a way that creates increased risk for the occupants and those around them.

It is worth noting that discussions of liability in this context, as in most contexts where cyberattacks occur, are mostly silent on the liability of the most obviously responsible culprit: the attacker. Obviously, the hacker who causes an accident by bricking a CAV in traffic (in a ransomware attack, for example), or kills a driver by causing his GPS to fail and direct his car into the sea, or who kills a pedestrian by causing a CAV's brakes to fail at a busy intersection, would bear primary legal responsibility in any legal regime, if you could catch her or him. In most cyberattacks, the culprit is never identified (much less found), or is traced to a foreign country with no prospect of co-operation by local law enforcement and no extradition treaty. And what of attacks launched by state actors to sow economic chaos and public uncertainty? The national political response to foreign states using their offensive cyber capabilities to steal money from Western banks, steal technology from Western corporations, and interfere with democratic elections has been one of impotence, if not complete acquiescence―at least in public. Will the response be any different when state actors use cyberattacks on CAVs to assassinate inconvenient public figures?

In the absence of reliable and effective recourse against those behind cyberattacks, responsibility will be parcelled out among CAV manufacturers, users, and to some extent, victims.

A 2016 report by the Rand Corporation helpfully frames the debate by setting out and exploring the three basic theories of tort liability for drivers: (1) traditional negligence, under which the driver is liable for unreasonably failing to prevent a risk resulting in harm; (2) no-fault liability, under which accident victims are compensated by their own insurance and may not sue drivers unless their injuries reach a certain threshold; and (3) strict liability, according to which vehicle operators are entirely responsible for abnormally dangerous or "ultrahazardous" operation of a vehicle. Rand Corporation, Autonomous Vehicle Technology: A Guide for Policymakers (2016). Some have argued that the operation of a CAV may in itself constitute an "ultrahazardous" activity for which the driver should be liable regardless of whether he or she was otherwise negligent. It would seem more intuitive, however, that the driver of a level five CAV should bear less legal responsibility for an accident than the driver of a level zero vehicle, and that the "reduction in fault may be roughly proportional to the extent to which the particular technology apparently controls the car." As the driver's control decreases, so does his or her liability, while the manufacturer's liability increases.
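
The proportionality intuition can be stated more precisely. As a stylized formalization of our own (not taken from the Rand report), let $\alpha \in [0,1]$ denote the share of effective driving control exercised by the automated system at the time of the accident, and let $F$ be the total fault attributable to the operation of the vehicle. The intuition then amounts to, roughly:

    \text{driver's share} \approx (1-\alpha)\,F, \qquad \text{manufacturer's share} \approx \alpha\,F

At level zero ($\alpha = 0$) the analysis is ordinary driver negligence; at level five ($\alpha = 1$) it collapses into a product liability claim against the manufacturer, with the contested middle ground at levels two and three.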

Some academics have proposed (in order to lend predictability to the question of liability for CAV-related accidents generally, not just those caused by malicious third parties) strict liability regimes with administrative penalties payable to the state; general insurance schemes maintained by manufacturers to compensate for bodily harm; and even treating autonomous vehicles as equivalent to pets, applying strict liability rules similar to those governing animal owners. Alexander B. Lemann, Autonomous Vehicles, Technological Progress, and the Scope Problem in Products Liability, Journal of Tort Law (October 2019).

A recent report by the European Union's Expert Group on Liability and New Technologies (which, as it happens, uses CAVs as a primary example of AI) puts forth a thorough framework for determining liability. Liability for Artificial Intelligence and Other Emerging Technologies (2019). The report is worth reading for its in-depth discussion of the challenges that CAV and similar technologies pose to existing tort law regimes, including the difficulties in applying tort law concepts such as causation, burden of proof, vicarious and product liability, and contributory negligence. It is worth quoting a few of the changes to European tort law it proposes to accommodate AI-related tort claims:

[9] Strict liability is an appropriate response to the risks posed by emerging digital technologies, if, for example, they are operated in non-private environments and may typically cause significant harm.

[10] Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (operator).

[14] The producer should be strictly liable for defects in emerging digital technologies even if said defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology. A development risk defence should not apply.

[16] Operators of emerging digital technologies should have to comply with an adapted range of duties of care, including with regard to (a) choosing the right system for the right task and skills; (b) monitoring the system; and (c) maintaining the system.

[17] Producers, whether or not they incidentally also act as operators…should have to: (a) design, describe and market products in a way effectively enabling operators to comply with the duties under [16]; and (b) adequately monitor the product after putting it into circulation.

[24] Where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, including rules on cybersecurity, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.

In the absence of legislative changes to existing tort and/or product liability regimes, it will be left to the courts to parse duties and standards of care, the enforceability of contractual limitations on liability, and the extent to which conventional or even CAV-specific insurance will cover various kinds of harm.

Liability for CAV-related accidents, including those caused by cyberattacks, will likely depend on fact-specific findings. And all current examples rely on, at most, level two and three platforms. But as vehicles reach level four and higher, where driver intervention is no longer required or even expected, these cases may become more analogous to product liability tort claims than to traditional motor vehicle accident claims. Notably, the insurance industry has been preparing for such contingencies. For example, a 2018 report of the Insurance Bureau of Canada, Auto Insurance for Automated Vehicles: Preparing for the Future of Mobility, made the following recommendations:

  1. Create policies that cover both driver- and automated technology-related negligence;
  2. Facilitate data-sharing among insurers, manufacturers, and vehicle owners to help determine the causes of a collision; and
  3. Update federal vehicle safety standards to incorporate new cybersecurity and technology standards.

The report further notes that responsibility for collisions will likely shift to vehicle manufacturers and technology providers as vehicles become more automated, and that hacking and other cybercrimes will emerge as new and significant risks associated with vehicle use.

The Road Ahead

As is so often the case, public policy and legal theory are lagging behind technological innovation when it comes to liability for CAV-related accidents. This is even more the case for accidents caused by cyberattacks and lapses in cybersecurity. One hopes that the recent technical setbacks in getting level three through level five vehicles onto real roads will give policymakers and lawmakers a chance to catch up. If they don't, it will be left to the courts to understand the technology and to apply (and adapt) the law accordingly. Counsel's role in assisting the bench in understanding these issues will be crucial.
