
Moral algorithm - how do we decide what driverless cars can decide?

05 December 2016

Driverless vehicles will need to be programmed with a clear and agreed set of rules for decision-making, according to new research published on Tuesday by international law firm Gowling WLG.

In its report on "The Moral Algorithm", Gowling WLG finds that concerns over the so-called "trolley problem" - where a vehicle facing an unavoidable collision must choose which of several individuals to hit - may have been exaggerated, with most of the experts interviewed agreeing that autonomous vehicles (AVs) will never be programmed to make such distinctions.

Nevertheless, the paper argues that harmonised safety regulations will be needed for other decisions, such as when it is permissible for a car to break the rules of the road, or when determining the "assertiveness" of a vehicle when it interacts with other road-users.

The report concludes with eight recommendations, including the creation of an independent regulator to balance the legal, safety and commercial issues surrounding autonomous vehicles; the development of a policy on how the moral algorithm should operate in major safety situations; and a programme of public education and consultation.

Commenting on the outcome of the research, Stuart Young, a partner at Gowling WLG, said:

"It is important not to equate regulation with burden. The risk in having too much regulation is well known. But the risk in having too little regulation is not as frequently discussed. I think we need regulation for this emerging technology to provide reassurance to the public on safety and reassurance to commercial participants on what the regulatory framework looks like. Good regulation will achieve a balance of safety, commerce and legality."

The "Moral Algorithm" study took the form of interviews with industry specialists and representatives from the UK Autodrive consortium during September and October 2016 as well as desktop research and analysis of publicly-available information.

Speaking about the dilemmas that could be posed once cars are required to make complex decisions, Tim Armitage, Arup's UK Autodrive Project Director, said:

"As with any complex new technology, AVs cannot be specifically programmed to respond to every possible scenario. This simply isn't practical when a machine is expected to interact with humans, in a complex environment, on a day-to-day basis. AVs will drive to the speed limits and will not be distracted from the task of safe driving; they will make practical decisions based on their programming, but they cannot be expected to make moral decisions around which society provides no agreed guidance. To allow AVs to demonstrate their capacity for practical decision-making in complex environments, and to begin to establish public trust through contact, the first step is allowing testing in relatively simple and well-defined environments. Of course, regulation will need to keep up, so in echoing Stuart's sentiments, it is vital the legal industry act now in order to help create a realistic and viable route to market for AVs."


Media Contact

Dawn Beddard

Corporate communications manager
