No moral judgements were involved when a self-driving vehicle killed a pedestrian in Tempe, Arizona, earlier this year. In future, cars will have to choose for themselves who lives and dies when accidents are unavoidable. HANDOUT/Reuters

A self-driving car is speeding down a busy road when a group of pedestrians suddenly appears in its path. The car has a split second to decide between two horrific options. Should it plow down the unwitting pedestrians or swerve into a concrete barrier, with the likelihood that the occupants of the car will be killed?

What if the pedestrian is a woman with a stroller? Does that change the moral calculus? Or what if the occupants of the car are mostly young children while the pedestrian is a single jaywalker breaking the law? Or an elderly man, possibly disoriented?

In real traffic situations, human drivers usually have no time to work out which actions best accord with carefully reasoned moral principles. Instead, they react instinctively – usually in the direction of self-preservation.

But in the age of autonomous vehicles, which can assess and react to unavoidable accidents far more quickly than humans, the moral decision-making capacity of machines will no longer be a topic for philosophical debate, but a matter of life and death.

Those cars “will have algorithms that have to be explicitly programmed well in advance of the situation,” said Azim Shariff, a moral psychologist at the University of British Columbia. “They’ll have the luxury of deliberation, and as a result they’ll have the responsibility of deliberation.”
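To make that idea concrete, here is a rough, hypothetical sketch of what an "explicitly programmed" crash rule might look like. The scenario fields, fatality estimates and minimize-harm rule are invented for illustration only; they are not drawn from any manufacturer's actual software.

```python
# Hypothetical sketch of a pre-programmed crash policy.
# All fields and numbers are illustrative assumptions, not real vehicle code.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str          # e.g. "stay on course" or "swerve into barrier"
    expected_fatalities: int  # predicted deaths if this action is taken


def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the action whose predicted harm is lowest.

    The moral trade-off is fixed here, in advance, by whoever wrote this
    function -- the "deliberation" happens long before the split-second
    event itself.
    """
    return min(outcomes, key=lambda o: o.expected_fatalities)


if __name__ == "__main__":
    options = [
        Outcome("stay on course toward pedestrians", expected_fatalities=3),
        Outcome("swerve into concrete barrier", expected_fatalities=1),
    ]
    print(choose_action(options).description)
```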

This may be fine if people universally agree on how computers should be programmed to make moral judgments. But a massive online experiment set up by a team that includes Dr. Shariff suggests otherwise.

When it comes to prioritizing the safety of pedestrians versus vehicle occupants, for example, the experiment found that our moral preferences vary significantly and are influenced by a range of factors including the age, gender, responsibility and social status of the potential victims.

Moreover, those differences appear to cluster by culture and geography. That means the manufacturers of autonomous vehicles and their government regulators cannot assume that the moral imperatives programmed into self-driving cars will be acceptable to customers in other parts of the world.

Edmond Awad, a postdoctoral researcher at the MIT Media Lab in Cambridge, Mass., and lead author of the study, said the results are important because they speak to the question of whose moral predispositions will be favoured when machines are required to make life-and-death judgments.

“We want to have a world where everybody feels like they’re getting fair treatment and that there aren’t people exclusively creating products without paying attention to who those products are disadvantaging,” he said.

The results, published Wednesday in the journal Nature, are based on a game called the Moral Machine, which has been played some 40 million times by participants from around the globe since it went online last year.

In the game, participants are asked to decide what should be done in a randomized series of car crash scenarios in which the factors at play include such options as sparing more versus fewer lives, sparing women versus men and sparing fit versus less fit individuals.
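As a rough illustration of how such randomized dilemmas can be built and participants' answers tallied, the sketch below assembles toy scenarios from the factors the article mentions (number of lives, gender, fitness). The attribute lists and tallying scheme are assumptions for illustration, not the Moral Machine's actual implementation.

```python
# Hypothetical sketch of a Moral Machine-style dilemma generator.
# Attribute values and the tallying scheme are illustrative assumptions.
import random
from collections import Counter

ATTRIBUTES = {
    "group_size": [1, 2, 3, 4],      # sparing more vs. fewer lives
    "gender": ["female", "male"],    # sparing women vs. men
    "fitness": ["fit", "less fit"],  # sparing fit vs. less fit individuals
}


def random_party() -> dict:
    """Draw one side of a dilemma (e.g. pedestrians or occupants) at random."""
    return {name: random.choice(values) for name, values in ATTRIBUTES.items()}


def run_session(n_scenarios: int, choose) -> Counter:
    """Present n randomized dilemmas and count which attributes were spared."""
    spared = Counter()
    for _ in range(n_scenarios):
        side_a, side_b = random_party(), random_party()
        pick = choose(side_a, side_b)  # the participant picks a side to spare
        spared.update(f"{k}={v}" for k, v in pick.items())
    return spared


if __name__ == "__main__":
    # A stand-in "participant" who always spares the larger group.
    utilitarian = lambda a, b: a if a["group_size"] >= b["group_size"] else b
    print(run_session(1000, utilitarian).most_common(5))
```

Aggregating millions of such choices across countries is what lets the researchers see the regional clusters described below.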

Over all, participants make choices that try to minimize harm, but differences arise based on precisely what that means to those who must choose.

The results suggest that North Americans and Europeans tend to favour choices that save the most individuals, with a greater preference toward inaction – that is, letting the vehicle stay on course rather than swerving to kill occupants. In Asian and Middle Eastern countries, the preferences skew toward saving pedestrians and those who are behaving most lawfully, while Caribbean and southern countries tend to skew toward sparing young, female and higher-status individuals.

Dr. Shariff, who recently moved from California to Vancouver to take up a Canada 150 research chair, mused that in future, such a change of address may require adjustments to one's self-driving vehicle to satisfy local moral codes. But it also raises the prospect of people illegally hacking vehicles to insert moral programming that is more likely to keep them alive.

“There are a bunch of decisions that are going to have to be made,” he said, adding that some manufacturers have been more vocal than others in calling for government oversight on the question.

At a public event last year, Ford Motor Co. executive chairman Bill Ford told a Washington audience that “no one manufacturer is going to be able to program in one ethical equation that is different than the others. I mean, that would be chaos.”

Mark Crowley, a computer scientist who works on machine intelligence at the University of Waterloo and was not involved in the experiment, said the study does a good job of confirming just how difficult it will be to keep bias out of artificially intelligent systems of all kinds.

“Policy-makers, researchers and industry shouldn’t be surprised if there is some disagreement … if they just choose to impose [a moral preference] that makes sense to them,” he said. “It is truly just the beginning of the conversation that needs to be had in order to expect the public to accept these systems.”
