Moral dilemma of self-driving cars: Which lives to save in a crash
Would you ride in a self-driving car that has been programmed to sacrifice its passengers to save the lives of others, in the event of a serious accident?
New research has found that people generally approve of autonomous vehicles (AVs) governed by so-called utilitarian ethics, which would seek to minimize the total number of deaths in a crash, even if it means harming people in the vehicle. But it gets more complicated than that. The study, based on surveys of U.S. residents, found that most respondents would not want to ride in these vehicles themselves, and were not in favor of regulations enforcing utilitarian algorithms on driverless cars.
The researchers say this moral dilemma suggests that attempts to minimize loss of life by legislating for utilitarian algorithms could actually increase casualties by slowing the adoption of lifesaving technology.
"The moral dilemma for AV is something that is brand-new. We're talking about owning an object, which you interact with every day, knowing that this object might decide to kill you in certain situations," study co-author Jean-François Bonnefon, a research director at the Toulouse School of Economics in France told reporters in a news briefing yesterday (June 22). "I'm sure you would not buy a coffee maker that's programmed to explode in your face in some circumstances."
At what cost
Traffic accidents in the U.S. cost up to $1 trillion annually and caused nearly 40,000 deaths last year, according to the researchers, with about 90 percent of the incidents attributed to human error. AVs could prevent many of these accidents, they added, but there will still be circumstances where collisions are unavoidable.
"Programmers will be forced to write algorithms which anticipate situations in which there are multiple people that could be harmed," said study co-author Azim Shariff, an assistant professor of psychology at the University of Oregon.
To judge public attitudes toward these algorithms, the researchers used Amazon's Mechanical Turk crowdsourcing platform to present survey participants with a series of collision scenarios and ask their opinions on such issues as morality, regulation and purchasing preferences.
The researchers found that more than 76 percent of participants thought that it would be more moral for AVs to sacrifice one passenger in a car rather than kill 10 pedestrians. Even in hypothetical circumstances in which only two pedestrians would be saved, the average approval rate among respondents was more than 50 percent, according to the study.
A majority of the study participants still supported a utilitarian approach when they imagined themselves or loved ones in the vehicles, and they also agreed that cars should be programmed this way. But when asked if the government should legislate for this, or if they would buy a self-driving car governed by these types of utilitarian ethics, the researchers found that most people said "no."
"People want what's in the common good, but they want a free ride by buying cars that prioritize their own safety," said Iyad Rahwan, co-author of the paper and an associate professor of media arts and sciences at the Massachusetts Institute of Technology.
To regulate or not to regulate
The researchers say that without regulation, there is likely to be "a race to the bottom," where customer preference forces all driverless-car manufacturers to produce self-protective cars. But at the same time, the researchers say these new findings suggest regulations could be counterproductive.
"It's going to probably cause people to have a lot of pause about going the autonomous route altogether," Shariff said. "And the negative consequences of that are actually quite profound."
Jason Millar, chief ethics analyst at the Open Roboethics initiative and a research fellow at the University of Ottawa, was not involved with the new research, but has conducted similar surveys on attitudes toward the ethics governing AVs. He questions how much the paper adds to the ongoing discussion.
"It doesn't teach us much that we didn't already know about people's preferences," he told Live Science. "Giving up on utilitarian number-crunching in order to save oneself is perfectly consistent with what we know about moral psychology."
In other words, previous research has shown that people tend to endorse utilitarian reasoning in impersonal situations, but switch to self-preservation when they or their loved ones are the ones at risk. Millar added that many ethical theories justify such a position.
Millar thinks the problems envisaged by the researchers are unlikely to unfold, because people will probably adopt AVs due to enhanced overall safety, regardless of government regulation. He points out that current legal precedents are likely to play a major role in the rules governing collisions, something that was not discussed in the new study.
"Focusing the public's attention on these hypotheticals also distracts from far more pressing ethical issues," Bryant Walker Smith, an assistant professor of law at the University of South Carolina and an expert on the law of self-driving vehicles, told Live Science.
These include weighing how cautious we should be with integrating AVs on public roads, he said, considering both their potential to save lives and the inevitability of the technology's growing pains, which could lead to crashes and fatalities.
While the researchers said the situations discussed in the survey will likely be rare, Rahwan added that it is still essential to gauge public opinion on the matter, because this is what will ultimately guide future legislation.
To that end, the researchers have launched a website that lets people judge the most acceptable outcome of various real-world collision scenarios to help build a consensus on the issue.
"Autonomous cars have the potential to revolutionize transportation, eliminate the majority of deaths on the road, and that's over a million global deaths annually," Rahwan said. "But as we work on making the technology safer, we need to recognize the psychological and social challenges they pose, too."
The new study was published today (June 23) in the journal Science.