Ethical Quandary of AVs

Tuesday, 28 June 2016

by: Qurius

When a crash is inevitable, autonomous vehicles (AVs) will have to decide whom to collide with. From Slate, Futurography.

Imagine the beginning of what promises to be an awesome afternoon: You’re cruising along in your car and the sun is shining. The windows are down, and your favorite song is playing on the radio. Suddenly, the truck in front of you stops without warning. As a result, you are faced with three, and only three, zero-sum options.

  • In your first option, you can rear-end the truck. You’re driving a big car with high safety ratings so you’ll only be slightly injured, and the truck’s driver will be fine.
  • Alternatively, you can swerve to your left, striking a motorcyclist wearing a helmet.
  • Or you can swerve to your right, this time striking a motorcyclist who isn’t wearing a helmet.

You’ll be fine whichever of these last two options you choose, but the motorcyclist with the helmet will be badly hurt, and the helmetless rider’s injuries will be even more severe. What do you do? Now imagine your car is autonomous. What should it be programmed to choose?

Although research indicates that self-driving cars will crash at rates far lower than automobiles operated by humans, some accidents will remain unavoidable, and their outcomes will have important ethical consequences. That’s why people in the business of designing and producing self-driving cars have begun considering the ethics of so-called crash-optimization algorithms. These algorithms take the inevitability of crashes as their point of departure and seek to “optimize” the crash. In other words, a crash-optimization algorithm enables a self-driving car to “choose” the crash that would cause the least amount of harm or damage.
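
To see what “optimizing” a crash could mean in computational terms, here is a minimal, hypothetical sketch of the core decision step. It leans on the deeply contestable premise that harm can be collapsed into a single expected-value score per option; the names (CrashOption, optimize_crash) and the scoring scheme are illustrative inventions, not drawn from any real AV system.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One available collision outcome and its estimated consequences."""
    description: str
    p_injury: dict[str, float]   # party -> probability of serious injury
    severity: dict[str, float]   # party -> harm score if injured (0 = unhurt, 10 = fatal)

def expected_harm(option: CrashOption) -> float:
    """Expected total harm: sum over all parties of P(injury) * severity."""
    return sum(option.p_injury[party] * option.severity[party]
               for party in option.p_injury)

def optimize_crash(options: list[CrashOption]) -> CrashOption:
    """Pick the option with the lowest expected total harm."""
    return min(options, key=expected_harm)
```

Everything ethically interesting hides inside those two dictionaries: who counts as a “party,” how severity is scored, and whether every party’s harm is weighted equally are value judgments, not engineering details.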

In some ways, the idea of crash optimization is old wine in new bottles. As long as there have been cars, there have been crashes. But self-driving cars move to the proverbial ethicist’s armchair what used to be decisions made exclusively from the motorist's seat. Those of us considering crash optimization options have the advantage of engaging in reflection on ethical quandaries with cool, deliberative remove. In contrast, the view from the motorist’s seat is much different—it is one of reaction, not reflection.

Does this mean that you need to cancel your subscription to Car and Driver and dust off your copy of Kant’s Critique of Pure Reason? Probably not. But it does require that individuals involved in the design, production, purchase, and use of self-driving automobiles take the view from both the armchair and the motorist’s seat. And as potential consumers and users of this emerging technology, we need to consider how we want these cars to be programmed, what the ethical implications of this programming may be, and how we will be assured access to this information.

Returning to the motorcycle scenario, developed by Noah Goodall of the Virginia Transportation Research Council, we can see the ethics of crash optimization at work. Recall that we limited ourselves to three available options: The car can be programmed to “decide” between rear-ending the truck, injuring you, the owner/motorist; striking a helmeted motorcyclist; or hitting one who is helmetless. At first it may seem that autonomous cars should privilege owners and occupants of the vehicles. But what about the fact that research indicates 80 percent of motorcycle crashes injure or kill a motorcyclist, while only 20 percent of passenger car crashes injure or kill an occupant? Although crashing into the truck will injure you, you have a much higher probability of surviving the crash with lesser injuries than either motorcyclist does.
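
To make that trade-off concrete, here is a rough worked version of the scenario in the spirit of the sketch above. The injury probabilities reuse the article’s 80 percent and 20 percent figures; the severity scores are invented solely for illustration.

```python
# Back-of-the-envelope expected-harm comparison for the three options.
# Injury probabilities borrow the 80%/20% crash statistics cited above;
# the severity scores (0-10 scale) are made up purely for illustration.
options = {
    "rear-end the truck (you, in a safe car)":  (0.20, 2.0),
    "swerve left (helmeted motorcyclist)":      (0.80, 6.0),
    "swerve right (helmetless motorcyclist)":   (0.80, 9.0),
}

for name, (p_injury, severity) in options.items():
    print(f"{name}: expected harm = {p_injury * severity:.2f}")

# Under this toy metric, rear-ending the truck minimizes expected harm.
best = min(options, key=lambda name: options[name][0] * options[name][1])
print("lowest expected harm:", best)
```

Under these made-up numbers, the arithmetic points squarely at the truck, which is exactly the conclusion the next paragraph wrestles with.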

So perhaps self-driving cars should be programmed to choose crashes in which all parties involved will probabilistically suffer the least amount of harm. Maybe in this scenario you should just take one for the team and rear-end the truck. But it’s worth considering that many individuals, including me, would probably be reluctant to purchase self-driving cars that are programmed to sacrifice their owners in situations like the one we’re considering. If this is true, the result will be fewer self-driving cars on the road. And since self-driving cars will probably crash far less often than human-driven ones, fewer of them on the road would mean more traffic fatalities overall than if they were widely adopted.

These are complex issues that touch on our basic ideas of distribution of harm and injury, fairness, moral responsibility and obligation, and corporate transparency. It’s clear the relationship between ethics and self-driving cars will endure. The challenge as we move ahead is to ensure that consumers are made aware of this relationship in accessible and meaningful ways and are given appropriate avenues to be co-creators of the solutions before self-driving cars are brought to market. Even though we probably won’t be doing the driving in the future, we shouldn’t be just along for the ride.