The Ethics of Crash-Avoidance Algorithms


Mon, May 12th, 2014 10:00 by capnasty NEWS

While the ethics of autonomous cars is nothing new, now that Google's self-driving car is tackling cities, a new question has come up: should a self-driving car that has no option but to crash into something (or someone) deliberately collide with whatever will cause the least harm overall?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much higher statistical odds that the biker without a helmet would die, and a fatality is surely among the outcomes auto manufacturers most desperately want to avoid.

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person behaved far less responsibly by riding without a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists to forgo helmets so as not to stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car’s target of choice.
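The "crash-optimization" logic the excerpt describes amounts to picking the collision target with the lowest expected harm. A minimal sketch of that idea, with all names, harm weights, and probability figures being illustrative assumptions rather than real crash statistics:

```python
# Hypothetical sketch of expected-harm crash-optimization as described
# in the excerpt: choose the target whose collision minimizes expected
# harm. All figures below are made up for illustration.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    p_fatality: float        # assumed probability the collision is fatal
    p_serious_injury: float  # assumed probability of serious injury

def expected_harm(t: Target, w_fatality: float = 10.0,
                  w_injury: float = 1.0) -> float:
    """Weighted expected harm; the weights are arbitrary."""
    return w_fatality * t.p_fatality + w_injury * t.p_serious_injury

def choose_target(targets: list[Target]) -> Target:
    """Return the target whose collision minimizes expected harm."""
    return min(targets, key=expected_harm)

# Illustrative numbers: the helmeted rider is statistically likelier to
# survive, so the optimizer "prefers" hitting them -- exactly the
# injustice the article points out.
riders = [
    Target("rider with helmet", p_fatality=0.2, p_serious_injury=0.6),
    Target("rider without helmet", p_fatality=0.6, p_serious_injury=0.3),
]
print(choose_target(riders).name)  # -> rider with helmet
```

The sketch makes the policy problem concrete: the optimizer's objective function rewards exactly the safety behavior it then punishes at selection time.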


