Picture this: you’re in your car, heading through the city. Maybe you’re on your way home from a long day at work. You see a group of pedestrians crossing ahead, so you begin to slow down – only your brakes aren’t working. You furiously pump the brake pedal, but nothing happens, and you’re getting closer and closer to the pedestrian crossing. If you do nothing, you will hit the pedestrians. You could swerve to the right, saving the pedestrians but hitting a bystander walking along the footpath. Alternatively, you could swerve left, into a wall. This would save the pedestrians but kill you.
What would you do?
What is the ethically “right” thing to do? Should you hit the bystander on the sidewalk or crash your car into a wall, killing one person to save multiple pedestrians? Or should you let the car take its course?
These are the kinds of ethical dilemmas that we’re going to have to struggle with if we want to live in a world with autonomous cars. The cars will need to be told what to do if situations like these arise, so we will need to agree on the best course of action in these scenarios.
In a TEDx talk in 2016 Iyad Rahwan, associate professor at MIT, spoke about a survey he and his collaborators conducted in which people were presented with these types of scenarios. The survey questions had two options, inspired by Jeremy Bentham and Immanuel Kant.
The Bentham option was to crash the car before hitting the pedestrians, even if that meant killing a bystander or the passenger. The car should take the action that would minimise total harm.
The Kant option was that the car should never take an action that explicitly harms a human being, so you should let the car take its course, even if that harms more people.
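The contrast between the two options can be made concrete with a small sketch. This is purely illustrative – the scenario, harm counts, and function names are hypothetical, not anything from Rahwan’s survey:

```python
# Illustrative sketch only: the two survey options reduced to decision rules.
# Harm counts and option names are hypothetical.

def bentham_choice(options):
    """Utilitarian rule: pick the action with the smallest total harm."""
    return min(options, key=lambda o: o["harm"])

def kant_choice(options, default="stay_course"):
    """Deontological rule: never actively choose to harm; keep the default course."""
    return next(o for o in options if o["action"] == default)

options = [
    {"action": "stay_course",  "harm": 5},  # hit the group of pedestrians
    {"action": "swerve_right", "harm": 1},  # hit the bystander on the footpath
    {"action": "swerve_left",  "harm": 1},  # hit the wall, killing the passenger
]

print(bentham_choice(options)["action"])  # swerve_right (first minimal-harm option)
print(kant_choice(options)["action"])     # stay_course
```

The point of the sketch is that both rules are trivial to program; the hard part is agreeing on which rule the car should run.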
Overwhelmingly, people said they preferred Bentham’s way of thinking: minimising total harm. However, when asked if they would purchase a car that behaved that way, they said “Absolutely not.” It appears they would prefer cars that protect them at all costs, even though that means jeopardising the common good of minimising total harm.
Clearly it’s hard for people to settle on an approach. They can see the benefits of cars that minimise total harm, but just can’t bring themselves to put their lives in the hands of one. This is a shame, because it’s been estimated that self-driving cars could reduce fatalities from traffic accidents in the United States by 90%. Based on statistics from 2013, this would mean almost 30,000 lives saved every year, just in the US.
It already seems like autonomous cars are better drivers than we are. In 2015 there were a number of accidents involving self-driving cars, but it was the human drivers who were at fault in all of them. Even recently, in Arizona, an automated car owned by Uber was in an accident when it was hit by a human driver who “failed to yield” – in other words, the human driver didn’t give way and caused the crash.
Reducing traffic and accidents is the goal of a team of computer scientists from Nanyang Technological University in Singapore, who developed a routing algorithm designed to minimise traffic jams. Even if just 10% of the cars on the road joined a network using this routing algorithm, congestion would drop drastically.
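The core idea behind congestion-aware routing can be sketched in a few lines. To be clear, this is not the NTU team’s actual algorithm – the graph, the cost function, and the `alpha` parameter are all assumptions for illustration. It just shows how networked cars that weight each road by its current load end up spreading out instead of piling onto the nominally fastest route:

```python
import heapq
from collections import defaultdict

# Hypothetical congestion-aware router (not the NTU algorithm): a plain
# Dijkstra search where each road's cost grows with the load that earlier
# networked cars have already placed on it.

def shortest_path(graph, load, start, goal, alpha=1.0):
    """Dijkstra where each edge costs base_time * (1 + alpha * load)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base in graph[u]:
            nd = d + base * (1 + alpha * load[(u, v)])
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:          # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Toy road network: two routes from A to D, via B (fast) or via C (slower).
graph = {
    "A": [("B", 1.0), ("C", 1.5)],
    "B": [("D", 1.0)],
    "C": [("D", 1.5)],
    "D": [],
}
load = defaultdict(float)

routes = []
for _ in range(4):  # four networked cars route one after another
    path = shortest_path(graph, load, "A", "D")
    routes.append(path)
    for u, v in zip(path, path[1:]):
        load[(u, v)] += 1.0  # each car adds congestion to the roads it uses

print(routes)  # some cars divert via C once the B route gets loaded
```

The first car takes the fast route via B; once that route carries load, the router starts sending some cars via C, which is the whole trick: a fraction of cooperating cars is enough to stop everyone converging on the same road.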
But for all the benefits self-driving cars seem to bring to the table, there are drawbacks. Cory Doctorow recently brought up an interesting issue. Since these cars would obviously be programmed to avoid killing people, theoretically pedestrians could step out into the street without fear, knowing that the cars will stop for them. He cites a paper written by Adam Millard-Ball (if you can’t access the paper, Adam also adapted it into a blog post).
Doctorow paraphrases Millard-Ball, writing “either cities will be effectively no-go zones for self-driving cars as pedestrians blithely step into the road; […] or drivers will take control over their cars rather than chilling with their smartphones, believing that pedestrians will be scared off by the possibility of a human driver failing to brake in time.”
What do you think about self-driving cars? Would you trust one with your life? If this technology fails or encounters an error, people’s lives are at stake. Sure, technology messes up all the time, but as John Capp, the Director of Electronics and Control Research at General Motors, points out, “We’re all fairly tolerant of cell phones and laptops not working, but you’re not relying on your cell phone or laptop to keep you alive.”