A driverless car dilemma? It sounds like something out of a science fiction film, but it is expected to become stark reality in the very near future.
Technology is advancing so rapidly that industry insiders predict driverless cars will be on the road within the next four years, perhaps sooner. During the development of the sophisticated technology required to operate a driverless car, a fascinating (and controversial) question has risen to the forefront.
A driverless car will have to be capable of making decisions on the road. While most of these decisions will be mundane, there are a number of them that will cross all manner of moral and philosophical boundaries.
When a car has to make a judgment involving human lives, how should it weigh the safety of its passengers against that of people in other vehicles or pedestrians on the road?
According to the Wall Street Journal, this ethical question is being hotly debated behind the scenes. A recently published study found that while most people favor a car that makes decisions based on the “greater good” (minimizing casualties), those same people strongly prefer to own a car that will protect them when there is danger on the road.
This is just the beginning of the driverless car dilemma. What about a driverless car’s ability to distinguish between various types of people? And who decides which of those individuals should be spared?
“Cars don’t have the technology to distinguish a baby stroller from a grandmother from a healthy 21-year-old,” said Karl Iagnemma, CEO of nuTonomy, a company working to develop software for driverless cars. “The industry is still trying to get the software to work in a safe and reliable way, let alone worrying about reasoning about complex ethical decisions.”
Even if there were universal agreement on the importance of protecting the greater good, exactly how would a driverless car apply that concept? The authors of the aforementioned study have pondered this, as reported by the New York Times.
“Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the autonomous vehicle, than for the rider of the motorcycle? Should autonomous vehicles take the ages of the passengers and pedestrians into account?” wrote the authors of the study that was recently published in Science magazine.
The driverless car dilemma is a much more complex version of the “trolley problem,” states Popular Mechanics. The trolley problem is a theoretical scenario involving a trolley that is moving at high speed toward a group of five men working on the tracks ahead. A lever can be pulled that will move the trolley onto another set of tracks, but there is one man working there. Should a conscious decision be made to allow one man to die in order to save the original five?
Consumers cannot have it both ways; they cannot both commit to saving the most lives and champion their own personal safety. These two priorities will at times be in direct conflict.
One possibility would be to offer various types of driverless cars, each giving a different weight to the safety of its passengers relative to those outside the vehicle. This may seem reasonable on the surface, but if consumers are given that choice, legal liability issues may follow. If a car “decided” to run over pedestrians because its owner chose artificial intelligence that protected him at all costs, would the owner then be in legal hot water over the resulting injuries or deaths?
George Bernard Shaw once said, “Science never solves a problem without creating 10 more.” The driverless car dilemma is a striking example of Shaw’s observation, arriving almost 70 years after his death.
[Photo by Lee Jin-man/AP Images]