For the moment, humans are still in charge of running the world, and we deserve more time to decide whether we want machines to take over our roads. The issue is as much ethical as technological. Consider the following points:
- The pitch used by autonomous car developers goes something like this: “If, in writing some article that’s negative, you effectively dissuade people from using an autonomous vehicle, you’re killing people.”
- The U.S. Department of Transportation cites the following statistics: 94 percent of crashes are caused by driver error; 41 percent occur due to "recognition errors" (i.e., a driver's failure to notice something about to go wrong); 33 percent occur due to "decision errors," such as driving too fast or making illegal maneuvers; 11 percent are due to bad driving technique; and 7 percent are due to falling asleep at the wheel.
To believe autonomous cars are safer, one must accept the following assumptions: autonomous cars are in perfect control of their environment, are programmed to obey the rules, can drive better than the average human, and, of course, never fall asleep at the wheel.
However, consider the following:
- Human drivers will hit autonomous cars because the machines do not drive like humans, and human drivers cannot anticipate the logic behind the car's behavior. Here the mistake seems to be the human's, but it is also the fault of the car's designers, who need to make their products emulate human behavior better (isn't that a contradiction?).
- The car involved in killing a homeless woman was doing 38 mph in a 35 mph zone. It was so dark that a person stepping into the road from the shadows came as a complete surprise; the safety driver at the wheel said she would not have been able to avoid her. So here's the question: shouldn't a robot programmed for total safety have gone much slower than the permitted speed if darkness prevented it from remaining in full control of its environment? Would going as fast as the car did qualify as a "decision error"?
- Next question: Is it possible to program a machine to take into account the whole complexity of any real-life situation and react to it better than a human does?
- This raises a whole new set of questions. According to experts in Germany: "Programming a certain risk behavior into a machine not only has consequences in critical situations but also defines the driving style generally. How safe is safe enough? How safe is too safe? Excessive safety would paralyze road traffic and seriously hamper acceptance of autonomous vehicles. Giving leeway to risky driving styles would jeopardize the safety objectives. How egalitarian does an automatized driving system have to be? Is a manufacturer allowed to advertise with fast cars at the price of lowered safety for other road users?"
And, you guessed it, the above questions bring up more concerns.
- A taxi company has a vested interest in maximizing the number of rides, and therefore in having its vehicles drive as fast as possible. Even if those vehicles abide by speed limits, is that incentive conducive to slowing down when darkness or bad weather impairs visibility and increases reaction times?
- There is also another "human" factor to consider. According to a 2017 research poll, 9 percent of people say they enjoy driving and do not want their emotional relationship with cars ruined.
Currently, there is no demand to put more driverless cars on the road. But it is a safe assumption that the more such vehicles there are on the road, the less afraid people will be, as long as there aren't too many accidents. Still, we are human, and we should have the right to decide whether we want autonomous cars in our hometowns, and in what capacity. Autonomous cars used as transport for the elderly and the handicapped would probably receive more understanding and more support.
Whatever the case, no decision needs to be made, at least not yet.
Source: The Star-Ledger, March 2018, Leonid Bershidsky, Bloomberg View