When humans drive, we choose how to react to dangers on the road. But what does a driverless car do?
After all, it doesn’t rely on instinct — it relies on code.
According to New Scientist magazine, a 2015 study found that most people think driverless cars should act to minimize overall harm. But minimizing harm could mean sacrificing you, the passenger, to spare, say, a pregnant woman crossing the street. And while people agreed with the concept of minimal harm, they also said they would never get into a car that would choose to kill them.
So, Giuseppe Contissa and his colleagues at the University of Bologna in Italy came up with a “solution” of sorts: an “ethical knob” that would allow driverless car owners to program their vehicles to be “altruistic” (save others), “egoistic” (save you), or “impartial” (coldly split the difference) in the event of an accident.
“The knob tells an autonomous car the value that the driver gives to his or her life relative to the lives of others,” said Contissa. “The car would use this information to calculate the actions it will execute, taking into account the probability that the passengers or other parties suffer harm as a consequence of the car’s decision.”
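For the curious, here’s a rough, entirely hypothetical sketch of what the knob boils down to under the hood: a single weight that tilts an expected-harm calculation toward the passenger or toward everyone else. This is not Contissa’s actual model; the class, numbers, and function names below are made up to illustrate the idea.

```python
# Hypothetical illustration of an "ethical knob": a weight between 0 and 1
# that tilts an expected-harm calculation toward the passenger (egoistic)
# or toward everyone else (altruistic). Not the researchers' actual model.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_passenger: float   # probability the passenger is harmed
    p_harm_others: float      # probability third parties are harmed

def expected_cost(m: Maneuver, knob: float) -> float:
    """knob = 0.0 -> egoistic, 0.5 -> impartial, 1.0 -> altruistic."""
    return (1.0 - knob) * m.p_harm_passenger + knob * m.p_harm_others

def choose_maneuver(options: list[Maneuver], knob: float) -> Maneuver:
    # Pick whichever maneuver minimizes the knob-weighted expected harm.
    return min(options, key=lambda m: expected_cost(m, knob))

if __name__ == "__main__":
    options = [
        Maneuver("swerve into barrier", p_harm_passenger=0.7, p_harm_others=0.05),
        Maneuver("brake and stay on course", p_harm_passenger=0.1, p_harm_others=0.6),
    ]
    for knob in (0.0, 0.5, 1.0):   # egoistic, impartial, altruistic
        print(knob, "->", choose_maneuver(options, knob).name)
```

The study’s paradox lives in that one number: most of us like the sound of 1.0 for everyone else’s car, and 0.0 for our own.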
While the knob is intriguing, the idea of driverless cars still scares the shit out of us. It really does put the fates of human beings in the, er, hands of machines, which sounds uncomfortably close to the Singularity, aka the end of mankind.
Call us old-fashioned, but if the solution to some of the most challenging issues driverless cars present is to hand back control to humans, isn’t it easier if we just keep driving our own goddamn cars?