The research article "The social dilemma of autonomous vehicles" and the accompanying perspective, "Our driverless dilemma," do not deserve publication in a prestigious scientific journal. Real-world scenarios are tests of sensors and anti-lock brakes, not moral algorithms.
The perspective starts out: "Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger." This scenario assumes that the driverless car (DC) has been caught unawares by the pedestrians! Either the DC has malfunctioned, or it was cruising at an unsafe speed.
Crucially, the authors never explain how such scenarios could arise. In all real situations, slamming on the brakes and sounding the horn would be the appropriate response. Doing otherwise increases the risk to pedestrians and other vehicles and exposes the manufacturers of the DC to lawsuits.
These articles founder on a simple question. If the putative moral dilemma existed, human drivers would already have faced it. Why aren't human drivers worried?
Both articles assume that DCs could be "programmed" to deal with non-existent ethical dilemmas. In fact, DCs will likely use neural networks that mimic best driving practice. Bizarre driving behavior will not arise from such neural networks.
For years to come, these articles are likely to give unjustified comfort to insurance companies threatened by autonomous vehicles.
Edward K. Ream
P.S. The picture accompanying the perspective shows how silly these scenarios are:
Unless the foreground DC has malfunctioned, it will be applying maximum brakes. The speed limit for this two-lane road is probably less than 50 mph, so given the implied reaction time (no visual obstructions!), the car should easily stop before hitting the children. The car may even have stopped already.
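A back-of-envelope calculation supports this. The sketch below computes total stopping distance as reaction travel plus braking travel; the latency and deceleration figures are illustrative assumptions (roughly 0.8 g hard braking on dry pavement), not measured values.

```python
def stopping_distance(speed_mph, reaction_s=0.5, decel_ms2=7.85):
    """Total distance (meters) to stop: reaction travel + braking travel.

    reaction_s: assumed sensing/actuation latency for an autonomous car.
    decel_ms2: assumed hard-braking deceleration on dry pavement (~0.8 g).
    """
    v = speed_mph * 0.44704            # mph -> m/s
    reaction = v * reaction_s          # distance covered before brakes engage
    braking = v * v / (2 * decel_ms2)  # kinematics: v^2 / (2a)
    return reaction + braking

print(f"{stopping_distance(50):.1f} m")  # roughly 43 m at 50 mph
```

Under these assumptions, a DC traveling at 50 mph stops in about 43 meters; at lower speeds the distance shrinks quadratically, so with clear sight lines the car stops long before reaching the children.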
Swerving into the guard rail will decrease the DC's maneuverability, increase the chances of careening into the background DC and increase the chances of hitting the children. In no way is swerving left or right a reasonable choice. If the foreground DC simply brakes, the children have maximum chance of avoiding the car, either by getting close to the guard rail or by jumping over it, or even by turning and running away. Swerving may fatally confuse the children.
P.P.S. The technology behind DCs will almost certainly involve communication between DCs. So the foreground DC, having the better sight lines, will warn the background DC to brake.
P.P.P.S. The Trolley Problem thought experiment, and variants such as those presented in these papers, ignore two crucial factors: urgency and uncertainty. There is no time to compute Rube Goldberg responses, and there is no way to know the outcome of bizarre behavior.
Here, the thought experiment results in obvious nonsense. You could call it a reductio ad absurdum. It seems that these kinds of thought experiments have limited applicability.