Thursday, July 7, 2016

An open letter to Science magazine

The research article "The social dilemma of autonomous vehicles" and the accompanying perspective, "Our driverless dilemma," do not deserve publication in a prestigious scientific journal. Real-world scenarios are tests of sensors and anti-lock brakes, not of moral algorithms.

The perspective starts out: "Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger." This scenario assumes that the driverless car (DC) has been caught unawares by the pedestrians! Either the DC has malfunctioned, or it was cruising at an unsafe speed.

Crucially, the authors never explain how such scenarios could arise. In all real situations, slamming on the brakes and sounding the horn would be the appropriate response. Doing otherwise increases the risk to pedestrians and other vehicles and exposes the manufacturers of the DC to lawsuits.

These articles founder on a simple question: if the putative moral dilemma exists, it already confronts human drivers every day. Why aren't human drivers worried?

Both articles assume that DCs could be "programmed" to deal with non-existent ethical dilemmas. In fact, DCs will likely use neural networks that mimic best driving practice. Bizarre driving behavior will not arise from such neural networks.

For years to come, these articles are likely to give unjustified comfort to insurance companies threatened by autonomous vehicles.

Edward K. Ream

P.S. The picture accompanying the perspective shows how silly these scenarios are:

Unless the foreground DC has malfunctioned, it will be applying maximum brakes. The speed limit for this two-lane road is probably less than 50 mph, so given the implied reaction time (no visual obstructions!), the car should easily stop before hitting the children. The car may even have stopped already.

Swerving into the guard rail will decrease the DC's maneuverability, increase the chances of careening into the background DC, and increase the chances of hitting the children. In no way is swerving left or right a reasonable choice. If the foreground DC simply brakes, the children have the maximum chance of avoiding the car, either by getting close to the guard rail, by jumping over it, or even by turning and running away. Swerving may fatally confuse the children.

P.P.S. The technology behind DCs will almost certainly involve communication between DCs. So the foreground DC, having the better sight lines, will warn the background DC to brake.

P.P.P.S. The trolley problem thought experiment and the variants presented in these papers ignore two crucial factors: urgency and uncertainty. There is no time to compute Rube Goldberg responses, and there is no way to know the outcome of bizarre behavior.

Here, the thought experiment results in obvious nonsense. You could call it a reductio ad absurdum. It seems that these kinds of thought experiments have limited applicability.

EKR

Friday, July 1, 2016

Two epic fails in Science Magazine

The recent issue of Science Magazine contains a research article called "The social dilemma of autonomous vehicles" together with a perspective called "Our driverless dilemma". Imo, both are epic fails. Both remind me of the "what's wrong with this picture?" puzzles that I enjoyed as a kid.

Most importantly, the articles confuse engineering with science.  In fact, it is impossible to imagine a real scenario in which slamming on the brakes and sounding the horn would not be the appropriate response to the faux "dilemmas" discussed.

Insurance companies are worried

Before I go into lengthy criticisms, let me point out a disturbing possibility, namely that the insurance industry, notably led by Warren Buffett, wants to discredit or impede self-driving cars/trucks.  The reason is simple: such technology fundamentally threatens Geico and the rest of the auto insurance industry.  In effect, we may be seeing the beginning of a self-driving-car-denial propaganda campaign.

What's wrong with this picture

Now on to the detailed critique.  For simplicity, let AV denote an autonomous vehicle (self-driving car/truck).

The perspective starts out:

"Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger."

Folks, this is total bullshit.  An AV, even with today's technology, such as Tesla's, will be continually monitoring the environment.  To suppose the scenario proposed above is to suppose that somehow (!!) the AV is caught unawares. Either that AV has malfunctioned, or the AV is driving faster than its sensors can detect problems. Neither is likely, and neither poses any ethical dilemma.

Furthermore, the scenario assumes, quite preposterously, that one can predict the outcome of any action with certainty.  The action most likely to mitigate harm is the obvious one: slam on the brakes and sound the horn.  This minimizes the kinetic energy involved and maximizes the time for the pedestrians to react.
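
To put a number on the kinetic-energy point, here is a bit of plain arithmetic in Python (nothing AV-specific is assumed):

    # Kinetic energy scales with the square of speed, so shedding even a
    # little speed before impact pays off disproportionately.
    for mph in (50, 40, 30, 20, 10):
        ratio = (mph / 50) ** 2
        print(f"{mph} mph carries {ratio:.0%} of the kinetic energy at 50 mph")

Braking from 50 mph down to even 30 mph sheds nearly two thirds of the car's kinetic energy before any impact can occur.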

Furthermore, the "crashing into concrete wall" scenario is a one in a trillion possibility.  And as a practical matter, slamming on the brakes will never result in a lawsuit.  Swerving into a wall almost certainly will.

Take a look at the picture that supposedly illustrates the "dilemma".  What's wrong with this picture?

- No concrete wall :-)
- Unobstructed sight lines for foreground AV.
- No indication of how fast the foreground AV is going.

In fact, unless the foreground AV has malfunctioned, it will already have stopped or will be applying maximum braking.  We can imagine the speed limit for this two-lane road is less than 50 mph, so given the implied reaction time (no visual obstructions), it is quite reasonable to assume that the car will stop before hitting the children.
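
For concreteness, here is a rough stopping-distance estimate.  The braking deceleration (0.8 g, roughly dry pavement) and the sensing delay (0.5 s) are illustrative assumptions, not measured AV figures:

    # Rough stopping distance: distance covered during the sensing delay,
    # plus braking distance under constant deceleration.
    G = 9.81                                 # m/s^2

    def stopping_distance_m(speed_mph, reaction_s=0.5, decel=0.8 * G):
        v = speed_mph * 0.44704              # mph -> m/s
        reaction = v * reaction_s            # traveled before braking starts
        braking = v * v / (2 * decel)        # traveled under constant braking
        return reaction + braking

    for mph in (50, 35, 25):
        print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to a full stop")

From 50 mph that works out to roughly 43 m.  With the long, unobstructed sight lines in the picture, the margin is generous.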

Furthermore, given the guard rail and its proximity to the foreground AV, swerving into the guard rail is likely to decrease the AV's maneuverability, increase the chances of careening into the background AV, and increase the chances of hitting the children.  In no way is swerving left or right a reasonable choice.

Swerving is likely to confuse the children.  If the foreground AV simply brakes, the children have maximum chance of avoiding the car, either by getting close to the guard rail or by jumping over it.  This would be true even if there were a cliff on the other side of the rail.

And one more thing.  The technology behind AVs will almost certainly involve communication between AVs.  So the foreground AV, having the better sight lines, will be warning the background AV to brake.
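
As a minimal sketch of what such a warning might look like, here is a plain UDP broadcast written with only the Python standard library.  Real vehicle-to-vehicle systems use standardized protocols (DSRC or cellular V2X), and every message field below is hypothetical:

    # Hypothetical vehicle-to-vehicle "hard brake" warning, broadcast on the
    # local network. Field names and the port number are invented.
    import json
    import socket
    import time

    BROADCAST_ADDR = ("255.255.255.255", 47000)

    def broadcast_brake_warning(vehicle_id, lat, lon):
        msg = json.dumps({
            "type": "HARD_BRAKE",
            "vehicle": vehicle_id,
            "lat": lat,
            "lon": lon,
            "timestamp": time.time(),
        }).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, BROADCAST_ADDR)
        sock.close()

The point is not the particular protocol: it is that a braking AV can tell every nearby AV, within milliseconds, exactly what it is doing.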

What's wrong with the thought experiment

Now let me turn to the main research article.  This article is a variation on a preposterous thought experiment known as the trolley problem that is, alas, presently common in psychology and economics.  This is the worst kind of thought experiment, one that is completely out of touch with reality.  I am astounded that this kind of nonsense is taken seriously.

The plain fact of the matter is that this thought experiment "explores" a situation that can never ever happen.  The reason is clear: it asks experimental subjects to assume something contrary to fact, that one can know the results of actions taken under extreme time pressure.

Once again, in all plausible scenarios, the proper action is not to switch the trolley from one track to another, but to attempt to warn the people in danger.  If one has time to take the Rube Goldberg actions contemplated in the thought experiment (presumably after considering the consequences!), then one has enough time to yell, or to get into a car and blow the horn, or whatever.

In short, this thought experiment is utter nonsense, and whatever conclusions it is supposed to "deliver" are also utter nonsense.  How is this supposed to be real science?

Let me be clear about how silly this thought experiment is.  It ignores two fundamental facts about real-world emergencies: uncertainty and urgency.  We cannot know the results of actions.  If the trolley is heading towards five people, is it not more likely that one of the five will see the approaching danger and warn the others?  And if there is time for calculation, there is also time for other actions not envisaged by this thought experiment.  In short, this experiment can teach us nothing about the real world.
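
A toy calculation makes the uncertainty point concrete.  Every probability below is invented purely for illustration; the point is only that once outcomes are uncertain, the trolley problem's tidy arithmetic collapses:

    # Toy Monte Carlo: expected harm of braking vs. swerving once outcomes
    # are uncertain. All probabilities here are invented for illustration.
    import random

    def expected_injuries(p_hit_each, n_pedestrians, trials=100_000):
        total = 0
        for _ in range(trials):
            total += sum(random.random() < p_hit_each
                         for _ in range(n_pedestrians))
        return total / trials

    # Braking: warned pedestrians mostly scatter (assume a 10% hit chance each).
    # Swerving: confused pedestrians (assume 30% each), plus the passenger.
    brake = expected_injuries(0.10, 5)
    swerve = expected_injuries(0.30, 5) + 1
    print(f"braking:  ~{brake:.2f} expected injuries")
    print(f"swerving: ~{swerve:.2f} expected injuries")

With these made-up numbers, braking wins easily; with other made-up numbers it might not.  That is precisely the problem: the thought experiment assumes away the uncertainty that dominates real emergencies.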

Conclusions and summary

In all practical situations, slamming on the brakes and sounding the horn has the best practical chance of reducing harm to all concerned.  It maintains maximum control of the AV; it reduces kinetic energy continually, smoothly, and predictably; and it gives pedestrians and other vehicles the best chance of avoiding collisions.  It also eliminates the risk of lawsuits.

There are, in fact, no ethical dilemmas involved with AVs, except possibly for this: slamming on the brakes could injure passengers inside the AV, something not mentioned in either article.  This is a real possibility if passengers neglect to wear seat belts.  And that might become more common when AVs are (rightly!) perceived to be much safer than human-driven vehicles.

Both articles assume that AVs could be "programmed" to deal with oh-so-unlikely ethical dilemmas.  In fact, AVs will use neural networks that mimic standard driving practice.  There is almost no chance that bizarre driving behavior will arise from such neural networks.
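
For readers unfamiliar with the approach, here is a minimal sketch of what "mimicking standard driving practice" means: supervised learning (behavioral cloning) from recorded human driving.  The network shape and the placeholder data are illustrative, not any vendor's actual design:

    # Behavioral cloning in miniature: learn to map sensor readings to the
    # (steering, braking) actions a human driver actually took.
    import torch
    from torch import nn

    # Placeholder data: 1,000 recorded frames of 32 sensor features each,
    # paired with the human driver's (steering, braking) at that moment.
    sensors = torch.randn(1000, 32)
    actions = torch.randn(1000, 2)

    policy = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 2),            # outputs: steering, braking
    )
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    for epoch in range(10):
        opt.zero_grad()
        loss = nn.functional.mse_loss(policy(sensors), actions)
        loss.backward()
        opt.step()

A policy trained this way interpolates among the actions it has seen demonstrated.  "Swerve into a concrete wall" is simply not in the training data, which is why bizarre behavior does not emerge from it.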

In short, these articles have nothing whatever to offer the engineers working on AVs.  They might, however, provide unjustified cover for insurance companies wanting to delay AVs, thereby delaying the day when AVs reduce the slaughter on our highways.  An epic, epic fail for Science Magazine.

Edward K. Ream
July 1, 2016

P.S. After writing the original post, I see that there is another embarrassing question to ask the authors.  If the moral dilemma exists for AVs, why doesn't it also exist for cars driven by humans?

The answer is clear: braking is the only logical course in all real situations.  The scenarios discussed in the papers simply never happen.  If we assume that AVs will be better drivers than humans, then the supposed dilemma will arise even less often than it does for humans.  Except that the dilemma never happens for humans either ;-)

P.P.S. We can see that the trolley problem and its ilk lead to incorrect conclusions.  We could call these papers reductio ad absurdum refutations of their own conclusions and, by extension, of the faulty form of analysis engendered by the trolley problem.

EKR
July 2, 2016