Friday, July 1, 2016

Two epic fails in Science Magazine

The recent issue of Science Magazine contains a research article, "The social dilemma of autonomous vehicles", accompanied by a perspective, "Our driverless dilemma". Imo, both are epic fails. Both remind me of the "what's wrong with this picture" puzzles that I enjoyed as a kid.

Most importantly, the articles confuse engineering with science: they treat what is really an engineering question, how a vehicle should respond to an emergency, as a study of moral preferences.  In fact, it is impossible to imagine a real scenario in which slamming on the brakes and sounding the horn would not be the appropriate response to the faux "dilemmas" discussed.

Insurance companies are worried

Before I go into lengthy criticisms, let me point out a disturbing possibility, namely that the insurance industry, notably led by Warren Buffett, wants to discredit or impede self-driving cars/trucks.  The reason is simple: such technology fundamentally threatens Geico and the rest of the auto insurance industry. In effect, we may be seeing the beginning of a self-driving-car-denial propaganda campaign.

What's wrong with this picture

Now on to the detailed critique.  For simplicity, let AV denote an autonomous vehicle (self-driving car/truck).

The perspective starts out:

"Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger."

Folks, this is total bullshit.  An AV, even with today's technology, such as Tesla's, will be continually monitoring the environment.  To suppose the scenario proposed above is to suppose that somehow (!!) the AV is caught unawares. Either the AV has malfunctioned, or it is driving faster than its sensors can detect problems. Neither is likely, and neither poses any ethical dilemma.

Furthermore, the scenario assumes, quite preposterously, that one can predict the outcome of any action with certainty. The action most likely to mitigate harm is the obvious one: slam on the brakes and sound the horn.  This minimizes the kinetic energy involved and maximizes the time for the pedestrians to react.
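To put a rough number on that claim, here is a back-of-the-envelope sketch of my own (not from either article); the vehicle mass and the speeds are assumed purely for illustration.

```python
# Kinetic energy scales with the square of speed, so every bit of
# braking pays off disproportionately.  Numbers below are illustrative.

def kinetic_energy_joules(mass_kg, speed_mph):
    speed_ms = speed_mph * 0.44704          # mph -> m/s
    return 0.5 * mass_kg * speed_ms ** 2

car_mass_kg = 1500.0                        # assumed ~1.5-tonne car
for mph in (50, 40, 30, 20, 10):
    ke = kinetic_energy_joules(car_mass_kg, mph)
    print(f"{mph:2d} mph -> {ke / 1000:6.1f} kJ")

# Braking from 50 mph down to 25 mph already removes about 75% of the
# kinetic energy, and every second of braking also buys the pedestrians
# more time to get out of the way.
```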

Furthermore, the "crashing into a concrete wall" scenario is a one-in-a-trillion possibility.  And as a practical matter, slamming on the brakes will never result in a lawsuit.  Swerving into a wall almost certainly will.

Take a look at the picture that supposedly illustrates the "dilemma".  What's wrong with this picture?

- No brick wall :-)
- Unobstructed sight lines for foreground AV.
- No indication of how fast the foreground AV is going.

In fact, unless the foreground AV has malfunctioned, it will have already stopped or it will be applying maximum braking.  We can assume the speed limit for this two-lane road is less than 50 mph, so given the implied reaction time (no visual obstructions), it is quite reasonable to assume that the car will stop before reaching the children.
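Here is the arithmetic behind that claim, as a minimal sketch; the ~0.5 s reaction time and ~7 m/s^2 hard-braking deceleration are my assumptions, not figures from the articles.

```python
# Rough stopping distance: reaction distance plus braking distance.
# Assumed: 0.5 s sensing/actuation delay, 7 m/s^2 deceleration (dry road).

def stopping_distance_m(speed_mph, reaction_s=0.5, decel_ms2=7.0):
    v = speed_mph * 0.44704                 # mph -> m/s
    reaction = v * reaction_s               # distance covered before braking
    braking = v * v / (2.0 * decel_ms2)     # v^2 / (2a)
    return reaction + braking

for mph in (50, 40, 30):
    print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")

# At 50 mph the AV stops in roughly 45-50 m.  With unobstructed sight
# lines its sensors will see the children from much farther away.
```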

Furthermore, given the guard rail and its proximity to the foreground AV, swerving into the guard rail is likely to decrease the AV's maneuverability, increase the chance of careening into the background AV, and increase the chance of hitting the children.  In no way is swerving left or right a reasonable choice.

Swerving is also likely to confuse the children.  If the foreground AV simply brakes, the children have the best chance of avoiding the car, either by getting close to the guard rail or by jumping over it.  This would be true even if there were a cliff on the other side of the rail.

And one more thing.  The technology behind AV's will almost certainly involve communication between AV's.  So the foreground AV, having the better sight lines, will be warning the background AV to brake.
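Just to make the idea concrete, here is a toy sketch of such a warning; the message fields and the broadcast() stub are entirely hypothetical, invented for illustration, and do not represent any actual V2V standard.

```python
# Purely illustrative vehicle-to-vehicle hazard warning.  Real V2V
# stacks define their own message formats; this is only a sketch.

import json
import time

def make_hazard_warning(sender_id, lat, lon, hazard="pedestrians ahead"):
    return json.dumps({
        "type": "hazard_warning",
        "sender": sender_id,
        "timestamp": time.time(),
        "location": {"lat": lat, "lon": lon},
        "hazard": hazard,
        "advice": "brake",
    })

def broadcast(message):
    print("broadcasting:", message)         # stand-in for a radio send

# The foreground AV, which can see the children, warns the trailing AV.
broadcast(make_hazard_warning("AV-foreground", 40.7128, -74.0060))
```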

What's wrong with the thought experiment

Now let me turn to the main research article.  This article is a variation on a preposterous thought experiment known as the trolley problem that is, alas, presently common in psychology and economics. This is the worst kind of thought experiment, one that is completely out of touch with reality.  I am astounded that this kind of nonsense is taken seriously.

The plain fact of the matter is that this thought experiment "explores" a situation that can never, ever happen.  The reason is clear: it asks experimental subjects to assume something contrary to fact, namely that one can know the results of actions taken under extreme time pressure.

Once again, in all plausible scenarios, the proper action is not to switch the trolley from one track to another, but to attempt to warn the people in danger.  If one has time to take the Rube Goldberg actions contemplated in the thought experiment (presumably after considering the consequences!), then one has enough time to yell, or to get into a car and blow the horn, or whatever.

In short, this thought experiment is utter nonsense, and whatever conclusions it is supposed to "deliver" are also utter nonsense.  How is this supposed to be real science?

Let me be clear about how silly this thought experiment is. It ignores two fundamental facts about real-world emergencies: uncertainty and urgency.  We cannot know the results of actions.  If the trolley is heading towards five people, is it not more likely that one of the five will see the approaching danger and warn the others?  And if there is time for calculation, there is also time for other actions not envisaged by this thought experiment.  In short, this experiment can teach us nothing about the real world.

Conclusions and summary

In all practical situations, slamming on the brakes and sounding the horn has the best chance of reducing harm to all concerned.  It maintains maximum control of the AV; it continually, smoothly, and predictably reduces kinetic energy; and it gives pedestrians and other vehicles the best chance of avoiding collisions.  It also eliminates the risk of lawsuits.

There are, in fact, no ethical dilemmas involved with AV's, except possibly for this: slamming on the brakes could injure passengers inside the AV, something not mentioned in either article. This is a real possibility if passengers neglect to wear seat belts. And that might become more common when AV's are (rightly!) perceived to be much safer than human-driven vehicles.

Both articles assume that AV's could be "programmed" to deal with oh-so-unlikely ethical dilemmas.  In fact, AV's will use neural networks trained to mimic standard driving practice.  There is almost no chance that bizarre driving behavior will arise from such networks.
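To illustrate what "mimic standard driving practice" means, here is a minimal behavior-cloning sketch of my own; the tiny network, the fake data, and the training loop are invented for illustration and have nothing to do with any actual AV stack or with the articles.

```python
# Behavior cloning in miniature: a small network learns to map sensor
# features to brake/steer commands from logged "normal" driving.  Since
# swerving into walls never appears in the training data, it never
# appears in the learned behavior either.  All data here is fake.

import numpy as np

rng = np.random.default_rng(0)

# Fake logs: 4 sensor features -> [brake, steer] targets.
X = rng.normal(size=(256, 4))
Y = np.column_stack([np.clip(X[:, 0], 0, 1), 0.1 * X[:, 1]])

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(scale=0.1, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2)); b2 = np.zeros(2)

for _ in range(500):
    H = np.tanh(X @ W1 + b1)                # hidden activations
    P = H @ W2 + b2                         # predicted [brake, steer]
    G = 2.0 * (P - Y) / len(X)              # gradient of squared error
    GH = (G @ W2.T) * (1.0 - H ** 2)        # backprop through tanh
    W2 -= 0.1 * H.T @ G;  b2 -= 0.1 * G.sum(axis=0)
    W1 -= 0.1 * X.T @ GH; b1 -= 0.1 * GH.sum(axis=0)

print("control output for one input:", np.tanh(X[:1] @ W1 + b1) @ W2 + b2)
```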

In short, these articles have nothing whatever to offer the engineers working on AV's.  They might, however, provide unjustified cover for insurance companies wanting to delay AV's, thereby delaying the day when AV's reduce the slaughter on our highways.  An epic, epic fail for Science Magazine.

Edward K. Ream
July 1, 2016

P.S. After writing the original post, I see that there is another embarrassing question to ask the authors.  If the moral dilemma exists for AV's, why doesn't it also exist for cars driven by humans?

The answer is clear: braking is the only logical course in all real situations. The scenarios discussed in the papers simply never happen.  If we assume that AV's will be better drivers than humans, then the supposed dilemma will arise even less often for AV's than it does for humans.  Except that the dilemma never arises for humans either ;-)

P.P.S. We can see that the trolley problem and its ilk lead to incorrect conclusions. We could call these papers reductio ad absurdum refutations of their own conclusions and, by extension, of the faulty form of analysis engendered by the trolley problem.

EKR
July 2, 2016 

2 comments:

  1. Excellent reasoning on this, puts it into a whole new light for me.
    I wish the wholesale conversion to AV would happen as soon as possible, not only to reduce the stress of driving for me, but to eliminate the idiotic driving habits of so many other drivers.
    (Edit needed: breaks -> brakes)

  2. Thanks for the spell checking. I caught this in a revision (not yet posted), but not in the original.
