Saturday, October 22, 2016

Leo 5.4-final released

Leo 5.4 is now available on SourceForge and on GitHub.

Leo is an IDE, outliner and PIM, as described here.

Simulating Leo's features in Vim, Emacs or Eclipse is possible, just as it is possible to simulate Python in assembly language...

The highlights of Leo 5.4
  • Added clone-find commands, a new way to use Leo.
  • The clone-find and tag-all-children commands unify clones and tags.
  • The new pyflakes and flake8 commands make it possible to check files from within Leo.
  • Added importers for freemind, mindjet, json and coffeescript files.
  • Rewrote the javascript importer.
  • Imported files can optionally contain section references.
  • The viewrendered plugin supports @pyplot nodes.
  • Improved the mod_http plugin.
  • @chapter trees no longer need to be children of @chapters nodes.
  • All known bugs have been fixed.

Thursday, October 20, 2016

Leo 5.4-b1 released

Leo 5.4-b1 is now available on SourceForge. Leo is an IDE, a PIM and an outliner.
The highlights of Leo 5.4
  • Added clone-find commands, a new way to use Leo.
  • The clone-find and tag-all-children commands unify clones and tags.
  • The new pyflakes and flake8 commands make it possible to check files from within Leo.
  • Added importers for freemind, mindjet, json and coffeescript files.
  • Rewrote the javascript importer. It can optionally generate section references.
  • Imported files can optionally contain section references.
  • The viewrendered plugin supports @pyplot nodes.
  • Improved the mod_http plugin.
  • @chapter trees no longer need to be children of @chapters nodes.
  • All known bugs have been fixed.
Leo is:
  • A fully-featured IDE, with Emacs-like commands.
  • An outliner. Everything in Leo is an outline.
  • A Personal Information Manager.
  • A browser with a memory.
  • A powerful scripting environment.
  • A tool for studying other people's code.
  • Extensible via a simple plugin architecture.
  • A tool that plays well with IPython, vim and xemacs.
  • Written in 100% pure Python.
  • Compatible with Python 2.6 and above or Python 3.0 and above.
  • A tool with an inspiring and active community.
Leo's unique features:
  • Always-present, persistent, outline structure.
  • Leo's underlying data is a Directed Acyclic Graph.
  • Clones create multiple views of an outline.
  • A simple, powerful, outline-oriented Python API.
  • Scripts and programs can be composed from outlines.
  • Importers convert flat text into outlines.
  • Scripts have full access to all of Leo's sources.
  • Commands that act on outline structure.
    Example: the rst3 command converts outlines to reStructuredText.
  • @test and @suite scripts create unit tests automatically.
  • @button scripts apply scripts to outline data.
  • Outline-oriented directives.
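The clone idea above can be sketched in a few lines of Python. This is a toy illustration of the DAG point, not Leo's actual implementation; the VNode class and its field names here are assumptions made for the example. Because two parents hold a reference to the same underlying node, an edit made through one view is immediately visible through the other.

```python
# Toy sketch of clones in a DAG-shaped outline: one underlying node,
# two parents. Illustrative only -- not Leo's real data structures.
class VNode:
    def __init__(self, headline, body=''):
        self.headline = headline
        self.body = body
        self.children = []

root = VNode('root')
shared = VNode('notes', body='draft')
chapter1 = VNode('chapter 1')
chapter2 = VNode('chapter 2')
root.children = [chapter1, chapter2]
chapter1.children.append(shared)   # the original node...
chapter2.children.append(shared)   # ...and its clone: the same VNode

chapter1.children[0].body = 'final'            # edit via one view
assert chapter2.children[0].body == 'final'    # visible in the other view
```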
Simulating these features in vim, Emacs or Eclipse is possible, just as it is possible to simulate Python in assembly language...

Wednesday, September 14, 2016

A blunder in Bill McKibben's "A World at War" in the New Republic

Bill McKibben's piece A World at War in the New Republic is a call to arms against climate change.  The stakes are indeed as high as he makes them out to be.

Alas, McKibben misstates what it will take to reduce CO2 from the atmosphere.  In fact, the situation is far more serious than he implies.

The following quote contains a blunder:
But would the Stanford plan be enough to slow global warming? Yes, says [Mark Z.] Jacobson: If we move quickly enough to meet the goal of 80 percent clean power by 2030, then the world’s carbon dioxide levels would fall below the relative safety of 350 parts per million by the end of the century.
This is completely wrong. It results from the classic bathtub mistake: reducing the rate at which water flows into a bathtub does not lower the level of the water! CO2 is removed from the atmosphere by the weathering of rock, a process that takes at least thousands of years. See this page for more.
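The bathtub point can be checked with a few lines of arithmetic. All numbers in this sketch are made up for illustration; they are not climate data. Even with emissions cut 20% per year, the concentration (the stock) keeps rising as long as the inflow exceeds the slow natural outflow.

```python
# Stock-and-flow sketch of the bathtub mistake. Illustrative numbers only.
level = 400.0      # ppm, starting concentration (the stock)
inflow = 2.0       # ppm/year added by emissions (the faucet)
outflow = 0.1      # ppm/year removed by weathering (the slow drain)
for year in range(10):
    inflow *= 0.8              # cut emissions 20% per year
    level += inflow - outflow  # still positive: the level keeps rising
# After a decade of steep cuts, emissions are a fraction of their old
# rate, yet the level is higher than when we started.
```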

We won't even get back to 400 ppm unless we learn how to remove CO2 from the atmosphere.  That can be done, but it would take a huge amount of green energy.  Only governments could fund such a project. It's not going to happen with climate deniers in control.

Edward

Thursday, July 7, 2016

An open letter to Science magazine

The research article, The social dilemma of autonomous vehicles, and the accompanying perspective, Our driverless dilemma, do not deserve publication in a prestigious scientific journal. Real-world scenarios are tests of sensors and anti-lock brakes, not moral algorithms.

The perspective starts out: "Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger." This scenario assumes that the driverless car (DC) has been caught unawares by the pedestrians! Either the DC has malfunctioned, or it was cruising at an unsafe speed.

Crucially, the authors never explain how such scenarios could arise. In all real situations, slamming on the brakes and sounding the horn would be the appropriate response. Doing otherwise increases the risk to pedestrians and other vehicles and exposes the manufacturers of the DC to lawsuits.

These articles founder on a simple question. If the putative moral dilemma exists, it should already have happened. Why aren't human drivers worried?

Both articles assume that DC's could be "programmed" to deal with non-existent ethical dilemmas. In fact, DC's will likely use neural networks that mimic best driving practice. Bizarre driving behavior will not arise from such neural networks.

For years to come, these articles are likely to give unjustified comfort to insurance companies threatened by autonomous vehicles.

Edward K. Ream

P.S. The picture accompanying the perspective shows how silly these scenarios are:

Unless the foreground DC has malfunctioned, it will be applying maximum brakes. The speed limit for this two-lane road is probably less than 50 mph, so given the implied reaction time (no visual obstructions!), the car should easily stop before hitting the children. The car may even have stopped already.

Swerving into the guard rail will decrease the DC's maneuverability, increase the chances of careening into the background DC and increase the chances of hitting the children. In no way is swerving left or right a reasonable choice. If the foreground DC simply brakes, the children have maximum chance of avoiding the car, either by getting close to the guard rail or by jumping over it, or even by turning and running away. Swerving may fatally confuse the children. 

P.P.S. The technology behind DC's will almost certainly involve communication between DC's. So the foreground DC, having the better sight lines, will warn the background DC to brake.

P.P.P.S. The Trolley Problem thought experiment, and variants such as those presented in these papers, ignore two crucial factors: urgency and uncertainty. There is no time to compute Rube Goldberg responses, and there is no way to know the outcome of bizarre behavior.

Here, the thought experiment results in obvious nonsense. You could call it a reductio ad absurdum. It seems that these kinds of thought experiments have limited applicability.

EKR

Friday, July 1, 2016

Two epic fails in Science Magazine

The recent issue of Science Magazine contains a research article, The social dilemma of autonomous vehicles, and a perspective, Our driverless dilemma. Imo, both are epic fails. Both remind me of the "what's wrong with this picture" puzzles that I enjoyed as a kid.

Most importantly, the articles confuse engineering with science.  In fact, it is impossible to imagine a real scenario in which slamming on the brakes and sounding the horn would not be the appropriate response to the faux "dilemmas" discussed.

Insurance companies are worried

Before I go into lengthy criticisms, let me point out a disturbing possibility, namely that the insurance industry, notably led by Warren Buffett, wants to discredit or impede self-driving cars/trucks.  The reason is simple: such technology fundamentally threatens Geico and the rest of the auto insurance industry. In effect, we may be seeing the beginning of a self-driving-car-denial propaganda campaign.

What's wrong with this picture

Now on to the detailed critique.  For simplicity, let AV denote an autonomous vehicle (self-driving car/truck).

The perspective starts out:

"Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill them or swerve into a concrete wall, killing its passenger."

Folks, this is total bullshit.  An AV, even with today's technology, such as Tesla's, will be continually monitoring the environment.  To suppose the scenario proposed above is to suppose that somehow (!!) the AV is caught unawares. Either the AV has malfunctioned, or it is driving faster than its sensors can detect problems. Neither is likely, and neither poses any ethical dilemma.

Furthermore, the scenario assumes, quite preposterously, that one can predict the outcome of any action with certainty. The action most likely to mitigate harm is the obvious one: slam on the brakes and sound the horn.  This minimizes the kinetic energy involved and maximizes the time for the pedestrians to react.

Furthermore, the "crashing into a concrete wall" scenario is a one-in-a-trillion possibility.  And as a practical matter, slamming on the brakes will never result in a lawsuit.  Swerving into a wall almost certainly will.

Take a look at the picture that supposedly illustrates the "dilemma".  What's wrong with this picture?

- No brick wall :-)
- Unobstructed sight lines for foreground AV.
- No indication of how fast the foreground AV is going.

In fact, unless the foreground AV has malfunctioned, it will have already stopped or it will be applying maximum brakes.  We can imagine the speed limit for this two-lane road is less than 50 mph, so given the implied reaction time (no visual obstructions), it is quite reasonable to assume that the car will stop before hitting the children.
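A back-of-envelope stopping-distance calculation supports this. The standard formula is d = v^2 / (2 * mu * g); the friction coefficient and the AV reaction time below are illustrative assumptions on my part, not figures from either article.

```python
# Rough stopping distance at the assumed 50 mph limit, dry pavement.
# mu = 0.7 and a 0.2 s sensor reaction time are illustrative assumptions.
mph_to_ms = 0.44704
v = 50 * mph_to_ms             # ~22.4 m/s
mu, g = 0.7, 9.81
reaction_time = 0.2            # seconds; sensors react far faster than humans
d_react = v * reaction_time            # distance covered before braking
d_brake = v * v / (2 * mu * g)         # distance covered while braking
total = d_react + d_brake              # roughly 40 m
```

With clear sight lines well beyond 40 m, an AV that brakes immediately stops with room to spare.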

Furthermore, given the guard rail and its proximity to the foreground AV, swerving into the guard rail is likely to decrease the AV's maneuverability, increase the chances of careening into the background AV and increase the chance of hitting the children.  In no way is swerving left or right a reasonable choice.

Swerving is likely to confuse the children.  If the foreground AV simply brakes, the children have maximum chance of avoiding the car, either by getting close to the guard rail or by jumping over it.  This would be true even if there were a cliff on the other side of the rail.

And one more thing.  The technology behind AV's will almost certainly involve communication between AV's.  So the foreground AV, having the better sight lines, will be warning the background AV to brake.

What's wrong with the thought experiment

Now let me turn to the main research article.  This article is a variation on a preposterous thought experiment known as the trolley problem that is, alas, presently common in psychology and economics. This is the worst kind of thought experiment, one that is completely out of touch with reality.  I am astounded that this kind of nonsense is taken seriously.

The plain fact of the matter is that this thought experiment "explores" a situation that can never ever happen.  The reason is clear: it asks experimental subjects to assume something contrary to fact, that one can know the results of actions taken under extreme time pressure.

Once again, in all plausible scenarios, the proper action is not to switch the trolley from one track to another, but to attempt to warn the people in danger.  If one has time to take the Rube Goldberg actions contemplated in the thought experiment (presumably after considering the consequences!) then one has enough time to yell, or to get into a car and blow the horn, or whatever.

In short, this thought experiment is utter nonsense, and whatever conclusions it is supposed to "deliver" are also utter nonsense.  How is this supposed to be real science?

Let me be clear about how silly this thought experiment is. It ignores two fundamental facts about real-world emergencies: uncertainty and urgency.  We cannot know the results of actions.  If the trolley is heading toward five people, is it not more likely that one of the five will see the approaching danger and warn the others?  And if there is time for calculation, there is also time for other actions not envisaged by this thought experiment.  In short, this experiment can teach us nothing about the real world.

Conclusions and summary

In all practical situations, slamming on the brakes and sounding the horn has the best practical chance of reducing harm to all concerned.  It maintains maximum control of the AV, it continually, smoothly and predictably reduces kinetic energy and it gives pedestrians and other vehicles the best chance of avoiding collisions.  It also eliminates the risk of lawsuits.

There are, in fact, no ethical dilemmas involved with AV's, except possibly for this: slamming on the brakes could injure passengers inside the AV, something not mentioned in either article. This is a real possibility if passengers neglect to wear seat belts. And that might become more common when AV's are (rightly!) perceived to be much safer than human-driven vehicles.

Both articles assume that AV's could be "programmed" to deal with oh-so-unlikely ethical dilemmas.  In fact, AV's will use neural networks that mimic standard driving practice.  There is almost no chance that bizarre driving behavior will arise from such neural networks.

In short, these articles have nothing whatever to offer the engineers working on AV's.  They might, however, provide unjustified cover for insurance companies wanting to delay AV's, thereby delaying the day when AV's reduce the slaughter on our highways.  An epic, epic fail for Science Magazine.

Edward K. Ream
July 1, 2016

P.S. After writing the original post, I see that there is another embarrassing question to ask the authors.  If the moral dilemma exists for AV's, why doesn't it also exist for cars driven by humans?

The answer is clear: braking is the only logical course in all real situations. The scenarios discussed in the papers simply never happen.  If we assume that AV's will be better drivers than humans, then the supposed dilemma will arise even less often than for humans.  Except that the dilemma never happens for humans either ;-)

P.P.S. We can see that the trolley problem and its ilk lead to incorrect conclusions. We could call these papers reductio ad absurdum refutations of their own conclusions and, by extension, of the faulty form of analysis engendered by the Trolley Problem.

EKR
July 2, 2016 

Monday, June 20, 2016

Mark Blyth

In a recent article in Foreign Affairs magazine, called Capitalism in Crisis, Mark Blyth wrote:

"Of course, people have predicted an environmental apocalypse before. A group of experts called the Club of Rome famously published The Limits to Growth in the 1970s, forecasting economic and environ­mental crises—and those predictions have failed to come to pass. But this time may be different."

According to Blyth, this passage was added by the editors of Foreign Affairs. Sadly, it is a denialist trope.  Blyth is well aware of The Limits to Growth and does not need to be convinced.

Friday, May 6, 2016

A thorough refutation of Republican/Libertarian ideology

The latest issue of Foreign Affairs Magazine is one of the best in recent memory.

The issue starts out with several fascinating articles about Russia and Putin.

The article:

Jacob S. Hacker and Paul Pierson, Foreign Affairs, Vol. 95, no. 3, May-June 2016.
Making America Great Again, The Case for the Mixed Economy
https://www.foreignaffairs.com/articles/united-states/2016-03-21/making-america-great-again

is, in essence, a complete refutation of the really quite ridiculous notion that "Government is the Problem". Hacker and Pierson convincingly demonstrate that strong government is essential to the orderly functioning of free markets and contributes decisively to the general welfare.

It would be great if every Republican and every Libertarian would read this and understand how crazy the current climate is.  But I'm afraid that the following applies: "Whom the Gods would destroy, they first make mad."

Edward