Saturday, July 18, 2009

Thoughts on 'Automation' I

In a recently published article, Shankar Vedantam describes a growing (or at least resurgent) frustration with what he refers to as 'automation'. The case in point is the June 22, 2009 Washington subway crash, in which a train that had stopped on the Washington Metro Red Line was struck from behind by a following train, for reasons that as of this writing are still being worked out by investigators. The fact that the final report on the case has yet to be written highlights a weakness of the article, since it isn't clear that the Washington Metro case is really an example of the kind of situation that concerns the author. Vedantam does, however, cite more definitive examples of what we might term the 'automation problem': for instance, he describes a case in Warsaw in which a plane equipped with an automated sensor system, designed to prevent premature thrust reversal by suppressing the reverse-thrust function until the plane's weight was fully resting on its wheels, overshot a runway during a rainstorm. The plane hydroplaned on landing, and the weight-sensitive sensors failed to trigger in time.

Although Vedantam never specifically defines what he means by 'automation' (another weakness of the article), examples like this make it pretty clear that what he has in mind are systems intended to be self-correcting in maintaining some desirable stable state, on the basis of a model of the world built into the system's design: the kind of functional organization that Norbert Wiener long ago coined the word 'cybernetic' to describe. A fundamental issue with such systems is precisely that, while they are very good at identifying and coping with the consequences of the models of the world they explicitly or implicitly embody, they remain mostly bad at recognizing and coping with cases in which the evidence points to those models' failure or inapplicability. The hydroplaning airplane is a beautiful example: the system's designers did not anticipate a tractionless skid in which the weight sensors would fail to engage, and there was no way the system itself could recognize, or even hypothesize, that it was in a situation to which its grounding theory had ceased to apply.
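To make that failure mode concrete, here is a minimal sketch, in Python, of a weight-on-wheels interlock of the general kind described above. Everything in it (the sensor fields, the two-strut rule, the sample readings) is invented for illustration and is not drawn from the actual aircraft systems involved; it is meant only to show how a rule encoding 'on the ground if and only if the struts are compressed' behaves when that model stops describing reality.

    # Hypothetical weight-on-wheels (WoW) interlock, invented for illustration.
    # The rule encoded here is the system's model of the world: the aircraft
    # is "on the ground" exactly when both main-gear struts are compressed.

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        """One snapshot of what the automation can 'see'."""
        left_strut_compressed: bool
        right_strut_compressed: bool

    def reverse_thrust_permitted(frame: SensorFrame) -> bool:
        # Suppress reverse thrust until the model says we are firmly on the ground.
        return frame.left_strut_compressed and frame.right_strut_compressed

    # Normal landing: both struts compress at touchdown, the interlock releases,
    # and reverse thrust becomes available right away.
    normal_landing = SensorFrame(left_strut_compressed=True,
                                 right_strut_compressed=True)

    # Hydroplaning landing: the aircraft is rolling down the runway, but a film
    # of water keeps the gear from bearing its full weight, so one strut never
    # reports compression. The model has stopped describing reality, and the
    # interlock keeps reverse thrust locked out while the runway runs out.
    hydroplaning_landing = SensorFrame(left_strut_compressed=False,
                                       right_strut_compressed=True)

    for label, frame in [("normal", normal_landing),
                         ("hydroplaning", hydroplaning_landing)]:
        print(f"{label}: reverse thrust permitted = {reverse_thrust_permitted(frame)}")

The point is not that the logic is wrong given its premises; it is that nothing in the system can notice when the premise itself has failed.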

Before congratulating ourselves on possessing an ability that has yet to be duplicated by our artifacts, we should pause to consider that humans themselves aren't particularly good at this either. As a rule, we like our paradigms better than they strictly deserve, and we have a well-practiced habit of forcing the data to fit the model. For an example we need look no farther than the recent Wall Street debacle that ended the lifespans of so many venerable investment firms: the story here is admittedly complex, but a significant part of it consists in the collective failure of many investment officers and fund managers to recognize that the complex pricing models underpinning the mess of credit default swaps that dominated the housing market had ceased to apply to the prevailing business environment.
