Thursday, July 23, 2009

Learning in the Natural World

A couple of thoughts concerning a recent release on learning, published on the Science Daily site. One is that it is somewhat revealing of how some folks still think of learning potential and semantic intelligence as though these were, in some sense, independent of the environments in which they are embedded. We've known for quite some time that animals, humans being no exception, are predisposed to abstract certain generalizations from certain evidence. Such predispositions are the only way it can be possible for humans to learn, e.g., subclasses of grammar classes which are provably unlearnable in the limit from the paucity of input data on which humans evidently rely. It also seems indisputable that these predispositions are rooted in the evolutionary lineage of the species; our epistemology is built on a foundation of syntheses that represent, as it were, an archaeology of what our ancestors needed to induce given the requirements of the times.

It's also important not to discount the role of the secondary effects of modeling other minds. Humans don't just induce rules in the sense of adapting their behavior so as to conform with identifiable rule-sets: they are able to model this behavior in themselves and others in a way that results in further adaptation of behavior, and to appeal to models as justification for behavior. This opens the door to things like culture and narrative playing a role in the learning process.

These considerations add up to the conclusion that the process whereby humans arrive, in development, at semantic awareness sufficient to let things like society and acculturation play a role in learning can't happen any old way. And the way that it does happen is likely to be a reflection of the development of the species as a whole. For this reason, it behooves us to be very wary of assuming that features we readily perceive as causally or transparently connected would appear so, or could appear so, to brains at earlier stages of development.

Here's a practical upshot for one of the things that Kuhl looked at. It's a pretty safe assumption that television and computer monitors weren't on the scene fifty-and-some-odd thousand years ago, when the bootstrap-your-way-into-semantic-awareness procedure was working itself out on the African veldt. And when one considers the mess of cultural and artistic conventions that support interpreting a bunch of phosphorescent pixels on a 2-D surface as a 3-D representation of a narrative involving a person, there is no reason to think that applying this set of conventions would be a native human ability, or anything like a native human ability. In short, we really shouldn't be surprised that human infants can't inherently learn things from people in television images: expecting otherwise is, frankly, as crazy as expecting that a flat red octagon with the letters S-T-O-P printed on it in white would 'automatically' qualify as a stop sign for a Chaldean shepherd in the 6th century B.C.

Saturday, July 18, 2009

Thoughts on 'Automation' I

In a recently-published article, Shankar Vedantam describes a growing (or at least resurgent) frustration concerning what the author refers to as 'automation'. The use case cited is the June 22, 2009 Washington subway crash, wherein one train that had stopped on the Washington Metro Red Line was hit by another, oncoming one, for reasons that as of this writing are still being elucidated by investigators. The fact that the final report on the case has yet to be written highlights a weakness of the article, since it isn't clear that the Washington Metro case is really an example of the kind of situation that concerns the author. However, Vedantam also cites more definitive examples of what we might term the 'automation problem': for instance, he describes a case in Warsaw in which a plane, equipped with an automated sensor system designed to prevent premature thrust reversal by suppressing the reverse-thrust function until the plane's weight was fully resting on its wheels, overshot a runway during a rainstorm: the plane hydroplaned on landing, and the weight-sensitive sensors failed to trigger in time.

Although Vedantam never specifically defines what he means by 'automation' (another weakness of the article), examples like this make it pretty clear that what he has in mind are systems intended to be self-correcting in maintaining a desirable stable state, based on a model of the world that informs the system design: the kind of functional organization that Norbert Wiener long ago coined the word cybernetic to describe. A fundamental issue with such systems is precisely that, while they are very good at identifying and coping with the consequences of one or more explicitly or implicitly stored models of the way the world works, they remain mostly bad at identifying and coping with cases in which the evidence points to the available models' failure or inapplicability. The case of the hydroplaning airplane is a beautiful example: the weight sensors' failure to engage during a tractionless skid was an outcome the system's designers did not anticipate, and there was no way the system per se could recognize, or even hypothesize, that it was in a situation to which its grounding theory had ceased to apply.
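The interlock logic described above can be sketched in a few lines. This is a deliberately toy illustration, not real avionics code: the function name, units, and threshold are all invented for the example. The point is that the system's entire 'theory' of being on the ground is a single biconditional (weight on wheels if and only if landed), and the hydroplaning case falsifies that theory in a way the logic cannot detect from the inside.

```python
def reverse_thrust_permitted(weight_on_wheels_n: float,
                             threshold_n: float = 50_000.0) -> bool:
    """Hypothetical weight-on-wheels interlock (illustrative only).

    Encodes the implicit world-model: 'the plane is on the ground
    if and only if its weight is pressing on the landing gear.'
    """
    return weight_on_wheels_n >= threshold_n

# Normal landing: full weight settles on the gear; the interlock releases
# and reverse thrust is allowed.
print(reverse_thrust_permitted(180_000.0))  # True

# Hydroplaning landing: the plane is on the runway, but the tires skate on
# standing water and the sensed load stays low. The model's biconditional
# fails, and the interlock goes on suppressing reverse thrust -- precisely
# the failure mode the system cannot recognize within its own theory.
print(reverse_thrust_permitted(20_000.0))   # False
```

Notice that nothing in the code is 'buggy' in the ordinary sense; every line does what its designers intended. The failure lives entirely in the gap between the stored model and the world.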

Before congratulating ourselves on possession of an ability that has yet to be duplicated by our artifacts, we should pause to consider that humans themselves aren't particularly good at this. As a rule, we like our paradigms better than they strictly deserve, and we have an oft-indulged habit of overfitting data to model. For instances we need look no further than the recent Wall Street debacle which ended the lifespans of so many venerable investment firms: the story here is admittedly complex, but a significant part of it consists in the collective failure of many investment officers and fund managers to recognize that the complex pricing models underpinning the mess of credit default swaps that dominated the housing market had ceased to be predictive of the current business environment.