Monday, February 14, 2011

Intentionality, Rationality, and Cooperation

Of late, I've been reading The Evolution of Cooperation by Robert Axelrod (revised edition, Basic Books, 2006).  I'm suitably impressed - this was remarkable and significant work, and I find myself somewhat troubled by the intuition that it is not nearly as widely circulated and known as it ought to be.  The central theme of the book - the conditions for transcendence of the prisoner's dilemma - has proved to be a crux for much of my own extracurricular study.  The more I've examined the matter, the more convinced I am that natural selection affords a purely naturalistic framework whereby what might be called higher-order features - 'higher-order' in the strict logical sense that they can only be represented in a more expressive language, one that quantifies over the predicates and propositions of whatever basal language is used for reasoning about physics - acquire real selective effectiveness once a certain order of complexity is reached.  The prisoner's dilemma is central to this process, insofar as symbiotic cooperation is the starting point for the emergence of aggregate, self-replicating agents (multicellular organisms, for example) whose adaptive behavior, as implemented through natural-selection processes acting on their components, begins to require descriptions of a higher order of expressivity.

I do wonder, though, whether Axelrod is letting the objective reading he gives the discount parameter (which is, indeed, well advised) lull him into false convictions about the rationality of cooperation's evolution.  The discount parameter of an iterated series of prisoner's dilemmas is a quantitative measure of the rate at which the value of successive rounds decays for the players, and so can be taken as a measure of the weight the future of the series carries for them - and, by extension, of the extent to which their past experience of one another can have significance for what comes next.  But even on the assumption that we interpret 'value' in terms that can be readily quantified - contribution to the reproductive fitness of the organism, say - the problem remains that an organism which is incapable of any form of learning, whose behavior is affected neither by past experience nor by any model of the future, cannot implement a computation that validates a strategy of reciprocity: from the perspective of such an agent, iteration has no meaning and the discount parameter might as well be set at zero.  Within the horizon of this organism's knowledge, the only strategy which decision theory can validate will be to offer defection, no matter what.  Nor is it clear that a 'Skinnerian' organism, capable only of operant conditioning, fares any better at realizing a model in which it is rational to cooperate.  Assuming that the severely negative outcome of cooperating while the opposite number defects outweighs the milder negative outcome of mutual defection in shaping future behavior, the Skinnerian organism seems bound to converge on a strategy of proffering defection in any environment containing agents that proffer defection on the first encounter.
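To make the arithmetic concrete, here is a minimal sketch - my own illustration, not code from the book - using Axelrod's canonical payoff values and the closed forms for an indefinitely iterated game in which each successive round is discounted by a further factor of w:

    # Payoffs are Axelrod's canonical values: temptation, reward for mutual
    # cooperation, punishment for mutual defection, and the sucker's payoff.
    T, R, P, S = 5, 3, 1, 0

    # Closed-form discounted totals for the pairings of Tit-for-Tat (TFT)
    # and unconditional defection (ALLD), assuming indefinite iteration.
    def v_tft_vs_tft(w):    return R / (1 - w)           # cooperate forever
    def v_allD_vs_tft(w):   return T + w * P / (1 - w)   # exploit once, then mutual defection
    def v_tft_vs_allD(w):   return S + w * P / (1 - w)   # suckered once, then mutual defection
    def v_allD_vs_allD(w):  return P / (1 - w)           # defect forever

    for w in (0.0, 0.3, 0.9):
        print(f"w={w:.1f}:  TFT/TFT={v_tft_vs_tft(w):5.1f}"
              f"  ALLD/TFT={v_allD_vs_tft(w):5.1f}"
              f"  TFT/ALLD={v_tft_vs_allD(w):5.1f}"
              f"  ALLD/ALLD={v_allD_vs_allD(w):5.1f}")
    # At w = 0 - the effective horizon of an organism that can neither remember
    # nor anticipate - defection strictly dominates: 5 > 3 against a cooperator,
    # 1 > 0 against a defector.  Only once w exceeds (T-R)/(T-P) = 0.5 does
    # defecting against a reciprocator cease to pay better than reciprocating.

The shadow of the future, in other words, has to be cast before reciprocity can even be formulated as the better bet - and casting it is precisely what the learning-incapable organism cannot do.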

What we must always keep in mind, though, is that primitive organisms are not themselves using decision theory to model what is 'rational' - they are, after all, incapable of doing any such thing.  There is nothing whatever to preclude the emergence, through chance mutation, of the odd individual which, though incapable of learning, simply acts irrationally, proffering cooperation.  If two such individuals should meet, the resulting synergy may suffice to favor their producing offspring, thereby ensuring the survival of whatever trait gave rise to their irrationality in the first place.  In time, the cooperatives emerging along this avenue may become so tightly organized as to count as self-replicating aggregate organisms in their own right - not least because the behavioral plasticities such agents can exhibit, through internalized selection pressures acting on the populations of their still-replicating component agents, may confer a considerable advantage in the environment at large.

As I say, I'm pretty well convinced that if the behavior of the component agents is described in a first-order language, describing the behavior of the aggregate organism will require a second-order one - and the behavioral plasticities of aggregate organisms of a certain degree of complexity will eventually require a language that admits propositional embedding for predicates other than just the logical connectives.  Selection pressure favors the emergence of aggregate organisms whose behavioral plasticities, resulting from internal selection effects, can best be interpreted in terms of intentionality and semantics, because intentionality and semantics are the necessary conditions for memory and modeling, which in turn provide the framework for modeling an iterated prisoner's dilemma - the only context within which a cooperative or reciprocal strategy becomes rational.  There is, then, nothing intrinsically rational about 'significance', intentionality, and semantics, these having arisen only through a purely stochastic process of hierarchical organization, driven by selection pressures grounding out in the functional equivalent of a leap of faith.  But given intentionality and semantics, a long-term view becomes accessible in which rational strategies of reciprocation can be discerned, and to the extent that the interacting agents share the semantic stance, pursuit of such strategies has real survival value for the participants.  Eventually, the sophistication of agents and the expressivity of their world models will be driven to the point where it is possible, in communication, not only to frame the prisoner's dilemma and thereby circumvent it, but also to identify and reason about new venues for cooperation.
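The arithmetic behind that first step - a few chance cooperators finding one another - is worth checking.  Here is a rough sketch of Axelrod's cluster argument, again my own back-of-the-envelope illustration rather than anything from the book's appendices, where p is the assumed fraction of a reciprocator's interactions that happen to be with fellow reciprocators:

    # Axelrod's canonical payoffs and the same closed forms as before.
    T, R, P, S = 5, 3, 1, 0

    def v_tft_vs_tft(w):    return R / (1 - w)           # mutual cooperation throughout
    def v_tft_vs_allD(w):   return S + w * P / (1 - w)   # suckered once, then mutual defection
    def v_allD_vs_allD(w):  return P / (1 - w)           # mutual defection throughout

    def cluster_score(p, w):
        """Average discounted score of a reciprocator in a small cluster."""
        return p * v_tft_vs_tft(w) + (1 - p) * v_tft_vs_allD(w)

    def native_score(w):
        """Score of a defector among defectors (its rare encounters with the
        cluster are negligible and ignored, as in Axelrod's approximation)."""
        return v_allD_vs_allD(w)

    w = 0.9
    for p in (0.00, 0.05, 0.10, 0.20):
        print(f"p={p:.2f}: clustered reciprocator {cluster_score(p, w):6.2f}"
              f"  vs. native defector {native_score(w):6.2f}")
    # With w = 0.9 the crossover comes at p just under 5%: even a tiny knot of
    # 'irrationally' cooperative individuals that meet one another often enough
    # will out-score, and so out-reproduce, the defectors around them.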
