Tuesday, April 08, 2008

Some Thoughts on Physical Symbol Systems

When Allen Newell and Herbert Simon argued in "Computer Science as Empirical Inquiry: Symbols and Search" that a computer could be regarded as a physical symbol system – essentially, a physical instantiation of an interpreted syntax – and that this was a meaningful level of interpretation and expression, they founded a paradigm which has taken its place at the center of classical A.I. and which has been highly influential in the development of computer science. But a computer, in the sense of a physical concatenation of parts which implements a Universal Turing Machine (UTM), is a human artifact. It does what we think it does in virtue of an interpretation of its actions, which we provide as users. Indeed, this interpretation is twofold. In the first place, there is the interpretation of the behavior of the physical machine, according to which we relate certain discrete classes of input, output, and stored physical states to numbers, so that the machine as a whole can be seen as implementing a computable arithmetic function. Layered on top of this is a secondary interpretation of the computable function, wherein the digital inputs and outputs are seen as encoding semantic features.

The inherently digital element of the computational paradigm is often glossed over or forgotten in popular discussions of the subject, and it is something we ignore at our peril in discussions of machine intelligence. The claim that a computer can implement intelligence is often taken to be identical with, or at least entailed by, the position that intelligence is a fundamentally physical process subject to scientific investigation and natural law. But there is more to it than that, for the computational claim further asserts that a single computable arithmetic function can be interpreted in such a way as to embrace the whole of what we consider to be semantic competence. Not only is this claim not obvious, it is not obviously of a piece with scientific materialism. By the same token, if the physical symbol system hypothesis is to be pushed beyond the realm of artifacts cobbled together by semantically conversant hominids and into the realm of natural systems, the question must be asked: are we warranted in applying the hypothesis to systems which have evolved in nature, and in what sense could it ever be requisite for explaining the behavior of such systems? Behind this question lurks a meta-question: how did the ability to interpret the behavior of things in terms of algorithms, and ultimately of UTMs, appear in the natural world in the first place?
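
To make the layering concrete, here is a toy sketch of my own (not anything from Newell and Simon): a pretend device whose raw states are voltages, which a first interpretation reads as binary numerals implementing addition modulo 16, and which a second, purely conventional interpretation reads as compass headings. All names and thresholds here are invented for illustration.

    # Layer 0: raw 'physical' states (here, voltage readings on four lines).
    # Layer 1: a digital reading that turns voltages into bits and the device
    #          into a computable arithmetic function (addition modulo 16).
    # Layer 2: a semantic reading that treats the numbers as encoding headings.

    def digitize(voltages):
        """Map continuous physical states onto discrete symbols (0s and 1s)."""
        return [1 if v > 2.5 else 0 for v in voltages]  # TTL-style threshold

    def as_number(bits):
        """Read the bit vector as a binary numeral, most significant bit first."""
        return sum(bit << i for i, bit in enumerate(reversed(bits)))

    def machine(voltages_a, voltages_b):
        """Physically, this just transforms voltages; under the digital
        interpretation, it computes addition modulo 16."""
        a = as_number(digitize(voltages_a))
        b = as_number(digitize(voltages_b))
        return (a + b) % 16

    # The second interpretation is purely conventional: the same numbers
    # could just as well encode letters, prices, or compass headings.
    HEADINGS = {0: "N", 4: "E", 8: "S", 12: "W"}

    result = machine([0.1, 3.3, 0.2, 0.1], [0.0, 3.1, 0.0, 0.3])  # 4 + 4
    print(result, HEADINGS.get(result, "?"))                       # -> 8 S

The point of the toy is that nothing in the physics privileges either reading; the digital and the semantic layers are both interpretations that we, the users, bring to the voltages.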

It is important to be clear about this. There is no serious reason to doubt the Church-Turing thesis, and I am not suggesting that we call it into question. Any behavior that we can sufficiently delineate in terms of an algorithm of physical steps can be implemented by an appropriately programmed Universal Turing Machine; what is in question here is precisely the physical limit on what can be so delineated. Stuart Kauffman, among others, has pointed out that the occurrence of exaptation in the evolutionary process cannot be predicted in advance, owing to its dependence upon chance confluences of feature and circumstance. This gives us good grounds for believing that evolution by natural selection is not computable in the limit: there is no way to be certain in advance that any fixed set of atomic terms and relations will be adequate for describing what is going to be relevant in the domain.
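
Here is a toy illustration of my own (not Kauffman's model) of why a fixed vocabulary of atomic terms can fail in an open-ended domain: the simulated environment occasionally mints feature-kinds that an observer's fixed ontology simply has no term for.

    import random
    random.seed(1)

    FIXED_VOCAB = {"heat", "light", "food"}   # the observer's atomic terms

    def environment_step(t, kinds):
        """Usually emits a familiar feature; occasionally mints a genuinely
        new kind, a stand-in for exaptation: something becoming relevant
        that no prior description anticipated."""
        if random.random() < 0.1:
            kinds.add("novel_%d" % t)
        return random.choice(sorted(kinds))

    kinds = set(FIXED_VOCAB)
    for t in range(30):
        feature = environment_step(t, kinds)
        if feature not in FIXED_VOCAB:
            print("t=%d: '%s' falls outside the fixed vocabulary" % (t, feature))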

This may be vital, for it is a self-perpetuating organization – what Kauffman calls an autocatalytic agent – that seems to be the candidate natural system for which an algorithmic explanation is mandatory. If a system is configured to respond to environmental circumstances in such a way as to maintain a set of boundary conditions which in turn perpetuate the system's survival, then some sets of environmental features are relevant to that survival in a way in which others are not, and the system's behavior in responding to – and learning about – the features which are relevant is what justifies a rule-based interpretation. The position amounts to something that is almost, but not quite, paradoxical: the reason that bits of living processes can be, and indeed demand to be, interpreted as implementations of Turing Machines derives from the fact that the processes overall are engaged in an open-ended evolutionary development that is not computable. To put it another way, the very reason that certain physical states of certain organisms have to be regarded as symbolic, as denoting or delineating other states in the world, has fundamentally to do with the fact that those organisms are self-catalyzing, self-reproducing systems that evolved via selection processes within a thermodynamically 'open' natural process whose boundary conditions cannot be pre-specified.
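
A crude sketch, again my own invention rather than anything in Kauffman, may help fix the idea. The agent below must keep an internal energy level within bounds to persist; 'food' and 'toxin' are relevant to that task in a way 'breeze' and 'color' are not, and it is exactly the systematic response to the relevant features that tempts us to read the agent as following rules over symbols.

    import random
    random.seed(0)

    class Agent:
        def __init__(self):
            self.energy = 10.0            # boundary condition: stay in (0, 20)

        def alive(self):
            return 0.0 < self.energy < 20.0

        def respond(self, feature):
            # Relevant features get systematic, survival-serving responses;
            # irrelevant ones ('breeze', 'color') provoke no response at all.
            if feature == "food":
                self.energy += 2.0        # 'food' functions symbolically: eat
            elif feature == "toxin":
                self.energy -= 0.5        # retreating from 'toxin' costs a little
            self.energy -= 0.8            # metabolic cost of persisting at all

    agent = Agent()
    world = ["food", "toxin", "breeze", "color"]
    for step in range(50):
        agent.respond(random.choice(world))
        if not agent.alive():
            print("boundary conditions lost at step %d" % step)
            break
    else:
        print("still self-maintaining, energy=%.1f" % agent.energy)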

So, where does this leave semantic intelligence, which in some respects seems very like the internalization of the natural selection process within an individual organism? While the programs of a Universal Turing Machine are good models for localized pieces of the process, and while the overall process by which the pieces are coordinated is certainly constrained by natural law, the coordination within that framework is too tychistic, opportunistic, adaptive, and innovative to be entirely implemented by a finite program and a fixed ontology that anticipate all contingencies in advance. This is not to disparage programs and ontologies by any means; rather, it suggests that special attention must be focused on the process whereby these get compiled, decompiled, and deployed, as this develops from the interactions of a community of comparatively blind agents following simple rules. Evolving programs, wherein even the atomic terms are potentially subject to environmentally originating modification, may be the model to consider. The use of the word 'evolving' is deliberate: the coordination process appears to me to have much in common with natural selection; indeed, to reiterate, it may in some sense be an internalization of it, with populations of neural agents in the central nervous system replacing populations of organisms in an ecosystem. By the same token, these reflections suggest that examining the intelligent agent's environment, and the agent's individual and hereditary history within that environment, may be crucial for understanding intelligence.
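
As a final sketch, here is a toy of my own devising (not a specific published algorithm) of 'evolving programs' in the loose sense intended above: a population of simple term-sets evolves under selection, and mutation can import brand-new atomic terms from a changing environment, so that even the ontology drifts rather than being fixed in advance.

    import random
    random.seed(2)

    env_terms = ["heat", "light"]             # the terms the world starts with

    def fitness(rule, target):
        """Score a rule (here just a set of terms) by overlap with what
        currently matters in the environment."""
        return len(rule & target)

    population = [{random.choice(env_terms)} for _ in range(8)]
    for generation in range(20):
        if random.random() < 0.3:             # the world changes: a new term
            env_terms.append("term_%d" % generation)
        target = set(random.sample(env_terms, min(2, len(env_terms))))
        population.sort(key=lambda r: fitness(r, target), reverse=True)
        survivors = population[:4]            # truncation selection
        children = []
        for parent in survivors:
            child = set(parent)
            if random.random() < 0.5:         # mutation may pull in a term
                child.add(random.choice(env_terms))   # from the open ontology
            children.append(child)
        population = survivors + children

    print("evolved vocabularies:", population[:3])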

Bibliography

Kauffman, S. Investigations, Oxford University Press, New York (2000).

Newell, A. and Simon, H. A., "Computer Science as Empirical Inquiry: Symbols and Search," Communications of the ACM, 19(3), pp. 113-126 (1976).