Saturday, August 01, 2009

Lately, I've been reviewing a recent post of Seth Shostak's at www.seti.org. Shostak makes the familiar argument that, regardless of whether or not selection pressures arising from and acting on semantic intelligence converge on a humanoid body plan, once semantic intelligence has emerged, it will quickly (in the cosmic time scheme) find technological means of liberating itself from biology, and will elevate itself to god-like status via embodiment in integrated circuits or the next big thing (or the next few big things) in solid-state electronic engineering. Hence, there is an enormous probability that any extraterrestrial intelligence we encounter will be an abiological, god-like AI, and so there is no reason to expect that such a thing would be housed in a humanoid body.

A few issues with this. I confess I'm playing devil's advocate with respect to some of them, but not with respect to all.

First, is it really so obvious that human-like intelligence can be 'scaled up' to god-like capability with respect to memory storage, processing speed, data handling, and so on? The fact that industrial-scale data processing systems still conspicuously lack semantic reasoning capability, and the fact that human intelligence 'topped out' where it did in the evolutionary scheme of things, may be forewarning us that there are in fact strict computational limits on how intelligent intelligence can be, whatever that may mean exactly, and perhaps also limits on how good a job intelligence can do at improving itself along the lines of processing speed, memory, effective generalization from examples, or any other relevant dimension you might care to name. And, although one would think the point would be obvious, perhaps I'd better take the opportunity here to emphasize that this is not the beginning of an argument against artificial intelligence per se. Quite the contrary: I remain firmly convinced that human semantic intelligence is a natural phenomenon that we will eventually see reproduced in systems that humans have devised. But believing that human artifacts will eventually think like humans is one thing; believing that intelligence can be scaled up to 'super-human proportions' is another. I'm not even sure what the latter means, exactly, but I can think of several candidate definitions, and, unfortunately, for each of them I can see reasons it might not be possible to accomplish.

It's also not clear that the odd and frankly disturbing prejudice Shostak betrays against 'conventional' biology is really warranted. It's important to understand, when I say this, that I'm not proposing there is anything special or magical about known biochemistry when it comes to implementing intelligence, or, for that matter, when it comes to implementing evolution by natural selection. But, by the same token, I don't think there's anything particularly special about silicon and integrated circuits from the standpoint of designing hardware platforms for implementing human-like, or transhuman, intelligence. I'd say the empirical jury is still very much out on the question of what the most efficient physical architecture in this department is, and I for one wouldn't be remotely surprised if it turned out to be spongy brains (or some concatenation of neural tissues, anyway) floating in saline solution after all, albeit genetically engineered and/or otherwise-optimized ones.

Regarding the question of how necessary or unnecessary a humanoid body is for implementing generalized semantic competence, I'm reminded of a little-known argument Isaac Asimov made in The Caves of Steel: to the extent that the evolutionary history of human beings has conspired to make the human body a very good generalist when it comes to tool manipulation, it follows that the human body is the optimal form we know of for systems dedicated to the manipulation of other, non-sentient instrumentalities (toasters, front-end loaders, dump trucks, plumbing, PCs...). If the Asimov argument is correct (I, for one, don't find it particularly persuasive), it argues not only for a more or less omnipresent selection pressure in favor of humanoid forms for semantic intelligence; it also suggests that, even if extraterrestrial intelligences didn't evolve into a humanoid form to begin with, they might be well advised to adopt one.

There are also a number of similar arguments, put forward by Stevan Harnad and George Lakoff, to the effect that human metaphoric understanding, and indeed intentionality, presuppose a humanoid embodiment. The convergent-evolution argument advanced by Simon Conway Morris (and others before him with less obvious agendas) is really an elaboration of this: assuming that the human form is unavoidably involved in the development of semantic intelligence, and assuming there are strong selection pressures operating in favor of semantic intelligence, it follows that there are strong selection pressures operating in favor of humanoid forms. Although I think this line of reasoning is ultimately wrong, I also fear that Shostak may be giving it short shrift: if it were right, the fact that a strong evolutionary pressure favored a humanoid form as the embodiment of humanistic intelligence would constitute prima facie evidence that an artificial intelligence would need to take such a form as well in order to succeed as such.

As I say, though, I'm not persuaded by the argument that a human body per se is a necessary condition for humanistic intelligence, which, on the face of it, seems more akin to flight than to hydrodynamic streamlining. That is, while there is a set of fairly complex parameters that have to be observed, one also gets the impression, based in part on near-misses or proto-cases like cephalopods and pachyderms, that there's more than one way of skinning the proverbial cat as far as evolutionary implementation is concerned. But hiding behind the false prejudice in favor of humanoid forms, I suspect, there lurks this truth: namely, that a humanistic intelligence can only exist as a self-interested agent within a community of such agents, and that behind as well as within each such agent there must be one or more lineages of such agents developing under the influence of natural selection. Thus, while I wouldn't expect an alien intelligence to look like us, I would bet dollars to devil dogs (Shostak's phrase) that it would look more like an engineered ecology than some kind of operating system.
