Sunday, August 23, 2009

human and machine intelligence: is existential conflict inevitable?

Recent discussions sponsored by the Association for the Advancement of Artificial Intelligence have reopened the general question of how much we as a species have to fear in the long run from A.I.: specifically, in engineering common-sense intelligences with situational competence and theoretical flexibility comparable to or surpassing that of modern humans, are we creating a new species that will outcompete us and render us extinct? It's a legitimate worry, if generations of science fiction stories from Čapek's R.U.R. to Moore's Battlestar Galactica are anything to go by - and I don't mean that facetiously. Science fiction has at least occasionally proven accurate in predicting developments in human industry and culture, so it's probably advisable to give some consideration to these warnings. But how seriously do we need to take them?

First, it's probably worth taking some time to dispense with a couple of specious considerations: namely, the claim that we shouldn't worry because we're not close to realizing A.I., together with its limit case, the claim that we shouldn't worry because A.I. is impossible. The first of these is closely analogous to the just-as-specious argument that we shouldn't worry about tracking near-Earth asteroids or pre-empting cosmic impacts because such events are historically very rare and improbable. Rare they are, but the negative consequences associated with one - in the limit, the destruction of human civilization along with a substantial portion of the terrestrial biosphere - are so radically severe that investment in technologies for tracking and pre-emption becomes rational. A similar consideration applies in the case of A.I. It may not be coming soon, but given that it's coming, we're advised to come to as clear an understanding as we can of the potential consequences for our species. As regards the claim that A.I. is impossible: dealing in any detail with the various a priori arguments that have been put forward in its defense is not possible within the scope of this posting; suffice it to say, I don't find any of them sufficiently convincing to justify complacency regarding the potential risks.

Now to the real problem. The decisive question, when it comes to whether A.I. poses an existential threat and, if so, how serious a threat, is reflected in the phrase I used when I asked: are we creating a new species that will outcompete us? The choice of terminology is significant. For the essence of the worry that conflict between human and A.I. is inevitable is precisely Darwinian: assume that any human-like intelligence with an other-than-human implementation is going to have roughly the same interests and require roughly the same resources as humans, and that the pursuit of enough of those interests and enough of those resources adds up to a zero-sum game, and you get the conclusion that species conflict, or something so like it as makes no difference, is inevitable.
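For concreteness, here is a minimal sketch of the zero-sum premise itself - mine, not anything from the AAAI discussions - in which a human and a machine agent divide a fixed stock of a contested resource; the total and the sample allocations are purely illustrative assumptions:

```python
# A toy illustration of the zero-sum premise: two agents dividing a fixed
# stock of a contested resource. All numbers are illustrative assumptions.

TOTAL = 100.0  # fixed stock of the contested resource (arbitrary units)

def payoffs(human_share: float) -> tuple[float, float]:
    """Return (human, machine) payoffs under a strictly zero-sum division."""
    return human_share, TOTAL - human_share

if __name__ == "__main__":
    for share in (80.0, 50.0, 20.0):
        h, m = payoffs(share)
        print(f"human: {h:5.1f}   machine: {m:5.1f}   sum: {h + m:.1f}")
    # Every allocation sums to the same constant, so any move that improves
    # one side's payoff worsens the other's by exactly the same amount:
    # there is no cooperative, mutually improving outcome to be found.
```

The force of the Darwinian worry rides entirely on that fixed-sum assumption, and much of what follows is really an argument over whether it holds.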

But is it in fact true that a non-human intelligence will have so many interests that overlap with ours in a zero-sum fashion that there won't be room enough for both in the ecosphere? That's not an easy question to resolve. If one approaches the matter on the basis of paleontology, the picture isn't terribly encouraging. There's ample evidence in the fossil record that at one time many species of hominid were roaming the planet, their representatives presumably exhibiting near-human intelligence. Obviously, only one survived. Why this is the case is still a matter of debate among paleontologists, and may always be so absent definitive empirical evidence. But there is an ominous suggestiveness about this state of affairs, a darkling hint that general intelligence may well be a jealous god in the evolutionary scheme of things: when it is present in varying degrees drawing upon the same resources, the inevitable outcome is that the higher will crowd out the lower. In this respect, intelligence may well exhibit a deep affinity with biology itself: spontaneous generation can happen only once, after which the very fact of its occurrence effectively vitiates the conditions for its repetition. In which case, the deliberate generation of a superior intelligence might well vitiate the conditions for the survival of its progenitor.
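To put the crowding-out intuition in slightly more explicit terms, here is a minimal sketch using the textbook Lotka-Volterra competition equations. It is an illustration of the ecological principle, not a model of hominid history; every parameter value below is an assumption chosen purely to make the point visible:

```python
# Lotka-Volterra competition between two populations, integrated with a
# simple forward-Euler scheme:
#
#   dN1/dt = r1 * N1 * (1 - (N1 + a12 * N2) / K1)
#   dN2/dt = r2 * N2 * (1 - (N2 + a21 * N1) / K2)
#
# With complete niche overlap (a12 = a21 = 1) and even a slight edge in
# carrying capacity for species 1, species 2 is eventually crowded out.
# All parameter values are illustrative assumptions.

def simulate(r1=0.5, r2=0.5, K1=105.0, K2=100.0, a12=1.0, a21=1.0,
             n1=10.0, n2=10.0, dt=0.01, steps=200_000):
    """Integrate the two-species competition model and return final sizes."""
    for _ in range(steps):
        dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / K1)
        dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / K2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

if __name__ == "__main__":
    n1, n2 = simulate()
    print(f"species 1 (slight edge): {n1:.2f}")  # settles near its K1 of 105
    print(f"species 2:               {n2:.2f}")  # driven toward zero
    # In the standard model, stable coexistence requires a12 < K1/K2 and
    # a21 < K2/K1, i.e. each species must limit itself more than it limits
    # its competitor - which is to say, the niches must be distinct.
```

Whether that condition - genuinely distinct niches - would obtain between human and machine intelligence is, of course, exactly the open question taken up in the next paragraph.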

The counter-argument is that we're not talking about the creation of another hominid that utilizes the same pool of resources as modern humanity. It isn't clear that an A.I. exhibiting a human or near-human degree of common-sense intelligence would have enough interests in common with humanity to render a Darwinian existential conflict inevitable. Indeed, I imagine there are still A.I. researchers who would question whether an A.I. would need to have any interests at all. I'm not one of them: it has long seemed to me that the fundamental problem of understanding and coping with relevance in A.I. is intimately related to the questions of biological agency, reproduction, and Darwinian survival, to the extent that I suspect it is really impossible to have a fully functional common-sense intelligence that is not a self-interested agent within the context of an inclusive ecology. That said, the jury is still out on whether the interests in question would overlap with ours in such a way as to make a balls-to-the-wall, death-or-glory battle for the same ecological niche an inevitability, or even a possibility. And it's important to remember, too, that intelligence has demonstrated a capability for identifying venues wherein mutual cooperation is more advantageous than conflict, and for establishing the social and cultural norms necessary for transcending aggressive instinct and accessing sociological springboards in the fitness landscape. None of this is in any way decisive, of course, but it does afford reason for hope.

A final point that is probably worth making concerns the matter of superior intelligence, and the aforementioned observation that, in a competition of intelligences, the highest order of intelligence is favored to win in the limit. Simply put, it's far from clear how 'superior' human-like intelligences can become. A very common presumption holds that, because intelligence is a natural phenomenon, and given that intelligence can be realized in artificial constructs, it should be possible to engineer intelligences that are in some sense 'god-like' with respect to human beings, in the sense of integrating over a scope of data many orders of magnitude beyond what humans are capable of, or in the sense of drawing dramatically deeper inferences, or in the sense of processing information at a much faster rate, or in all three senses and more besides. I can't emphasize strongly enough that this is a non sequitur. In fact, while I don't have a definitive story to tell here, nothing would surprise me less than if it turned out that there are provable computational limits on how 'god-like' a common-sense intelligence can be within a given time-scale, including (or perhaps especially) a human one. Given that common-sense intelligence does confer a general selective advantage, the expectation would be that selection pressure would drive its continuous expansion to trans-human proportions. The fact that this didn't happen - that Homo sapiens was the last hominid standing in the putative late-Pleistocene intelligence competition, instead of some übermensch - suggests that a hard-and-fast natural limit may be what ensured human intelligence topped out where it did. If true, this would do nothing to confirm or refute the claim that a conflict between human and machine intelligence is inevitable, but it would offer the comfort - possibly a cold one - that such a contest would at least take place upon a level playing field.

Saturday, August 01, 2009

Lately, I've been reviewing a recent post of Seth Shostak's at www.seti.org. Shostak makes the familiar argument that, regardless of whether selection pressures arising from and acting on semantic intelligence converge on a humanoid body plan, once semantic intelligence has emerged, it will quickly (on the cosmic time scale) find technological means of liberating itself from biology, and will elevate itself to god-like status via embodiment in integrated circuits or the next big thing (or the next few big things) in solid-state electronic engineering. Hence, there is an overwhelming probability that any extraterrestrial intelligence we encounter will be an abiological, god-like AI, and so there is no reason to expect that such a thing would be housed in a humanoid body.

A few issues with this. I confess I'm playing devil's advocate with respect to some of them, but not with respect to all.

First, is it really so obvious that human-like intelligence can be 'scaled up' to god-like-capability status with respect to memory storage, processing speed, data handling and so on? The fact that industrial-scale data processing systems still conspicuously lack semantic reasoning capability, and the fact that human intelligence 'topped out' where it did in the evolutionary scheme of things, may be forewarning us that there are in fact strict computational limits on how intelligent intelligence can be, whatever that may mean exactly, and also perhaps limits on how good a job intelligence can do of improving itself along the lines of processing speed, memory, effective generalization from examples, or any of the other relevant dimensions you might care to name. And, although one would think the point would be obvious, perhaps I'd better take the opportunity here to emphasize that this is not the beginning of an argument against artificial intelligence per se. Quite the contrary: I remain firmly convinced that human semantic intelligence is a natural phenomenon that we will eventually see reproduced in systems that humans have devised. But believing that human artifacts will eventually think like humans is one thing; believing that intelligence can be scaled up to 'super-human proportions' is another. I'm not even sure what the latter means, exactly - but I can think of several candidate definitions, and, unfortunately, for each of these, I can see reasons why it might not be achievable.

It's also not clear that the odd and frankly disturbing prejudice that Shostak betrays against 'conventional' biology is really warranted. It's important to understand, when I say this, that I'm not proposing that there is anything special or magical about known biochemistry when it comes to implementing intelligence - or for that matter, when it comes to implementing evolution by natural selection. But, by the same token, I don't think there's anything particularly special about silicon and integrated circuits from the standpoint of designing hardware platforms for implementing human-like or transhuman intelligence. I'd say the empirical jury is still very much out on the question of what is the most efficient physical architecture in this department, and I for one wouldn't be remotely surprised if it turned out to be spongy brains (or some concatenation of neural tissues, anyway) floating in saline solution after all - albeit perhaps genetically engineered and/or otherwise-optimized ones.

Regarding the question of how necessary or unnecessary a humanoid body is for implementing generalized semantic competence, I'm reminded of a little-known argument that Isaac Asimov made in The Caves of Steel: to the extent that the evolutionary history of human beings has conspired to make the human body a very good generalist when it comes to tool-manipulation, it follows that the human body is the optimal form we know when it comes to systems dedicated to the manipulation of other, non-sentient instrumentalities (toasters, front-end loaders, dump trucks, plumbing, PCs...). If the Asimov argument is correct (I, for one, don't find it particularly persuasive), it argues not only for a more or less omnipresent selection pressure in favor of humanoid forms for semantic intelligence, but it also suggests that, even if extraterrestrial intelligences didn't evolve into a humanoid form to begin with, they might be advised to adopt one.

There are also a number of similar arguments, put forward by Stevan Harnad and George Lakoff, to the effect that human metaphoric understanding and, indeed, intentionality presuppose a humanoid embodiment. The convergent evolution argument advanced by Simon Conway Morris (and others before him with less-obvious agendas) is really an elaboration of this: assuming that the human form is unavoidably involved in the development of semantic intelligence, and assuming there are strong selection pressures operating in favor of semantic intelligence, it follows that there are strong selection pressures operating in favor of humanoid forms. Although I think this line of reasoning is ultimately wrong, I also fear that Shostak may be giving it too short shrift: if it were right, the fact that a strong evolutionary pressure favored a humanoid form as the embodiment of humanistic intelligence would constitute prima facie evidence that an artificial intelligence would need to favor such a form as well in order to succeed as such.

As I say, though, I'm not persuaded by the argument that a human body per se is a necessary condition for humanistic intelligence, which, on the face of it, seems more akin to flight than to hydrodynamic streamlining. That is, while there is a set of fairly complex parameters that have to be observed, one also gets the impression, based in part on near-misses or proto-cases like cephalopods and pachyderms, that there's more than one way of skinning the proverbial cat as far as evolutionary implementation is concerned. But hiding behind the false prejudice in favor of humanoid forms, I suspect, there lurks this truth: namely, that a humanistic intelligence can only exist as a self-interested agent within a community of such agents, and that behind as well as within each such agent there must be one or more lineages of such agents developing under the influence of natural selection. Thus, while I wouldn't expect an alien intelligence to look like us, I would bet dollars to devil dogs (Shostak's phrase) that it would look more like an engineered ecology than some kind of operating system.