Sunday, August 23, 2009

human and machine intelligence: is existential conflict inevitable?

Recent discussions sponsored by the Association for the Advancement of Artificial Intelligence have reopened the general question of how much we as a species have to fear in the long run from A.I.: specifically, in engineering common-sense intelligences with situational competence and theoretical flexibility comparable to or surpassing that of modern humans, are we creating a new species that will outcompete us and render us extinct? It's a legitimate worry, if generations of science fiction stories from Čapek's R.U.R. to Moore's Battlestar Galactica are anything to go by - and I don't mean that facetiously. Science fiction has at least occasionally proven accurate in predicting developments in human industry and culture, so it's probably advisable to give some consideration to these warnings. But how seriously do we need to take them?

First, it's probably worth taking some time to dispense with a couple of specious considerations: namely, the claim that we shouldn't worry because we're not close to realizing A.I., together with its limit case, the claim that we shouldn't worry because A.I. is impossible. The first of these is closely analogous to the equally specious argument that we shouldn't worry about tracking near-Earth asteroids or pre-empting cosmic impacts because such events are historically very rare and improbable. Rare they are, but the negative consequences associated with one - in the limit, the destruction of human civilization along with a substantial portion of the terrestrial biosphere - are so radically severe that investment in technologies for tracking and pre-emption becomes rational. A similar consideration applies in the case of A.I. It may not be coming soon, but if it is coming, we're well advised to come to as clear an understanding as we can of the potential consequences for our species. As regards the claim that A.I. is impossible: dealing in any detail with the various a priori arguments that have been put forward in its defense is not possible within the scope of this post; suffice it to say, I don't find any of them convincing enough to justify complacency about the potential risks.

Now to the real problem. The decisive question, when it comes to whether A.I. poses an existential threat and, if so, how serious a threat, is reflected in the phrase I used above when I asked: are we creating a new species that will outcompete us? The choice of terminology is significant. For the essence of the worry that conflict between human and machine intelligence is inevitable is precisely Darwinian: assume that any human-like intelligence with an other-than-human implementation is going to have roughly the same interests and require roughly the same resources as humans, and that the pursuit of enough of those interests and enough of those resources adds up to a zero-sum game, and you get the conclusion that species conflict, or something so like it as makes no difference, is inevitable.

But is it in fact true that a non-human intelligence will have so many interests overlapping ours in a zero-sum fashion that there won't be room enough for both in the ecosphere? That's not an easy question to resolve. If one approaches the matter on the basis of paleontology, the picture isn't terribly encouraging. There's ample evidence in the fossil record that many species of hominid once roamed the planet, their representatives presumably exhibiting near-human intelligence. Obviously, only one survived. Why this is the case is still a matter of debate among paleontologists, and may always be so absent definitive empirical evidence. But there is an ominous suggestiveness about this state of affairs, a darkling hint that general intelligence may well be a jealous god in the evolutionary scheme of things: when present in varying degrees that draw upon the same resources, the inevitable outcome is that the higher will crowd out the lower. In this respect, intelligence may well exhibit a deep affinity with biology itself: spontaneous generation can happen only once, after which the very fact of its occurrence effectively vitiates the conditions for its repetition. In which case, the deliberate generation of a superior intelligence might well vitiate the conditions for the survival of its progenitor.
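The crowding-out intuition has a textbook formalization in ecology: the Lotka-Volterra competition equations, under which two species drawing on exactly the same resources cannot stably coexist, and the marginally more efficient one eventually excludes the other. The sketch below is purely illustrative - a toy model in Python with invented parameters, not anything the fossil record (or the A.I. literature) establishes - but it makes the shape of the worry, and of the counter-argument that follows, concrete.

# Illustrative only: a textbook two-species Lotka-Volterra competition model
# with invented parameters. Species A stands in for an incumbent intelligence,
# species B for a newcomer that exploits the same resources slightly more
# efficiently (higher carrying capacity K_B).

def simulate(K_A=100.0, K_B=110.0, alpha=1.0, beta=1.0,
             r=0.1, N_A=100.0, N_B=1.0, dt=0.1, steps=20000):
    """Euler-integrate dN_A/dt = r*N_A*(1 - (N_A + alpha*N_B)/K_A) and the
    symmetric equation for N_B; alpha and beta measure niche overlap."""
    for _ in range(steps):
        dA = r * N_A * (1.0 - (N_A + alpha * N_B) / K_A)
        dB = r * N_B * (1.0 - (N_B + beta * N_A) / K_B)
        N_A, N_B = max(N_A + dA * dt, 0.0), max(N_B + dB * dt, 0.0)
    return N_A, N_B

# Complete niche overlap (alpha = beta = 1): the incumbent is driven out.
print(simulate())                      # approaches (0, 110)
# Partial overlap (alpha = beta = 0.5): both persist.
print(simulate(alpha=0.5, beta=0.5))   # approaches (60, 80)

The second run is the whole point of the counter-argument taken up below: exclusion follows not from the mere existence of a more capable competitor, but from the assumption of complete niche overlap.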

The counter-argument is that we're not talking about the creation of another hominid that utilizes the same pool of resources as modern humanity. It isn't clear that an A.I. exhibiting a human or near-human degree of common-sense intelligence would have enough interests in common with humanity to render a Darwinian existential conflict inevitable. Indeed, I imagine there are still A.I. researchers who would question whether an A.I. need have any interests at all. I'm not one of them: it has long seemed to me that the fundamental problem of understanding and coping with relevance in A.I. is intimately related to the questions of biological agency, reproduction, and Darwinian survival, to the extent that I suspect it is really impossible to have a fully functional common-sense intelligence that is not a self-interested agent within the context of an inclusive ecology. That said, the jury is still out on whether the interests in question would overlap with ours enough to make a balls-to-the-wall, death-or-glory battle for the same ecological niche an inevitability, or even a possibility. And it's important to remember, too, that intelligence has demonstrated a capacity for identifying venues wherein mutual cooperation is more advantageous than conflict, and for establishing the social and cultural norms necessary for transcending aggressive instinct and accessing sociological springboards in the fitness landscape. None of this is in any way decisive, of course, but it does afford reason for hope.

A final point that is probably worth making concerns the matter of superior intelligence, and the aforementioned observation that, in a competition of intelligences, the highest order of intelligence is favored to win in the limit. Simply put, it's far from clear how 'superior' human-like intelligences can become. A very common presumption holds that, because intelligence is a natural phenomenon, and given that intelligence can be realized in artificial constructs, it should be possible to engineer intelligences that are in some sense 'god-like' with respect to human beings: integrating over a scope of data many orders of magnitude beyond what humans are capable of, or drawing dramatically deeper inferences, or processing information at a much faster rate, or all three and more besides. I can't emphasize strongly enough that this is a non sequitur. In fact, while I don't have a definitive story to tell here, nothing would surprise me less than if it turned out that there are provable computational limits on how 'god-like' a common-sense intelligence can be within a given time-scale, including (or perhaps especially) a human one. Given that common-sense intelligence does confer a general selective advantage, the expectation - absent some such limit - would be that selection pressure would drive its continuous expansion to trans-human proportions. The fact that this didn't happen, that Homo sapiens was the last hominid standing in the putative early-Holocene intelligence competition rather than some übermensch, suggests that a hard-and-fast natural limit is why human intelligence topped out where it did. If true, this would do nothing to confirm or refute the claim that a conflict between human and machine intelligence is inevitable, but it would offer the comfort - possibly a cold one - that such a contest would at least take place on a level playing field.
