Saturday, January 23, 2010

Concerning the Fermi Paradox

Before shifting my focus from SETI back to what are more commonly considered topics in AI, I thought I would share a few reflections on Fermi's paradox, a topic I've been thinking about a lot of late.  The Fermi paradox is so called because it is attributed to the physicist Enrico Fermi, who is alleged to have first posed it in 1950 in the form of a question:  where are they?  By 'they' he meant technology-using extraterrestrial intelligences, and by 'where' he was really asking why humanity has not yet seen definitive evidence of the existence of at least one such intelligence.  The question raises more serious issues than might at first be thought.

First, a little more about the 'they' in question.  We can, with some justification, take Fermi to have been referring to what the Kardashev scale rates as 'Type III civilizations' (or perhaps it would be a bit less presumptuous to speak of a 'Type III intelligence').  In the Kardashev scheme, a Type I civilization is one that has achieved control over energy and resources on a planetary scale, a Type II civilization is one that has done so on a stellar scale, and a Type III civilization is one that has done so on a galactic scale.  Exactly what 'control' means in this context is, admittedly, something of an open question (and a subject I'm going to come back to a little later in this posting).  For now, let it suffice that whatever 'control' means exactly, only an intelligence or group of intelligences manipulating energy on a galactic scale could have been subject to terrestrial detection in 1950, and although our instruments have greatly improved in the last six decades, it's probably safe to say that such an intelligence is still all that we could detect over cosmological distances.  The knee-jerk answer to Fermi is that the fact that we have never seen any such evidence must simply mean that Type III civilizations are very, very rare in the observable universe.

This simple answer raises too many questions of its own.  The difficulty is that the sheer number of star systems in the observable universe, or even in our own galaxy, blunts the force of an appeal to mere unlikelihood.  The logical argument adumbrated by Fermi's paradox is that, if it is remotely possible for a long-lived, 'Type III' civilization to exist, the odds are overwhelmingly in favor of our seeing evidence of its handiwork all over the place.  Let's say for the sake of argument that we think it very unlikely that any given star, irrespective of age and spectral type, will have planets able to host intelligent life forms that give rise to a long-lived, space-faring civilization:  let's say it's literally a one-in-a-billion shot.  There are estimated to be 2 × 10¹¹ stars in the Milky Way Galaxy.  The expectation in that case is that the number of stars in our galaxy hosting planetary systems with Type III civilizations is around

1 × 10⁻⁹ × 2 × 10¹¹ = 200

If instead the odds against are 'just' one-in-a-million, there should be around 200,000 such civilizations.  One can argue these are not large numbers with respect to most cosmological scales, but there's a further catch:  remember we are talking about civilizations whose lifetimes are measured in hundreds of thousands or millions of years, and which are capable of harnessing energies on a cosmic scale.  Assume just one of these nascent civilizations goes out into the galaxy with colonial intentions, and that this gives rise to a familiar 'seeding pattern':  the civilization sends out Von Neumann probes that colonize n solar systems, and each of those colonies sends out probes that colonize n more solar systems, and so on.  It's an exponential process.  Because of this, even if one takes the speed of light in a vacuum to be an absolute upper bound on how fast the colonizers can travel, and even granted the literally astronomical distances involved, one ends up with the whole galaxy getting colonized in much less time than the current age of the observable universe; indeed, in much less time than life has existed on the Earth.  In summary, if we are in a galaxy where long-lived space-faring civilizations arise at all, there are powerful reasons for believing that we should be seeing evidence of the activity of at least one such civilization, and possibly more than one, everywhere we look.  Only if Type III civilizations are so astronomically unlikely as to be for all intents and purposes impossible should they be absent from our neck of the woods - and we know of no compelling physical reason why they should be impossible.  Hence the paradox.
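To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python.  The per-star probability, hop distance, probe speed, and probe build time are purely illustrative assumptions, not estimates; the point is only that the colonization timescale comes out orders of magnitude below the age of the universe for any remotely plausible choices.

# Back-of-the-envelope version of the two calculations above.
# All parameter values are illustrative assumptions, not estimates.

N_STARS = 2e11               # rough star count of the Milky Way
GALAXY_RADIUS_LY = 50_000    # rough galactic radius in light-years

def expected_civilizations(p_per_star):
    # Expected number of stars hosting a long-lived, space-faring civilization.
    return p_per_star * N_STARS

def colonization_time_years(hop_ly=10.0, probe_speed_c=0.1, build_time_yr=500.0):
    # Crude estimate of the time for a wave of self-replicating probes to
    # cross the galaxy: (travel time per hop + time to build the next probe)
    # times the number of hops needed to span the galactic radius.
    hops = GALAXY_RADIUS_LY / hop_ly
    per_hop_years = hop_ly / probe_speed_c + build_time_yr
    return hops * per_hop_years

print(expected_civilizations(1e-9))              # 200.0
print(expected_civilizations(1e-6))              # 200000.0
print(f"{colonization_time_years():.1e} years")  # ~3e+06 years, versus ~1.4e+10 years since the Big Bang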

What follow are a few speculations of my own concerning possible resolutions to the Fermi paradox.  Some may approximate answers to the problem that have already been given; others, I think, have aspects that are new.

1) Maybe there are strict physical/computational/evolutionary limits on how 'god-like' you can become
There seems to be an assumption, common to a lot of the literature on this subject, that getting from where we are to a so-called Type III civilization is simply a matter of 'scaling up'.  This sort of extrapolation seems to me extremely speculative and dangerous - particularly when I contemplate the information processing requirements such a civilization implies.  The issue here is one that is already familiar to us on Earth:  given a very large store of data, how do you retrieve the information you need, when you need it, according to semantic criteria, particularly when the models you are using are seldom schematized with the input data store in view?  It's an understatement to say that at the present time the computational limits on such processes are ill-understood.  I would not be surprised if it turns out that there are mathematically provable upper bounds on how well semantic intelligence can do, relative to data volume, and that these bounds have little or nothing to do with the speed and efficiency of the data retrieval process per se or the physical medium in which it is implemented (be it cytoplasm, silicon, or whatever).  Such limitations, rooted in statistical mechanics, might constitute a physical, thermodynamic impasse for intelligent, tool-using civilizations, effectively forbidding the transition to Type III.

It also occurs to me that there might be limits, deriving from evolutionary dynamics, on how effectively a civilization could hope to 'overrun the galaxy' using self-replicating, 'Von Neumann' style technology.  It seems highly doubtful that any replication process subject to natural law could be made sufficiently robust as to ensure perfection, in which case any replicator unleashed on a galactic scale would be subject to natural selection pressures which it would be hard if not impossible for the parent civilization to control.  Whether such pressures could be harnessed in a way that contributes to the parent civilization's cause of self-propagation is likewise to be doubted, in which case any such technology would have only a limited subjective time window in which it would be effective.  Perhaps physical limitations deriving from evolutionary dynamics, at whose nature we can at present only guess, ensure that this time window is always significantly less than what would be required to colonize the galaxy, even if the Von Neumann 'seeder' probes travel at a speed very close to that of light.
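Even a toy model conveys how quickly replication fidelity erodes over many generations.  In the sketch below (with a purely assumed per-replication fidelity f), the fraction of gth-generation probes still carrying the parent civilization's original directives intact decays as f to the power g; it says nothing about what selection then does with the variants, only that the parent's control dwindles.

# Toy illustration of fidelity decay in self-replicating probes.
# f is an assumed probability that a single replication preserves the
# original directives exactly; after g generations the expected fraction
# of probes still faithful to the parent civilization's goals is f**g.

def faithful_fraction(f, generations):
    return f ** generations

for f in (0.9999, 0.999999):
    for g in (1_000, 100_000, 1_000_000):
        print(f"fidelity {f}, generation {g}: {faithful_fraction(f, g):.3g}")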

2) Maybe whoever gets there first destroys all of the competitors
Science fiction authors Greg Bear and Alastair Reynolds have both played with the idea that an intelligence that reaches the Type III stage might actively pursue a genocidal policy of eliminating less-developed intelligences in order to obviate the possibility that these might eventually evolve into competitors.  Such a possibility cannot be discounted out of hand, given that it is a rational one to pursue, on a narrow definition of rationality, if survival is an intelligence's ultimate value.  On this view, the reason we have not observed the activity of any Type III civilizations in our neck of the galactic woods is that there is in fact only one, which has been systematically eliminating all of the other potential candidates for a very long time.

It seems to me, however, that there are a couple of cogent arguments against this prospect:  one unabashedly anthropic, the other less so.  The anthropic argument is that if one really were pursuing a policy of eradicating potential competitors from the galaxy, the safest way to go about it would be to proceed proactively.  That is, one should not wait for radio signals to announce that a new civilization and potential competitor was emerging in some nook of the galaxy - given the transit time, by the time the signal was received it might already be too late.  The only viable strategy would seem to be to seek out planets with developing biospheres or pre-biotic conditions and sterilize them.  But if a Type III civilization in the galaxy were doing that, the odds are pretty overwhelming that we wouldn't be around to contemplate the possibility, for the very reasons that give rise to the Fermi paradox in the first place:  a Type III civilization seeding the galaxy with self-replicating, Von Neumann-type probes of a genocidal persuasion could seemingly keep all potentially life-bearing planets covered, so that the chances of a biosphere slipping through the cracks for any great length of time, even in an out-of-the-way corner of the galaxy like ours, would be slim-to-none.  The fact that our biosphere is here at all, and has been for some billions of years, therefore suggests not only that a genocidal Type III is not operating in our neck of the woods, but also that genocide, contrary to what game theory might initially seem to suggest, is not a particularly effective strategy over a geologically extended time frame, since no Type III civilization in the galaxy (assuming any exist) appears to have adopted it for more-or-less the time period life has existed on the Earth.

The second, non-anthropic argument (and the reason the strategy might not be effective) is the likelihood, over a very large time scale, of encountering another intelligence wielding technology of a comparable scale.  Such an eventuality is of course highly unlikely in the near term, assuming technological intelligences continue developing at an exponential rate (intelligences separated by only a few decades' worth of technological development would still find themselves at vastly different technological levels).  But when considering intelligences so long-lived as to be virtually immortal, limit cases must be treated as practical realities.  If all parties pursue a 'rational' course of attempting to maximize utility by eliminating the competition, the end result is liable to be mutually assured destruction.  Both parties in such a case are advised to do the enlightened, if not game-theoretically 'rational', thing, and attempt to conciliate each other.  But if conciliation is feasible, and a genocidal course, consistently pursued, leads in the end to mutual destruction, then why pursue the genocidal course at all?  Surely it is better to leave developing civilizations alone, in the hope of eventually having more points of view against which one can profitably compare one's own.  It may be that any civilization wise enough to survive to Type III status is wise enough to make this calculation, so that nobody capable of pursuing a genocidal course actually does so.
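The structure of that dilemma can be made explicit with a minimal payoff-matrix sketch along prisoner's-dilemma lines.  The numeric payoffs below are arbitrary illustrative assumptions, chosen only to reproduce the ordering described above:  eliminating an unsuspecting rival looks best, being eliminated looks worst, and mutual attack leaves both sides worse off than mutual conciliation.

# A prisoner's-dilemma-style reading of the two-Type-III encounter.
# Payoff numbers are arbitrary illustrative assumptions.
payoff = {
    ("conciliate", "conciliate"): 3,   # peaceful coexistence
    ("conciliate", "attack"):     0,   # I am eliminated
    ("attack",     "conciliate"): 5,   # I eliminate the competitor
    ("attack",     "attack"):     1,   # mutual (assured) destruction
}

def best_response(theirs):
    # My payoff-maximizing move, given the other side's move.
    return max(("conciliate", "attack"), key=lambda mine: payoff[(mine, theirs)])

# 'attack' is the narrowly rational choice whatever the other side does...
print(best_response("conciliate"), best_response("attack"))   # attack attack
# ...yet mutual attack (payoff 1 each) leaves both sides worse off than
# mutual conciliation (payoff 3 each) - the enlightened alternative.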

It also has to be noted that there are questions about whether the predictions generated by this hypothesis really square with the observed facts.  It's true that the hypothesis suffices to explain why we have not detected any communications from a Type III civilization to date - but 'Type III scale' activities besides communications are also predicted to be detectable.  The reasoning behind the Fermi paradox leads to the conclusion that even if there is only one Type III civilization in the Milky Way, probability favors traces of its activity existing within our own solar system.

3) Maybe we will wipe out/have wiped out the competition
A variant on possibility #2 (anticipated by Isaac Asimov, among others, in his novels Foundation's Edge and The End of Eternity) is that the genocidal agent in our corner of the universe is us - or, more specifically, a Type III intelligence that humanity ultimately spawns, which uses its capabilities to travel back in time to the earliest seconds of the observable universe's existence in the inflationary epoch, there to assure its ultimate ascendancy by engineering things so that only the biosphere which gives rise to its own line of descent will ever be realized.  Hence we see an otherwise empty and lifeless universe precisely because our descendants have assured our safety and their own genesis by ensuring that it has developed so.  As preposterous as this scenario sounds, what it assumes may be within the bare limits of physical possibility - which, for a Type III intelligence with the resources of a galaxy at its disposal, may be all that is required.  It suffices, for the present, that General Relativity admits solutions that allow for spatiotemporal paths or 'handles' giving an agent access to his, her, or its own past - and as Kip Thorne and others have argued, that may well be the case if the agent has access to a sufficient supply of the right kind of energy.

This hypothesis is open to many of the same objections as #2.  For one thing, if we think it unlikely that any kind of Von Neumann-style self-replicating technology could persist for a substantial part of the lifetime of the galaxy without becoming subject to purpose-subverting selection pressures, then the probability that such technology could sustain its original objective for a substantial part of the lifetime of the universe while remaining unaffected by evolution would seem to be astronomically negligible - and it's far from obvious what technology other than Von Neumann-type replicators could do the trick.  I also suspect that a variant of the prisoner's dilemma makes the course of action envisaged here an unenlightened (albeit locally 'rational') one.  It seems very likely that a Type III intelligence, given its capabilities, would be prone to spawning all manner of new, independent intelligences.  Assuming a couple of these were created with comparable capacities at the Type III level, we can foresee a circumstance in which the 'rational' course for each of them to follow would be one that, if both followed it, ensured the universe gets engineered toward a future in which everybody's destruction is mutually assured, with a timeline consistent with everything everybody observes up to the point at which the time-traveling starts.  In which case, it may be advisable for Type III civilizations to unilaterally forswear this sort of activity.

4) Conversely, maybe ethics plus resource conservation and management dictate a low profile
There could in fact be many reasons for a long-lived Type III civilization to minimize its footprint in the galaxy.  Much depends on what we take the word 'control' to mean where it is used in the Kardashev descriptions:  what in fact does control of a resource set - planetary, stellar, or galactic - entail?  More than just the capacity to use up large amounts of time, energy, and material in big-ticket engineering projects, at least if that control is to be maintained for any length of time.  Recent experience has taught us that management of resources on a planetary scale is a tricky business that has at least as much to do with not using available resources as it does with expending them, not to speak of the fact that large-scale expenditures of energy and material often prove to have less-than-welcome side effects over the long term - and it is hard to see why the lessons which apply on the local scale would not also apply on the galactic one.  Thus, the fact that we haven't seen anybody turning the galaxy into Central Park (a memorable phrase I'm pretty sure originated with Freeman Dyson) doesn't necessarily mean there's no one out there capable of doing it:  it may mean merely that anyone who has made it to that stage knows better than to try.  One can also reasonably ask what motivation a civilization or association of intelligences that had achieved immortality or near-immortality for its members, and sustainable use of its resources, would have for 'spreading itself across the galaxy', whatever that means exactly.  Assuming they were curious, there would be motive to study, investigate, and explore, yes - but such a society has no obvious need to propagate, or to radically re-engineer everything it discovers.

Such considerations acquire a special force in light of questions about interaction with other, less developed intelligences.  It stands to reason that an intelligence which has mastered the art of flourishing through cooperation will respect other self-aware intelligences as ends in themselves.  In light of that scruple, consider the complications that could arise from an interaction between a less technologically advanced civilization and a more advanced one - a class of problem neatly summarized by Arthur C. Clarke's dictum that any sufficiently advanced technology is indistinguishable from magic.  The disadvantaged civilization would likely be swayed less by rational consideration of whatever the more advanced one chose to say to it than by the more advanced one's sheer technological superiority - a prospect which might give an ethical intelligence pause about initiating contact with less developed intelligences, or even about providing them evidence of its existence.  The scruple might plausibly extend to galactic engineering projects which, if they were of such a scale as to be detectable even without advanced instrumentation, might otherwise be interpreted in some quarters as the productions of divinity.

In short, it may be that universally recognized principles of ethics and conservation mandate that intelligences reaching the Type III stage keep a low profile; nor is it evident that a long-lived, stable civilization that had achieved responsible and sustainable management of its resources would have any motivation to do otherwise.

5) Maybe they transition to forms that are subtle and hard-to-detect
In his novelization of 2001: A Space Odyssey, Arthur C. Clarke gave a memorable description of the evolution of the society of monolith-builders whose machinations played such a key role in his story's development:

And now, out among the stars, evolution was driving toward new goals.  The first explorers of Earth had long since come to the limits of flesh and blood; as soon as their machines were better than their bodies, it was time to move.  First their brains, and then their thoughts alone, they transferred into shining new homes of metal and of plastic.

In these, they roamed among the stars.  They no longer built spaceships.  They were spaceships.

But the age of the Machine-entities swiftly passed.  In their ceaseless experimenting, they had learned to store knowledge in the structure of space itself, and to preserve their thoughts for eternity in frozen lattices of light.  They could become creatures of radiation, free at last from the tyranny of matter.

Into pure energy, therefore, they presently transformed themselves; and on a thousand worlds, the empty shells they had discarded twitched for a while in a mindless dance of death, then crumbled into rust.

Now they were lords of the galaxy, and beyond the reach of time.  They could rove at will among the stars, and sink like a subtle mist through the very interstices of space...
Arthur C. Clarke, 2001: A Space Odyssey
Signet, 1968

There is, of course, plenty to criticize in Clarke's conception, not least its weirdly gnostic notion, this side of Einstein, that 'energy' is somehow a less 'tyrannous' medium than 'matter', or its even weirder idea that 'machines' are somehow superior to 'biology' (given that biological organisms themselves constitute machines vastly more complex and self-directed than any composition of metal and plastic that humans have yet been able to slap together).  Nonetheless, the central observation seems sound:  assuming that self-aware intelligence is capable of bootstrapping itself into progressively more robust and longer-lived media, the terminus of that process is likely to be self-organizing forms realized in long-lived and nearly indestructible entities and processes at the quantum level.

Such processes may not be easy to detect, precisely on account of their persistence and resistance to accidental derangement.  What we consider to be 'exotic' particles, forces, and energies, featuring little or no interaction with 'normal' matter, might well play a central role in the realization of such intelligences, in which case we might not possess instrumentation capable of registering their presence.  Even more conventional means of implementation, such as nanotechnology, might be very hard for us to find unless (or even if) we were looking quite hard for it, and optimized forms of information processing might further reduce detectability - a point futurist Hans Moravec makes in his book Robot:  Mere Machine to Transcendent Mind that is striking enough to be worth quoting in full:

As they arrange space-time and energy into forms best for computation, Exes [Moravec's term for the cybernetic intelligences that succeed us] will use mathematical insights to optimize and compress the computations themselves.  Every consequent increase in their mental powers will improve their competitiveness as well as the speed at which they make further innovations.  The inhabited portions of the universe will rapidly be transformed into a cyberspace, where overt activity is imperceptible, but the world inside the computation is astronomically rich...As the cyberspace becomes more potent, its advantage over physical bodies will become manifest even on the expansion frontier.  The Ex wavefront of coarse physical transformation will be overtaken by a faster wave of subtle cyberspace conversion, the whole becoming finally a bubble of Mind expanding at near lightspeed.

Perhaps the knit of cyberspace will be too subtle to discern with eyes and minds as coarse as ours.  If so, robots may simply seem to vanish, leaving behind a universe indistinguishable from that before their arrival.  The Exes will experience boundless expansion of extent and possibility, but their existence will be an interpretation of the essential thermal hiss of everything that is far beyond our reach.  Emigration into "interpretation space," combinatorially vast and rich beyond imagination, could explain the absence of evidence for advanced civilizations elsewhere in the universe.  Sufficiently developed entities may simply move on to wider pastures inaccessible to simpler minds.  Perhaps civilization after civilization originates, develops, and plunges into the interpretive depths leaving the easy surface interpretations empty to repeat the cycle.
Hans Moravec
Robot:  Mere Machine to Transcendent Mind
Oxford University Press, 1999

I'll admit to being inclined to take this scenario with a grain of salt, not least because I know of no proof that there is even a computationally effective way of accomplishing what it envisions (see possibility #1 for the minority opinion).  Nevertheless, it is arresting, and argues, I think, for a mathematically rigorous examination of the computational issues.

6) Or maybe everyone just bugs out after a while
Several branches of modern physics raise the possibility of other universes, with different histories, physical constants, and/or natural laws.  It is true that slight modification of any of the physical constants of our own continuum seems to lead to a universe in which stable organizations of matter at the macroscopic level cannot exist - but it seems to me that we know very little about the prospects for other islands of stability within this possibility-space overall.  Likewise, contemporary physics seems to hold out little hope of other universes being reachable, or even observable, through ordinary means:  but there is danger in trying to extrapolate whether such limits would apply to entities with the capabilities of a Type III civilization at their disposal.  Let us suppose for the sake of argument that there are other universes in which complex material forms can survive, and that some of these are even more interesting than, longer-lived than, or otherwise better to live in than our own; and suppose further that some of these are reachable by intelligences with access to wormholes, singularities, zero-point vacuum energy, and/or some of the other exotic and large-scale energy resources postulated to be available at the Type III level.  Consider also that for a very long-lived civilization, whose lifetime is measured in millions or billions of years, there is a very real prospect of rapidly reaching a point where everything that is knowable about this universe has been learned, as well as of surviving to the point where universal heat-death, or whatever terminus physics dictates, becomes a more-than-theoretical problem.  In that case, it is hard to see what incentive any Type III civilization would have to stick around.  Maybe every Type III civilization bugs out not long after reaching that level, leaving this universe behind to serve as an undisturbed spawning-ground for other intelligences.

7) Just what constitutes 'evidence of intelligence', anyhow?
The final, and maybe most important, point to raise is the question of how we would know a Type III intelligence when we saw it.  If one believes that intelligence is the product of, and acts in accordance with, natural law, then it is meaningless in the limit to distinguish between the artifacts of intelligence and phenomena that arise by purely natural processes.  While there are, to be sure, necessary natural conditions for intelligence, foremost among which must be counted adaptability to changing conditions, and while we are advised to take perceptually derived feature dependencies like causation seriously, the selective advantage of attributing intentional awareness per se stems not from such attributions' serviceability for scientific explanation, but from the role they play in organizing concerted actions that adjudicate situations involving public goods.  Intelligence is as intelligence does within the context of its community, and where two independently evolved, self-aware intelligences meet, it is far from certain that the conditions for their recognizing each other as such can be met.  The matter depends, first and foremost, upon whether a common 'public good' can be found that constitutes sufficient pragmatic warrant for the leap of faith intrinsic to the intentional stance.  If two intelligences operate on very different spatio-temporal scales, and/or find realization in radically different media (or, more than likely, both), it is not clear that a sufficient basis for recognition will exist, at least in the near term.

So there they are - all the ideas I have on the subject at the present time.  Take them for whatever they are worth.  If I were a betting man, I would guess that the real story involves some combination of items 4-7, but it would be folly at this juncture, in the absence of a better theoretical framework and of empirical evidence interpreted within it, to commit to anything.  Perhaps it's best to conclude on the same note Arthur C. Clarke did in his introduction to the aforementioned 2001: A Space Odyssey, wherein he acknowledged the work as his own take on the Fermi paradox.

The truth, as always, will be far stranger.