Sunday, April 14, 2024

Alfred Korzybski famously said the map is not the territory; there are different ways in which the two may be conflated, and these are instructive.  


The ontological absolutist mistakenly believes in the possibility of an ideal map such that every feature of the territory (and indeed, of territories beyond the borders of the map) can be predicted from it: if a feature is not in the map, it is not in the territory. Which is to say, they believe the reality of the territory consistently and completely determines the content of this map, to the point where it would be an infallible guide, and they believe furthermore that such a map is discoverable. The totalitarian ontological absolutist (which is to say, the religious or ideological fanatic) believes that such a map is already in their possession, and that the path to truth consists merely in forcing everyone to use it.

The nominalist or moral relativist, conversely, believes that the reality of a map consistently and completely determines the territory: everybody’s map is equally real for them; invent a map and you reify a truth. And the totalitarian nihilist, as a corollary, believes that if you force everyone to use the map you have invented, you can create reality to suit yourself.

From one perspective, the absolutist and the relativist might seem to be at opposing ends of an ideological spectrum; but both at bottom make the same fundamental mistake of assuming a unidirectional and omnivalent determinacy between the territory and its representation, and it is for this reason that, though in apparent opposition, they are so often indistinguishable in their practice and its outcome.


None of this is how representation actually works.  Representation is an internalization of an external selection filter:  it makes pre-emptive selections, based on past experience of the external filter’s operation, in order to game filtration.  But nothing constrains the external filter such that its future behavior will perfectly replicate its past - indeed, things are guaranteed not to be so, insofar as the agent deploying the internal filter is by definition a part of the external world, and thus reciprocally subject to selection based on the consequences of its own actions.  That reciprocal selection is not learnable from experience of the filter as it operated before the agent learned it; in learning, the agent becomes a novel element in the scenario.  The universe, as J.B.S. Haldane said, is not only queerer than we suppose, but queerer than we can suppose, and reality will always, as a matter of necessity, be not less but much greater than what we are able to articulate.
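To put the point in more concrete terms, here is a toy sketch in Python (the thresholds and the agent’s estimation rule are my own illustrative inventions, not a model of any particular system): an agent internalizes an external accept/reject filter from past observations, and then errs systematically once the filter shifts - as it must, the agent’s own activity being part of the world the filter operates in.

```python
import random

random.seed(1)

# External filter: accepts an item when its value exceeds a hidden threshold.
# The agent never sees the threshold directly, only past accept/reject outcomes.
def external_filter(x, threshold):
    return x > threshold

# Phase 1: the agent observes the external filter and fits an internal model.
old_threshold = 0.5
history = [(x, external_filter(x, old_threshold))
           for x in [random.random() for _ in range(1000)]]

# Internal filter: estimate the threshold as the midpoint between the highest
# rejected value and the lowest accepted value seen so far.
rejected = [x for x, ok in history if not ok]
accepted = [x for x, ok in history if ok]
estimate = (max(rejected) + min(accepted)) / 2

# Phase 2: the world has changed (in part through the agent's own presence),
# shifting the external filter. The internal filter, trained on the past, errs.
new_threshold = 0.7
trials = [random.random() for _ in range(1000)]
errors = sum(1 for x in trials
             if (x > estimate) != external_filter(x, new_threshold))
print(f"estimated threshold: {estimate:.3f}")
print(f"error rate after the shift: {errors / len(trials):.1%}")
```

The internal filter is a faithful record of the past; it is the guarantee of novelty in the external filter that keeps it from ever being an infallible guide.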

Sunday, May 10, 2020

Thoughts on the present times.

There are too many variables, too many parameters operating at all levels, for us to have any clear picture of the road ahead; and by ‘us’ I mean proponents of a higher spiritual path for civilization in general.  Far from it simply being a case of both the best of times and the worst of times, we find ourselves in the position of, quite conceivably, being on the brink of either the best or the worst, as measured along any number of dimensions.  It would take days to enumerate all of the factors, the ‘known unknowns’; and I greatly fear the unknown unknowns outweigh the knowns w.r.t. this virus, and many other things besides.


When one cannot resolve the practical questions at hand, it may be no dishonor to turn the focus to the larger picture.  Keeping ever in mind that ends and means are entangled, as Koestler says:  what should be our ends?  What are the hallmarks of a rational ethics, a polis that makes central the rights and well-being of the individual, a civilization that must come to terms with the existence and heritage of a geological history and extant biosphere?  Is it too self-deceiving to hope that, by gently trying to turn discussion to these matters, we may gain some clarity with regard to our immediate position?

Thursday, August 20, 2015

The Morbius Endgame

Forbidden Planet is one of my favorite science fiction films - for my money, one of the most important ever made. I continually find myself struck by how many passing details it gets right - more so than many a more recent work. Look at the way Ostrow appeals to principles of Darwinian selection to argue that the prints left by the boojum stalking the crew can’t be the result of an ordinary biological phenomenon. One searches in vain for anything remotely close in a mess like Prometheus. And there is Robby: one of my favorite AIs in all of fiction. And of course this was the Star Trek before there was Star Trek: the influences on Roddenberry are unmistakable. To be sure, there are things not to love: Walter Pidgeon’s phoned-in performance as Morbius, the lily-white, unigendered crew, and the sometimes-archaic, not to say atavistic, sexual mores. But they are outweighed by the ideas that the film engages.

Lately I’ve been reading the film novelization by Philip MacDonald (under the pen name of W.J. Stuart), which is good, too - at any rate, better than the hack job that Anthony Boucher accused it of being in the June 1956 issue of Fantasy and Science Fiction. Among other things, it clears up a few points that confound in the movie (news flash: the tiger and other Earth animals were Morbius’ conjurations, too). The connection, here, that gives me pause: the author was the grandson of late Victorian fantasist George MacDonald. Now I’m finding myself wondering whether Philip M. may have had some influence on the construction of the original screenplay by Irving Block, Allen Adler, and Cyril Hume, and whether some seeming echoes of his grandfather’s works - the lurking presence of the Shadow in Lilith as compared with Morbius’ bête noire, for instance - are really there, or are mere pareidolia on my part.

It’s true that Macdonald-the-younger does take the end of the story in a much more explicitly theological direction than the movie ever does: the Krell Machine’s ability to fabricate material constructs from energy is interpreted as being sufficiently close to creation ex nihilo as to usurp the power of the Creator, and the fate of Morbius and the Krell is seen explicitly in terms of divine punishment for this Promethean effrontery. This supernaturalism is as unnecessary as it is distressing. One of the great things about the movie (as opposed to the novelization) is the subtlety with which it interweaves repeated recourse to Darwinism with a steadfast naturalistic deism that interprets Morbius’ and the Krell’s respective falls in terms of the working out of a kind of karmic natural causality, stemming from their hubristic failure to take the implications of their evolutionary origins into account in what they are doing.

The question may be whether that causality is something we can altogether afford to ignore, as our own individualized manufacturing capabilities accelerate towards the historical horizon that marks the end of scarcity. It might seem too fantastic, but I think the risk has been in some degree underplayed: it is at least as plausible a peril as the much-ballyhooed Singularity of Kurzweil and company. On the one hand, internet-side marketing technologies are becoming ever more efficient at sussing out and responding to subconscious behavioral tells and use-patterns that users cannot consciously pre-empt, amid increasing talk of keeping records of such tells for security purposes. On the other, we have the advent of automated application authoring based on gestural indication, coupled with the individually tailored 3-D superfabrication capabilities implicit in the so-called Internet of Things. Couple these factors, and they seem to me a recipe for Trouble, vis-a-vis the prospect that personally targeted and overly solicitous pattern detectors might end up fabricating applications and constructs that serve the subject’s appetites (or, indeed, the subject’s rational self-interests) in personally dangerous and generally anti-social ways. And like the Krell malady, this is the sort of thing that could easily overtake and surprise a society not especially sensitized to the danger. When confronted with technology that promises to supply your heart’s desire, best retain the option to be careful what you wish for (a lesson George MacDonald would have readily appreciated).

Saturday, March 01, 2014

The Importance of Selection History

Behind everything we take as ‘given’, or ‘atomic’, whether this be sense data (color, sound, etc.), or cause (induction of causal relationships on the basis of evidence), or meaning, broadly construed, there lies an evolutionary history which we ignore at our peril. Most if not all of what seems ‘salient’ to us is only so because of the conjunction of some range of data with the fact of our heritage. Anything can, with sufficient ontological engineering, be found to be ‘similar’ to anything else by adjudicating along any number of dimensions. But if a similarity strikes us as intuitive, or a kind strikes us as ‘natural’, it is a sure hallmark that a process of natural selection has been at work. A natural kind is a feature whose instantiation historically correlates in some way with the perpetuation of some other feature that is a necessary condition, over some region of space-time, for the ancestry of the biology that is adjudicating what is ‘natural’. And a causal relationship is a correlation between the instantiations of two features that is a necessary condition for the perpetuation of some feature of the ancestry of the adjudicating biology. The fact that recognition in higher organisms encompasses internal adaptation and reasoning over higher-order features in no wise invalidates this fundamental truth. These capabilities evolved within specific contexts, and for fairly specific reasons, and we cannot afford to take this fact for granted.
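For concreteness, a toy sketch in Python (the payoffs and costs are hypothetical, chosen only to make the effect visible): a detector lineage evolves under selection in an environment where only one of two features historically correlates with a needed resource. The surviving detector weights that feature heavily; its ‘salience’ is a residue of selection history, not of the data alone.

```python
import random

random.seed(0)

# A detector is a pair of response weights, one per feature. Feature 0
# historically correlates with a resource the lineage depends on; feature 1
# does not. Fitness here is a simplified expected payoff.
def fitness(weights):
    signal_payoff = 0.5 * weights[0]                 # responding to the real correlate pays
    response_cost = 0.1 * (weights[0] + weights[1])  # any responsiveness has a cost
    return signal_payoff - response_cost

# Selection loop: propose a mutated detector, keep it if it is fitter.
weights = [0.5, 0.5]
for _ in range(300):
    mutant = [max(0.0, w + random.gauss(0, 0.1)) for w in weights]
    if fitness(mutant) > fitness(weights):
        weights = mutant

# The surviving detector is tuned to feature 0: that feature now looks
# 'salient' purely because of the lineage's selection history.
print(f"weight on the historical correlate: {weights[0]:.2f}")
print(f"weight on the uncorrelated feature: {weights[1]:.2f}")
```

Nothing in the raw data privileges feature 0; the asymmetry is manufactured entirely by the history of what paid off for the adjudicating lineage.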

In particular, in automated recognition systems in AI, it is fatal to ignore the semantic aspect even at the lowest levels - which is to say, fatal to conceive the relevant features as being somehow given in the data, as opposed to its being a case of assimilating data to feature, where the process of assimilation is fundamentally the product of a genealogy. And while complete reconstruction of that genealogy may not be necessary (let alone recapitulation), it’s still safe to say that there can be no effective engineering of feature extraction in the long term without a fundamental understanding of the historical biological contingencies conditioning extraction.

Saturday, December 28, 2013

Of Transcription Factors, Experiential Inheritance, Second-Order Selection Of Proteins, and Saltatory Evolution

There has been considerable news generated lately by a paper published in Science, Exonic Transcription Factor Binding Directs Codon Choice and Affects Protein Evolution, by A.B. Stergachis et al., and an associated perspective commentary, The Hidden Codes That Shape Protein Evolution, by R.J. Weatheritt and M.M. Babu.  In the popular press the reporting has tended to focus on what is described as the ‘double meaning in genetic code’, with, admittedly, some encouragement from A.B. Stergachis.  While the metaphor of a secondary genetic code may indeed be appropriate to deploy here, much of the extant discussion has, to my mind, failed to bring into focus the most important implications of both this research and the wider context concerning transcription factors of which it is a part.  The truly remarkable implications of this work - if it holds up - concern the role of transcription factors in facilitating the inheritance of response patterns based on parental experience, and in second-order selection among proteins leading to saltatory evolution.

An acquired characteristic per se cannot directly impact an individual’s genome; thus it cannot directly ensure its heritability by the next generation.  However, it certainly may affect the overall likelihood that the organism will succeed in passing its genes to the next generation.  If the acquired trait itself increases the organism’s overall fitness, and if the readiness with which the trait was acquired is in any way genetically determined and heritable, then - other things being equal - the fact of the organism’s having acquired the characteristic will increase the likelihood that the progeny will acquire the characteristic at least as readily.  This much was evident from the first.  What has only become apparent in the last decade or so is the role of so-called transcription factors, and the associated upregulation or downregulation of genes, in facilitating the inheritance of propensities for trait-acquisition based upon the acquisition of macroscopic traits.  Simply put, portions of the genome code for proteins - transcription factors - that variously inhibit or disinhibit transcription from other parts of the genome, and these transcription factors in turn may be activated or inactivated by environmental conditions.  The individual experience of an organism can thus result in the upregulation or downregulation of certain genes; if this upregulation or downregulation proves conducive to the organism’s survival and reproduction, the genetic proclivity to upregulate or downregulate under like conditions will be inherited by the next generation, based upon the parent’s experience.
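The logic can be caricatured in a few lines of Python (all rates and hazards below are hypothetical, chosen only to make the effect visible): organisms inherit not the trait itself but a transcription-factor ‘sensitivity’ governing how readily the protective trait is acquired, and selection raises that sensitivity across generations.

```python
import random

random.seed(2)

# Each organism carries a heritable 'sensitivity' allele in [0, 1] governing
# how strongly a stressor activates a transcription factor that upregulates a
# protective gene. The acquired trait (high protective expression) is not
# itself inherited; the propensity to acquire it is.
def survives(sensitivity):
    stressed = random.random() < 0.9      # the environment is usually stressful
    if not stressed:
        return random.random() < 0.9      # baseline survival without stress
    protection = sensitivity              # TF-driven upregulation under stress
    return random.random() < 1.0 - 0.9 * (1.0 - protection)

population = [random.random() for _ in range(500)]   # initial allele spread
for generation in range(20):
    survivors = [s for s in population if survives(s)] or population
    # Survivors reproduce; offspring inherit the allele with small mutation.
    population = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
                  for _ in range(500)]

mean_sensitivity = sum(population) / len(population)
print(f"mean sensitivity after 20 generations: {mean_sensitivity:.2f}")
```

What is transmitted is the readiness to upregulate under like conditions - the parent’s experience shapes the next generation’s propensities without ever rewriting the genome directly.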

But, beyond facilitation of what I’m sorely tempted to call ‘soft Lamarckianism’, transcription factors appear to allow for an even more important plasticity - namely, evolution stemming from selection pressures operating on a population composed of different sections of the same genome.  It appears that a given genome exhibits significant redundancy with respect to a number of different protein types:  that is, different sections of the genome code for essentially the same protein.  Critically, this is redundancy with variation:  while the protein types coded for are similar and almost functionally equivalent, the codon sequence, and hence the amino acid sequence, are not quite the same, and there is every reason to believe that the environmental context could be such as to ensure that one version of the functional enzyme does a slightly better job than another.  Now consider the role transcription factors might play.  Suppose that there is some feedback mechanism, however indirect, that leads from differential performance to upregulation or downregulation - more specifically, from suboptimal performance of the enzyme variant to downregulation of the section of the genome that coded for it, or from optimal performance to upregulation of the corresponding genome section.  Now we have all the conditions in place for natural selection to occur, operating over a population of genome sections coding for functionally analogous but variant enzymes.  Be it noted that this is ‘second-order’ natural selection, occurring on top of the first-order selection that operates between whole genomes:  when its consequences are viewed at the molecular scale, they will appear as a kind of adaptive learning; at the macroscopic scale, they may manifest as saltatory evolution.
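A toy simulation in Python may make the proposal vivid (the efficiencies and regulation increments are invented for illustration): three redundant sections of one genome code for variant enzymes, and a performance-to-regulation feedback lets selection operate among the sections themselves.

```python
import random

random.seed(3)

# Three redundant sections of one genome code for nearly equivalent enzyme
# variants. Each section has a (hypothetical) catalytic efficiency and a
# regulation level: its relative transcription rate.
efficiency = {"variant_a": 0.60, "variant_b": 0.75, "variant_c": 0.90}
regulation = {v: 1.0 for v in efficiency}        # start equally expressed

# Feedback loop: each cycle, every section's product is tried; good
# performance upregulates the section, poor performance downregulates it.
# This is selection operating over a population of sections of one genome.
for _ in range(1000):
    for v, eff in efficiency.items():
        if random.random() < eff:
            regulation[v] *= 1.02                # performed well: upregulate
        else:
            regulation[v] *= 0.98                # performed poorly: downregulate

total = sum(regulation.values())
shares = {v: level / total for v, level in regulation.items()}
best = max(shares, key=shares.get)
print(f"dominant section: {best} ({shares[best]:.0%} of transcription)")
```

At the molecular scale this looks like adaptive learning - expression shifting onto the best-performing variant - while from outside the genome appears to leap to a fitter phenotype without any new mutation.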

It is also worth observing - as is generally true of cases involving adaptation via second-order selection - that selection pressures favor conservation of such a feedback mechanism, if there are any means by which it can be stumbled upon and conserved.  That is, a mechanism that allows for optimization of enzyme function through selective adaptation constitutes a first-order advantage for the whole genome that possesses it; thus, genomes might be expected to evolve such a feature if a reliably replicable version can be hit on accidentally.  Indeed, it may be the primary reason for the apparent redundancy of protein-encodings in the genome.

Sunday, July 21, 2013

Thinking About Information Value

It is easy to forget that entropy, along with most of the other thermodynamic properties of a system, is fundamentally a feature of the external boundary of that system plus the internal boundaries that distinguish the system components, determining their possible interrelationships and hence the space of systemic microstates. But how are these boundaries drawn in life, as opposed to Thermodynamics 101? To say that they are ‘given in experience’ is no answer this side of Nelson Goodman, Saul Kripke, and the later Ludwig Wittgenstein. Here in fact we may take a hint from Goodman and his theory of ‘projectibility’, which conditions the induction of a relationship in any current case upon the past use history of the predicate that is attributed, including its successes and failures. This points to natural selection, and to the consideration that, in nature, the detection of the internal and external boundaries of a thermodynamic system is, quite literally, the work of a physical device that selection has configured to detect the categories by which the boundaries are distinguished (and unless we want to be supernaturalists, there is no fundamental distinction between detection and work: i.e., to detect a signal is to be triggered by the signal to carry out some task, which we may as well consider a computation, expending some quantum of energy as it is executed).

Therefore we cannot speak of the absolute entropy, or the absolute information value, of any state of a physical system independent of an organism or device which assesses that system’s internal and external boundaries. The entropy is a function of the system boundary and the space of possible microstates, and these in turn are conditional on what the detector is configured to detect. Use a different detector, and you draw the boundaries differently and get a different entropy. Or take any arbitrary ‘slice’ of the plenum that you please; it is always possible to imagine some evolutionary history that would produce a detector able to extract useful work from, and therefore to detect, just that configuration within its context of internal and external boundaries.
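The point can be made exactly with a minimal Python calculation: the same four-microstate system has a different Shannon entropy depending on which partition - which detector - draws the boundaries.

```python
import math
from collections import Counter

# A tiny system: four microstates, equally likely.
microstates = ["00", "01", "10", "11"]
p_micro = {m: 0.25 for m in microstates}

def entropy(distribution):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

def coarse_grain(partition):
    # A 'detector' sees only the macrostate its partition assigns.
    macro = Counter()
    for m, p in p_micro.items():
        macro[partition(m)] += p
    return macro

# Detector A distinguishes all four microstates.
h_a = entropy(coarse_grain(lambda m: m))
# Detector B reads only the first bit: two microstates per macrostate.
h_b = entropy(coarse_grain(lambda m: m[0]))
# Detector C cannot distinguish anything: one macrostate.
h_c = entropy(coarse_grain(lambda m: "all"))

print(h_a, h_b, h_c)   # 2.0 bits, 1.0 bits, 0.0 bits
```

Same plenum, three entropies: the number is a joint property of the system and the partition the detector imposes on it.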

Sunday, February 24, 2013

The Potential of Self-Reference

A second-order language - more precisely, any language rich enough to encode Peano arithmetic - has the potential for self-reference. We have known since Gödel that this entails the existence of true statements in the language that cannot be derived. But there has been little consideration of the practical upshot. If we view such languages as production systems, the expressivity of the language allows us to specify a set of rules for building and revising sets of production rules which constitute theories of the world. Self-reference allows for the possibility that this very set of production rules would be self-applicable, and hence self-revising. Thus, a finite set of axioms could be authored with an interpretation that nonetheless allowed for infinite adaptability, so to speak, over the long term. In second-order languages, the control afforded by rule sets need not be a fundamentally limitative feature.
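A minimal sketch in Python (the rules here are trivial inventions, meant only to exhibit the structure): production rules are stored as data in the very state they operate on, so a rule can rewrite the rule set, including retiring itself.

```python
# Rules are data: (name, condition, action). Conditions and actions take the
# whole state, which includes the rule list itself, so a rule can rewrite rules.
def run(state, steps=10):
    for _ in range(steps):
        for name, condition, action in list(state["rules"]):
            if condition(state):
                action(state)
                break          # fire the first applicable rule, then restart
    return state

# An object-level rule: count upward by the current step size.
count_rule = ("count",
              lambda s: s["n"] < 100,
              lambda s: s.update(n=s["n"] + s["step"]))

# A meta-rule: once the count passes 3, revise the rule set itself by
# doubling the step size and retiring this very meta-rule.
def revise(s):
    s["step"] *= 2
    s["rules"] = [r for r in s["rules"] if r[0] != "meta"]

meta_rule = ("meta", lambda s: s["n"] > 3, revise)

state = run({"n": 0, "step": 1, "rules": [meta_rule, count_rule]})
print(state["n"], state["step"])   # 14 2: the system revised its own rules mid-run
```

The rule set is finite and fixed at authoring time, yet because it is self-applicable, the behavior it generates is not bounded by the original interpretation of any single rule.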

The human intuition is that this infinite potential is in some way characteristic of living systems. Certainly ecologies evolving stochastically under conditions of natural selection evince it. And complex agents that learn and adapt within such ecologies may evince it, too.

Consciousness as Natural Selection

Experience - conscious experience, if you will - is as much a matter of ruling things out as it is of ruling things in. To experience everything as infinite, in William Blake’s memorable turn of phrase, is to experience nothing, strictly speaking. Consciousness is sharpened precisely to the degree that it is focused, and it is focused by exclusion, by selecting among potential factors.

Consciousness is selection. Indeed, I am tempted to say, ‘natural selection’. Is there, after all, any other kind?