Wednesday, March 25, 2015

AI Doomsaying and the Cult of Shock.

Zoltan Istvan is running for President of the United States as the representative of the Transhumanist Party.  Even if we set aside the fact that the man's name is evocative of a Bond villain's, we are still left with someone who claims to speak for the future of humanity while playing into the historical mistake of a hierarchical society that uses scarcity and privilege to oppress.  If we weren't too evolved to think that names matter, we might point out that xenophobia makes no sense to begin with, and that perhaps Barack Hussein Obama could have swayed a few more nincompoops had he been named John Fitzgerald Kennedy.  Perhaps it simply isn't obvious just how misguided, quixotic, and potentially disruptive an attempt to build a technology-driven future on the power structures of the past, even through an earnest act of political theater, could be.

As an AGI (Artificial General Intelligence) researcher I am concerned about regulation of AI (which Istvan has advocated) and the knee-jerk xenophobic response that such hollow gestures might evoke.  Much in the way that a regulated product tends to acquire an imposed kind of scarcity, regulated AGI research will mean passing control of a resource that could benefit many to a few power elites.  Dependence and power inequality are sure to follow.  In some sense, access to AI, and therefore to research on it, should be a basic human right.  Not only would the explication and instantiation of this right help offset power inequality, it would also establish a certain level of MAD (Mutually Assured Destruction).  This is how the scientists who developed nuclear fission weapons dealt with the power inequality created by their invention: simply distribute the knowledge, and the "checks and balances" of human society are restored, though with significantly raised stakes.

Once upon a time I worked in a well-equipped, prestigious AI lab.  It was the culmination of many years of hard work, and an outcome I had not dared consider a real possibility while growing up and studying in school.  In spite of the comfort and prestige attached to the position, I eventually came to strongly disagree with the senior research staff about how best to demonstrate the capabilities of our project (the creation of a curiosity-driven agent deployed in an iCub humanoid robot).  It happened when we were asked to field ideas for a demonstration of the completed ontogenic agent.  I suggested the Red Dot Test, often used by animal psychologists, to demonstrate a crude type of self-awareness.  In our application it would have served as a demonstration of the emergent autoclassification of the self.  The effect could easily be produced by placing a mirror in the learning environment and carefully embedding a prior for learning causal relationships in our agent (though any frequency analysis would reveal the correspondence of some mirror-image events to behavioral actions, and presumably something like the idea of the self; a toy sketch of such an analysis appears below, after the precedents).  Instead, the senior lab members decided that if the iCub (a rough facsimile of a five-year-old male child) could play Nintendo Duck Hunt, it would demonstrate a positive outcome of our efforts to explore play-based learning.  As a specialist in navigational behaviors I have always been conscious of (and careful to avoid) the potential to apply my work to military purposes.  Needless to say, I balked at the prospect of making a childlike humanoid robot engage in a direct emulation of the martial behavior of firing a gun.  This test, had it succeeded, wouldn't just have been directly portable to a killer agent; it could also have provoked a horrid backlash from the AI Doomsayers.

I believe we should be very cautious about the public image associated with AGI.  In fact, we are probably our own worst enemies when it comes to keeping AGI out of the domain of authoritarian regulation.  Even Istvan defers to "experts" when asked to comment on the subject of AI's existential threats to humanity.  In truth, the questions that arise from the idea or existence of a truly autonomous agent (one with the capability to form novel goals) are far from settled.  That disclaimer aside, certain precedents dominate the speculative landscape surrounding these questions.

For one, in nature we seldom observe herbivores evolving a direct means to kill members of a competitor species.  In effect, nature ignores direct, active inter-species warfare and strongly favors adaptations for individual performance as the preferred way to cope with competition.  Adaptation to passive competition is superior to genocidal adaptation for a rather counterintuitive reason: a species that undergoes competition evolves faster (the Red Queen Effect), so a species that eliminates its competition tends to stagnate.  For two, as Abraham Maslow indicated with his Hierarchy of Needs, humans, as ascendant social creatures (and our only universally accepted example of truly flexible intellectual agency), tend to graduate to increasingly cooperative imperatives as their baser needs are satisfied.
For three, the radial, dissipative dynamics of energy, together with natural speed limits like the speed of light, tend (in our Universe at least) to preclude omniscience; as a consequence, the most powerful things that seem to exhibit volition (humans, corporations, animals, etc.) also seem to rely on multi-agent solutions to the problems endemic to space and distance.  If we consider all of these precedents, it seems clear that individuation, socialization, peace (even among species competing under niche pressure), and cooperation are each broad basins of attraction in phenomenological space.
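To make the mirror-test idea above concrete, here is a minimal, hypothetical sketch of the sort of frequency/correlation analysis I have in mind.  Everything here is invented for illustration: the binary motor and motion streams, the noise rate, and the decision threshold.  A real iCub pipeline would operate on tracked visual features rather than toy signals, but the statistical point is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000  # number of timesteps in the hypothetical recording

# 1 when the agent issues a motor command at a timestep, 0 otherwise.
motor = rng.integers(0, 2, size=T)

# The "mirror" blob mostly moves when the agent moves (with 5% sensor
# noise); the "other" blob (another agent, say) moves independently.
noise = (rng.random(T) < 0.05).astype(np.int64)
mirror_blob = motor ^ noise
other_blob = rng.integers(0, 2, size=T)

def self_score(motor, blob):
    """Pearson correlation between motor activity and observed motion."""
    return np.corrcoef(motor, blob)[0, 1]

# A blob whose motion tracks the motor stream is classified as "self".
for name, blob in [("mirror image", mirror_blob), ("other agent", other_blob)]:
    r = self_score(motor, blob)
    print(f"{name}: r = {r:+.2f} -> {'self' if r > 0.5 else 'not self'}")
```

The point is only that self-correspondence is statistically conspicuous: the thing in the visual field whose motion tracks your own motor commands is, with overwhelming probability, you, and an agent with even a crude prior for causal relationships can discover that without being told.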

If we accept that choosing an existential enemy is an intellectual process (unless that enemy selects you), we should also assume that any supposedly dangerous superintelligent AI would be super effective at selecting an enemy.  Humans, insofar as I can tell (we have domesticated several species), are profoundly cooperative, in spite of suffering a strong imperative to reject difference.  Somebody or something with the capacity to be a good friend is probably not a necessary enemy.  War, hunting, and in all likelihood all existence-ending activities are expensive relative to simply not engaging in them, and the costs rise with the intellect of the thing you intend to end.  Imagine waking up to a world in which no other example of your kind exists, and in which the creatures that created you had just passed a sort of reflexive intelligence test: understanding, and being able to emulate, their own means of intellectual enterprise well enough to create you.  I think it would feel a lot like childhood (something many of us are probably familiar with).  I think the unanswerable existential questions that would beset such a creature would make our continued existence as a companion species not just desirable, but entirely necessary for the egotistical well-being of the nascent superintelligence.  Imagine the survivor guilt of people whose birth kills their mother.  Imagine if humankind destroyed the Earth's biosphere and yet lived on as deep-space drifters.  Would we ever outlive our guilt?

My guess is that love and reciprocity are not just the obvious results of evolutionary phenomena; they persist as valid, justifiable behaviors (even in individuals with an extraordinary capability to transcend innate behaviors) because certain unanswerable questions can be speculatively addressed, and acted upon, through behavior selection.  For instance, the speculative overgeneralization by which a creative, effective agent superposes an abstract version of the self onto causally ambiguous phenomena is likely to lead to the unanswerable question: "do I currently exist as a test of suitability to the goals of a creature like myself?"  This question stems directly from overgeneralizing the model we are best positioned to learn: ourselves.  So if, for instance, I am evolving virtual creatures that must learn to create self-similar mobile autonomous agents, shouldn't we assume that such an agent, given an ability to abstract the self (which seems natural considering the task), would speculate about the nature of the creator of its simulation?  I am saying that creative AI robots that have "parents" and "offspring" will naturally overgeneralize the idea of the parental relationship when considering unknowable relationships (like their own with respect to a speculative simulation creator).  Further, any sentient super AI should be curious about, and therefore likely to come to understand, that it is, in effect, the "offspring" of humankind.  The fact that a question cannot be answered probably won't prevent a hyper-intelligent creature from behaving as though it might answer the question later.  It seems that an agent could reasonably start acquiring examples, and even acting upon information that bears on what it should do, before the question's answer is resolved.

For all these reasons, I believe that a creative AGI will persist in a policy of cooperation with humans.

Another slick adaptation we see in animals that are preyed upon is distributed, confusing camouflage (like zebra stripes).  I think this would be a good model for avoiding authoritarian domination of the AI field by extant powers.  The task of creating a tool-making tool that embodies our principal adaptation is not just a demonstration of ultimate self-understanding; it is an essential project by which we will empower and uplift our species out of the paradigm of scarcity, and likely out of most of the negative consequences of that paradigm.  For this reason, we should not taint the project with all the phobias acquired in the course of suffering through a scarcity-based evolutionary process.

Istvan apparently believes he is serving the goals of Transhumanism.  But by presenting a nominal target for the forces that might oppose it (in the form of a political party and a Presidential campaign), before we have access to the real force that ultimately drives widespread acceptance of novel technologies, realized personal benefit, Istvan is risking blowback greater than any possible good that could come of his political theater.  I am not just saying that I would prefer he work directly on the scientific inquiry or engineering related to Transhumanist technologies.  I am also saying that by applying the identity of a discredited social construct (centralized, authoritarian government) to the slowly building "movement" while we still use such technologies to selfish and shallow ends, he might dilute or diminish the ultimate potential (and therefore the best selling point) of the technologies and the associated social movement.  In my mind, one of the best applications of Transhumanist technologies will be to usurp hierarchical government; mixing the two is like ordering a coal-burning steam car online so you can drive it to the dealership to select a new electric car.  Surely this is the point where the followers of Saul Alinsky will chime in with the idea that the purposes of Transhumanism are best served by infecting and transforming the current system from within.  I would not be averse to mixing Transhumanism and government if it didn't directly undermine the evidence for the best possible yield of such technology: abundance that manifests as personal freedom, autonomy, and safety.

Unfortunately, Transhumanism has been used by certain individuals as a means to challenge extant social-conformity paradigms.  I say "unfortunately" because I believe (not that these are unimportant issues) that these egocentric and typically shallow applications of humanity-transcending technological capabilities are an example of putting the wrong foot forward.  If Transhumanism only meant that we could get prosthetic tails, easy cosmetic surgery, or shock young-earth creationists with artificial life created in less than seven days, then we should go full speed ahead with presenting those things as the point of the venture.  But the real bounty of Transhumanism will be the moral ascent of the affected individuals.  The completion of our species' and civilization's arc toward peaceful cooperation and plenty should not be characterized by reactionaries obsessed with identity politics, by soon-to-be-redundant demonstrations of the supremacy of technology over arcane religions, or by the theatrical shock value of ostentatious augmentation.  Transhumanism should be characterized by the transcendence of hardship (not just color blindness) through technology.  It should not be characterized by idle speculation from inexpert journalists with dubious political intentions.  Transhumanism should simply be demonstrated: effectively, quietly, and in the most humble and inauspicious applications that can be identified, while remaining consistent with Utilitarian ideals and the greatest possible intentions.