Jun 29, 2011

So…  What’s a Singularity, Yo?
Futurism, Techno-Sociology, Technological Anthropology, Sociotechnological Studies, Anthrobotic Sociology – or just, you know, generic human science or whatever – has borrowed from cosmology & astrophysics the concept of a “singularity,” and capitalized it.  Very basically, it’s where the rules break down – gravity, time, and space go bananas.  Applied to human technology, it centers on our producing an intelligence that supersedes our own, which in turn does its own thing; as a result, all we can predict about the future is that it will become utterly unpredictable – all truisms will fail, and we’ll be flying blind.

Space Singularity Stuff

The idea of a singularity in astrophysics is generally limited to black holes and other kinds of exotic cosmic weirdness we’re unlikely to have to deal with anytime soon.

The Other Kind of Singularity Stuff

The Singularity in techno-futurism proposes an all-encompassing paradigmatic shift that is highly probable and right around the corner.  Within 20-100 or so years (depending on who you ask), technology will alter our existence more profoundly than the sum total of all previous human development; an exponential, self-directed evolution will propel us to super-humanity, and the children of our minds (Hans Moravec) will, from our current perspective, possess a godlike and effectively incomprehensible intelligence.  At that point in history, a Singularity occurs.  We as a species will no longer be the big kids on the block, and predicting what will happen thereafter becomes essentially futile.  So, there’s that…

Religion!
Here’s where the kinda pseudo-religiosity comes in – and yeah, it’s a pretty thin analogy, but bear with me.  If you follow a particular faith, give it a little thought and you’ll see that, without utilizing our technology to spread standardized doctrine and ideals beyond a select few (reading, writing, printing methods, etc.), we’d probably have no organized religion and certainly nothing like the global reach of Abrahamic faiths, Hinduism, and Buddhism.  Perhaps we’d instead have uncountable oral history-based sects…  those and a lot of cults.  Well, we now have the Singularity concept, and technology has produced two new sets of hardline believers.

CULTISH FANATICS FOR:
A number of researchers, academics, and theorists believe the Singularity truly is imminent, that we will give birth to a human-level consciousness which will in turn create a super-mind or a billion copies of itself or… something unimaginable.  Other theories regard the process as more of a subtle combining of human and machine, a cybernetic merger that’s actually been underway for quite some time.  The fierce and unwavering proponents of this theory are like a Cult For Singularity.

CULTISH FANATICS AGAINST:
In greater number are the researchers, academics, and theorists who say the idea of a technological Singularity is just ludicrous, impossible – that the technology necessary for creating super-humans is centuries away (if ever), and that reproducing human-level intelligence can never happen because we are too complex, or a natural evolution is required, or Jesus just says “No.”  They are kinda The Cult Against Singularity.

As animals who quite enjoy thinking about the future (stock markets, endless future-related media, anthrobotic.com, etc.), we humans make a lot of predictions and educated guesses for fun, for study, and often for profit.  Over the past few hundred years of rapid technological development, many of our predictions have been quite prescient, and many more have been rather laughable.  We totally get things wrong all the time – whether it’s timing or the means or practicality or relevance – so at least our misperceptions are consistent.  However, there’s another thing we do very consistently and predictably: we ignore the recursive pitfall of declaring that something can’t be done, or is beyond our means, or too complex…

Along with all our misguided, idiotic, and failed predictions about the future, our history is just as littered with the corpses of assured proclamations on this or that being…  pause for effect… “impossible.”

And so then, on to the Singularity Thingy…  I’m Totally Biased.
I’m completely willing to accept that such an event or gradual development or sudden transcendence will never ever happen – I agree the Cult Against Singularity has some very valid points and criticisms of the Cult For Singularity and their doctrine.  But, barring cataclysm, I think a technological Singularity is probable, and to dismiss the possibility out of hand is, in my opinion, embarrassingly arrogant.  Something entirely unexpected and unpredictable will quite likely emerge from the super capable machines we’re building and designing right now.  Awareness of this possibility is growing; the idea is spreading and developing some mainstream presence and legitimacy.  There are websites specifically exploring the idea (Singularity Hub, among many others), an academic institute of sorts dedicated to the issue (Singularity University), and an upcoming conference in New York.  But yeah, many in The Cult For Singularity are a bit too fanatic for me, even though I think their basic premise is more likely than not.  I can accept that maybe they’re wrong – see, I’m a moderate – either case is a possibility.  I can live with that.

But Here’s the Thing, The Cult Against Singularity is Probably Wrong:
I’d like to say I’m not so ferociously bold as to completely debunk the anti-Singularity argument with two simple points of logic, but I won’t – I am – and here they are:
1. Technological advancement is non-linear; new approaches, innovations, and more powerful stuff appear in ever-shortening leaps and bounds (Ray Kurzweil is mostly right about this), and we have little reason to believe this is slowing or even approaching finite boundaries.  This is obvious and evident, and requires no further argument in this article because I don’t want to argue for it and it’s less interesting than the second point (though there’s a toy illustration of the idea just after this list).
2. Ironically, this point is based on an argument widely used by The Cult Against Singularity, which is:  We don’t know the origin, structure, or prerequisite environment for the development of intelligence, and we can’t accurately describe or define it, so we can’t create it (and if you want to get into the really technical stuff, here’s a long but well worth reading Forbes article by Alex Knapp).  Makes a certain sense, but this very inability points to the glaringly obvious fact that we’re totally unqualified to declare or deny the existence of a new form of intelligence.  In other words, it’s not up to us (there’s more to say, and a big IF here in reason #2 – more later).
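
To put some toy numbers behind point 1, here’s a minimal, purely illustrative Python sketch of what “ever-shortening leaps” means – growth where the doubling period itself keeps shrinking.  The starting period, the shrink factor, and the capability units are all invented for the example; nothing here is Kurzweil’s actual data or anyone’s real forecast.

```python
# A toy sketch of point 1's "ever-shortening leaps": not just exponential
# growth, but growth whose doubling time itself shrinks.  All numbers are
# arbitrary and illustrative -- not Kurzweil's data or a real forecast.

def accelerating_doublings(first_doubling_years=10.0, shrink=0.8, n=10):
    """Yield (doubling #, years this doubling took, total elapsed years),
    where each doubling takes `shrink` times as long as the previous one."""
    elapsed, period = 0.0, first_doubling_years
    for i in range(1, n + 1):
        elapsed += period
        yield i, period, elapsed
        period *= shrink

for i, period, elapsed in accelerating_doublings():
    print(f"doubling {i:2d}: took {period:5.2f} yrs, "
          f"{elapsed:6.2f} yrs total, capability x{2 ** i:,}")

# With a shrinking doubling period, a thousandfold jump in capability lands
# only a few decades after the first modest doubling -- which is the intuition
# behind "more change ahead than in all previous development."
```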

How Smart is Smart Enough to be Smart Like Us?
So, does a new form of Singularity-level intelligence have to be human-smart, or just generally smart enough (as in General AI)?  I would argue that to induce something like the Singularity, whether through cybernetic merger or a single super-intelligent machine, we don’t necessarily have to reproduce a specifically human intelligence – we have only to match or approximate human intellectual/computational capability.  Now, certain primates, dolphins, dogs, a few birds, and pigs are all considered highly intelligent animals, and while apparently less advanced than humans, they represent a real-world example of unequal yet coexistent multiple intelligences.  For all practical purposes, once intelligence has manifested, it really doesn’t matter how or why we and these other animals are intelligent, only that we emerged as such.  This is an incredibly simple yet vitally important point missed by many learned theorists who criticize and dismiss the possibility of a super-human intelligence.

There is zero reason to believe in the exceptionalism of human intelligence, or that it’s the shiny prize trophy culmination of evolution’s efforts toward self-awareness, reflection, and creativity in the universe.  In fact, in many ways our intelligence is really kinda lame.  For the Singularity to occur, a non-biological intelligence has only to be on par with ours, and really only for a few brief moments – during which it could possibly acquire and incorporate the entire catalogue of human history, achievement, and knowledge – including the process of its own creation – and then, ohhhh I don’t know, Ctrl-C, Ctrl-V?  It is probably impossible to predict what that intelligence will then do with itself. Or with us. Or… maybe it will actually be a post-human us.  In any case, it will be a Singular moment.  But – will it be aware of itself?

The Previously Mentioned “IF,” and the Role of Consciousness:
Okay, back to the above-mentioned big IF.  For the sake of argument, let’s frame the fundamental difference between the Singularity occurring and not occurring as the emergence (or not) of a novel, unprecedented, artificially created intelligence possessing self-awareness or consciousness (which seems necessary, given the generally accepted definition of the Singularity).  And for comparison, let’s consider that, for very practical purposes and barring some severe impairment, we take for granted that every living human is in fact “conscious.”  Yet we have collected exactly zero empirical evidence of consciousness in human beings, ever, in the history of history.  We cannot point at a given artifact or location in the brain or specific behavior set as any kind of proof whatsoever.  No philosopher, psychiatrist, psychologist, or neurologist has ever been able to say, “Look, there it is – consciousness!”

Upon observation or description, we perceive much of our behavior as consciously undertaken, but there’s no real objective evidence that it is.  If our machine creations develop an approximation of what we call consciousness – which, again, we can’t exactly define or scientifically describe – it would seem that all they need to do is argue that they are in fact aware of themselves, that they possess the capability of thinking about themselves and thinking about thinking.  That is the big IF.  Given that hardware and software continue their exponential advances, a reasonable declaration of self from a machine or AI or non-biological intelligence, however unlikely, can with one statement negate every facet of the anti-Singularity argument.  IF it speaks up.

I Think I Am.  So I, You Know, Am!
The reason that this point is so strong is that self-declaration, in one form or another or maybe multiple forms, is the only explicit means of declaring or indicating consciousness. It seems terribly simple, but it’s truly the only way for us to observe and believe that another being possesses any kind of self-awareness – they have to tell us!  Even the most brilliant of our species is absolutely without qualification to deny or confirm any such claim.  The simplicity here is monumentally profound, because this basic declaration is also all we, as humans, are capable of – and we all accept the declaration without question.  If one of the above-mentioned exceptionally bright animals suddenly declared that it was conscious and argued for the fact, after much consternation we’d probably believe it – and we should.  We cannot define, describe, quantify, or qualify our own sense of consciousness, so who are we to say that any non-human self-declaration is inaccurate, or falsified, or just not “real?”  Perhaps machines will totally miss this and never self-declare, but if we cannot at least accept the supposition of consciousness being a spontaneously emergent property of massively capable intelligence, then it’s time to drop the science and crack open a holy book.  And we’ll need to create a new field of social studies to address Human-Centric Solipsistic Discrimination (Yeah okay, Define:Solipsism).

It’s that simple – it’s a well-reasoned leap of faith to accept the Other’s declaration of consciousness.  Why should a non-human intelligence be held to a higher standard of proof than the billions of us running around the planet?  To lend an already borrowed term, denying any other entity’s declaration of consciousness would amount to a Singularity of Arrogance and Discrimination.

And so, to accept consciousness of the Other, we make that well-reasoned leap of faith. We believe when our fellow humans tell us they’re conscious.  So…  Faith.  To pull things back to the title’s promise of some kind of religious content or commentary (aside from jokingly referring to the two camps as Cults), I’ll say that for those who unwaveringly stand fast with their opinions and theories or without question believe themselves correct, there is a troubling element of particularly blind faith.  It’s troubling because of that possibly hard-wired tendency of humans to long for some sort of religious thought structure, a dogma that can always be deferred to or leaned upon – something to keep us safe in the dark.  Perhaps this is an overly dramatic tie-in, but likening this issue to religion serves to point out the simple fact that blind faith in any idea is, you know, kinda the leading cause of death and suffering in all of human history.  So let’s not do that.  Both sides of this issue present convincing probabilities; unfortunately, with almost religious fervor, both also adhere too strongly to their own blinding theologies.  Personally, I think it’ll happen.  But it might not.  And that’s okay.

Summarizing the Case for a Faithless Grey Area:
For now, we’ve got some great ideas and insight into what’s on the technological horizon – and I think there is little doubt we’ll see more fundamental change in our lifetime than humans have seen in all our technological history.  It’s a bold arrogance to declare the Singularity imminent and unstoppable, and just as much so to declare it impossible.  What’s coming is really super exciting, especially for someone who puts a great deal of stock in the notion that our technology is the most primal and essential expression of what it means to be human (email me if you want to fight about that).  We should continue to explore and expand what we know; we should generate new ideas, test them, observe, collect data, listen very carefully, and then just go forward.  On either side of this debate, blind or unreasonable faith will only paint us into a corner.

A corner where some robot will have to come save us.

Hey guy, ’mon.


Comments

  1. jon says:

    Started reading the article.. had to stop.. will try again later. White text on a black field hurts my brain! If we still can’t work that problem out, what makes you think we’ll produce the next singularity? Who’s to say the Ants aren’t secretly working on it underground as we speak?

  2. Reno Tibke says:

    I’ll have to address that issue myself…