Here’s an articulate, well-composed piece by Paul Allen and a colleague wherein they argue that, contrary to the Kurzweilian & Vingean(?) assertion that “It’s Near,” the Singularity is, umm… far.
Their Why is Wrong Though
It’s an interesting thing for a computer pioneer like Paul Allen to be a kind of technological naysayer. Because for all his genius and all he’s given the world, his reasoning for not being hip to the possibility of a Technological Singularity is surprisingly lame. What I’m saying is, he might actually be right that it’s distant or unlikely, but his reason why is, like, you know, fallacious.
My Logic is Undeniable. My Logic is Undeniable!
Okay, here’s the thing: I will totally concede that people like Paul Allen are in many ways smarter and more capable and more experienced than I (obviously, right?). But for some unknown reason, he and other analysts – from the merely unenthusiastic to the absolutist naysayers – fail to see a fundamental flaw in their most hallowed rebuttal, and that rebuttal is: we don’t understand thought or consciousness or intelligence or cognition, and we can’t define them; therefore, we’re incapable of recreating human intelligence or a super-smart NBI (non-biological intelligence).
Quote: “This prior need to understand the basic science of cognition is where the ‘singularity is near’ arguments fail to persuade us.” And: “Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide…”
But… but… WHY?
Why is this prior need needed? I covered this before in my tactless & condescending piece “Can a Computer be as Intelligent as a Human? Or, Asking the Wrong Dumb Question. Get it?” What I was trying to say, with my trademark lack of professionalism or journalistic skill, is that the criticisms leveled at pro-Singularity ideas lean heavily on pointing out that we don’t understand human intelligence, as if that were a fundamental prerequisite – something we must accomplish before we even begin thinking about the possibility of maybe considering the unknown ramifications of creating a super-human intelligence. Maybe.
Dude gets all hung up on the process of reverse engineering the human brain in order to make an AI – he says it’s really hard and far, far away – then he gives some ancillary treatment to just building a silicon AI: also far, far away and fraught with various combinatorial pitfalls and other discouraging words that have lots of syllables and make a writer sound like they’re on their game. But this really is just splitting hairs. Perhaps an attempt to avoid being called out on the very point I’m calling him out on?
I don’t know, man – it is a good piece, but toward the end of the article I can’t even concentrate on what they’re saying because of the incessant beating of the goddamn “We can’t build a human-level AI/NBI any time soon/ever because it’s really hard and we don’t even really know what that means because we don’t know what intelligence is, etc., but maybe we can in the distant future but also probably not” drum. How can we possibly be qualified to deny the emergence of a phenomenon we can’t even effectively describe and define as an extant trait of already-living things?
We Ain’t That Smart, Yo.
Perhaps unconsciously, Allen is asserting the primacy of human intelligence in our known universe – a cognitive barrier beyond which nothing can pass – because evolution made it… or something. Numerous non-human animals around the world display various levels of intelligence, and we accept that as either a spontaneously emergent property of life or a gift from a spooky parent figure in the sky. Those animals aren’t intelligent like us. But if they were, in their own way, as intelligent as or more intelligent than humans – well, suffice it to say, we’d know it. Admittedly, we don’t know what exactly intelligence/awareness/consciousness is, but we know what it isn’t when we don’t see it. Knowwhattamean?
Good Article – Bad Point
I guess I just wanted to point out that even the most brilliant among us, who often have the greatest reach and loudest voice, can be somewhat short- or narrow-sighted. The Singularity might not happen – and I can accept that easily and openly. It’s only rational to assume such.
But it’s equally rational to at least agree on the possibility that, from the ferocious computational abilities of supercomputers combined with biologically based learning algorithms and models of thought we haven’t even considered yet, the spontaneous emergence of a different, comparably powerful intelligence is very possible. Maybe even probable.
Through a kind of hyper-accelerated, guided evolution at the hands of humans, machines just might wake up smart one day. All by themselves. Just like we did, after millions of years. And if they do so, since we’re unqualified to define it or describe it or draw a schematic of how to create intelligence, we’re therefore just as inept at denying an entity’s demonstration or declaration of it – and what choice would we have but to accept it?
And hey – maybe they won’t wake up on their own.
Maybe Jesus or Aliens will do it.
Oh, and if you’re into the AI/machine intelligence/consciousness issue with a twist of criticism and admonishment for the hardliners both against and for the Singularity, see:
Technology Created Organized Religion. Next Project: Cults For & Against the Singularity