(NOTE: This post was re-shared on 08/27/2012 in response to reports of Nico the robot being taught, and possibly learning, some semblance of self-awareness. See a report on that over at Kurzweil AI.)
IBM is doing more amazing stuff.
First, props to Rhonda Callow [<-WAYBACK MACHINE ARCHIVE LINK] for bringing up this socio-technological issue. It’s extremely relevant and timely and merits serious discussion. IBM, at 100 years and two months old, continues giant-stepping forward in the field of non-biological intelligence (NBI), and people should talk about that – because IBM is pushing AI and NBI, like, STAT.
The Wrong Question.
After mentioning Watson, this year’s non-human Jeopardy! champion, the author of the referenced piece also details IBM’s newest experimental brain-like chip, which learns and does massively parallel processing stuff like the human brain. So then we have the question, which is the article’s title (Can a Computer be as Intelligent as a Human?), but I think that, maybe without realizing it, what the author really means to posit here is: Can a computer be intelligent like a human?
And the real answer to the whole AI/NBI question is: It doesn’t have to be.
No offense to Callow – the brief article is well written and asks an important question. But the part that kills me is her in-article answer to that question: “I don’t think so.” Okay, fine – her opinion – but directly after, she concedes that we have yet to really understand or define what intelligence is. Here is where I’ll insert a giant figurative GLARING FLAW OF LOGIC neon sign. Granted, she’s only stating an opinion, but it’s obviously biased, and she pretty much dismisses even the possibility.
I have the Weirdest Logic Right Now.
Perhaps read her piece first, and then come back here.
I get worked up about even small articles like this because, in aggregate, such opinions become a kind of memetic force whose adherents seem oblivious to the inherent logical flaw of the “Artificial Human-Level Intelligence is Wildly Unlikely and/or Impossible Because We Don’t Know What Intelligence Is” camp. But go read her stuff, then read the rest of my stuff here, and then decide for yourself whether I’m being overly critical or picky, or whether it’s actually me who’s got the flawed logic. Comments welcomed.
Okay – Back Now?
Here’s the thing: if we’re unqualified to clearly define what intelligence is, even among our own species, then are we not also, by that very fact, entirely unequipped to declare human-level (or higher) artificial intelligence either terribly unlikely and maybe even impossible, or extremely probable and definitely coming? Wouldn’t a more responsible answer to the article’s question be something like: “We don’t know what intelligence is exactly, but we’ll know it if we see it”?
See, the word “if” is your out.
The subtle idea that exposes the so-big-it’s-hard-to-see logical flaw in this and every other human exceptionalist argument of its kind is as follows: While we don’t know what intelligence is, we kinda do know what it’s not – and therefore all we really can know about defining intelligence is that we know it when we see it.
Intelligence is as Intelligence Does.
This is key – we know intelligence when we see it. We train dogs, for example, so we know there’s a certain level of intelligence there. And then there are dolphins, crows and other birds, pigs, etc. – we can observe them, and even though we cannot clearly define exactly what we mean, we know there’s an intelligence at work in their goings-on.
Callow refers to us as having “real, biological intelligence,” and this is kind of baffling. For one, it amounts to defining intelligence – the very thing our inherent inability to do is the crux of her argument against humans being able to replicate it at a high level. And I get the feeling that there’s a kind of fear at work here, and it’s undermining the logic of an obviously intelligent person.
It seems we can accept the existence of other intelligences, as long as they’re not as intelligent as us. That is a heartbreaking notion of staggeringly arrogant species-level narcissism. And probably, it’s rather dangerous.
“There is zero reason to believe in the exceptionalism of human intelligence, or that it’s the shiny prize trophy culmination of evolution’s efforts toward self-awareness, reflection, and creativity in the universe.”
-Reno J. Tibke
Yep, that’s right – I quoted myself.
And, as I also said before in my lengthy blah blah blah on the Cult For/Against The Singularity, unless we invoke religion or genetic manipulation by aliens and stuff, we have to assume that intelligence, on whatever level, is an emergent characteristic of life itself. It’s therefore only rational to assume that, given enough processing power and the right sets of instructions, an AI or NBI might one day decide to observe itself, then tell us about it, and then tell us it doesn’t want to be turned off.
Actually, it’s not even up to Us.
Just saying, keep an open mind about machines getting wise. One thing is almost 100% certain: if human-level intelligence does emerge from the global soup of AI/NBI research and development and all these increasingly capable and connected neural networks, we’ll know it when we see it.
For now, we know that we don’t.
[ARTICLE VIA SYNC] [<-WAYBACK MACHINE ARCHIVE LINK]