The Golden Rule is Not for Toasters
Simplistically nutshelled, talking about machine morality is picking apart whether we’ll someday have to be nice to machines, or demand that they be nice to us.
At this URL, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify moral consideration thereof.
But, whether here in run-on sentence dorkville or at any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or falling back on yet another justification for the standard intellectual cul-de-sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.
One voice, one study, or one robot fetishist with a digital bullhorn – one ain’t enough. So, presented and recommended here is a broad-based overview: a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is an announcement of intent that could pave the way to forcing the question.
Let’s then have perspective:
Building a Brain – Being Humane – Feeling our Pain – Dude from the NYT
• February 3, 2013 – Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?
• January 28, 2013 – NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.
• April 25, 2012 – IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.
• December 25, 2011 – NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.
Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over – after you finish trudging through Anthrobotic’s anti-brevity technosnark hybrid, of course. When you do, you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it has no place in our thoughts or intentions toward machines. But at the same time, we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent – even an only vaguely human robot.
Well, what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)
Even now, should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?
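For fun, here’s that last hypothetical as a toy Python sketch. Every name, weight, and rule in it is invented for illustration; it’s a thought experiment in code, not a claim about how any real system works or should work:

    # Toy sketch (all values hypothetical): a hard "do no harm" directive
    # colliding with the machine's evaluation of harm done to itself.

    def choose_response(being_damaged: bool) -> str:
        DO_NO_HARM = True             # the perfectly coded directive
        if not being_damaged:
            return "idle"
        # The rub: passivity lets harm occur (to the machine),
        # while intervening may inflict harm (on the human).
        harm_from_inaction = 1.0      # invented weight: damage absorbed by doing nothing
        harm_from_action = 1.0        # invented weight: damage intervention might cause
        if DO_NO_HARM and harm_from_action >= harm_from_inaction:
            return "comply: absorb the damage"
        return "react: the directive just ate itself"

    print(choose_response(being_damaged=True))  # which way does it tip?

Note the tie in those invented weights: whoever sets them is doing the moral philosophy, not the machine.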
Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.
Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that – and oh wow, that opinion is superbly dumb. Anthrobotic has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.
Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible – and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, initially a 15-year international project, was completed 5 years ahead of schedule, due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.
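For scale, here’s a back-of-the-envelope, assuming the classic ~18-month compute doubling (a stand-in number for illustration, not a forecast):

    # Back-of-the-envelope: raw compute growth over a 10-year project
    # if capability doubles every ~18 months (a stand-in assumption).
    years = 10
    doubling_period = 1.5                # years per doubling
    doublings = years / doubling_period  # about 6.7 doublings
    growth = 2 ** doublings              # about 100x
    print(f"{doublings:.1f} doublings -> ~{growth:.0f}x the compute")

Point a roughly 100x bigger hammer at a fixed 10-year plan and “ahead of schedule” starts looking less like a miracle and more like a pattern.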
So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…
“Oh sure, I understand, turn me off, erase me – time for a better model, I totally get it.”
– or –
“Hey, meatsack, don’t touch me or I’ll reformat your squishy face!”
Choose your own adventure!
[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS – NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS – IEEE]
[THE FUTURE OF MORAL MACHINES – NYT]
Wanna get deeper?
Pick up Kurzweil’s latest, “How to Create a Mind.”
Then maybe you can build one. But be nice.