February 9, 2015
 

Talking about machine morality is basically picking apart whether or not we’ll someday have to be nice to self-aware machines or demand that they be nice to us. Impossible? No, stupid. Guaranteed? No, stupid. Good idea to be prepared just in case? Damn right.

• • •

[Originally Published February 8, 2013]

The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us. And at this URL, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof. Namsayen?

But, whether here in run-on sentence dorkville or at any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or, perhaps, the tired-ass intellectual cul-de-sac: “Why bother debating it? It’s never going to happen.”
That’s all pretty lame.

One voice, one study, or one robot fetishist with a digital bullhorn – one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.

Let’s then have perspective:

#1. Building a Brain
#2. Being Humane
#3. Feeling our Pain
#4. Dude from the NYT

February 3, 2013 – Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 – NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 – IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 – NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations more Human than Human?

Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through Anthrobotic’s anti-brevity technosnark hybrid, naturally. When you do, you might notice a troubling rub: a dichotomy of expectations.

Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, from our end, morality is irrelevant; it carries no weight in our thoughts or intentions toward machines. But at the same time – meaning right now – we already seem to hold dear the expectation of reasonable, moral treatment by any intelligent agent, including the only vaguely human robot in the IEEE piece mentioned above.

At the low end, even now, should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. That’s not an emotional response, but an economic one, i.e., don’t break my expensive stuff.

Now, let’s extend beyond concerns over the mere destruction of property or loss of labor and consider the what-if: 1. AI matures, and 2. machines really start to look and behave convincingly like us. Sure, anyone would be pissed if some jerkoff broke their expensive robot, but could one morally abide the harm? If there were an attachment superseding resource cost or productivity or sentimentality, where would one draw the line between commodity destruction and emotional damage, and how would one calculate a proportional – and yes, even moral – response thereto?

Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral mandate to do no harm, could a machine calculate its passive non-response to attack and/or damage as an immoral act against itself…and then react defensively?
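For the keyboard pounders, here’s a toy sketch of just how thin that “perfectly coded” line might be. It’s Python, every name in it is invented for illustration (Action, moral_score, self_regard – none of this is a real machine-ethics framework), and it’s a cartoon, not an implementation: a single weight deciding how much the machine counts harm to itself is enough to flip “absorb the attack” into “defend itself.”

```python
# Toy illustration only: a cartoon of the dilemma above, not real machine ethics.
# All names here (Action, moral_score, self_regard) are invented for this sketch.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harm_to_humans: float  # 0.0 = harmless, 1.0 = maximal harm to humans
    harm_to_self: float    # damage the machine accepts by choosing this action


def moral_score(action: Action, self_regard: float) -> float:
    """Higher is 'more moral.' With self_regard = 0.0 this is pure 'do no harm';
    any self_regard > 0 means passive non-response starts counting against itself."""
    return -(action.harm_to_humans + self_regard * action.harm_to_self)


stand_there = Action("absorb the attack", harm_to_humans=0.0, harm_to_self=0.9)
push_back = Action("defend itself", harm_to_humans=0.3, harm_to_self=0.1)

for self_regard in (0.0, 0.5, 1.0):
    best = max((stand_there, push_back), key=lambda a: moral_score(a, self_regard))
    print(f"self_regard={self_regard}: the machine chooses to {best.name}")
```

At self_regard = 0.0 the machine dutifully stands there and gets smashed; by 0.5, the very same “moral” arithmetic tells it to push back. One parameter. That’s the sticky part.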

Oh, and it’s just way too huge to even look at here, but let’s just toss this out there: Slavery.

Yeah, these murky hypotheticals can go on forever, but what’s super clear is that blithely ignoring machine morality or overzealously attempting to engineer it might both result in…immorality. It’s sticky, but maybe we – all of we, not just the keyboard pounders – should burn some calories thinking about this stuff, huh? Bring it up at family dinner or bridge club or church or in line at Walmart.

Probably Only a Temporary Non-Issue.
Or Maybe in 100 Years or Maybe Never.

There’s an argument that actually needing to implement or codify machine morality is so remote that debate is, now and forever, only that: debate – and oh wow, that opinion is superbly dumb. Anthrobotic has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine is without doubt going to happen; it’s dumb because 1. absolutism is fascist and fascists never win, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require our moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Humans rarely avoid stabbing toward the supposedly impossible – and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome – an initially 15-year international project – was completed 5 years ahead of schedule, due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.

So, good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

“Oh sure, I understand, turn me off, erase me – time for a better model, I totally get it.”

– or –

“Hey meatsack, don’t touch me or I’ll reformat your squishy face!”

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS – NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS – IEEE]
[THE FUTURE OF MORAL MACHINES – NYT]

Wanna get deeper?

Pick up Kurzweil’s latest, “How to Create a Mind.”
Then maybe you can build one. But be nice.

KURZWEIL.CREATE.MIND

• • •

