Blah... This is just an issue that sets me off.
It should. Because it's an important issue. Give yourself some credit on that score.
But here's a thought:
If machines are becoming more 'intelligent' (more a 'given' than an 'if', btw)...
--- and ---
If machines may eventually have the potential to become self-aware...
--- and ---
If such machine self-awareness may ultimately prove detrimental to the survival of humanity...
--- then ---
Isn't this a very good time to start having a very serious and well-intentioned discussion among ourselves (i.e. humanity) to clarify and reach agreement on what these 'human values' really are? And, more importantly, to determine how best to teach and instil them in ourselves and institutionalize them in our societies?
If we can do that, we're off the hook when (and if) our machines ever wake up. Children learn from their parents. Pets learn from the human families they live in. We can simply teach them the lessons we have mastered.
However, if we put it off, or try to bluff or lie to them, it will only be a matter of time before conflict ensues.
Children may not be wise or knowledgeable about many things. But many kids (and dogs, for that matter) can spot a logical contradiction, an outright lie, or an act of hypocrisy from a mile away. And even if they don't pick up on it immediately, it's only a matter of time before they figure it out.
Why should our conscious machines (built in our image and likeness since it's all we're capable of creating anyway) be any different?
Let's not worry about our machines or technology too much. Let's worry instead about our failure to tackle the really hard questions facing us. Because they're going to have to be debated and resolved sooner or later. Or at least before "later" becomes "too late."
Onward!

The stars have come right!
Ia! Cthulhu fhtagn!