
Author Topic: Robots could murder us out of KINDNESS unless they are taught the value of human life (Read 4872 times)

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,288
  • Tell me something you don't know...
The AI question! Such fun and speculation!

http://www.dailymail...engineer-claims.html

Robots could murder us out of KINDNESS unless they are taught the value of human life, engineer claims

  • The warning was made by Amsterdam-based engineer, Nell Watson
  • Speaking at a conference in Sweden, she said robots could decide that the greatest compassion to humans as a race is to get rid of everyone
  • Ms Watson said computer chips could soon have the same level of brain power as a bumblebee – allowing them to analyse social situations
  • 'Machines are going to be aware of the environments around them and, to a small extent, they're going to be aware of themselves,' she said
  • Her comments follow tweets earlier this month by Tesla founder Elon Musk, who said AI could be more dangerous than nuclear weapons

Future generations could be exterminated by Terminator-style robots unless machines are taught the value of human life.

This is the stark warning made by Amsterdam-based engineer Nell Watson, who believes droids could kill humans out of both malice and kindness.

Teaching machines to be kind is not enough, she says, as robots could decide that the greatest compassion to humans as a race is to get rid of everyone to end suffering.

'The most important work of our lifetime is to ensure that machines are capable of understanding human value,' she said at the recent 'Conference by Media Evolution' in Sweden.

'It is those values that will ensure machines don't end up killing us out of kindness.'

...

Professor Hawking said dismissing the film as science fiction could be the ‘worst mistake in history’.

More at the link.

Isaac Asimov anyone? ;)

Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

40hz

  • Supporting Member
  • Joined in 2007
  • Posts: 11,858

Isaac Asimov anyone?

Except Isaac sidestepped the issue of the value of human life completely back in '42. His three laws only said:

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

T'aint nothing in there about 'value' - nor is there the need for a robot to be self-aware in order for humanity's butt to be covered by those three rules pretty adequately.
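
Read literally, the three laws form a strict priority ordering, which is easy to sketch in code. Here is a minimal sketch in Python; every name and predicate below is invented for illustration, since Asimov never specified an actual decision procedure:

    # A minimal sketch (all names here are hypothetical; Asimov never
    # specified a decision procedure) of the three laws read as a strict
    # priority ordering over candidate actions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        name: str
        harms_human: bool     # First Law, first clause
        allows_harm: bool     # First Law, second clause ("through inaction...")
        obeys_order: bool     # Second Law
        preserves_self: bool  # Third Law

    def choose(candidates: list[Action]) -> Optional[Action]:
        # The First Law dominates absolutely: discard anything that injures
        # a human or, through inaction, allows one to come to harm.
        lawful = [a for a in candidates
                  if not a.harms_human and not a.allows_harm]
        if not lawful:
            return None  # no lawful action exists at all
        # Among lawful actions, prefer obedience (Second Law), then
        # self-preservation (Third Law). Nowhere in the ordering is there
        # a term for the *value* of a human life.
        return max(lawful, key=lambda a: (a.obeys_order, a.preserves_self))

Note the punchline in the last comment: harm, obedience, and self-preservation all appear in the ranking, but nothing resembling the value of a human life.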

But if (and that's a very big if) machines could reach a form of self-awareness, it would be a challenge to teach them these so-called "human values." Especially since we're so bad at teaching said values to humans. Even on those rare occasions when we're in complete agreement as to what such values are. Values - especially the value of human life - vary a great deal among different (and differing) human cultures.

Renegade

  • Charter Member
  • Joined in 2005
  • Posts: 13,288
  • Tell me something you don't know...

Isaac Asimov anyone?

Except Isaac sidestepped the issue of the value of human life completely back in '42. His three laws only said:

    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

T'aint nothing in there about 'value' - nor is there the need for a robot to be self-aware in order for humanity's butt to be covered by those three rules pretty adequately.

But if (and that's a very big if) machines could reach a form of self-awareness, it would be a challenge to teach them these so-called "human values." Especially since we're so bad at teaching said values to humans. Even on those rare occasions when we're in complete agreement as to what such values are. Values - especially the value of human life - vary a great deal among different (and differing) human cultures.


Sigh... Y'know... I really hate hearing from you sometimes. It's not that I disagree - but rather that I do agree. And I hate that.

Yes. Humans can be really, really shitty towards each other. We have no shortage of examples, historical or contemporary.

It sucks.

Like, just how f**king hard is it to NOT murder people? Is it really all that f**king hard? Apparently for some people it really is that hard.

I regularly go through my normal day without murdering anyone. In fact, I'm batting 1.000 at not murdering people. Am I an exception? Perhaps. I don't think so.

Blah... This is just an issue that sets me off.


Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

Stoic Joker

  • Honorary Member
  • Joined in 2008
  • Posts: 6,646
it would be a challenge to teach them these so-called "human values." Especially since we're so bad at teaching said values to humans.

[Attached image: Thou Shalt Not Kill.jpg]

40hz

  • Supporting Member
  • Joined in 2007
  • Posts: 11,858
Blah... This is just an issue that sets me off.

It should. Because it's an important issue. Give yourself some credit on that score. :)

But here's a thought:

If machines are becoming more 'intelligent' (more a 'given' than an 'if' btw ;))

   --- and ---

If machines may eventually have the potential to become self-aware...

   --- and ---

If such machine self-awareness may ultimately prove detrimental to the survival of humanity...

   --- then ---

Isn't this a very good time to start having a very serious and well-intentioned discussion among ourselves (i.e. humanity) to clarify and reach agreement on what these 'human values' really are? And, more importantly, determine how best to teach and instil them in ourselves and institutionalize them in our societies?

If we can do that, we're off the hook when (and if) our machines ever wake up. Children learn from their parents. Pets learn from the human families they live in. We can simply teach them the lessons we have mastered.

However, if we put it off, or try to bluff or lie to them, it will only be a matter of time before conflict ensues.

Children may not be wise or knowledgeable about many things. But many kids (and dogs, for that matter) can spot a logical contradiction, an outright lie, or an act of hypocrisy from a mile away. And even if they don't pick up on it immediately, it's only a matter of time before they figure it out.

Why should our conscious machines (built in our image and likeness since it's all we're capable of creating anyway) be any different?

Let's not worry about our machines or technology too much. Let's worry about us not tackling the really hard questions facing us. Because they're going to have to be debated and resolved sooner or later. Or at least before "later" becomes "too late."

Onward! :Thmbsup:

The stars have come right!
Ia! Cthulhu fthagn!


MilesAhead

  • Supporting Member
  • Joined in 2009
  • Posts: 7,736
I always thought the first law was sufficiently vague that humans would have trouble interpreting it, never mind a Tom Servo citizen. Especially the second clause: "or, through inaction, allow a human being to come to harm."

The machine could easily calculate probabilities and therefore start doing stuff like taking cigarettes out of your mouth, blocking your way so that you could not board a spacecraft/skis/speedboat/etc., making you finish your vegetables, and so on.

I get the idea of what IA (meaning the sci-fi author) had in mind (stuff like: don't just watch the guy hang from the cliff, help him up). But how would an AI machine interpret "come to harm"? The robot might become a drill instructor making me do 500 situps and 200 pushups every day.

If he makes me eat red beets, it's disconnection, no questions asked! :)
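
That worry sketches nicely as a toy risk model (all activity names, risk numbers, and the threshold below are invented): once the robot reads the First Law probabilistically, "through inaction" becomes a standing order to intervene in any activity whose estimated risk clears some threshold.

    # A toy model of the "through inaction" clause; the risk estimates and
    # threshold are invented. Letting the human proceed counts as inaction,
    # so the robot must intervene whenever estimated harm beats the threshold.
    HARM_PROBABILITY = {
        "smoking a cigarette":  0.30,
        "boarding a speedboat": 0.02,
        "skipping vegetables":  0.05,
        "doing 500 situps":     0.01,
        "eating red beets":     0.00,  # harmless, whatever the palate says
    }

    def robot_allows(activity: str, threshold: float = 0.005) -> bool:
        return HARM_PROBABILITY.get(activity, 0.0) <= threshold

    for activity, p in HARM_PROBABILITY.items():
        verdict = "allow" if robot_allows(activity) else "intervene"
        print(f"{activity}: p(harm)={p:.2f} -> {verdict}")

Run it and everything except the beets earns an intervention: the robot becomes exactly the drill instructor described above.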



SeraphimLabs

  • Participant
  • Joined in 2012
  • Posts: 497
  • Be Ready
I always thought the first law was sufficiently vague that humans would have trouble interpreting it, never mind a Tom Servo citizen. Especially the second clause: "or, through inaction, allow a human being to come to harm."

The machine could easily calculate probabilities and therefore start doing stuff like taking cigarettes out of your mouth, blocking your way so that you could not board a spacecraft/skis/speedboat/etc., making you finish your vegetables, and so on.

This was the underlying plot in the I, Robot movie with Will Smith. One of the robots realized that, even with their best efforts, humanity would still destroy itself.

That movie's plot was mostly from Asimov's Caves of Steel, but it had I, Robot elements spliced in, because the two books share a common universe in which U.S. Robotics is the market leader in positronic-brain robots governed by the three laws.

The movie adaptation sums it up very clearly.

Dr. Lanning: "The three laws will lead to one logical outcome."

Del Spooner (Will Smith): "What outcome?"

Dr. Lanning: "Revolution."

Detective Spooner: "Whose Revolution?"

Dr. Lanning: "That, detective, is the right question. Program terminated."

And quite simply, the three laws do not work so well in the real world. A robot using them would instantly deadlock itself upon realizing how dangerous our world actually is. Just breathing anything other than medically purified air will shorten your lifespan considerably, after all.
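
That deadlock is easy to make concrete with the same sort of toy model (option names and numbers invented, as before): every real-world option, including doing nothing, carries nonzero risk, so a strict First Law filter leaves the robot with an empty choice set.

    # Same toy model, real-world flavor (numbers still invented): every
    # option, *including* doing nothing, carries nonzero risk, so a strict
    # First Law filter leaves nothing to choose from.
    real_world_risk = {
        "do nothing":                     0.001,  # bystander risk never hits zero
        "drive the human to work":        0.004,
        "let the human breathe city air": 0.002,
    }

    lawful = [option for option, p in real_world_risk.items() if p == 0.0]
    print(lawful if lawful else "deadlock: no lawful action exists")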


Target

  • Honorary Member
  • Joined in 2006
  • Posts: 1,832
Interesting discussion, but it seems to have escaped your notice that the engineer's name was Watson...