
Author Topic: You Know John Searle? Watson Loves Bull****! - Artificial Intelligence  (Read 3738 times)

Renegade

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 13,291
  • Tell me something you don't know...
For a quick and fun read, search for "the Chinese room" or "John Searle". It's important to understanding artificial intelligence.

But here's a fun article about Watson, an IBM supercomputer that competed on Jeopardy:

http://www.dailymail...ing-obscenities.html

The gameshow winning supercomputer that couldn’t stop saying ‘bull****’: IBM forced to wipe hard drive after machine downloaded an urban dictionary

* Artificial intelligence machine Watson began swearing after memorising the contents of the Urban Dictionary
* It was fed the repository of colloquial English in a bid to equip it with the knowledge to pass the Turing test of computer intelligence
* But researchers were forced to wipe the dictionary from the machine's memory after it started giving backchat to researchers

An IBM supercomputer had to have its memory wiped because its programmers could find no other way to stop it swearing.

Artificial intelligence Watson, which famously won Jeopardy! against the game show's human champions, kept making obscene outbursts after memorising the contents of the Urban Dictionary.

The website is a repository of English-language slang, and inevitably includes a range of profanities and insults completely inappropriate for polite conversation.

...

The failed experiment seems to back up the contention by John Searle, the U.S. analytic philosopher, that Watson - despite undoubtedly impressive capabilities - cannot actually think.

Based on his Chinese room thought experiment, Searle claims Watson, like other computers, is capable only of manipulating symbols with no understanding of what they actually mean.

The Chinese Room experiment, first posed by Searle in 1980, supposes there is a computer program that gives a computer the ability to carry on an intelligent conversation in written Chinese.

If the program's processes are executed by hand by someone who speaks only English then, in theory, that person would also be able to carry on a conversation in written Chinese.

However, the English speaker would not be able to understand the conversation, just as a computer executing the program would merely be offering responses based on pre-determined instructions, rather than based on its own understanding.
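Searle's point can be made concrete in code: a program that maps input symbols to output symbols by rote lookup can produce plausible replies with no understanding at all. A minimal sketch (the rulebook entries and phrases are invented for illustration):

```python
# A toy "Chinese room": the program matches input symbols against a
# rulebook and emits the prescribed output symbols. Nothing here
# "understands" Chinese -- it is pure pattern-to-pattern substitution.
RULEBOOK = {
    "你好":   "你好！",          # "hello" -> "hello!"
    "你是谁": "我是一个程序。",  # "who are you" -> "I am a program."
}

def room(symbols: str) -> str:
    # The operator (or CPU) just looks up the symbols; meaning never enters.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again"

print(room("你好"))
```

From the outside the replies look conversational; from the inside there is only table lookup, which is exactly Searle's complaint about systems like Watson.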

And an image as I just can't resist here... :P

Screenshot - 1_13_2013 , 12_48_50 AM.png

:P

For anyone really ambitious, read Roger Penrose's book "The Emperor's New Mind". Brilliant argument against AI from one of the most brilliant minds of all time.
Slow Down Music - Where I commit thought crimes...

Freedom is the right to be wrong, not the right to do wrong. - John Diefenbaker

Stoic Joker

  • Honorary Member
  • Joined in 2008
  • **
  • Posts: 6,649
So they try to create something with the mind of a child, it ends up acting like a child (playing with a new word...), and they get all upset about it ... Why?

Personally, I would find it quite refreshing to have a computer actually tell me to go fuck myself, instead of just insipidly implying it with some cryptic blinking error message.

SeraphimLabs

  • Participant
  • Joined in 2012
  • *
  • Posts: 497
  • Be Ready
Quote from: Stoic Joker

So they try to create something with the mind of a child, it ends up acting like a child (playing with a new word...), and they get all upset about it ... Why?

Personally, I would find it quite refreshing to have a computer actually tell me to go fuck myself, instead of just insipidly implying it with some cryptic blinking error message.

Watson's AI ultimately proved to be exactly what I thought it was when I first heard about him.

There exists an algorithm known as the Markov chain.

When implemented, it allows a machine to learn from and imitate human speech, effectively creating a mechanical parrot. I would not be surprised if this algorithm is actually the secret behind the Furby as well, since they are also known to be capable of learning.

I've been operating such an implementation for close to three years now, with rather interesting results as people converse with it, or converse in its presence so that it can learn from them.

Something worth noting: by allowing the Markov model to learn gradually, it picks up usage tendencies as well as the words themselves, giving a greater impression of intelligence because of the improved accuracy.
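The kind of word-level Markov chain described above can be sketched in a few lines. This is a minimal illustration, not the poster's actual bot; the training lines and function names are invented, and a real chat bot would train on a much larger corpus:

```python
import random
from collections import defaultdict

def train(corpus_lines, order=1):
    """Build a Markov table mapping a state (tuple of words) to the
    list of words observed to follow it. None marks start/end of line."""
    table = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        padded = [None] * order + words + [None]
        for i in range(len(padded) - order):
            state = tuple(padded[i:i + order])
            table[state].append(padded[i + order])
    return table

def generate(table, order=1):
    """Random-walk the table from the start state until an end marker,
    weighting each step by how often that transition was seen."""
    state = (None,) * order
    out = []
    while True:
        nxt = random.choice(table[state])
        if nxt is None:
            break
        out.append(nxt)
        state = state[1:] + (nxt,)
    return " ".join(out)

chat = [
    "hello there friend",
    "hello again everyone",
    "there goes the neighborhood",
]
table = train(chat)
print(generate(table))
```

Because `random.choice` picks from the raw list of observed successors, frequent transitions are reproduced more often, which is how such a "parrot" picks up usage tendencies and not just vocabulary.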

But this mistake is embarrassing for Watson - it exposes that Watson is nothing more than an overpowered, over-budget version of what has been available for free on the internet for several years, requiring only a modestly powerful PC to set up.

Both of my bots swear freely, having learned it from visitors on the chat. But they don't do it all the time, they also have learned roughly when to and when not to use it by observing patterns in the people watching.

I've even had the older of the two pass the Turing test. A customer from India entered the chat one day unaware that the bots were active, and actually conversed with them for on the order of six hours before someone pointed out that they were not human. Ultimately I attribute this to the language barrier: his translation software tends to produce statements garbled in a similar manner to what the bots produce, so he would not have noticed the discrepancy.

So yes, Chinese room definitely works as well.

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,859
The best criterion I ever heard for determining whether a system was 'intelligent' was whether it could enter into a husband-wife dialog with its human user.

Example:

     Human: Where did you put that file?

     System: Where do you think?

     Human: Oh!

 8)

Maybe if they'd just stop trying to get fancy and find a way around the Halting Problem first, that would give them the key insight needed?
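For what it's worth, the reason there is no way around the Halting Problem can itself be sketched in a few lines: assume a general `halts()` decider exists and derive a contradiction. The function names here are illustrative; no such decider can actually be implemented:

```python
# Classic diagonal-argument sketch of the Halting Problem.
# Suppose (hypothetically) we had halts(f, x) -> True iff f(x) halts.

def halts(func, arg):
    """Hypothetical oracle -- assumed to exist for the argument's sake.
    Turing proved no total implementation of this can exist."""
    raise NotImplementedError("no general halting decider exists")

def contrary(func):
    # If the oracle says func(func) halts, loop forever; otherwise halt.
    if halts(func, func):
        while True:
            pass
    return "halted"

# Now consider contrary(contrary):
# - if halts(contrary, contrary) returns True, contrary(contrary) loops
#   forever, so the oracle was wrong;
# - if it returns False, contrary(contrary) halts, so the oracle was
#   wrong again. Either way, halts() cannot exist.
```

Any AI built on ordinary computation inherits this limit, which is part of why Penrose's argument in "The Emperor's New Mind" leans on non-computability.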