Thursday, July 27, 2017

Turing Test, chat bots

A Turing Test is a test in which a person tries to figure out whether he/she is talking to a computer or to another human.
After listening to the conversation with Samantha West, can you think of any occasions when you would want to know whether you were talking to a person rather than a computer?

Frankly, I'm amazed that we are asking this question. I remember (and I'm sure you do too) the first time I answered the phone and it was a computer voice. There was no question that the computer was a computer. The fact that there is now doubt as to the sentience of the caller is in itself a "B+" on the Turing Test.

I would like to know whether the caller is a person or a computer. It helps me decide how much of my time I will give to the caller. If it's a robot, I will just hang up.

But in reality, the humans who call are just following a script (similar to Samantha West, only not a pre-recorded one). They are, in effect, just as robotic, just as pre-programmed, just as likely to collect data, just as... whatever, as a robotic caller. At this point, the robots have become more human-like, just as the humans have become more robotic. Thus the Turing Test win.

What ethical implications/dilemmas do you see from the recordings you heard?

People lie. It is a sad fact of the human condition. We expect people to lie in creative and convincing ways to further their own ends. They will even tell the truth in a way that is a lie. Because of this, there are laws in place to protect consumers. But what about when computers lie? We trust computers to tell us the truth. Spreadsheets can't fudge the numbers for their own benefit. A Google search will give you the actual results that its algorithm says it should... or will it? The search giant will sometimes fudge the results manually (recently regarding CNN on the app store) to correct some mistake or even to shape public perception.

Another ethical dilemma involves workers in third-world countries, who can now be paid even less than before! This is a real dilemma with no easy answer.

After playing with chatbots, what are some ethical implications you see in this technology?

First off, Cleverbot isn't really all that clever anymore. But let's assume you have access to one of the cutting-edge chatbots. You create about a thousand personas on social media sites and turn your bot army loose to spread your message: comments, posts, reviews, etc. If your bot is clever enough, you can use it to push a message: "My product is great," "I bought product X and it broke, then gave me cancer," "Vote for candidate X," "Candidate Y is a (believable negative here)." Or maybe you turn your chatbot super-tool toward robo-dialing a few hundred thousand phone numbers. Maybe you hit 1% with your message: "Hey Grandma, it's me! I'm stuck in a Mexican jail. Send money quick!"

Fraud aside, we already saw this technology used to "astroturf" public opinion at the "grassroots" level in the last election (both sides used it extensively).
We saw it used (hilariously) in the Ashley Madison debacle, when 90% of its users turned out to be sock puppets.
This is a growing trend.
