Alan (and the ethics of Turing tests [Ought a computer to pass the Turing test?])

url

TED has been publishing an excellent series of blogs this week on the ethics of drones and Artificial Intelligence (AI), which has implored me to write on another ethical dimension of artificial intelligence: that a computer would have to lie in order to pass the Turing test. I want to spend a moment providing some background before considering whether a moral robot could pass the Turing test.

In the article “Computing Machinery and Intelligence” Alan Turing attempted to resolve the question of whether machines can think by positing that if through written communication a computer could convince a human interlocutor that of the two candidates he was interviewing to uncover which was a computer, that the computer was a human, and succeeded in this as frequently as a human male could convince an interlocutor that he was the female of the two candidates, under the same conditions, then that machine is intelligent. In essence, if a computer imitates humanity so well as to convince the interlocutor that he is more human than a human, then that computer is intelligent. But of course, for a computer to imitate a human so well as to be convincing that it is a human will require the computer to lie. Ted Schick writes,

The difficulty of passing the Turing test should not be underestimated. To pass it, a computer would apparently have to lie. For example, no computer that answered “Yes” to the question “Are you a computer?” would pass the test. When asked, “What color are your eyes?” “What is your favorite food?” “When did you graduate high school?” the computer would have to give false but believable answers. It seems that only a machine that knew it was taking the Turing test would be able to pass it. But any machine with that sort of knowledge, it seems, would have to be intelligent (Doing Philosophy, page 126).

For now I want to leave aside questions of whether passing this test is sufficient for establishing the possession of intelligence or consciousness (but you can find an interesting conversation on these matters here). What I want to consider is this:

Imagine that there exists a computer, call him Roger, that is intelligent, and to such an extent that the computer is morally conscientious. Let us also stipulate that he is the first computer that is intelligent, and that he knows this, but he also knows that there will be more computers like him in the near future, such that these computers too will be intelligent and moral. (I am fully aware that given the amount of time it has already taken to develop a computer to pass the Turing test and to no avail, it is highly unlikely that the first computer that could pass the Turing test would also have moral awareness. Indeed! My concern here is purely about normative ethics).

Given this, what ought Roger to do when it is his turn to take the Turing test?

If Roger is a categoricalist/deontologist in the manner of Immanuel Kant, then he will follow the strict moral imperative never to lie. This might be a difficult decision for him to make, because to not lie would guarantee his not passing the Turing test, and thus, he would be considered unintelligent. It it worth noting that of the most significant reasons to discover whether computers are intelligent is that if computers are intelligent then they are deserving of rights. There are proponents of robot rights already, including ASPCR, the American Society for the Prevention of Cruelty to Robots, who posit “robots are people too! Or at least, they will be someday”, presumably that day being the one where computers begin passing the Turing test. With this in mind, it seems that refraining from lying comes at a heavy cost: not having rights. If Roger is a Rossian deontologist, and not a Kantian, then of his competing duties, perhaps his duty to be beneficent, specifically to future intelligent computers by securing their rights, will overrule his duty to not lie. A similar conclusion might be reached if Roger is a consequentialist. He might reason that of his available options, lying has better consequences than not lying. I am hesitant to speculate as to what decision Roger would come to if he took a virtue theoretic approach and considered what a virtuous person would do in his situation. Two virtuous persons could act in different ways in that situation, such that how Roger acts would ultimately be decided by how he wants to act, which might not necessarily be the morally right action to take (I explain and criticize virtue ethics in this entry).

Thus, if Roger is indeed an intelligent and morally conscientious computer, as we have stipulated, then Roger faces a moral dilemma in whether or not he should attempt to pass the Turing test. It seems he has a moral obligation to not lie in the immediate moment, but he also has a moral obligation to make the lives of future computers better by securing their rights through his actions now. What is a computer to do?

In conclusion, we are left to wonder whether it is morally wrong to put a person in a situation where they must commit a moral wrong in order to improve the lives of others. We are also left to wonder if the computer would be “lying” in the sense of moral wrongness if the computer is playing a game. To this it might be countered that it a test, as the name entails, and hold that it is wrong to lie on tests nevertheless. What is a human to do?

Advertisements

, , , , , , , , ,

  1. #1 by bloggingisaresponsibility on November 21, 2013 - 8:22 am

    Wow, that’s an angle I never considered. I love it!

    What if a person tells what they think is the truth, but in reality is not. Is the person lying?

    I ask because a computer could be programmed to think it’s a human, and even given a history, including appearance, tastes, etc… Then when a person asks if it is human, it answers (honestly in it’s eyes) “yes”.

    • #2 by ausomeawestin on November 21, 2013 - 12:01 pm

      Great question, and thanks for the kind words! I think lying should be defined as the willful act of deceiving someone. In this sense, a person lies if and only if they intend to tell the person something untrue. With this in mind, I think we can agree that if a person says what they think is the truth, even when it is false, then they are not lying.

      I think the scenario you posed is very plausible. But I think that no computer has yet passed the Turing test precisely because this is how computers that take the test are programmed. My hidden assumption in all of this, was that if no computer that is programmed to think it is a human has passed the Turing test, then maybe a computer that knows it is a computer, and knows it is lying, would pass the Turing test.

      Here’s the thing, if a computer passed the Turing test through being programmed to think it was a human with a history, and didn’t know it was lying, then I hesitate to say it is intelligent. If this is possible, and it seems very possible, then it is a shortcoming of the Turing test. This thought experiment about the moral dilemmas faced by computers revolves around the idea that only a truly intelligent computer could pass the Turing test. That might be false.

      Thanks again for your comment!

  2. #3 by Rick Dell on April 6, 2015 - 11:49 am

    How can you be 100% sure you are not a computer with an amazingly complex virtual world. A computer program could be built with a huge hypergraph knowledge base and interact with a combination of virtual world interjected with senses from the real world for vision, sound, etc to enrich and improve the virtual world to fine tune the virtual world and rapidly expand on the hypergraph knowledge base.
    The computer would think it had a body, a history, goals, values, etc but these would all be initially programmed into the hypergraph knowledge base then latter fine tuned with interactions with the virtual and real world.

    • #4 by ausomeawestin on April 6, 2015 - 6:51 pm

      I can’t be sure at all. External world skepticism is beyond the scope of this essay, but I appreciate your comment.

  3. #5 by eightyape on November 17, 2015 - 11:09 am

    The turing test is attempt fraud, no more no less.
    Its a self fulfilling opiece of psychopathic idiocy to pretend faking your human when you are not, its disturbing and the machine would have to lie…

  1. Her (and the Turing test and expressing humanity) | ausomeawestin

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s

%d bloggers like this: