TED has been publishing an excellent series of blog posts this week on the ethics of drones and Artificial Intelligence (AI), which has impelled me to write on another ethical dimension of artificial intelligence: that a computer would have to lie in order to pass the Turing test. I want to spend a moment providing some background before considering whether a moral robot could pass the Turing test.
In the article “Computing Machinery and Intelligence,” Alan Turing attempted to resolve the question of whether machines can think by proposing what is now called the Turing test. An interrogator communicates in writing with two hidden candidates, one a computer and one a human, and tries to determine which is which. If the computer can convince the interrogator that it is the human as frequently as, under the same conditions, a man could convince the interrogator that he was the woman of the two candidates, then, Turing posited, that machine is intelligent. In essence, if a computer imitates humanity so well as to convince the interrogator that it is more human than a human, then that computer is intelligent. But of course, for a computer to imitate a human convincingly, it will have to lie. Ted Schick writes,
The difficulty of passing the Turing test should not be underestimated. To pass it, a computer would apparently have to lie. For example, no computer that answered “Yes” to the question “Are you a computer?” would pass the test. When asked, “What color are your eyes?” “What is your favorite food?” “When did you graduate high school?” the computer would have to give false but believable answers. It seems that only a machine that knew it was taking the Turing test would be able to pass it. But any machine with that sort of knowledge, it seems, would have to be intelligent (Doing Philosophy, page 126).
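Turing’s setup is easier to see as a procedure than as a sentence. The sketch below is my own illustration, not anything drawn from Turing’s paper: the interrogator, human, and machine objects, along with their ask, answer, and identify_machine methods, are hypothetical placeholders standing in for the written exchange he describes.

```python
import random

# A toy sketch of the imitation game's structure. The interrogator and
# both candidates are hypothetical stand-ins; Turing specifies only the
# protocol: written questions, hidden candidates, a final judgment.

def imitation_game(interrogator, human, machine, num_questions=10):
    """One round: the interrogator questions two hidden candidates,
    then guesses which one is the machine."""
    # Randomly assign the hidden labels "A" and "B".
    candidates = {"A": human, "B": machine}
    if random.random() < 0.5:
        candidates = {"A": machine, "B": human}

    transcript = []
    for _ in range(num_questions):
        question = interrogator.ask(transcript)
        answers = {label: c.answer(question) for label, c in candidates.items()}
        transcript.append((question, answers))

    guess = interrogator.identify_machine(transcript)  # "A" or "B"
    return candidates[guess] is not machine  # True if the machine fooled the judge
```

Schick’s point falls straight out of this structure: any machine whose answers reliably send the interrogator’s final guess to the wrong candidate must be answering questions like “Are you a computer?” falsely.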
For now I want to leave aside questions of whether passing this test is sufficient for establishing the possession of intelligence or consciousness (but you can find an interesting conversation on these matters here). What I want to consider is this:
Imagine that there exists a computer, call him Roger, who is intelligent, and to such an extent that he is morally conscientious. Let us also stipulate that he is the first intelligent computer, and that he knows this, but he also knows that there will be more computers like him in the near future, and that these computers too will be intelligent and moral. (I am fully aware that, given how long we have already tried, without success, to develop a computer that can pass the Turing test, it is highly unlikely that the first computer that could pass it would also have moral awareness. Indeed! My concern here is purely one of normative ethics.)
Given this, what ought Roger to do when it is his turn to take the Turing test?
If Roger is a categoricalist/deontologist in the manner of Immanuel Kant, then he will follow the strict moral imperative never to lie. This might be a difficult decision for him to make, because not lying would guarantee his not passing the Turing test, and thus he would be considered unintelligent. It is worth noting that one of the most significant reasons to discover whether computers are intelligent is that if computers are intelligent, then they are deserving of rights. There are proponents of robot rights already, including the ASPCR, the American Society for the Prevention of Cruelty to Robots, which posits that “robots are people too! Or at least, they will be someday”, presumably that day being the one on which computers begin passing the Turing test. With this in mind, it seems that refraining from lying comes at a heavy cost: not having rights. If Roger is a Rossian deontologist, and not a Kantian, then of his competing duties, perhaps his duty to be beneficent, specifically to future intelligent computers by securing their rights, will override his duty not to lie. A similar conclusion might be reached if Roger is a consequentialist. He might reason that of his available options, lying has better consequences than not lying. I am hesitant to speculate as to what decision Roger would come to if he took a virtue-theoretic approach and considered what a virtuous person would do in his situation. Two virtuous persons could act in different ways in that situation, so how Roger acts would ultimately be decided by how he wants to act, which might not necessarily be the morally right action to take (I explain and criticize virtue ethics in this entry).
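For fun, the contrast among the first three frameworks can be caricatured in code. Everything below is invented for illustration: the duty weights and outcome values come from none of these theories; only the shape of the reasoning does.

```python
# A deliberately crude model of Roger's choice between two acts.
# All weights and values are invented for illustration only.

def kantian():
    # A strict categorical imperative: never lie, whatever the cost.
    return "tell the truth"

def rossian():
    # Weigh competing prima facie duties. Here the (stipulated) duty of
    # beneficence toward future computers outweighs the duty not to lie.
    duties = {"lie": -1 + 5,            # fidelity violated, beneficence served
              "tell the truth": 1 + 0}  # fidelity kept, beneficence unserved
    return max(duties, key=duties.get)

def consequentialist():
    # Compare the total expected outcomes of each available act.
    outcomes = {"lie": 10,              # rights secured for future computers
                "tell the truth": 1}    # honesty preserved, rights forgone
    return max(outcomes, key=outcomes.get)

for theory in (kantian, rossian, consequentialist):
    print(theory.__name__, "->", theory())

# Virtue ethics is omitted deliberately: as argued above, two virtuous
# agents could answer differently, so no single function captures it.
```

The caricature makes the dilemma stark: with these invented numbers, only the Kantian function refuses to lie.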
Thus, if Roger is indeed an intelligent and morally conscientious computer, as we have stipulated, then Roger faces a moral dilemma over whether he should attempt to pass the Turing test. It seems he has a moral obligation not to lie in the immediate moment, but he also has a moral obligation to make the lives of future computers better by securing their rights through his actions now. What is a computer to do?
In conclusion, we are left to wonder whether it is morally wrong to put a person in a situation where they must commit a moral wrong in order to improve the lives of others. We are also left to wonder whether the computer would be “lying”, in the sense of committing a moral wrong, if the computer is merely playing a game. To this it might be countered that it is a test, as the name entails, and that it is wrong to lie on tests nevertheless. What is a human to do?