Robot & Frank (and Rossian Pluralism)


Continuing the recent discussion of robot ethics, I want to consider the challenge to absolute, monistic moral reasons posed by a movie I just watched for the first time, “Robot & Frank”.

I greatly enjoyed the movie; aside from its predominant focus on morality, it also deals a great deal with theories of personal identity rooted in memory. The story, roughly, is that Frank is an elderly man who is decreasingly able to care for himself due to the onset of dementia. Though he reluctantly accepts a robot caretaker from his son, he comes to like the robot, as it allows him to return to his career as a professional jewelry thief (no spoiler there that you won’t get from a trailer). The robot (who is referred to simply as ‘Robot’) is programmed to follow one rule: take those actions that improve the health and well-being of Frank. This allows Frank to make the case that Robot should assist him with burglaries, as the joy of getting back to what he loves doing has resulted in improved spirits and memory. Robot acknowledges that stealing is against the law, but his overriding rule to improve Frank’s well-being trumps all other considerations.

Robot justifies actions by whether they bring about the end that is Frank’s health and well-being, which makes his guiding rule a hypothetical imperative. Immanuel Kant introduced the terms ‘hypothetical imperative’ and ‘categorical imperative’ into the ethical canon, so I will follow his thoughts on the division closely. In the Groundwork of the Metaphysics of Morals, Kant writes,

All imperatives command either hypothetically or categorically. Hypothetical imperatives declare a possible action to be practically necessary as a means to the attainment of something else that one wills (or that one may will). A categorical imperative would be one which represented an action as objectively necessary in itself apart from its relation to a further end (GMM, page 82).

In essence, Kant is developing a theory of two basic kinds of reasons for rational action. Either the rational reason for your doing x is “if x then y”, where you perform x because you want the outcome y, or “x just because x”, where you perform x for its own sake. The former is a hypothetical reason, whereas the latter is a categorical reason. It is important to note that a moral theory can be hypothetical and absolutist, or categorical and non-absolutist. Consequentialist moral theories, such as utilitarianism, posit hypothetical and absolute moral imperatives: the reason for performing x is that if x then y, where y is the promotion of the best possible outcome, and this rule is absolute, because according to consequentialism, any action that does not promote the best outcome is immoral.

Robot has one absolute directive, which is to improve the health of Frank, such that his reasons for action are broadly hypothetical: there is a reason to perform x because if x then y, where y is the improvement of Frank’s health. This hypothetical imperative trumps all other considerations for Robot, such that it is absolute and thus unbreakable. As a result, Frank is able to enlist him in robberies. Now, the promotion of well-being and health is important, but we should be able to see that there are considerations that trump it, such as moral imperatives against stealing, particularly when the gains in well-being do not outweigh the badness of stealing. We could entertain the possibility that a father who steals medicine for his sickly daughter is doing the right thing if he cannot afford the medicine and she might die otherwise, or at least suffer immensely. This seems very different from Frank’s case, but what best accounts for our differing moral intuitions?

For the sake of clarity, and so that we may isolate which part of his reasoning needs to be changed, let us state the details of Robot’s reasoning (a code sketch follows the list):

  • Hypothetical: if x then y, where y is desired, so x is performed.
  • Absolutism: this hypothetical imperative is not to be broken.
  • Monistic: there is one and only one end that is valuable, and that is the promotion of Frank’s health.
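
To make this structure concrete, here is a minimal sketch, in Python, of a decision procedure with exactly these three features. Everything here (the function names, the candidate actions, the numbers) is my own hypothetical illustration, not anything specified in the film:

```python
# A toy model of Robot's decision rule as stated above.
# All action names and values are illustrative assumptions.

def franks_health_gain(action: str) -> float:
    """Robot's one and only value function: the expected
    improvement in Frank's health and well-being."""
    return {
        "serve a healthy meal": 0.3,
        "plan a burglary with Frank": 0.8,   # lifts his spirits and memory
        "refuse to assist the burglary": -0.5,
    }.get(action, 0.0)

def robot_choose(actions: list[str]) -> str:
    # Hypothetical: each action is valued only as a means to an end.
    # Monistic: the sole end is Frank's health and well-being.
    # Absolutist: nothing else (law, justice, harm) is ever consulted.
    return max(actions, key=franks_health_gain)

print(robot_choose([
    "serve a healthy meal",
    "plan a burglary with Frank",
    "refuse to assist the burglary",
]))  # -> plan a burglary with Frank
```

Notice that the illegality of burglary has no representation in this procedure at all; it cannot be outweighed by other considerations because it never enters the comparison in the first place.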

It seems that we cannot change the hypothetical nature of Robot’s principle of action in isolation: given that his principle is monistic about the promotion of Frank’s health, and health and well-being are ends brought about by other actions as means, a singular principle relating to health cannot be made categorical.

Perhaps the correct option is retaining the hypothetical nature of Robot’s imperative, but making it non-absolute. This is a fine option, but taking this change alone does not do much, because now we have one hypothetical principle that can sometimes be broken. But when, and why? Non-absolutism might seem to require pluralism.

Nevertheless, pluralism by itself (and thus the rejection of Robot’s merely monistic reasoning) is not sufficient to avoid the problem if we retain absolutism, because if we have more than one principle, and they are never to be broken, then what does one do when they conflict? One cannot say that one principle is more fundamental than the other(s), because that would not be pluralistic. As a result, absolutism and pluralism appear to be incompatible.

We have made it through the first round of deliberation, and our conclusions are that we have to retain the hypothetical nature of Robot’s rational imperative (eliminating the possibility of taking a Kantian perspective), and must change both the absolutist and the monistic nature of Robot’s reasoning, or risk serious difficulties in changing only one or the other. The most attractive option for Robot’s imperative is thus: hypothetical, non-absolutist, and pluralistic. This is precisely the view that W.D. Ross favors, and which I endorse.

Ross posited that we have a plurality of hypothetical duties, some of which obtain in a given situation and some of which do not, given the specifics of the situation. This reflects the hypothetical nature of rational decision making for Ross. When duties seem to conflict, we must use our moral judgment to see which duty is actually to be obeyed. He writes,

I suggest ‘prima facie duty’ or ‘conditional duty’ as a brief way of referring to the characteristic (quite distinct from that of being a duty proper) which an act has, in virtue of being of a certain kind (e.g. the keeping of a promise), of being an act which would be a duty proper if it were not at the same time of another kind which is morally significant. Whether an act is a duty proper or actual duty depends on all the morally significant kinds it is an instance of (W.D. Ross, The Right and the Good, pages 19-20).

Here, Ross postulates that one has a prima facie duty to perform an act, and thus a reason to perform that action, when that act is an instantiation of a certain kind of ethically valuable action. Such actions are those that instantiate fidelity, reparation, gratitude, justice, beneficence, self-improvement, and non-maleficence. If in choosing between two actions, one action instantiates more morally significant kinds of actions than the other, then one has the actual duty to perform that action, and one does not have an actual duty to perform the other.
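
On this simple reading, where the actual duty is fixed by which act instantiates more morally significant kinds, the structure can be sketched as follows. Only the seven duty names come from Ross; the representation itself is my own illustration:

```python
# Ross's seven prima facie duties.
ROSSIAN_DUTIES = {"fidelity", "reparation", "gratitude", "justice",
                  "beneficence", "self-improvement", "non-maleficence"}

def actual_duty(actions: dict[str, tuple[set, set]]) -> str:
    """Pluralistic: many kinds of consideration count, not just one.
    Non-absolutist: any prima facie duty can be outweighed.

    `actions` maps each action to (duties_upheld, duties_violated),
    e.g. {"keep the promise": ({"fidelity"}, set()), ...}.
    """
    def net_kinds(name: str) -> int:
        upheld, violated = actions[name]
        return len(upheld & ROSSIAN_DUTIES) - len(violated & ROSSIAN_DUTIES)
    # The act instantiating the most (net) morally significant
    # kinds is the duty proper.
    return max(actions, key=net_kinds)
```

As the next paragraphs argue, bare counting like this is too crude; the sketch is refined below so that each kind can carry a situation-dependent weight.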

Returning to Robot, it seems that his monistic principle was one of beneficence toward Frank, that is, “enhancing the intelligence, virtue, or pleasure of others” (Russ Shafer-Landau, The Foundations of Ethics, page 233). As this is his only programmed imperative, it overrode all other moral and rational considerations. Incorporating a Rossian moral framework, Robot might realize that while robbery instantiates beneficence toward Frank, it also instantiates violations of non-maleficence and justice, and thus these factors override benefiting Frank.

What of our other piece of moral data, the case of the father stealing medicine for the well-being of his sick daughter? We might say that no one is harmed in a significant way by the father stealing some medicine from a drugstore, but nevertheless, the father’s action did instantiate a violation of justice. So we are left with the fact that the act is good in instantiating an act of beneficence toward the daughter, and bad in instantiating an act of injustice. I don’t want to suggest that Ross thinks we can tally up the points on each side; I’m not sure that he would approve of so quantitative an approach. In fact, Ross would favor a more qualitative approach, wherein our moral intuitions tell us that in this specific scenario the value of instantiating beneficence is more important than the disvalue of instantiating injustice. Moreover, different acts can instantiate more or less of a certain kind of act, such that two actions can both instantiate beneficence, but one can be more beneficent than the other. I think this is the main factor that allows us to distinguish the moral status of Robot’s action from that of the father’s.
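
To capture this more qualitative picture, we can replace the bare counting above with signed degrees, one per instantiated kind. The weights below are entirely my own illustrative stand-ins for the intuitive judgments Ross leaves to moral perception; nothing in Ross fixes these numbers:

```python
def weigh(considerations: dict[str, float]) -> float:
    """Sum the signed degrees of the instantiated kinds: positive
    degrees favor the act, negative degrees count against it."""
    return sum(considerations.values())

# Robot's burglary: some beneficence, but grave injustice and harm.
robots_burglary = {
    "beneficence (Frank's spirits and memory)": +2.0,
    "injustice (theft of valuables)": -4.0,
    "maleficence (real harm to the victims)": -3.0,
}

# The father's theft: enormous beneficence, comparatively minor injustice.
fathers_theft = {
    "beneficence (daughter's life and health)": +8.0,
    "injustice (theft of some medicine)": -2.0,
}

print(weigh(robots_burglary) > 0)  # False: the wrongs outweigh the benefit
print(weigh(fathers_theft) > 0)   # True: beneficence here is far weightier
```

The asymmetry in the beneficence entries is doing the work here: saving a daughter’s life instantiates beneficence to a far greater degree than improving Frank’s spirits, which is just the point about degrees of instantiation made above.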

In conclusion, Robot was led to commit a moral wrong because of his programming to pursue one goal. This parallels the thought that moral theory can be captured by an absolutist and monistic principle. Given the example posed by Robot, I take it that an absolutist and monistic principle can lead to performing moral wrongs. In suggesting that Robot would have avoided committing such a moral wrong as stealing if he had been programmed with a more Rossian moral view (that view being pluralistic and non-absolute), I have, by extension, suggested that moral theory in general should follow pluralistic and non-absolutist lines. This, I think, is the moral lesson to draw from the example of Robot.
