Utilitarianism, Supererogatory Acts and the Demands of Morality

James Gray has posted a very interesting piece on the compatibility of act utilitarianism and supererogatory actions, a portion of which I want to discuss here. Supererogatory acts are acts that would promote the most good but that one is not morally required to do because of the large personal sacrifices they involve. This type of action fits poorly with act utilitarianism, the moral theory on which the right action in a situation is the one with the best overall result: of all the possible actions in a situation, the one that does the most good, once its bad effects are taken into account, is the action to be performed. Supererogatory actions are, by definition, actions that have the best overall result but do not seem to be required, which contradicts act utilitarianism’s central thesis that the action with the best overall result is the required action.

The first way around this tension that Gray notes is to distinguish obligated-ness from ought-ness (this is the only one of his methods I will comment on today, so even if it is seen to fail, act utilitarianism and supererogation might still be made compatible by another of them). On this view, utilitarianism is understood as a moral theory that tells us what we ought to do but not what we are obligated to do. From the utilitarian standpoint, then, a supererogatory action is one that we have reason to perform, and thus ought to do, because it promotes the most overall good, but not one we are obligated to do, because utilitarianism is simply not in the business of telling us what we are obligated to do.

This is an interesting response to the tension, but not one that I think succeeds. On this view, the action that promotes the most overall good is an action we ought to do, because when the reasons are balanced out we have more reason to do it than any of the other possible actions. But this version of utilitarianism is hardly informative: all it truly tells us is that the action with the greatest overall weight of reasons in its favor is the action we have the most reason to perform. To say something interesting, the utilitarian should be held to the claim that the action we have the most reason to perform is an action we are obligated to perform.

Informative moral theories not only describe our moral thinking but must also extend moral knowledge through inferences from that thinking. The very thing that makes utilitarianism a theory of action guidance is the conclusion that we are obligated to do the action we have the most reason to perform. Insofar as act utilitarianism is incompatible with supererogation, and act utilitarianism seems plausible, we might have to revise our beliefs about the existence of supererogation. Perhaps our intuitions in favor of supererogation are the products of self-interested biases, and without these biases we would be skeptical of supererogation. If there are objective moral truths, then it does seem likely to me that morality would be very, very demanding. As such, the demandingness of moral theories that attempt to describe this moral reality should not lead us to doubt those theories, but rather, should perhaps lead us to doubt the permissibility of some of our actions.


  1. #1 by ignacioggm on October 3, 2014 - 5:42 am

    Hi there!

    I have a question, maybe you can help me out here:

    Does utilitarianism (or supererogation) provide any method to actually estimate which course of action optimizes well-being? Or is it only concerned with choosing among a set of actions once the magnitude of the “amount of well-being” generated by them has been estimated by other means at our disposal?


    • #2 by ausomeawestin on October 4, 2014 - 3:00 pm

      Excellent question! For my money, utilitarianism doesn’t provide a very satisfying way of estimating which course of action maximizes utility, though various attempts have been made. The most famous articulation was Jeremy Bentham’s “hedonic calculus,” on which one subtracts the units of pain from the units of pleasure for each possible action, and the action with the greatest net pleasure (the greatest magnitude, as you say, which could be -4 if the alternatives are -10 and -20) is the action that ought to be done. The immediate question is “what is one unit of pain equal to?”, so it doesn’t seem to be a very workable method of interpersonal justification. Still, the utilitarian will likely note that we can assign unit numbers as we see fit so long as they are intrapersonally consistent, and thereby decide which action we ought to do. But this doesn’t really answer the demand for a rigorous method of calculating which action optimizes well-being across all people. John Stuart Mill attempted a system on which some pleasures are qualitatively better than others (intellectual pleasures ranking above gustatory or carnal ones), but the same concerns can be pressed against it. It seems that so long as a moral theory entails that we ought to do the action with the best consequences, we need an adequate and intersubjective method of calculating consequences, which I do not think is possible.

      Thanks for reading and commenting, I do appreciate it!
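The arithmetic of Bentham’s hedonic calculus, as described in the reply above, can be sketched in a few lines of code. The actions and their pleasure/pain unit values here are entirely made-up illustrative numbers (the theory itself offers no method for assigning them, which is exactly the objection raised above); the point is only that the procedure is a subtraction followed by picking the maximum, and that the winning action’s net value may still be negative.

```python
# A minimal sketch of the hedonic calculus: for each candidate action,
# subtract pain units from pleasure units, then choose the action with
# the greatest net value.

def net_utility(pleasure_units: int, pain_units: int) -> int:
    """Net utility of an action: pleasure minus pain."""
    return pleasure_units - pain_units

# Hypothetical (pleasure, pain) unit totals for three candidate actions.
actions = {
    "action A": (6, 10),   # net -4
    "action B": (5, 15),   # net -10
    "action C": (0, 20),   # net -20
}

# The action with the greatest net utility "ought" to be done -- even
# though, in this example, every option comes out negative (-4 wins).
best = max(actions, key=lambda a: net_utility(*actions[a]))
print(best)
```

Note that nothing in the procedure fixes what a unit *is*, so the comparison is only as meaningful as the assignments fed into it; that is the interpersonal-justification worry in the reply above.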

