In Defense of Utilitarianism

First published 26 October 2017. Last updated 26 October 2017.

This post addresses a few common arguments against utilitarianism: the utility monster, the experience machine, and the repugnant conclusion (a.k.a. the mere addition paradox).

The Utility Monster

The utility monster is a thought experiment intended to defeat classical utilitarianism. Read about it here. Basically, it goes like this… (Disclaimer: I’m going to paraphrase it uncharitably.) Suppose there’s a being that gets vastly more utility out of every resource than everyone else does, so its well-being effectively counts for more than everyone else’s. Then we’d have to give it whatever it wanted, and wouldn’t that just be horrible?!?!

I don’t find this argument compelling at all, and that’s putting it as politely as I can. Sure, there are utility monsters. We humans are utility monsters from the ant’s point of view. I think we should be nicer to ants than we currently are, actually, but it’s not controversial to say that humans are at least somewhat more important than ants. Blowing this principle up to extreme magnitudes is not good philosophical argumentation; it’s just exploiting scope neglect and self-interest to get people to abandon a system supported overwhelmingly by rational argument because it conflicts with their reliably unreliable intuitions.

The Experience Machine

The experience machine is a thought experiment by Robert Nozick, intended to defeat classical utilitarianism. Read about it here. The gist of it is:

> Nozick asks us to imagine a machine that could give us whatever desirable or pleasurable experiences we could want. Psychologists have figured out a way to stimulate a person’s brain to induce pleasurable experiences that the subject could not distinguish from those he would have apart from the machine. He then asks, if given the choice, would we prefer the machine to real life?

He expects you to say no, which he thinks would invalidate utilitarianism.

First of all, the thought experiment is also commonly called the “pleasure machine”, and the word “pleasure” can be misleading in tests of utilitarianism. The idea that utilitarianism merely maximizes “happiness” or “pleasure” is a common misunderstanding. Utilitarianism maximizes all forms of well-being of conscious creatures, including the enjoyment of abstract values like freedom or justice. So if you don’t want to plug into a machine that would simulate an ongoing bacchanalian orgy forever and ever, because it’s missing something, of course that makes sense. But it’s not a critique of utilitarianism; it’s a critique of a confused strawman of utilitarianism.

Setting that particular piece of confusion aside, the experience machine is still a bad argument. There are alternative formulations of the thought experiment, with empirical evidence to back them up, that show how the original exploits status quo bias to produce the intuitive verdict it gets from many (though not all) people. Basically, if you set it up in reverse and ask “you’re in the experience machine now; do you want to exit the machine and enter the real world?”, people will generally say no. It’s mere attachment to the status quo, not some material philosophical truth about the value of “reality.”

Furthermore, the argument is a bit naive with respect to the question of reality. You can never know that you’re not in an experience machine, including right now, or right after you exit an experience machine, or after you’ve exited 10 nested experience machines within a year.

The Repugnant Conclusion (a.k.a. the Mere Addition Paradox)

Derek Parfit’s “repugnant conclusion”, a.k.a. the “mere addition paradox”, points out that classical utilitarianism in some cases prefers {a very large society of beings with lives barely worth living} to {a very small society of very happy beings}. Some people find this preference intuitively objectionable. Does this mean that classical utilitarianism is wrong?

This is an important question. It’s related to a whole field called “population ethics”. It’s important not only because of its direct ramifications for ethics and social planning, but also because it informs our thinking about life and death on a smaller scale. So it’s important, but I think it’s not as hard as it sounds.

How do we escape the repugnant conclusion? I’m going to rule out non-utilitarian answers because they are wrong on a deeper level. Jumping ship to deontology over this would be throwing the baby out with the bathwater.

In my opinion, utilitarians have two decent options here. (Actually, I no longer think Option 1 is decent, but I used to, which is why I’m bringing it up at all.) Thankfully, I do think Option 2 is quite effective.

  • Option 1: switch from total utilitarianism to average utilitarianism.
    • Average utilitarianism says that the utility of a society is equal to the average of its members’ individual utilities, whereas total utilitarianism says that the utility of a society is equal to the sum of its members’ individual utilities.
    • Average utilitarianism is invulnerable to the repugnant conclusion. (Not sure why? Try it yourself, or see the numerical sketch after this list.)
    • However: if we accept average utilitarianism, we accept that killing off people of below-average utility (assuming away pain, fear, disorder, grief, etc.) is a moral good. We must accept this even if the people in question are quite happy, as long as they are less happy than average. We must also accept that birthing and raising a happy child with utility level X may be either moral or immoral, depending on the happiness of the rest of society. It is not clear why the status of the rest of society should matter to the question of whether creating that happy life was a good thing to do. Because population size can never directly affect social utility in the averagist model, the averagist must accept that there is nothing inherently good about being alive or bad about dying.
  • Option 2: see that the repugnant conclusion isn’t actually repugnant after all. I won’t get too into the weeds here; if you want all the details, you can find them in another article called Flavors of Utilitarianism, where I discuss the average-total distinction and several others. The short explanation is that Parfit’s critique is attenuated when you consider scope neglect (humans aren’t good at thinking about big numbers) and cost-effectiveness (it’s often cheaper to improve existing lives than to create new ones).
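
To make the total-versus-average arithmetic from Option 1 concrete, here is a minimal sketch in Python. The population sizes and utility numbers are invented purely for illustration; they come from me, not from Parfit or from anything above.

```python
# Illustrative sketch only: all utility numbers below are made up for the example.

def total_utility(utilities):
    """Total utilitarianism: social utility is the sum of individual utilities."""
    return sum(utilities)

def average_utility(utilities):
    """Average utilitarianism: social utility is the mean of individual utilities."""
    return sum(utilities) / len(utilities)

# Society A: a small society of very happy beings.
society_a = [100] * 1_000        # 1,000 people at utility 100 each

# Society B: a vast society of lives barely worth living.
society_b = [1] * 1_000_000      # 1,000,000 people at utility 1 each

# Total utilitarianism prefers B (1,000,000 > 100,000): the repugnant conclusion.
print(total_utility(society_a), total_utility(society_b))

# Average utilitarianism prefers A (100.0 > 1.0): no repugnant conclusion.
print(average_utility(society_a), average_utility(society_b))

# But averagism has its own problem (see the Option 1 bullet above): painlessly
# removing a happy-but-below-average person raises the social average.
print(average_utility([100, 100, 50]))   # ≈ 83.3
print(average_utility([100, 100]))       # 100.0
```

The same toy numbers show both why average utilitarianism dodges the repugnant conclusion and why it runs into the problems listed under Option 1.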

5 thoughts on “In Defense of Utilitarianism”

  1. Nice post. 🙂 I agree with your critiques of the experience machine, but I think we can steelman the objection by pointing to the more general difference between hedonistic and preference utilitarianism. Yudkowsky gives an example of that:

    http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/
    “When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, ‘I agree that such pills are probably possible, but I wouldn’t voluntarily take them.'”

    Such pills (or other hypothetical forms of brain modification) could also presumably eliminate one’s disappointment that the joy of scientific discovery isn’t real and make any other changes needed to appease lingering objections of a hedonist, without fulfilling the original preference to _actually make scientific discoveries_.

    I’ve gone back and forth on this issue, but currently I feel like fulfilling the actual preference and not just changing the hedonic state is at least somewhat morally important. (However, I think extreme hedonistic drives like avoiding torture are usually much stronger than more abstract, spiritual pining, in which case I mostly agree with hedonistic utilitarianism in practice applied to extreme emotions.)


    1. Thanks Brian! I’ve written a little about preference utilitarianism here: https://sandhoefner.wordpress.com/2017/03/07/average-vs-total-utilitarianism/ Basically, I think it’s confused about what level of abstraction it’s on. It may be pragmatic to encourage people to act as preference utilitarians, or to program AIs to act as preference utilitarians, but that won’t be the case every time, and it doesn’t make preference utilitarianism true as ethical theory. And I think the only reasonable way of defending such pragmatic implementations is precisely by referring back to hedonic utilitarianism, so preference utilitarianism is at best a useful pragmatic abstraction on top of hedonic utilitarianism.

      As for the pill question, I’m perfectly satisfied with an answer that judicious hedonic utilitarianism affords us. Hedonic utilitarianism values the real discovery over the fake one insofar as it makes the world a better place by advancing science. I don’t think Eliezer’s preference has any intrinsic value apart from the value of his actual experience. Valuing preference satisfaction/frustration which is never registered in the experience of any conscious being strikes me as a well-intentioned category error.


      1. Here’s an excerpt from Eliezer’s post that I really like:

        > If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I’d have to say no. I can’t think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line. The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.

        I think the last sentence there is a mistake. He admits that everything of value involves subjective experience, but doubles down on the idea that subjective experiences attached to a certain kind of objective state are more valuable. Why should the objective state suddenly become valuable once it’s attached to experience? And how are we to draw lines around things that are “real” anyway? I don’t love the idea of having a term in our moral calculus which is fundamentally unknowable (i.e. whether a given experience reflects some “really real” objective state or not).

        Again, there are easier ways to value stuff being “real” – on pragmatic grounds. If your dream is to be a firefighter, you should do it in what we call the real world, where we’re pretty confident that you’re saving actual people from burning to death, rather than in VR where (let’s suppose) the people you save are simulated simply enough to be less morally important than “real” people. The same goes for “real” scientific discovery. To me, this seems like a sufficiently powerful and much neater way of fitting the “realness matters” intuition, without unintended consequences.


      2. I was thinking of the science-pill thing in a hypothetical posthuman future where science is already figured out, so that Yudkowsky’s scientific discoveries have no instrumental value.

        I actually go further than Yudkowsky and think that people’s preferences about external things can matter even if there’s never experience of them. This seems clear to me when I think about my own preference for other beings not to suffer. I want that to be true even if I’m not around to know that that’s the case. Of course, you can say that this particular example is subsumed by hedonistic utilitarianism, which also prescribes that beings shouldn’t suffer. However, I can imagine having non-hedonist preferences about the external world that I would value just as much as I care about other beings not suffering. For example, imagine someone who really wants to make sure that his paintings get preserved after his death. It seems to me that actually making sure the paintings get preserved is valuable because it fulfills something that person strove so hard toward. (One can also make instrumental arguments about the value of keeping post-death promises in society, but I’m assuming a thought experiment where those don’t apply.)

        The fundamentalness of hedonic experience might be challenged by thinking about consciousness in different ways. For example, if we take an extended-cognition view (https://en.wikipedia.org/wiki/Extended_cognition), we might consider the future world where the paintings are preserved to be “part of” the artist’s (spatially and temporally) extended mind. The multiverse is a unified whole without sharp boundaries around minds or sharp distinctions between transformations of matter that occur inside or outside one’s skin.


      3. Okay, interesting. I don’t feel like preference satisfaction/frustration fully outside of conscious experience has value. That feels kind of spooky to me. What is a preference other than a bit of neural code in my skull? What is left of it after I die? What does it mean to value its satisfaction/frustration on its own, floating out there in idea-space? I have a preference to receive money. If you hide money in my house such that I’m guaranteed never to find it (but I technically own it, since it’s on my property), have you done me a favor? Have you done a favor to the preference itself?

        The extended mind thing is really interesting; I hadn’t seen it applied to preference utilitarianism before. I’ll have to think about that, but provisionally I don’t give it significant weight because it seems about as tenuous as, and less important than, things like electron suffering. (Also, from a framework POV, if I value the preference satisfaction only through its effects on the extended mind, I would call that hedonic.)

