Three Types of Negative Utilitarianism

By Brian Tomasik


Summary

This piece discusses three intuitions about the badness of suffering that can't all be true. Depending on which is rejected, the result is either pure negative utilitarianism, lexical-threshold negative utilitarianism, or negative-leaning utilitarianism. I don't know which view I subscribe to, but fortunately, the choice isn't important, because all three flavors of negative utilitarianism yield roughly the same practical conclusions.


Three inconsistent intuitions

Following are three claims that may all seem intuitive.

  1. Happiness can outweigh small pains: I would accept many of the pains that people normally experience in life in exchange for a sufficient amount of happiness. For example, I would accept mild nausea in exchange for extra days spent with a good friend.
  2. Finitude of pains: No pain is infinitely worse than any other pain. One reason we might find this plausible is that we could construct a finite series of pain states, each slightly more intense than the last, starting from a given mild pain and ending with a given intense pain. If each step only increases badness by a finite amount, then the intense pain can only be finitely many times as bad as the mild pain.[1]
  3. A day in hell could not be outweighed by happiness: I would not accept a day in hell in exchange for any number of days in heaven. Here I'm thinking of hell as, for example, drowning in lava but with my pain mechanisms remaining intact for the whole day. Heaven just wouldn't be worth it, no matter how long. It seems like there's no comparison. Nonexistence is fine for me—I wouldn't be around to miss it—but hell-level suffering is just not something I would accept.

These intuitions can't all be true. A day in hell is an extremely intense pain. By intuition #2, it's only finitely many times worse than an extremely minor pain, so some finite number of instances of the minor pain would be at least as bad as the day in hell. By intuition #1, each instance of the minor pain can be outweighed by some amount of happiness. Hence, enough happiness can outweigh the day in hell, contrary to intuition #3.
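To make the conflict concrete, here's a toy numeric sketch (my own illustration; the specific numbers are arbitrary placeholders) that treats the three intuitions as claims about real-valued goodness and badness:

    # Toy model: represent (dis)value as real numbers. All figures are arbitrary.
    minor_pain = -1.0             # badness of one extremely minor pain
    offsetting_happiness = 2.0    # intuition 1: this much happiness outweighs one minor pain
    K = 10**9                     # intuition 2: a day in hell is only finitely (K times) worse

    day_in_hell = K * minor_pain                   # badness of the day in hell
    enough_happiness = K * offsetting_happiness    # outweighs K copies of the minor pain

    # The total is positive: the happiness outweighs the day in hell,
    # contradicting intuition 3.
    assert day_in_hell + enough_happiness > 0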

NU theories differ on which intuition they exclude

  1. Happiness can outweigh small pains: Discarding this intuition leads to pure negative utilitarianism (NU), because it implies that happiness can't outweigh suffering.
  2. Finitude of pains: Discarding this intuition yields lexical-threshold NU, since some forms of suffering are then infinitely (lexically) worse than others.
  3. A day in hell could not be outweighed: Discarding this intuition yields negative-leaning utilitarianism (NLU), since according to NLU, a day in hell could in principle be outweighed by a huge enough amount of happiness.

Which intuition do I reject?

  1. Happiness can outweigh small pains: It seems like small pains are fine. That said, sometimes our thinking is distorted: when we can't have something we want, we feel bad, and our desire causes suffering. If I didn't exist, I wouldn't really mind staying that way rather than popping into existence, experiencing some happy moments, and then popping back out. It still does seem like I'd rather have good moments with my friends than not exist, but the desire is actually quite weak, and it may be biased by the fact that I already exist and so am tempted by those good moments.
  2. Finitude of pains: Consider the pain of burning. Construct a sequence of experiences starting from 1 day in hell at, say, 1000°C, to 10 days at 999°C, to 100 days at 998°C, and so on down to 10^950 days at 50°C (which is still an uncomfortably hot temperature); see the sketch after this list. It seems as though each next step in that chain is overall worse than the previous step. On the other hand, perhaps a brain transitions to fundamentally different dynamics at some point (or several points) along the way, and those qualitatively different dynamics at lower temperatures might be seen as lexically less bad than the dynamics at higher temperatures.
  3. A day in hell could not be outweighed: The obvious reply to this intuition is "scope neglect!". Maybe so. But if you actually asked me what it would take to outweigh a day in hell, I would say there's nothing that could compensate, and this feeling doesn't go away no matter how much I think about the question.
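Here's the burning sequence from intuition #2 spelled out as a quick sketch (the 1000°C starting point and the factor-of-10-per-degree tradeoff are just the illustrative numbers from above):

    # Each 1°C drop in temperature is traded for a 10x longer duration.
    def days_at(temp_c: int) -> int:
        return 10 ** (1000 - temp_c)

    assert days_at(1000) == 1         # 1 day in hell
    assert days_at(999) == 10         # 10 days, slightly cooler
    assert days_at(50) == 10**950     # an immense stretch of merely uncomfortable heat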

All told, I don't know which intuition to reject. Rejecting intuition #1 can avoid this particular conflict, but the remaining intuitions would still be vulnerable to the "Torture vs. Dust Specks" dilemma, which directly pits intuition #2 against intuition #3. In other words, the torture-vs.-dust-specks thought experiment makes it plausible that the real problem lies with either intuition #2 or intuition #3, not #1 per se. (Of course, a Buddhist or NU might reject intuition #1 anyway even if doing so doesn't solve the fundamental problem.)

Continuum fallacy

Above I mentioned the "sequence argument" for the finitude of pains: we could imagine constructing a sequence of painful situations from something very bad to something relatively mild, where each element of the sequence differs only slightly from its neighbors (such as by 1°C or even 0.001°C in temperature), making it unclear whether there's a lexical difference in badness at any specific step. It could be alleged that this argument is a form of the sorites paradox or continuum fallacy: "The fallacy is the argument that two states or conditions cannot be considered distinct (or do not exist at all) because between them there exists a continuum of states." It's true that "unbearable pain" and "barely noticeable pain" are distinct, even if there may be a relatively continuous sequence of intermediate states between them. The moral question is then just whether we consider "unbearable pain" to be lexically worse than "barely noticeable pain" or not.

If we precisify the concept of "unbearable pain", we may find that bearable pain transitions to unbearable pain sharply at some point. For example, suppose we offer a person a large reward in exchange for enduring a minute of exposure to water at temperature T degrees Celsius. At any point during the minute of exposure, the person can press a button to make the pain stop immediately and forfeit the reward. When T is, say, 40°C, the person will presumably make it through the whole minute. When T is, say, 150°C, the person will probably not make it through the whole minute, unless the person either has an abnormal nervous system or is extremely well mentally trained, along the lines of monks who self-immolate as a form of protest.

Suppose we have a deterministic simulation of the person undergoing this hot-water experiment. We could run the simulation for all integer values of T between 40 and 150 (although doing so would be extremely unethical!). We would then find some precise threshold for T at which the person switched from not pressing the button to pressing the button. Due to the complexity of the neural and physiological dynamics involved in enduring hot water, it's possible this wouldn't even be a single threshold. For example, maybe the person would press the button only for T = 55, T = 57, T = 58, and T > 60. Whether the button was pressed in a given run would depend on the exact dynamics of how the simulation unfolded. But for any given set of initial conditions, there would be a precise answer as to whether the run was found to be "unbearable" or not by this particular measure.
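As a purely hypothetical illustration of such a sweep, here's a toy model. The "pain" and "resolve" dynamics below are invented for this sketch and stand in for the real neural and physiological complexity; the point is only that each deterministic run gives a definite answer for each T, and that the set of "unbearable" temperatures needn't be a single clean cutoff:

    import random

    def endures_full_minute(temp_c: int) -> bool:
        """Deterministic toy run: True if the button is never pressed."""
        rng = random.Random(temp_c)      # fixed seed per T, so each run is reproducible
        resolve = 100.0                  # stand-in for the person's willpower
        for _second in range(60):
            # Invented pain model: grows with temperature, with pseudorandom
            # fluctuation standing in for moment-to-moment physiology.
            pain = max(0.0, temp_c - 40) * rng.uniform(0.8, 1.2)
            resolve -= pain / 30.0
            if resolve <= 0:
                return False             # button pressed: this run counts as "unbearable"
        return True

    unbearable = [t for t in range(40, 151) if not endures_full_minute(t)]
    print(unbearable)  # needn't be a contiguous block near the boundary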

Of course, one could vary this experiment in many ways. How big exactly is the reward? How long does the period of pain last? Which person is undergoing the experiment, during what mood, in what kind of environment? What if cold temperatures were used instead of hot? Or electric shocks? And so on. Each of these variations would produce somewhat different unbearableness thresholds, which validates the intuition that "unbearableness" as a broad concept is vague—i.e., the exact boundary between the extreme ends is unclear.

One possible view is that because "unbearableness" is vague, there's some extremely limited sense in which even a dust speck in the eye is unbearable. (This is essentially the same as my argument for panpsychism, namely that anything can be seen as conscious to at least an extremely limited degree (Muehlhauser and Tomasik 2016).) So if one wants to reduce unbearable suffering, a dust speck in the eye should still matter to a minuscule degree, and 3↑↑↑3 of them should collectively matter far more than 50 years of torture.

If you do want to say that torture is lexically worse than a dust speck, how do you pick the exact threshold where an experience starts being lexically worse? For example, how do you decide that the threshold is at water temperature 81.3°C for person X in situation Y given environmental conditions Z, rather than 81.2°C? The exact threshold you pick would depend on what "unbearableness" metric you're using. I guess the answer is that a lexicality supporter would ultimately just have to pick something, to avoid paralysis by indecision.

Contradictions are expected

My brain is a jumble of different impulses and subroutines, so it's unsurprising that my intuitions don't all cohere. The famous quote from Walt Whitman is:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes [of brain subprocesses].)

David Eagleman makes the same point in Incognito: The Secret Lives of the Brain, Ch. 5: "The Brain Is a Team of Rivals".

Practical implications aren't much affected by the choice

The ratio of expected pleasure to expected pain in humanity's future, as judged by a typical traditional (non-suffering-focused) utilitarian, is perhaps on the order of 1:1 to 10:1 and certainly isn't higher than 100:1 or 1000:1. Even if I adopted the NLU position, I would require more than 1,000,000 times as much happiness to outweigh suffering (where magnitudes of "happiness" and "suffering" are defined by the judgments of a typical traditional utilitarian; this specification is necessary because there's no objective answer to how much happiness or suffering a given organism experiences). So regardless of which of these theoretical NU stances I adopt, the practical conclusions should be roughly comparable.
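As a sanity check on that claim, here's the arithmetic as a quick sketch (the ratios are the rough figures above; "NLU me" weights suffering 1,000,000-fold):

    # Plausible happiness:suffering ratios for the future, per a traditional utilitarian:
    plausible_ratios = [1, 10, 100, 1000]
    nlu_weight = 1_000_000    # the >1,000,000x weight on suffering I'd require even as an NLU

    for ratio in plausible_ratios:
        net = ratio - nlu_weight      # NLU-weighted value per unit of suffering
        print(ratio, net)             # negative in every case

    # So even NLU judges the expected future net bad, just as pure NU and
    # lexical-threshold NU do, and the practical upshot is the same.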

Likewise, even if I were a pure NU rather than a lexical-threshold NU, I might still care vastly more about extreme suffering than about mild suffering, and since the amount of extreme suffering in the world is not astronomically smaller than the amount of mild suffering, a focus on preventing extreme suffering might still be most important.

Acknowledgements

MichaelExe very helpfully pointed out a major flaw in the original version of this piece. This thread contains more discussion of that flaw.

Original version of this piece

"Am I NLU or NU?" on Felicifia.

Footnotes

  1. Larry Temkin responds to this kind of argument by rejecting transitivity, but I see this solution as too absurd to consider as a possibility.