First published 5 July 2017. Last updated 7 September 2017.
This post clarifies some of my most important philosophical beliefs, which I take for granted in much of my other writing.
I am a consequentialist and a utilitarian. I’ve written about my flavor of utilitarianism here. In brief, that flavor is {hedonic, non-negative, total, act} as opposed to, for example, {preference, negative, average, rule}. In briefer, I think Bentham pretty much had it right. Utilitarianism beats other ethical systems because of its simplicity, in virtue of which it’s sort of correct by definition, so I try to keep it that way. Most of the complexity is in the application, not the theory, and many post-Bentham strains of utilitarianism confuse application with theory. This simple view doesn’t mean that utilitarianism is trivial; it just means that it doesn’t do what people usually expect ethical systems to do. It tells us to do the calculus (to add up all the good and bad impacts to get the net impact) for each possible action, and to take the action which has the best net impact.
What does this mean for how we ought to actually live on a daily basis? My thinking on this question borrows several points from Peter Singer’s Famine, Affluence, and Morality:
- There’s no such thing as supererogation (“moral extra credit”).
- No amount of true selfishness (as opposed to, for example, self-care which makes one more effective in the long run) is ethically justified.
- Traditional moral categories like “right”, “wrong”, “okay”, and “above and beyond” are misguided. Actions should be evaluated on an ethical spectrum, relative to the rest of a choice set. Suppose you have three actions available to you: action A decreases global utility by 1, action B increases it by 1, and action C increases it by 10. Your choice set is {A, B, C} and their respective impacts on global utility are {-1, 1, 10}. Actions A and B are both unethical here, even though B seems to improve the world slightly, because it’s wrong to abstain from C as long as C is available (see the next bullet). In this context, A and B are more similar to each other than B and C are. There’s nothing special about the 0 line that makes A “wrong” and B and C “right” or “okay”. The only right thing at any time is to take an action with the best expected impact on global utility; all other actions are wrong, and some are more wrong than others. So it’s silly to say that a particular action has a particular ethical valence across the board, independent of the rest of the choice set. If C is not available, then B is right. (There’s a small sketch of this after the list.)
- There’s no intrinsic ethical difference between, for example, doing a bad thing and failing to do an equally good thing.
- The proximity of an event to you does not intrinsically affect the event’s ethical importance.
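To make the choice-set point concrete, here’s a minimal sketch of the calculus in Python. The action names and utility numbers are just the illustrative ones from the bullet above, and the single numbers stand in for expected values that would be far harder to estimate in practice.

```python
# Rank a choice set by expected impact on global utility.
# The numbers are the illustrative ones from the example above;
# in practice each value would be an expectation over outcomes.
choice_set = {"A": -1, "B": 1, "C": 10}

def rank_actions(choices):
    """Return actions ordered from best to worst expected impact."""
    return sorted(choices, key=choices.get, reverse=True)

best = rank_actions(choice_set)[0]            # "C" while C is available
without_c = {a: u for a, u in choice_set.items() if a != "C"}
best_without_c = rank_actions(without_c)[0]   # "B" becomes the right action
```

Note that nothing in the ranking cares where the 0 line falls; only the ordering within the currently available set matters.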
These observations leave us with an exceptionally demanding notion of what we’re ethically obligated to do. That’s something I’m still wrestling with personally, but its personal difficulty does not impugn its philosophical correctness.
Deciding how to actually execute utilitarian calculus is often a question for philosophy of mind, not for utilitarianism. Utilitarianism did its entire job by telling us to do the calculus in the first place, rather than by giving us a sketchy list of contrived rules to follow. Philosophy of mind is relevant here because utilitarianism only cares about actions which affect the well-being (utility) of “conscious” beings (whatever “conscious” means!). So it’s up to philosophy of mind (and cognitive science, etc.) to tell us which beings we should care about ethically, and to what degree, and how those beings are impacted by certain actions. Of course, forecasting how an action will affect beings is facilitated by having a generally accurate world model. To that end, I am a naturalist.
On meta-ethics, I am a moral realist. There’s nothing mystical about this. I don’t think morality is written into the fabric of the universe, thereby “really existing” in some Platonic sense. But I think it’s sensible to reason about moral propositions objectively (if this word scares you, sub in “impersonally”) in a way that’s not altogether different from how we reason about scientific or even mathematical propositions. I strongly reject the notion that morality is just a glorified set of personal preferences, and that comparing the moralities of individuals, cultures, or eras is necessarily nonsensical.
My beliefs in the philosophy of mind are messier than my beliefs in ethics. I guess I’m basically a computationalist, and I believe pretty strongly in substrate independence. I’m not sure how we should treat fuzzy or nested minds. I’m inclined to think that consciousness is a real, objective property of physical systems, “out there” to be analyzed.
As long as I find a theory convincing, I try to take its implications literally and seriously, rather than clumsily revising the theory to get more comfortable results. Therefore, I believe it’s important to reduce the suffering of wild animals, including insects and other simple life forms. I also give some credence to seemingly crazy ideas such as Boltzmann brains and panpsychism and infinite-universe subjective immortality.