Dumb dichotomies in ethics, part 2: instrumental vs. intrinsic values
"Contingent values" to the rescue
Reader discretion advised. The text below contains dry philosophy that may induce boredom if you are not interested in formal ethics. Continue at your own risk.
With my only credential being a six-course philosophy minor, I was entirely unqualified to write Dumb Dichotomies in Ethics, part 1. So, of course, I’m going to do the same thing again. This time, I will recognize explicitly up front that actual, real-life philosophers have made similar, albeit better-developed, arguments than my own (see, for example, “Two Distinctions in Goodness”).
That said, I would like to do my part in partially dissolving the dichotomy between intrinsic or ‘final’ values and instrumental values. At a basic level, this distinction is often useful and appropriate. For instance, it is helpful to recognize that money doesn’t intrinsically matter to most people, although it can be very instrumentally useful for promoting the more fundamental values of pleasure, reducing suffering, preference satisfaction, or dignity (depending on your preferred ethical system).
Some things don’t fit
The issue is that certain values don’t fit well into the intrinsic/instrumental dichotomy. I think an example will illustrate best. As I’ve mentioned before, I think that some version of utilitarianism is probably correct. Like other utilitarians, though, I have a few pesky, semi-conflicting intuitions that won’t seem to go away.
Among these is the belief that, all else equal, a more equitable “distribution” of utility (or net happiness) is better. If you grant, as I do, that there exists a single net amount of utility shared among all conscious beings in the universe, it seems intrinsically better that this be distributed evenly across those beings. This isn’t an original point—it’s the reason many find the utility monster critique of utilitarianism compelling, which goes something like this (source):
A hypothetical being…the utility monster, receives much more utility from each unit of a resource they consume than anyone else does. For instance, eating a cookie might bring only one unit of pleasure to an ordinary person but could bring 100 units of pleasure to a utility monster…If the utility monster existed, it would justify the mistreatment and perhaps annihilation of everyone else, according to the mandates of utilitarianism, because, for the utility monster, the pleasure they receive outweighs the suffering they may cause.
Unlike some, though, I don’t conclude from this that utilitarianism is wrong—that is, that pleasure/happiness and lack of suffering (call this “utility” for brevity) is not the sole thing that intrinsically matters. I do think utility is the only “thing” that matters, but I also think that a more equitable distribution of this intrinsic value is itself intrinsically good.
Does that mean that I intrinsically value egalitarianism/equity/equality? Not exactly. I only value these things with respect to some more fundamental intrinsic value, namely utility. I think that equitable distribution of wealth and political equality, for instance, are only good insofar as they contribute to equitable distribution of utility. In the real world, I think it is indeed the case that more wealth equality and political equality would be better, but if you managed to convince me that this would have no effect on the distribution of happiness, I would bite the bullet and say that these things don’t matter.
So, I don’t think it’s fair to call “equitable distribution” an intrinsic value1, since it depends entirely on the intrinsic value of the thing that is being distributed. But neither would it be fair to call it an instrumental value or goal in the same way as material wealth or political freedom. It just doesn’t fit neatly into the simple bifurcated framework.
I think the solution here is to recognize that intrinsic values break down into “contingent” and “absolute” subtypes, where the former depend on the latter. Here is something of a working definition:
Contingent values are those things that intrinsically matter only with respect to some ‘underlying’ intrinsic value, in that they promote the underlying value in a way that cannot, even in principle, be achieved any other way.
To flesh this out, here are some more examples.
Jane intrinsically values satisfying people’s preferences, even if doing so does not increase net utility. Admittedly, it’s hard to put myself in the shoes of an ethical system I don’t believe in, but I can imagine Jane holding the contingent value of “people wanting things that I also want.”
Such a value wouldn’t be “absolutely intrinsic,” because she doesn’t care about what others want except insofar as it pertains to how good it is to satisfy others’ preferences. It’s not instrumental either, as getting other people to want good things isn’t merely a means to more fully satisfying those wants (although it could be this too, if making everyone’s preferences the same allows more total preferences to be satisfied).
Kyle intrinsically cares about justice. For example, he thinks that retribution can be good even if it does not elevate anyone’s wellbeing. He might hold the contingent value of “no one doing things worthy of punishment.” In other words, Kyle would prefer a perfectly-just world in which no one is punished to another perfectly-just world in which some people are punished to precisely the right degree, entirely because the former is a better “form” of justice.2
“People not doing punishable things” might also be one of Kyle’s intrinsic values, but it doesn’t have to be. It would be coherent (if a little odd) for him to not care about this if he became convinced that justice didn’t matter.
Julia intrinsically values believing true things. She would rather believe the truth about something even if doing so made her upset and did not enable her to do anything about the situation. Julia might have the contingent value of “equitable distribution of truth-holding,” much as I value equitable distribution of utility. That is, she’d rather everyone believe 70% true things and 30% false things than have all the false beliefs concentrated among a small subset of the population.
Is this pedantic or trivial? Maybe. I also don’t think it’s terribly important. That said, trying to construct (or discover, if you’re a moral realist) an ethical framework using only the intrinsic–instrumental dichotomy is a bit like building a fire with only twigs and logs3; perhaps it can be done, but an important tool seems to be missing.
Though, on second thought, maybe we can get around all this by calling “equitable distribution of utility” an intrinsic value alongside utility itself?
2. Perhaps a world in which people do immoral things worthy of punishment cannot be perfectly just. Even if so, I don’t think it changes that Kyle’s value is contingent.

3. Was staring at my fireplace while trying to think of a good metaphor.