The gang from Episode 10 is back, with yet another Consistently Candid x Pigeon Hour crossover
As Sarah from Consistently Candid describes:
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living, plus a significant minority enduring extreme suffering, could be considered ethically superior to a world with a smaller population where everyone has an extremely high quality of life." Listen to the end to hear my uninformed opinion on who's right.
Listen to Consistently Candid on Apple Podcasts, Spotify, or via RSS
Follow Max on Twitter and check out his blog
Transcript
Slightly cleaned up with the help of AI
SARAH: Welcome back to Consistently Candid. This time we're just going to do a fun episode, which is the same format as my first ever episode. So I have Max Alexander and Aaron Bergman, and they are going to argue about philosophy while I largely nod along because I don't fully understand what they're talking about. But they're going to explain it so beautifully that by the end I will understand. That's the aim. So we are going to discuss the repugnant conclusion, which they have a slight disagreement about. Maybe I should try and define the repugnant conclusion. As I understand it, we're going to discuss the very repugnant conclusion as well. Okay. Yeah.
AARON: I don't even know if we were going to discuss the repugnant conclusion.
MAX: No, we're okay with that one. It's fantastic.
SARAH: But that's the only one I know what it is. Okay. Well, we should first define the repugnant conclusion and the very repugnant conclusion and explain how they're different. That seems like a good place to start. So my understanding of the repugnant conclusion is: is it something like, if you have enough people living barely net positive lives, that would be better than having a smaller number of people who are living very happy lives?
MAX: Well, and relevantly, this is assuming total utilitarianism, which is that you just take the total amount of happiness and that's how you say whether a world is good or something. This is a standard way the repugnant conclusion comes up.
SARAH: Okay. And so then the very repugnant conclusion. Does one of you want to quickly explain what the very repugnant conclusion is?
AARON: Well, I just asked Claude. Do you want me to read what Claude says? I know I go overboard with Claude, but this is a good use case, I think. So, the repugnant conclusion is a philosophical idea in population ethics. It suggests that a very large population with lives barely worth living could be considered better than a smaller population with a very high quality of life. Okay. And then the very repugnant conclusion takes this a step further. It proposes that a world with an enormous population living lives barely worth living, plus some number of people living in extreme suffering, could be considered better than a world with a smaller population all living excellent lives. So, yeah, actually, that wasn't incredibly clear. I think the main image to get in your head is a huge number of mildly happy lizards, and also a lot of people who are in extreme suffering. And, well, it doesn't have to be lizards, because some people...
MAX: Think lizards don't matter. It could just be like people. Still.
AARON: Yeah.
SARAH: Okay. So the crux of it is: can massive amounts of suffering be offset by massive amounts of value, or whatever the opposite of suffering is?
AARON: I think it's slightly clearer to frame it as justifying or outweighing the suffering, rather than the other way around: can the suffering be outweighed, such that with other people living good lives, the suffering is worthwhile? I feel that's a more intuitive framing, at least in my mind. Maybe it's not for everybody. I don't actually think this is worth talking about much, or probably at all, but I do think the repugnant conclusion is true in most actual ways it can be applied in the real world. I don't think it is strictly logically implied by total utilitarianism, for reasons similar to the thing I'm going to argue, but who cares, because it doesn't have many practical implications. I think it's true, but it's not strictly logically implied. I don't give a shit.
MAX: Can you say more about that? I think, before you...
SARAH: Before we get into that, can you both just really quickly state your positions for the record?
MAX: Yes, I just want to go first. I think my position is in some sense weaker than Aaron's, because I can just say I think Aaron is mistaken about his beliefs, or the implications of his beliefs; I don't have to defend a position of my own. And for reasons we can get into, I think there are ways in which Aaron is maybe technically right, but I don't think he endorses the things you have to accept for that to be true. My position is that he does not accept all the things that would imply what he's trying to argue for.
SARAH: Or what specifically is Aaron wrong about?
MAX: Oh, yeah. That total utilitarianism does not imply the very repugnant conclusion or the repugnant conclusion, as he's just said, as well.
AARON: Okay, look, for what it's worth, the standard position is Max's side. The standard, default, I guess consensus view is that, yeah, total utilitarianism implies the very repugnant conclusion. And then I...
MAX: It's not as satisfying when I prove you wrong.
AARON: I agree.
MAX: Everyone already thought that.
AARON: Yes, and luckily, this was pretty.
SARAH: Similar to our last debate. I feel like it was the same last time: you had the consensus position, and then Aaron was just doing something rogue.
AARON: It's not my fault. I have correct takes.
SARAH: No, I like it. It's fun. Okay. Sorry. Carry on.
AARON: Okay. Yeah. So, my thesis is that total utilitarianism does not imply that for any amount of suffering, there exists a corresponding amount of well-being that morally justifies that suffering. And that's the...
MAX: I will say, I think that is meaningfully different than what you've said.
AARON: Oh, shoot. Okay. What have I said?
MAX: That total utilitarianism does not imply the repugnant conclusion. I think what you're saying...
AARON: Yeah, I was hesitant to even say that. A repugnant conclusion. Because it's like...
MAX: Yeah, we can ignore that.
AARON: Weaker. Let's come back to that. I think it's worth coming back to, but not starting with, because I do think that, for the most part, the repugnant conclusion is basically true and largely follows from logic or the meaning of words. It doesn't necessarily follow that strongly from total utilitarianism itself, but it still does in a slightly weaker sense. Okay, do you want me to maybe... Max, do you want to start? Or I can sort of give my argument.
MAX: I think it's probably best if you give your argument first.
AARON: Sarah, just jump in whenever and yell at me.
SARAH: Yeah, no, there will be no yelling.
AARON: Okay. So, sadly, I didn't do a ton of preparation for this episode, so I'm going to apologize to our dear listeners. Instead of going in a sequence, I'll give the most important, fundamental take, which is... I'll just quote from my blog post, because I wrote it better than I could say it: "We have no affirmative reason to believe that every conceivable amount of suffering or pleasure is ethically congruent to some real finite number."
What I mean by that, in more chill terms, is that there's an assumption, and I claim it's an unjustified assumption, that states of the world, as well as individual parts of them, or events, can be modeled by the number line. So you might say negative two is a paper cut, negative a million is a week of torture, and a billion is some very high level of well-being for some number of person-hours or something.
Taking a step back, first of all, I think this is totally plausible. It's not obviously wrong, but it's also not justified anywhere. It's not obviously right either. And so this is the fundamental reason I think we should be like, "Wait, there's this huge unjustified, often silent assumption being made here that is not obviously wrong, but also not obviously right." And then maybe I can get into the rest of what I say.
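To make the assumption Aaron is questioning concrete, here is a minimal sketch of the standard number-line model, in Python. Everything in it (the specific values, the doubling search) is an illustrative placeholder rather than anything from the episode or the blog post; the point is just that once every experience maps to a finite real number and a world's value is the sum, any fixed amount of suffering can be outweighed by enough barely-good lives.

```python
# A minimal sketch of the "welfare as finite real numbers" assumption.
# All numbers here are illustrative placeholders, not claims about real moral weights.

def world_value(suffering_total, n_good_lives, value_per_good_life):
    # Total utilitarianism under the number-line model: world value is just a sum.
    return suffering_total + n_good_lives * value_per_good_life

torture = -1_000_000_000   # one stipulated very bad event, as a finite negative number
tiny_good = 0.1            # a life "barely worth living"

# The Archimedean property of the reals: for ANY finite negative number,
# some finite number of tiny positives pushes the total past zero.
n = 1
while world_value(torture, n, tiny_good) <= 0:
    n *= 2  # doubling just to find a crossover quickly

print(f"Under this model, {n:,} barely-good lives outweigh the torture")
```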
MAX: I'm already willing to push back on this, though probably not in the ways you'd like. But I don't know if you'd like to finish your argument first or if I should.
AARON: Okay, yeah, sure. So, first of all, I think what I just said opens up the possibility of both the repugnant and very repugnant conclusions not, in fact, following. Because you can imagine that you take some large amount of suffering, and in the default, simplistic number-line case, maybe that's negative a billion. And then you can just keep adding up little small numbers, real numbers, 0.1 or whatever, who cares, and eventually you're going to get past zero. But if my previous claim is true, then you can get to a situation where, no, in fact... I think the simple, semi-naive way of putting this, which I don't really think checks out in a formal mathematical sense, would be: there can be amounts of disutility or suffering that are worth "negative infinity points," negative infinity utils. And I'm giving air quotes because I think negative infinity is a way of shoehorning something closer to the truth into the existing paradigm of real numbers.

So now that we have, I claim, established (and I know that's contentious) the possibility of these conclusions being wrong, I want to appeal to people's intuition. At least my intuition, although intuitions differ. A couple of intuition pumps or examples. I think the most reliable way to form correct beliefs about something like ethics is to try to simulate skin in the game as much as possible. Even if you are a philosopher in an ivory tower, there are better and worse ways of thinking that get you closer to really understanding, and in some sense living in, the world you're describing, that make you feel closer and more open to the actual consequences of your ideas. It's very easy to have this very high, abstract view that, oh, there are really large negative numbers, and if you add enough positive numbers the total gets past zero.

So I want to say: okay, let's pretend there's a universe of, I don't know, a billion people or whatever, and you are going to live every single one of those lives. And one of those lives is, let's just go with a normal human lifespan, or we can round it to 100 years for convenience, as philosophers do: 100 years of absolutely brutal torture. It doesn't have to be realistic, just conceivable, something worse than anything that's ever existed on earth. And then there's the question: is there any amount of well-being that you could experience in these other lives such that you would accept living this life as well? And for me, the answer is super obviously no. I can't claim what it is for other people; this is, in some sense, trying to elicit a preference. But my sense is that most people will say no, there's nothing you can offer that will get me to accept 100 years of brutal torture. And then I think there's a plausible view that, oh, this is just being irrational. And I don't think that's necessarily wrong.
But I do think that the onus, the burden of proof, then goes to the other side to say that, actually, in the skin-in-the-game example, when push comes to shove, the thing you actually want to do is in fact irrational. And I certainly haven't been convinced; I haven't heard any very convincing argument that refusing is in fact the irrational thing to do. Long story short, I think this suggests you're getting something close to that negative-infinity-versus-arbitrarily-large-positive-numbers case. Because as a matter of fact, if you were offered this credibly, right now, at least if it was me, I would not in fact do that thing. And so I think this is very strong evidence that, in fact... sorry, I'm premising all of this,
SARAH: Yeah, I think that's pretty compelling.
MAX: Yeah.
AARON: Yeah, let's go.
SARAH: Well, Max, destroy him.
MAX: Well, I don't know if it'll be as compelling, probably. I took some notes while you were talking. There are a few things we'll go through in order of least interesting, saving the best for last. But it's interesting that you just now distinguished between creating a universe where you have to live every life, most of which are pleasurable but one of which involves the suffering, whereas in your blog post you talk about undergoing a week of terrible torture for arbitrarily many years of really great experiences. Mostly because, and maybe other people disagree with this intuition, the version you used just now, where you get tortured and then get to live other lives, weakens your case intuitively, just because there's a clear distinction between entities. It really feels quite bad if you're asked, "Would you like a million dollars, or would you like me to stab you, so you have to go through rehab for months?" That feels really quite bad. But if you spread it out: would you like to live one life where you get a million dollars, and another where... I don't know if this example I'm giving actually...
AARON: I actually... Okay, it's... Oh, sorry. Keep going.
MAX: Oh, I was just gonna say, I think I was explaining my intuition poorly. I'm trying to get at this idea that by distinguishing between lives, it becomes more appealing. But I'm not sure if everyone else feels that way.
AARON: Yeah, I think that's interesting. That is your intuition. I think what I said before and what I say in the blog posts are supposed to be the same argument, even though I didn't say them explicitly.
MAX: I agree. I just think intuition is...
AARON: And I think you can play with this and get more or less, or different degrees of, intuition. One example: you're going to roll a trillion-sided die, and only if you get the bad roll do you live that terrible life. And I think that actually pushes intuitions towards the normal view, Max's point of view, because it feels very natural to round one in a trillion to zero. And I have that intuition as well. But you can reformulate this as... obviously, humans don't, in fact, live multiple lives at the same time, but you can try to imagine what that would be like, and see whether it's more compelling. And I think the most compelling version is the lives in serial. So you actually have to live through whatever it is, a week, a month... I mean, the longer, the more compelling. And I think that's actually fair in some sense, to up the amount of time, because if you go too short, people also sort of intuitively round it to zero. But yeah, let's just go with one human lifetime of brutal torture. If you have to undergo that first, or fiftieth for that matter, I think it's very counterintuitive to say that you would accept it, even though you get to live all these other good lives before and after.
MAX: Yeah, I guess. Well, two things I'll say. One is that I'm not a total utilitarian. I don't know if I have a view I lean very strongly towards, but, in like an annoying way, lots of people who are knowledgeable about philosophy never commit to anything, so I will be a total utilitarian here for the sake of argument. The other thing I'll say is, I think what you've just said helps me get at maybe what I'm pointing out, which is something like: maybe we have this intuition that suffering can sometimes muddle other things. Like, if I were to traumatize you, it's really terrible, and it impacts the rest of your life. Intuitively, if you could choose between a life with lower peaks of pleasure where you never get traumatized by a terrible event, and one with higher peaks where you do get traumatized, I think our intuition would be: avoid the trauma, just because being traumatized has these knock-on effects. Analytically, maybe you could say, oh, I've set up the example so that doesn't matter, but we still feel it does. And so it's evolutionary risk aversion, which is well placed, that distinguishes between the two hypotheticals you set up. If you're living each life in sequence, you just live one really terrible life, and then in the next one, you've forgotten it.
AARON: I'm not sure that the memory thing impacts me very much. I'm perfectly happy to throw out memory and say, oh, no, we're just talking about the experience. Even if memory of some terrible event would make the other good lives worse, I'm willing to say, no, actually, let's throw that aside or pretend it doesn't matter.
MAX: And I'm more pointing at why I think people would respond differently to these two hypotheticals.
AARON: Yeah, I'm sorry, I know I'm jumping in over you. I think one thing is just that, for people who live in Western countries in the 21st century, probably upwards of 90% of what is legitimately called trauma is not even close to the worst of experiences. And so it is totally plausible; I am perfectly willing to say that some amount of suffering actually can be outweighed. The thing I'm not willing to say is that any amount of suffering, once you fix it, can then be outweighed. I hope this is somewhat clear.
MAX: Yeah, so to move away from this side thing, let's go back to the actual important part of your argument. I think the number line stuff doesn't really get you what you want, per se; you have to take additional things as well. And the reason is that it actually doesn't matter whether or not suffering and pleasure are modeled by the real numbers. That's not really important. If it turns out you're more of a realist and they're modeled by the natural numbers, okay.
AARON: Yeah. The natural numbers have the same relevant property, where if you fix any negative number, you can add any positive number enough times to outweigh it. But yes, there are plenty of different mathematical spaces that have the same relevant property. But I'm also quite confident that there are mathematical... I don't know if "spaces" is the right word, or ontologies or something, I guess just models, where the addition-and-subtraction thing doesn't always end up getting you back past zero, or whatever the equivalent of zero is. Yeah.
MAX: So I think, yes, that is true. But relevantly, it's not necessarily helpful for you. The key thing here is that total utilitarianism gets you into these really weird situations because it's based on trade-offs between units. You have some unit, and it doesn't necessarily have to be experience, it could be lives, it could be whatever, but there is some kind of unit. And these can be traded off against each other, or you have to have some method of totaling them; otherwise it's not total utilitarianism, basically.
AARON: So I know what you're saying. This is, I think, tentatively... I don't even want to say difficult, but it's moderately more difficult to explain in a technical sense than what I have explained so far. And that probably does lend some credibility to my being wrong, if there's no super pithy little sentence I can say that tosses it off. So, units... I haven't really thought about that.
MAX: Maybe I can give you an example and you can say, yeah, sure, let's see. So basically, I think it has to be the case that... well, okay, it doesn't have to be; I should be more technical. But lots of versions of total utilitarianism, even ones you probably want to have, imply some form of the repugnant conclusion. Sorry, the very repugnant conclusion. So if it's the case that you can compare and total up things, that it can ever be the case that you'd prefer a world where you get one paper cut and also an ice cream cone over a world where neither of those things happens (so you get some amount of suffering and also some amount of pleasure), then you will get some version of the very repugnant conclusion. Maybe it doesn't feel very repugnant because it's just a lot of people getting paper cuts, but you will get some version of this.
AARON: Yeah, I don't think that's true. And that is because a corollary of what I claim is that paper cuts don't add up to torture either. There's no finite number of paper cuts that is morally equivalent to an event that can't be overcome by positive welfare. And also, there's a single counterexample, which is: let's just stipulate that one year of brutal torture is at negative infinity. That's actually a fine model if you want to keep the real numbers and just add one flavor of infinity. I'm pretty sure this actually does work, but I'm not certain; I haven't thought about this a lot. Okay, you might say that's not really plausible because things can always get worse, but let's set aside the other sizes of infinity and go with one size of infinity for now. So: one year of brutal torture is worth negative infinity, and anything less bad than that can be modeled by some extremely large negative number. But once you hit this threshold, you get to the infinite point. This is just a counterexample to the claim, as long as you agree that what I've just described is in fact a form of total utilitarianism. That's just a semantic point. But yeah... you want to object to that?
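A rough way to formalize the stipulation Aaron just made, using floating-point infinity as a stand-in for his "one flavor of infinity." This is only an editorial sketch of the counterexample as described, with made-up magnitudes:

```python
import math

TORTURE_YEAR = -math.inf   # Aaron's stipulation: past the threshold, disvalue is negative infinity
ORDINARY_BAD = -1e12       # anything less bad: a very large but finite negative number
GOOD_LIFE = 100.0          # an ordinary good life, some finite positive number

# Totals still "add up", but no finite pile of good lives cancels the -inf term:
print(TORTURE_YEAR + GOOD_LIFE * 10**15)        # -inf
print(TORTURE_YEAR + GOOD_LIFE * 10**15 > 0)    # False, and stays False for any finite multiplier

# ...whereas sub-threshold suffering, however severe, can in principle be outweighed:
print(ORDINARY_BAD + GOOD_LIFE * 10**11 > 0)    # True
```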
MAX: I'm confused about what you're saying. Some versions of the very repugnant conclusion don't say the worst torture imaginable; they just say some amount of suffering. The idea is that if you have a situation with lots of barely-worth-living lives, it's iffy. I'd prefer to live in a world with a small number of very happy people. But okay, maybe I can buy this. But then you get into this thing where there's a very large number of people suffering. I just think about suffering differently. This finally gets at the idea that total utilitarianism has some kind of flaw we can't get rid of. Because just having a really large number of suffering people and then a really large number of people with lives barely worth living, it's just, I don't know, suffering. It's weird.
AARON: So my.
MAX: Sorry. If you think that pleasure and pain can be compared and offset, which is kind of the thing I said with the ice cream (you can get a paper cut and then get an ice cream cone, and it's preferable to a world where neither of those things happens), that is enough to get you some version of the very repugnant conclusion. And then I said that, and then you were like, I don't think that's the case. So I'm like, here it is: it's just trillions of people getting paper cuts, or whatever is right below the threshold point you were talking about where it goes to negative infinity.
AARON: Yeah. So actually, I think you might be right that I was mistaken about how the term "very repugnant conclusion" is used. The point I want to stand by is that total utilitarianism does not imply that any amount of suffering can be morally justified by positive welfare. It seems that different sources describe the very repugnant conclusion differently; at least one doesn't stipulate extreme suffering, or it just says "very negative welfare," which is up to interpretation. So, yeah, if you want to say that the very repugnant conclusion is just the claim that large amounts of negative welfare can be coupled with a huge number of beings experiencing positive welfare such that the world is net positive, then I think there are amounts of negative welfare each being could be experiencing such that that is true. So maybe I was initially mistaken about that.
MAX: Yeah, that's reasonable. Or we could say there's some subset of things you could put in the class of very repugnant situations that you accept, and some subset that you don't.
AARON: Yeah. I mean, not that anybody cares about a specific blog post, but this thing that I actually thought about for a while, and I didn't... Wait, did I type the words "very repugnant" anywhere in the post? No, I didn't actually. So that is my out. Or, my...
MAX: You didn't type repugnant.
AARON: I didn't type very repugnant. Did I type repugnant?
MAX: I don't think you typed.
AARON: No, I didn't do that either.
MAX: To be clear, the reason we got into this is because at some point, you said you didn't think total utilitarianism implies the very repugnant conclusion. And then you linked this blog post.
AARON: Yeah, I might.
MAX: Which I think. I don't think that's true.
AARON: Oh, okay. I think you might be right. I think I was just wrong about how it's used, or at least I was overconfident about how it was used. I'm sorry, Max. I hope you can...
MAX: It's okay.
AARON: Relying on you.
MAX: Well, I recently found out, again, that I have the philosophy degree and you don't. You only have econ.
AARON: I have a minor.
MAX: Oh, okay.
AARON: It counts.
MAX: I don't know, but then I outrank you.
AARON: Yeah, that's how we decide what's true in philosophy, huh?
MAX: Yeah, exactly.
AARON: Whoever has a PhD.
SARAH: For anyone listening: it's crazy that I have no credentials in philosophy, which you wouldn't be able to tell, because I've been contributing so wonderfully to this conversation and totally understand what's going on.
AARON: Typical, the men monopolizing the airtime. Sorry, Sarah.
SARAH: No, I feel like I definitely could have chipped in, but at every point, every time I thought I had something coherent to say, a minute later I realized it wasn't. So I've just been storing half thoughts in my brain, and I don't think any of them are worth saying out loud.
MAX: Well, I'd say many philosophers are like that. They just say them out loud. This is me degrading philosophy. So I think lots of them are silly.
SARAH: I'm confused about Aaron's thought experiment, where nobody would choose to endure a year of torture to have a certain number of positive experiences before or after. I understand that intuitively, but it also seems like it doesn't say anything about whether or not the conclusion is true. Even for the weaker thing, the repugnant conclusion: if you ask me to pick whether I want to live in the world with a small number of happy people versus the world with a large number of barely happy people, obviously my individual preference is going to be to live in the first world, because I want to be happy.
AARON: Oh yeah. But that's conditioning on you living in both worlds. That implicitly means you already won the lottery of living in the first world. Whereas, okay, this gets into anthropics and weird things involving waking up in boxes as Cinderella or something, some crazy stuff. But it's less likely that a being that considers herself Sarah will wake up in the first world because there are fewer beings.
MAX: Well, I actually think... okay, and maybe we should just do this later, but... oh, I see. I think even if you didn't stipulate that: say there'll be no Sarah in either world, or maybe there'll be a Sarah in one world and no Sarah in the other. If you ask, "which would you prefer to experience," where experience could mean you wake up tomorrow as George Washington or something, I think most people, even living all...
AARON: ...the lives on one end...
MAX: ...would pick the smaller number of really happy people. This is why the argument is supposed to be devastating for utilitarianism, because...
AARON: Yeah.
SARAH: My point is people's individual preferences don't really say anything about what's best for a collective, especially in a situation where everyone's individual preferences aren't being respected. But even if you looked at it in totality, it would still be better than the alternative which respected people's preferences. Do you see what I'm saying?
AARON: Yeah, I see what you're saying, and that's why I want to invoke the stipulation that in each case you are a being that gets to live all of the lives. And I want to claim, and I am claiming, that if you imagine living all of the lives in World A and World B, that actually is an accurate understanding of what the worlds mean, in some sense. This gets into questions like, is it even coherent to consider yourself a single being? Questions of personal identity, and whether personal identity is coherent across time. So that's one sub-issue, which I think goes away if you just stipulate that you live all the lives in both worlds. The other one is: is it possible that people are just wrong? And yes, it's possible. But as I said before, I think the burden of proof is on the people who claim that. Once you're stepping inside the frame where you're personally considering what it really means to live all the lives in both worlds, I think that's a more epistemically legitimate, more productive and accurate way to compare the two worlds than refusing to imagine oneself at the ground level and purely talking in terms of abstractions, which I just don't think our minds are really set up to do. They are set up to imagine different possible futures: is it worth chasing this tiger to get dinner, or whatever. They're not nearly as good at reasoning accurately about other minds or models of worlds and weird stuff like that. I feel like that was not super clear, but I hope some of it got across.
SARAH: Yeah. So you're saying that to get a better sense of whether the very repugnant conclusion or the repugnant conclusion is true, you should imagine living all of the lives in both worlds as yourself.
AARON: All right.
SARAH: Is that what you're saying?
MAX: Yes.
AARON: And I guess, 99.99%, that is exactly what I'm saying. Maybe a slightly better phrasing is that this will set you up to get the right answer in a way that just thinking about it abstractly won't... without making a strong normative claim. I don't know, because then we're getting into normative claims.
SARAH: Sorry, but if you...
AARON: No, sorry. I take it all back. Yes, the answer is yes.
SARAH: But if you were to ask me, imagine living ten extremely happy lives back to back, and then you were like, now imagine living 100 barely net positive lives back to back. I would clearly pick the ten lives. The fact that I have to live the slightly rubbish life more times makes it even worse.
AARON: I think if you think that a life barely worth living is rubbish, then you're mistaken about what those words mean. Actually, this is super common. It's a little bit of a side point, but I think for the repugnant conclusion specifically, people take the phrase "slightly positive life" or "a life barely worth living" and imagine a life that actually sucks. And I'm saying I want to...
SARAH: I'm imagining one that's underwhelming and boring. I'm not going to kill myself, but I'm not...
AARON: I wouldn't.
SARAH: ...going to want to do that a bunch of times. I would definitely pick the ten happy lives, yes.
AARON: I think the thought experiment, the repugnant conclusion, is supposed to take as a premise that, if the choice is between living zero or one of those lives, you would prefer to live one of the very slightly worth living lives. It's just stipulated; it's a premise. And I think this clashes with how we normally use words. And I think this largely...
MAX: Well, in turn, I will say that there's the kind of thing you're going for here, and then there's this other alternative, which is that what people often gesture at as being a life barely worth living isn't correct, and the bar is actually much higher, much more pleasurable, than the muzak and potatoes kind of scenario.
AARON: Yeah.
MAX: Which would cause issues for your stuff in other ways, like the utilitarian stuff. But this is a thing: total hedonic utilitarianism... well, I guess it should just be total utilitarianism, which doesn't have anything attached to it about what pleasure is. And then lots of total utilitarians (sorry, I keep putting words in the wrong place) then add on the hedonic part. And in many ways, this is a thing I think people really disagree with. This is where lots of the disagreements live, rather than the total-ordering sort of stuff: it's about being a preference theorist rather than a hedonist or something.
AARON: No, I think you're totally right. I do want to keep using the hedonic frame, mostly because I think it's true, and secondly because I think the arguments apply across theories of welfare, so this is the most natural and easy one to talk about. But if you think that what a good life means, or what a life worth living means, is to live a life in accordance with the virtues talked about in the Bible or the commandments, the hedonic-based arguments will apply to those as well. You can just use the term "pleasure," or "hedonic welfare," as the placeholder for whatever you think is true. And I do happen to think that is the correct theory of welfare.
MAX: I do think you're right in a technical sense, though maybe not overall. Your argument, that there's a point where it's infinitely bad to have this experience that can't be traded off against, actually does depend a lot on the theory.
AARON: Oh, yeah. Yes. That's true. I just don't have a strong... I think that's the correct theory of welfare, so that's built into my argument about, let's call it strongly bad welfare: extremely negative welfare, an extremely bad life, or whatever. I haven't thought that much about... I guess you could consider the analogue to be just being extremely sinful or something, and if you're really bought into what sin means, it's the fundamental, intrinsically bad thing. I tentatively (I'm thinking out loud; this hasn't passed through many filters in my brain) think all my arguments would still apply, but they wouldn't be nearly as intuitive, because, I don't know, I don't think I'd be good at reasoning about it. Maybe if you're a really skilled theologian who really buys into sin as a real thing, then you're going to be good at reasoning about this. I don't think I will be good at reasoning about sin, because I just don't really think it's a legit thing.
MAX: Yeah. So, Sarah, I don't know, did this.
AARON: Yeah.
SARAH: Good question.
MAX: Right. That's why you're the host.
SARAH: Okay. I'm so confused. I feel like my brain is just catching up to what you guys were saying about 20 minutes ago. Maybe in 20 minutes, I'll understand what you were saying just now.
MAX: Because we do keep switching things quickly.
SARAH: I feel like my brain is buffering.
MAX: Yeah.
SARAH: This is more of a meander than a debate, but I quite like it.
MAX: If you've been involved in EA for a while, the terms just become ingrained. It's easy for us to switch around, but if you're hearing about this for the first time, I...
AARON: ...think this is unfair. Well, I don't know about "unfair" in some really strong sense, and I still think I'm right, but setting that aside, I think I've just spent way more time on this. I initially wrote this as part of a team project a couple of years ago, and I was really trying to make it a serious project; what ended up coming out of that is basically this blog post. And that probably makes me quicker on my feet in some sense, even if that's not deserved relative to the actual merits of my position.
SARAH: No, I think it's totally fine that we're not on an equal footing. In fact, I think that's part of the concept.
AARON: Yeah. If the roles were switched, I would be protesting. I would be like, wait, I haven't really thought about this. I think it's wrong, but I can't really articulate exactly why.
SARAH: No, it's fine. I don't mind. I'm not particularly knowledgeable or smart about this topic, and it's okay for everyone to know that.
MAX: Okay, can we say.
AARON: Can we say... Sarah, can we put the link to my thingy, this post, in the podcast description?
SARAH: Yeah, okay. So if we accepted the repugnant conclusion, what real-world implications should that have, if any?
MAX: Yeah.
AARON: We should be going about bringing about happy lives, I guess. I don't know.
SARAH: Then wouldn't it be like, everyone should try to have as many children as possible?
MAX: I don't know.
AARON: Yeah, no, I think this is... sorry, what I'm going to say is: abstract ethics and what one intuitively thinks of as its implications often don't check out. For example, if you think most human lives are good, you might think, "Oh, we should ban abortion, or we should maximize the number of humans alive right now." I just think those don't follow. That was just an example; I think there are plenty of others, but sorry, nothing's jumping to my head.
MAX: Maybe the better thing to have been thinking about, during the time Aaron was talking, is an answer to your question, Sarah. The reason the repugnant conclusion is so discussed is because people are like, "Oh, here's this intuitive way to rank whether or not a possible world is better or worse." So if we're deciding whether it would be better to make the world like this, or like that, just add up the total amount of happiness in each and then compare, right? It's like, "Oh, yeah, great."
AARON: Sounds like a dubious assumption that that's modelable by the real numbers.
MAX: Sorry. Let's just take the happiest world and make that one, right? You'd be like, "Oh, we've solved ethics." And then somebody comes along and says, "Okay, well, there is this scenario where you compare two worlds: one that is muzak and potatoes, and one which is a small number of happy people really having a great time." And it's still the case that both of these worlds will be dominated by the happiest world possible, and that is always the one you want to make. So, practically, if you accept the repugnant conclusion, you're still aiming for the same thing as before, which is to bring about this best possible world. But it is the case that you'll maybe sometimes be okay with trade-offs that at first seem unintuitive. Like, maybe, for example, you think right now the world is mid, or whatever, right?
AARON: Yeah, I think the world's terrible right now. I'm being serious.
MAX: I feel like that's not... I was avoiding calling it terrible because of the repugnant conclusion, because we got into this whole thing about the repugnant versus very repugnant. And if you think the world's terrible right now, then it's not the repugnant conclusion playing out, it's the very... oh, sorry. But yes, fair enough. So assume the world is, like, mid right now, right? You might be like, oh, what's the point of disarming all our nukes, or preventing AI from turning the planet into paperclips or something like that?
AARON: I think that's a good question.
MAX: And then you would be like, sorry.
AARON: I think it's a serious question, not a rhetorical one. It deserves serious inquiry. Not that we should just blow everything off. Sorry, keep going.
MAX: My point was that the repugnant conclusion might imply that the world could be really great in the future. Even though it's iffy now, it could be way better in the future. So we're just taking the total across all time. That's a practical way to think about it. But mostly these thought experiments deal with really large numbers of people, and the world we live on doesn't have very many people, so it doesn't matter in the same way.
What's important, I think, if you're thinking about things practically, is what Aaron's saying: is there sometimes suffering that can't be traded off against good? Because there is lots of suffering in the world. And that would be practically relevant if the threshold Aaron's thinking of is, like, one week of being in Guantanamo Bay. We had that open for more than one week. So this really seriously implies things about what we should do in the world.
AARON: For what it's worth, there's a very natural thing you might think that implies we should do, which is that we should cause a lot of people to die. And I think that's not true. What it does imply is that you should shut down Guantanamo Bay. This is a perpetual take: people are always terrified of suffering-focused ethics, and I feel like they're almost always terrified in a really dumb, naive way. Not always, but almost all the time. Yeah, sorry, keep going, and then I'll have another point.
SARAH: Okay. Could we specify some actual worlds, some actual setups where... Can you give me a setup in which we have a very repugnant conclusion world? Like, how many people are there? How much suffering is there going on to counterbalance the pleasure? Can you just give me a scenario? Do you see what I'm asking?
AARON: Yes. I would say current world. Imagine you keep adding humans at the current level of American average welfare, who I think probably have positive lives. I think wild animal suffering and factory farming mean that that world is not going to get net good.
MAX: I guess so. This I think has some interesting implications.
AARON: I agree. I think the main implication is that you really want to prioritize removing, ending the things that are preventing the world from being good. We need to get those out of the way, not necessarily first, but sooner.
MAX: The main thing is that you're wrong.
SARAH: The main thing is that you're wrong.
AARON: I think the main thing is that I'm right.
SARAH: That was just such a you sentence. Sorry.
MAX: Thank you. Happy to provide. So, before I lose the thread: you're saying this thing... okay, give me a second, because I'm saying really vague things, but I know what they refer to. You're making this argument that's something like: take my assumptions, suffering-focused ethics is the case, and then you might think it follows that we should just blow up the world or something. And that's not the case; even if the world's really terrible, we should still aim to make it better. But I should say the issue is this idea that there's no amount of happy people you can add such that the world will ever be net positive. Right? Because how could that be, unless the suffering is infinite or something?
AARON: And it can, in fact, be thought of that way. That is an accurate way to think about it.
MAX: But then it means that actually it doesn't matter whether or not you add one more happy life, whatever happens.
AARON: No, sorry. Keep going. This is a beautiful, classic fallacy.
MAX: Well, no, but that's, again, I'm using real numbers, but I could get more technical. I think that's still, maybe I should start that way.
AARON: I have a tweet about what I'm pretty sure you're about to say.
MAX: Okay, well, it's just that total utilitarianism is basically based on the idea of being able to trade off or total up experiences. That's in the name. And so if there is some experience that can never be paired with any other, can never be totaled up or compared to some other number of positive experiences, it doesn't matter, right?
AARON: I disagree. I think you're jumping from simple numerical addability to comparability, and that doesn't follow. One thing I think is true is that you can have worlds that are net bad, which is a vague thing (what does that even mean?), but setting that aside: suppose I'm right that no matter how many 21st-century Americans you add, the world never gets good. I still want to say that if you add a bunch of happy people to the world, if you start out with the current world, World A, and then World A-prime is the same world but with a thousand more happy humans, that world is better. And the comparability still checks out. Look, the comparability is not going to be modeled by a finite real number on the left side and a finite real number on the right side with a less-than, greater-than, or equals sign in between. It's going to be something else, but still comparable. I actually do think comparability is fundamental to utilitarianism; modeling things with the set of finite real numbers is not.
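One way to see how comparability can survive without a single real-number total, as Aaron is arguing here: rank worlds lexicographically, first by how much over-the-threshold suffering they contain, and only then by ordinary summed welfare. This is an editorial sketch of one possible formalization, not the model from the blog post, and all names and numbers are invented:

```python
from dataclasses import dataclass

@dataclass
class World:
    lexical_suffering: int   # count of over-the-threshold suffering events
    finite_welfare: float    # ordinary summed welfare of everyone else

    def rank_key(self):
        # Lexicographic ordering: less over-threshold suffering always dominates;
        # finite welfare only matters as a tiebreaker.
        return (-self.lexical_suffering, self.finite_welfare)

def better(w1: World, w2: World) -> bool:
    return w1.rank_key() > w2.rank_key()

empty         = World(lexical_suffering=0, finite_welfare=0.0)
world_a       = World(lexical_suffering=5, finite_welfare=1_000.0)
world_a_prime = World(lexical_suffering=5, finite_welfare=2_000.0)  # World A plus more happy people

print(better(world_a_prime, world_a))  # True: adding happy people still makes the world better
print(better(world_a_prime, empty))    # False: but no amount of them beats the empty world
```

Under this toy ordering, Aaron's two claims coexist: World A-prime beats World A, yet neither is "net positive" in the sense of being preferable to nothing existing.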
MAX: Yeah, so I've realized... I think you're right about your point here. The thing is, if you take certain assumptions, it's not very interesting. It's one of those things where it doesn't imply anything action-relevantly different unless you're God, literally. That's the only way in which it's different, because what matters then is whether you start the universe, not what you do in the universe. The set of actions you'll take in the universe is the same as if you bought the default view of total utilitarianism, as long as you have the same hedonic values.
AARON: Well, once you're adding God, all bets are off.
MAX: Well, what I'm saying is, you're like, okay, to your point that things can't be compared using just real numbers, as long as you can compare universes, you end up ranking them. Right? And so our current world, you're just doing a semantic thing about whether we call the universe net positive or whatever. This ends up being the only, actually, sorry.
AARON: I think that was a little unfair. I think it's actually coherent to say what net positive means: is the world morally preferable to nothing existing? I think that's pretty intuitive. Or, if you accept some assumptions about sentience, to no sentient beings existing.
MAX: Yeah, but this is why I said the kind of God thing. We can't have unstarted the universe.
AARON: Yeah, totally. Yes, I totally agree. There's no big sign that says if the world equals net negative, then actually... Sorry, I think I'm, keep going. Sorry.
MAX: I guess I feel like I've got a sense of your overall view. The main thing is maybe just how you think about welfare, which is separate from other stuff that's action-relevant. You might not be willing to take certain bets, though it seems like you are. That's maybe the more interesting thing to get at, because there's this abstract point that made me think, "Oh, maybe you're saying something that's not just trivially true," and then, "Oh, this might not be the case." For example, you're saying the ordering of worlds gets really weird depending on where the point is where torture goes to negative infinity, right?
AARON: Yeah, I agree. This is the strongest point against me: you can add up things that are almost, but not quite, at that point. It's like a lexical threshold, and it never quite pushes over. I think this is the strongest argument against my point. I don't really have a response except to point to all the arguments for me.
MAX: Yeah, but can you, I...
SARAH: I don't understand this threshold thing. Can you clarify it?
AARON: Yeah. So, imagine there's some threshold at which some sufficiently bad amount of suffering, or some instantiation of suffering, can never be outweighed by arbitrarily large amounts of well-being. Then here's a corollary, I think: consider the least bad amount of suffering that is still over this lexical threshold. Lexical means, basically, it's functionally worth negative infinity points here; it's not going to be outweighed. And then consider something that's somewhat less bad, but such that it actually can be outweighed by well-being. The corollary is that you can add up an arbitrary number of instantiations of that slightly sub-lexical suffering, and it's never going to be as bad as the single over-the-threshold amount of suffering that can't be outweighed. And I think this is counterintuitive and just gets extremely hard to think about because, I don't know, we're humans with brains that aren't good at thinking about this stuff. I think it's unintuitive.
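A concrete illustration of the corollary Aaron concedes here, reusing the negative-infinity threshold idea from the earlier sketch; the threshold value and counts are arbitrary. However many just-below-threshold harms you pile up, this model ranks the pile as less bad than a single over-the-threshold harm, which is exactly the counterintuitive implication being discussed:

```python
import math

THRESHOLD = -1e9   # illustrative: suffering at least this bad becomes lexically bad

def value(suffering_events):
    # Sum as usual, but anything at or past the threshold contributes negative infinity.
    return sum(-math.inf if s <= THRESHOLD else s for s in suffering_events)

just_below = THRESHOLD + 1   # barely sub-lexical suffering, still astronomically bad
one_over = [THRESHOLD]       # a single over-threshold event

for n in (1, 1_000, 1_000_000):
    many_below = [just_below] * n
    print(n, value(many_below) > value(one_over))   # True every time: any finite total beats -inf
```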
SARAH: Okay, so if there was a person being tortured in a particular way, that meant it was over a threshold where it was so bad it could never be offset by any amount of utility. Then there were 50 people being tortured in a slightly less bad way. You had to choose which set of torture to prevent. You should save the one person and let the 50 people continue to be tortured slightly less badly than the one person who was being tortured in some specific way, which tips them over the threshold.
AARON: I'm going to say a word, and then you need to listen, everybody listening needs to listen, to the rest of the sentence, not just the word. "Yes," but only conditional on those, quote, "slightly less bad amounts of suffering" in fact being morally outweighable by well-being.
SARAH: And so why would it be the case that there's some discrete point where things become not morally outweighable? That seems...
AARON: Yeah, so that's a good question. I don't even know if a discrete point is the right way of thinking about it. Hedonic welfare is a mysterious thing. Our intuitive model is that things can just get gradually worse and worse, and I think that's probably close to true. And then I'm stuck with this challenging thing of: so you think there's a really tiny, hardly noticeable change, and then it flips over the lexical threshold such that it can't be outweighed? And I think that's a natural thing to think. But then, on the other hand, if you accept that this is all in fact true, what it implies is that that tiny little change made the difference between a universe that's never preferable to nothing existing, and one that can in fact be preferable if you add enough good lives. Sorry, I only vaguely have a handle on what I just said, and I don't really expect anybody else to have followed it. The main thing I want to appeal to is that the thing you're probably imagining as "slightly less bad" is still bad enough to actually be past the threshold. And this is similar to the normal repugnant conclusion thing, where people say "lives slightly worth living" and what they're imagining in their heads are actually lives not worth living, because of how we use the word "living."
SARAH: But if the threshold exists, then there must be something just below it. I'm not really imagining any particular thing, but it's not...
AARON: Well, yes. So, the model I want to compare this to is negative infinity; if you want to be technical, we can say negative aleph-null, to use a specific size of infinity, versus any negative real number. And the intuitive thing to think is that negative infinity is separated from some arbitrarily large finite negative number by just a tiny amount. But what it's actually separated by is the whole thing. Yeah. I think this conflicts with a very strong intuition that experience is just on a continuous scale. And honestly, I don't really have an answer. I still think, all things considered, I'm likely to be right, but this is a strong argument against me.
MAX: Yeah, important.
SARAH: You don't think hedonic experiences are on a scale like that?
AARON: So I think, no. At least not the most intuitive, simple type of scale; the word "scale" is pretty general. Because that would imply that all amounts of suffering are modelable by the finite real numbers. I think those things are isomorphic, or one implies the other and vice versa.
SARAH: This might be a stupid question, but isn't part of the premise of utilitarianism that things can be quantified on a scale? Or if you can't, I don't know if this is what Max was saying earlier, but if you can't add up and subtract things like that, then aren't you just not doing utilitarianism?
AARON: So I think in real life, a lot of the time you actually can. If you want to compare two interventions, one of which prevents some bad disease in 100 people versus 200 people, the math is really going to be helpful there. 100 and 200 are real numbers; it checks out in that case. I actually don't think this would be controversial among people who are really in the weeds of defining moral systems, but it's probably going to sound weird or heterodox, and that is that I want to claim that what utilitarianism depends on is comparability between worlds.
MAX: Well, another.
AARON: And other things.
MAX: No, just that. The more good, the better, basically.
AARON: And also the finite real numbers are a really good model for a lot of things. For example, all the stars, all the planets in the observable universe, and all the amounts of matter that can be made into things, possibly things that can be sentient. Those are all in the domain of finite real numbers. I slightly lost my train of thought.
MAX: Maybe I can give some examples here, Sarah. Hopefully this will be illustrative. So you could imagine two versions of utilitarianism. The idea would be that both are versions of utilitarianism. One says that you can quantify an experience in terms of a number. Watching a really funny movie is ten, for example. You can add up things like this. Then you could imagine another version of utilitarianism that says, what is the value of watching a really funny movie? It's just better than watching a mid movie. You could put all the possible movies you could watch in order. The funniest one goes first. The critical difference is that there are no numbers in that version. It's just like, this one's more preferable than this one.
AARON: So I think at a base level, that's absolutely right. But what comparability also implies is that you're able to compare watching one really funny movie versus ten different families watching a somewhat funny movie on their own. And that's where the numbers pop out. The numbers pop out of comparability and pragmatic real-world things, even if in some really fundamental, metaphysical sense there are no numbers. The numbers are a tool that humans add on top of that, and sometimes comparability spits out numbers that work well. So for example, maybe we can say the value added by one hilarious movie is such that we should be indifferent between adding that amount of value and adding ten people watching a somewhat funny movie. One isn't better than the other; they're equally worthwhile, or whatever the term is.
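A toy contrast between the two pictures being discussed: a cardinal version of utilitarianism that assigns each experience a number and sums, and a purely ordinal one that only ever answers "which of these two is better?". The movie names and values are invented for illustration:

```python
# Cardinal version: every experience gets a real number; worlds are compared by totals.
cardinal_value = {"hilarious movie": 10, "somewhat funny movie": 4, "mid movie": 2}

def cardinal_better(world_a, world_b):
    return sum(cardinal_value[e] for e in world_a) > sum(cardinal_value[e] for e in world_b)

# Ordinal version: no numbers anywhere, just a primitive ranking of single experiences.
ranking = ["hilarious movie", "somewhat funny movie", "mid movie"]  # best to worst

def ordinal_better(experience_a, experience_b):
    return ranking.index(experience_a) < ranking.index(experience_b)

# The cardinal version can answer Aaron's question about one hilarious movie
# versus several families each watching a somewhat funny movie:
print(cardinal_better(["hilarious movie"], ["somewhat funny movie"] * 3))   # False: 10 < 12

# The ordinal version can only rank single experiences; "how many somewhat funny
# movies equal one hilarious movie?" has no answer until numbers are reintroduced.
print(ordinal_better("hilarious movie", "somewhat funny movie"))            # True
```

This mirrors the point being made: the numbers aren't doing metaphysical work, but as soon as you want trade-offs across different numbers of people, something number-like sneaks back in.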
MAX: So I will take the opportunity to get bogged down and more technical. If you were to see my published thesis from college (I actually don't think you can find it online unless you went to my college; I should publish it sometime), you would know that there's a section that kind of confusingly talks about this, which I'll now do as well. So basically, the thing Aaron was talking about, where the numbers "pop out," in air quotes, is because we're thinking about this stuff in too practical a way, which is crazy, right? Philosophers should never be practical. The thing is, comparing those movies is really how you should think about it: you're comparing worlds where everything is the same, except the only difference is the movie being watched. So really what we're doing is comparing the world where just you watch the movie to the world where two people watch the movie, if that makes sense, rather than thinking about it as adding, necessarily. And you could have a system that uses adding if you wanted to. So, yeah, I don't know if this very stupidly pedantic point makes sense.
AARON: Yeah, I think we mostly agree. I think we agree on the metaphysics of how numbers pop out of fundamental comparability questions.
MAX: Yeah, I was just saying more about it.
SARAH: Okay, so utilitarianism doesn't require that you quantify different experiences mathematically. It just requires that you can compare between different worlds, and you don't necessarily need some kind of sliding scale of experience or whatever.
MAX: It would help for figuring out how to compare them.
AARON: Yeah. Once you're actually in the real world, doing grant making at Open Philanthropy, then the math gets really useful.
MAX: Yeah. But it could be the case that God comes down and says, "This is actually how you rank the world properly."
AARON: No, let me read my really melodramatic line about God.
MAX: Okay. I think I know what you're talking about.
AARON: Oh, wait, hold on. I won't wait. Okay. Absence of evidence. Wait, sorry, you can finish. Sorry, I cut you off.
MAX: I think Sarah was going to say something.
SARAH: I've already forgotten what it was.
AARON: Oh, yeah. So: perhaps hedonic states really are cardinally representable, with each state of the world being placed somewhere on the number line in units of moral value. I wouldn't be shocked. But if God descends tomorrow to reveal that it is so, we would all be learning something new. Okay. I thought that was very dramatic and creative. Thank you.
SARAH: That's pretty good. I like that.
AARON: Thank you, though.
MAX: It doesn't really show you're right, though.
AARON: No, I know. But it's a good line.
MAX: So actually, I feel we'd be learning something new.
AARON: Yeah. God exists. That would be cool. One point is that I can argue for different things that I believe to be true. The thing the blog post argues for most directly, which I guess is weaker and more likely to be true because it's a weaker claim, is just pointing out: holy shit, everybody is going along with this default assumption of representability by the finite real numbers. Maybe there's a LessWrong post somewhere that actually spells out exactly why that's true, but no one has sent it to me, and as far as I can tell, it doesn't exist. And maybe this has potential; it opens up a big range of things if there really is a silent assumption that is actually dubious. And then I do, in fact, think that something close to suffering-focused ethics is true, and that hedonic theories of value are true, etcetera. But, yeah, for what it's worth.
MAX: Yeah. Sarah, I don't know if you have more. Are there other things that are confusing, or... I shouldn't say still confusing, probably. Are there things in this chunk that you'd like to talk about? Because I was going to then bring us back to critique Aaron in a different way, by saying sensible things.
SARAH: Yeah, I think we should circle back to the original disagreement.
MAX: Right. I'm going to say that Aaron thinks we should nuke the world.
AARON: That's a factually incorrect statement.
MAX: There are ways you can get out of this. So maybe you will, but I will offer.
AARON: There are arguments. Okay, go.
MAX: What you've been arguing for is something like: one week of torture is the point at which it goes to infinity, for lack of a better term. So we're just going to say one week of torture, right? Let's imagine a world of pretty chill Amish farmers. I'm just going to say this is good. Or the USA, you could just imagine the USA. Aaron thinks this is good enough. And then I add one...
AARON: At least the median. I think the median American's life is probably net positive for them.
MAX: I'm going to go back to the Amish thing. Even if you think the Amish aren't happy, it's just a picturesque example, so assume the Amish are happy, it's just an Amish commune, and the rest of the universe is desolate or whatever. That sounds sad, but the point is there's nothing to interfere. And then I add one week of torture, just off to the side. Now we have these two universes.
AARON: Right.
SARAH: The way you said that was funny. Just a side order of torture.
MAX: Yeah.
SARAH: Sprinkling of torture.
MAX: So, yeah, it probably does in some sense. I'm going to talk about this in terms of infinities, which may be doing you a disservice because you don't have to do it with infinities.
AARON: I think I know what you're getting at. And yeah, I think it's fine.
MAX: So we have these two worlds, right? One is infinitely worse than the other. And the argument you were giving earlier for not glassing the world is that we can just keep adding happy lives to it, and then it's whatever; we're on different tiers or something. So we could then compare the world with one week of suffering and the Amish farm, where we kill everyone afterward because it was infinitely bad and we thought we should, to the world with one week of suffering and then a trillion years of Amish farming. Obviously, the trillion-years one is better than the glass-everything one. And obviously, no suffering ever is better than all of those worlds. Right?
AARON: Does the glass-everything one prevent the one week?
MAX: No, it's.
AARON: If it's after the suffering, then I don't... that doesn't make any sense. If it's after the suffering, you're not doing any good. You're not actually preventing anything.
MAX: Because, like, the point is that you might worry about more suffering in the future. It's just...
AARON: Okay.
MAX: It's just a baseline thing to get on the table. We shouldn't disagree about it.
AARON: You can make it one week versus two weeks.
MAX: So we're not there yet. Because it's... I think...
AARON: I think you made a really simple mistake that you don't really endorse.
MAX: Okay, what's the mistake?
AARON: I don't endorse retroactive justice.
MAX: No, I know. That's not the point I'm trying to make.
AARON: Okay, sorry.
SARAH: Imagine nuking the world, and you're like, this is for the shrimp.
MAX: Yeah, I fixed it. It has to save. That's not the point. The point is... yeah, sorry.
AARON: Okay, sorry.
SARAH: Can we recap the world we've got? We've got one world with happy Amish people. There's no torture anywhere. Then we have one with happy Amish people, but someone gets tortured for a week. And then we have one where there are trillions of Amish people with the week of torture.
AARON: Okay.
MAX: And what Aaron is saying is that the first one is good, or whatever, and the second two are both worse. They're infinitely worse than the first one because they have that week of torture.
AARON: I don't think they're equal. And I'm not sure you can straightforwardly compare them to the first one; they're both lexically, infinitely worse than it.
MAX: Yeah, they're both infinitely worse than the one with no torture, but there's a difference between the two that you can compare.
AARON: Yes.
MAX: So this is the important thing. Now we get into the tricky part, which is... maybe I should start by fixing myself to an assumption like hedonism, which I want to say out loud. There are two cases I can think of, and I will try to go with the first one, which is that a week of suffering, or whatever, is some kind of unit of infinitely bad suffering, if that makes sense.
AARON: Yeah.
MAX: Yeah. But we'll just say this, right?
AARON: In my view, we actually can. I don't know if mathematicians will like this, but I claim that, if I'm right, we can treat it as a variable, like how x is used in normal algebra.
MAX: Two.
AARON: Aleph-naught versus one. Aleph-naught versus whatever. Yeah, that's relevant.
MAX: I think it is. If anything, you're agreeing with what I was saying. I was just saying that aleph-naught equals this one thing; I was just fixing the unit.
AARON: Yeah, I agree.
MAX: To be illustrative. Okay, so then we could add a second week of torture. Now your ranking goes like this: there's a world with two weeks of torture and the Amish families, which all existed and then you glass it after the two weeks are done. And then another world, which is two weeks of torture and then a trillion years of Amish families. Right? So now we have five total worlds, and there are three different sets. One is the set with no torture, one is the set with one week of torture, and one is the set with two weeks of torture. You can compare within the sets, and you can compare the sets themselves to each other.
AARON: Right, assuming that. The sets being that they both exist, or that there's a probability of each, or something.
MAX: No, you're just ranking them by preferability.
AARON: Okay.
MAX: You can do preferability between the groupings by weeks of torture, and preferability with the weeks of torture held fixed. Right? Now, what I'm saying is that under your view, we should probably glass the planet. Every time a week of torture occurs, or whatever the unit is, it brings us to a new rung, a new thing that we can compare, right? And no amount of adding happy lives can ever offset the fact that we've moved down a rung. So take the fact that factory farming is happening right now. If you think factory farming is one of these rungs, a thing that is infinitely bad, then every unit of it that goes on, the smallest unit, brings us down another rung, and this can never be offset. So if that's actually the case, you just need to glass everything as quickly as possible. Unless there's a torture farm out there in space, and if we don't kill that first, we keep moving down a rung.
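One rough way to picture the "rungs" structure Max is describing, as an illustrative sketch rather than anything either of them actually specifies: represent each world as a pair (weeks of lexically bad torture, ordinary welfare) and compare the pairs lexicographically, so that no amount of ordinary welfare ever compensates for an extra week.

```python
from dataclasses import dataclass

@dataclass
class World:
    torture_weeks: int       # units of lexically bad, never-offsettable suffering
    ordinary_welfare: float  # everything that trades off normally

def better(a: World, b: World) -> bool:
    # Lexicographic comparison: fewer torture weeks always wins;
    # ordinary welfare only matters between worlds on the same rung.
    if a.torture_weeks != b.torture_weeks:
        return a.torture_weeks < b.torture_weeks
    return a.ordinary_welfare > b.ordinary_welfare

amish_only      = World(torture_weeks=0, ordinary_welfare=100)
amish_plus_week = World(torture_weeks=1, ordinary_welfare=100)
trillion_years  = World(torture_weeks=1, ordinary_welfare=10**12)

print(better(amish_only, trillion_years))       # True: rung 0 beats rung 1, whatever the welfare
print(better(trillion_years, amish_plus_week))  # True: within rung 1, more welfare is better
```

Max's worry is what this ordering does in practice: if the first coordinate keeps ticking up every moment, only interventions that stop it from ticking seem to matter.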
AARON: I know what you're saying. And, okay, once again, I'm going to say something, and then I hope everybody will keep listening to what I'm saying instead of taking it out of context. If you do that, I'm going to be upset, and it's going to be falsely representative of my views. I kind of bite the bullet in this abstract scenario, and people can take that however they want. For what it's worth, a little side point: you can create a lot of philosophical thought experiments that I will bite the bullet on that don't actually make sense in real life. I think that's for the better, not the worse, at least for me. I am very pleased that the real world is set up such that the radical action isn't actually what I'd endorse. And that's because there's a lot going on. One thing, and I address this in the blog post, is that the two big risks of human extinction are AI and bio, neither of which looks like just turning up or down the dial on the probability that all humans drop dead. In the case of AI, you're turning the future over to another agent. In the case of bio, as far as I can tell, and I'm not certain about this, the interventions that reduce the chance of, in your words, glassing the world, call that human extinction or whatever, are the same interventions that also reduce the chance of societal collapse and near-extinction. And long story short, the other thing, and I know I'm rambling on, is that if your theory of change involves doing something that makes you look crazy and evil, it probably doesn't work. In fact, I'm really, really sure of that. So yes, I will bite the bullet in your thought experiment, and I'm also very committed to the idea that I am not going to kill anybody. It's convenient that I can reconcile these, but I think it checks out that I can, and I can reconcile them because crazy and evil is not a compelling theory of change.
MAX: So I don't understand why what you just said counteracts what I was saying before, because the issue is that every second, the world is moving down a rung of never-redeemable torture. And you need to stop that.
AARON: Yeah, in your thought experiment, I agree. I think the real world provides differences. For example, the likelihood that you fail, or the likelihood that there are other sentient beings in the universe that we could... this is far out there, and I'm just saying it's plausibly relevant, not making any strong claim, but maybe there are rescue missions we can go on to reduce wild animal suffering in the next galaxy over, whatever.
MAX: Well, I think importantly, here's a good question. If I gave you a button that would certainly glass everything in the universe and nothing would ever come again, do you think you should push it? By the time I've finished saying this, you should have already pressed it.
AARON: I might ask. So, yeah, now this gets into moral uncertainty, and this is something I haven't thought a tremendous amount about because I don't think it's super decision relevant.
SARAH: I was going to say: Max, do you think Aaron's worldview or argument implies that if I had a button I could press right now that destroys everything, but tomorrow there would be a button I could press that just gets rid of factory farming, then because the extra day of torture would be so bad, I would still be morally obligated to press the world-destroying button now rather than waiting?
MAX: Yeah. Or, to be a little more pedantic, to Aaron's credit, it really matters what the one week of torture is. Maybe that one day isn't actually a week of torture, but there is some amount of time such that you can't wait, and maybe that's like a million years. So Aaron looks like he's coming out on top just because it would take so long for factory farming to get there. But if it is the case that you could press a button today to kill everything, or wait one day and just remove all factory farming, you are obligated to press the button today under Aaron's view.
AARON: You were saying something to my credit, or against my credit: I'm willing to bite the bullet if we imagine a really simplified world that doesn't closely resemble the real world, and we set aside moral uncertainty, which I think is a convenient but legitimate thing. Thank you, universe, for creating this happy coincidence. It's nice that I can appeal to moral uncertainty and say: wait, irreversibility is a whole other thing; then you have to think about the chances that I'm wrong, what other theories imply, whether there's some sort of meta-framework I need to work under. I have a bunch of escape valves that are all super convenient, but again, I think legit. Lucky for me. But if you set all that aside, I will bite the bullet. Yes, you are in fact... well, not even morally obligated, because I don't even think normativity exists, but it would be good for you to press that button.
MAX: Going on things you've written and just said before, I think we can get more specific and say that under Aaron's views at the moment, which maybe he would change if he reflected on them, because they're wrong, or at least I think they're wrong in some cases... Anyway, his views on suffering imply something like the example you just gave. I'm relatively confident, probably above a 50% chance, that if you went through everything Aaron says he thinks, and you said you can press a button right now that kills everyone, or you could wait a day and get a button that turns off all suffering in the universe forever, and you could even add a second button that adds whatever amount of pleasure you want, he would always pick killing everyone today.
AARON: As an empirical fact, I...
MAX: Like, ignoring moral uncertainty or whatever.
AARON: Yeah, as an empirical fact, I wouldn't do that, for a mix of psychological and other reasons. And probably I would look at moral uncertainty. It's not a behavioral fact that I would do that. But if you set aside moral uncertainty, totally simplify the world, and accept that it's all true, then, yeah, my theory implies that, even though I wouldn't actually do it in real life.
MAX: Sure.
AARON: And, yeah, I really do want to... It's very convenient, but I do want to emphasize that. I don't know, I feel like suffering-focused people are not at all the caricature you can invent if you take some of the strong claims literally.
MAX: I want to say something: I think you could do different versions of suffering-focused ethics. Well... I think you can, but importantly here, you are kind of... Maybe the thing I interrupted you on was to say that we're all weirdos, but you get a bad rap for being a weirdo. What I was going to say is that you have one weird version of suffering-focused ethics, and there are other versions one could do. Some of them just look more natural; some of them are just claims about what hedonic states are like, where it gets really terrible or something, if you get what I'm saying. But anyway, maybe... Yeah.
AARON: I don't know. This is repetitive, but I want to emphasize that there are significant, meaningful, and important differences between the thought experiment you were just dreaming up, which I think is fine, and the real world. I think so.
MAX: Okay. I will say there's the escape move that lots of people do in ethics, which is probably good, and I think more highly of you if you do it in some sense: if my view implies I should go steal some nukes, I probably have made some kind of mistake, maybe just in my evaluation of different actions, since stealing nukes often doesn't work, rather than about whether I should do it if I could. But it'd be really weird if this thing you're getting at doesn't actually imply some radically different stuff, stuff that maybe often means you should be really inching towards getting rid of as much as possible.
AARON: Yeah. So I agree. It implies some radical stuff. I don't agree that it implies radical stuff that concerns my actions in the next few weeks or anybody's actions in the next few weeks, probably.
MAX: Yeah. The most important thing is to make sure we set off the AI that turns the universe into glass or whatever.
AARON: This is the best thing you could ever do. I mean, there are a bunch of things. Look, I don't trust an agent that really cares about glassing everything not to create any suffering. That's the actual answer.
MAX: Yeah.
AARON: Or one that separates every molecule so that... I mean, yeah. You can invent thought experiments, and I will say, yes, I bite the bullet on these insane thought experiments. I also want, and this is kind of way outside the box, I also want to appeal to political philosophy and say: listen, liberalism is not my most fundamental belief. I think it's pragmatically good in a lot of ways. But behaviorally, in 2024, abiding by the norms of liberalism is a really strong prior.
MAX: I think the thrust of my critique, or something I want to put on you, which I don't think you really have a good way out of, and you're already biting the bullet, except to say, "I'm Aaron Bergman and I can't do anything about this," which maybe isn't even fully true, is that literally nothing else matters except whether or not another week of torture gets out into the universe. Fundamentally, nothing else can be traded off against these things; these are the only things being traded off against each other. If you hold that fixed, you can trade off other things against each other, but the only thing that will ever, sort of...
AARON: But another consideration is that you'd really better consider the risk that your actions are going to result in a third week.
MAX: Yeah.
AARON: So, that's a really serious consideration. But I do think a world where your ideology is associated with crazy, evil people might end up containing a lot more terrible suffering. I know this is convenient, and I don't know exactly how to reconcile the convenience other than, "Oh, cool, it worked out this way."
SARAH: Can I ask a question?
AARON: Yes.
SARAH: Why does the torture week have to be infinitely bad? Couldn't Aaron just say there are some kinds of suffering, or some amounts of suffering, which can't be offset by any amount of good? That could be true, but why do we have to add this thing where the suffering is infinitely bad, and therefore it's the only thing in the world that matters? It seems like the suffering could just be pretty bad and just not able to be outweighed by the good. But then that wouldn't imply that nothing matters other than preventing another second of the torture from occurring. Does that make sense?
MAX: I will say that basically the whole thing I gave with the Amish people, you can just imagine I never said "infinite," and it still works the same. It doesn't really matter to the thought experiment, because it's just...
SARAH: But it wouldn't be. It wouldn't because you can.
MAX: ...never trade off against it. It doesn't matter if the value assigned is infinite.
AARON: Yes, I've been willingly using the term "infinite" as a sort of intuitive placeholder to mean just what you said. I like using "infinitely bad" to mean it's a moral truth, and maybe this gets into moral realism, which we debated in the last podcast, but it's a moral truth that worlds in which this thing exists, or in which the torture is combined with arbitrarily good things, are just worse than nothing, or whatever the relevant comparison is. I'm willing to use the term "infinitely worse" even though infinity has connotations in English and technical meanings in mathematics. I haven't studied any field that really deals with multiple sizes of infinity, but I know there are different technical details you can get into.
MAX: So to answer your question, Sarah, all that really matters is that you have these tiers, like the torture weeks or whatever. It can never be the case that any amount of anything you add to one week of torture will bring you back to zero weeks of torture. That's all that matters. It doesn't really matter what the week of torture is valued at, if that makes sense.
AARON: Can I do another cop-out? I think I'm right, but I'm not a hundred percent sure; I'm not even 90% sure. So once we're talking about real action in the real world, you get into moral uncertainty, and I think moral uncertainty is really practical. Because I'm guessing, Max, you probably think there's more than a one in a trillion chance that you're wrong. Is that correct? One in a trillion is a very small number.
MAX: I can't really think of probabilities that small.
AARON: Okay. It's okay. I would be surprised if...
MAX: I don't know, it's something like: I can prove I'm right with math.
AARON: No, I disagree.
MAX: Paper forthcoming.
AARON: Yeah, no, you should. I await the forthcoming paper.
MAX: But I'll just keep saying that forever. Until I die.
AARON: Yeah. I don't know. I wish you had just gone with the yes there: there's at least a one in a trillion chance that I'm right.
MAX: Okay. You can continue if you'd like.
AARON: Okay. For what it's worth, I think if you ended up not coming to that conclusion, all things considered, not just on your inside view... I mean, you probably don't think I'm totally batshit insane in general, although maybe you can challenge that. I'm a person you're willing to do a podcast with, and who, I don't know, is maybe not a terrible person in real life. I think you're a good one.
MAX: Oh, please don't go do anything bad. And then this.
AARON: No, I'm not going to. Actually, not too long ago, I said something mean. Maybe we'll cut this part out. Sorry, I apologize to that person; I hope they've forgiven me. No, I'm pretty sure this caused a finite amount. Like, a...
MAX: An offsettable amount.
AARON: An offsettable amount of suffering. Although still bad. I like that.
SARAH: Is there anything you could say that would be so mean that it would go over the infinity threshold?
AARON: No. I mean, maybe, but it would have to involve causal chains resulting in other actions in the world. So not just the words themselves.
MAX: Okay, well, unless.
SARAH: I think that's the challenge.
MAX: Experiments.
AARON: Causality gets weird. What if somebody has a brain condition where they hear the word "dog" and then they're subjected to...
MAX: Well, maybe. I don't know, I think there's some Joker comic or something with that idea. Or the Monty Python sketch, right? The joke that's so good...
SARAH: Oh, yeah, the thing that's so funny it kills people.
MAX: Yeah. Joke.
SARAH: That's the funniest joke in the world.
MAX: Like, you have paragraphs you can rattle off, and it's so terrible that people are like, you've broken me completely.
AARON: Words actually do result in real things happening. This isn't weird, it's very normal: words result in things happening. For example, you can order somebody to do something really bad if you're a crime boss or a military leader, so it isn't even just the words. You have to consider the results of what you say.
MAX: Sure.
AARON: Yeah.
MAX: Yeah.
SARAH: Obviously, words can cause things to happen in real life, but I was getting more at the words in themselves.
AARON: We can ask the person if they think what they experienced was so bad that no amount of well-being could ever outweigh it. I'm guessing they'll say no.
MAX: But you can put that in the show notes, right?
AARON: Yeah.
MAX: Text this person and see what they say.
AARON: They actually responded after I apologized. They said... actually, you know what, this is very cringe. I'm not gonna say anything.
MAX: Okay. We've been going a while. I said I had two examples, but I've actually changed my mind, so I'm going to very quickly say the second thing, which is just: you could imagine building your system differently. There are probably more than two ways to make your system; I'm just offering two ways to do it and critiques of them. We already did the first one, with the Amish people. The second one is, as you were describing earlier, that there's a point at which it flips over, and now it can never be traded off against again. The thing is, no amount of additional suffering changes this. You kind of reach the point, and now you're just... you're fucked or whatever.
AARON: No, no, that's not true. Torture plus paperclips is worse than just torture.
MAX: No. So what I'm saying is something like this: in the first one, I gave you rungs of badness, right? In this one, there are just two rungs, I guess. One is the world where the infinitely bad thing happens, and one is the world where it doesn't. You can compare within these two rungs. So if that's the case, one, you already think we're on the bad rung, where the infinitely bad thing has happened. And then it's also the case that trade-offs get messy or something, or maybe just that the infinitely bad thing doesn't matter anymore. What if you added a second infinitely bad thing, the same amount of pain that's already happened, again? Well, we're already at the point...
AARON: No, no. This brings you down another tier.
MAX: No, but what I'm saying is, you could construct the system such that those further tiers don't exist; there are just two tiers or whatever. And I guess I'm offering that as a thing.
AARON: I mean, you can stipulate that, but it's not correct.
MAX: Well, what I'm doing with that, and why I thought of it first but explained it second, is that pushing you on this would push you towards this other thing, which I then offered a bunch of critiques of. So what I'm saying is that you get narrowed in on accepting certain premises, one of which is tiers or whatever.
AARON: Yeah, I accept the tiers. I don't accept that if you add two...
MAX: I didn't necessarily think you wouldn't.
AARON: ...take two blocks of, quote unquote, infinite badness, they are not, in fact, at the same tier as one block of infinite badness. You can create a thought experiment where you just assert that's true, but then I'm going to say it's not a good thought experiment, because it's not true. It's not even true by the lights of anybody's beliefs, as far as I know.
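A hypothetical sketch of the contrast in this exchange, with purely illustrative keys and numbers: the two-rung system Max describes lumps together every world containing any lexically bad suffering, while the tiered version Aaron accepts keeps counting, so a second block of such suffering ranks strictly lower.

```python
# Two-rung version: all that matters is whether any lexically bad suffering
# has occurred; within each of the two classes, ordinary welfare decides.
def two_rung_key(torture_weeks, welfare):
    return (1 if torture_weeks > 0 else 0, -welfare)  # smaller key = better world

# Tiered version: every additional unit of lexically bad suffering
# moves the world down another rung.
def tiered_key(torture_weeks, welfare):
    return (torture_weeks, -welfare)  # smaller key = better world

one_week  = (1, 100)   # (torture weeks, ordinary welfare)
two_weeks = (2, 100)

# Under the two-rung system, the second week changes nothing:
print(two_rung_key(*one_week) == two_rung_key(*two_weeks))  # True
# Under the tiered system, the second week makes the world strictly worse:
print(tiered_key(*one_week) < tiered_key(*two_weeks))       # True
```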
MAX: I zoned out for three seconds, so I missed what you said. But I'll just say that I think you have to take certain assumptions; otherwise, you end up with this soup of infinitely bad things, which then isn't actually action-relevant.
AARON: I don't know. I still have to choose what to do in the next couple of seconds. At the end of the day, everything is action-relevant in some sense; one still must act in the world. So I don't know, you're not going to convince me that... Oh, one thing.
MAX: Well, my point is that you have to take certain assumptions to avoid an axiology that functions like this; otherwise, you get stuck.
AARON: In that case, I take those assumptions.
MAX: And you already accepted those assumptions.
AARON: Okay, cool.
MAX: It is important. It means you probably can't avoid certain things you've already committed to. You'd have to bite those bullets.
AARON: I still want you to say there's at least a one in a trillion chance that I'm right. I think it's like a 10% chance that you're right and I'm wrong. Or maybe even 20%, maybe 20/80, or... there are a bunch of places where it's not that simple.
MAX: I need to go consult my supercomputer in my basement. I need to do the math because I don't want to say it's one in a trillion.
AARON: What's the base?
SARAH: Wait, you've had the supercomputer this whole time?
MAX: Yeah. What? It only works for calculating morally uncertain things. It's really not very useful, and only if they're sufficiently unhelpful.
AARON: We can have a Manifold market where people can bet on it.
SARAH: This is the kind of boxed AGI we need to create.
MAX: Useless, like philosophy.
SARAH: After talking about this, we'll need to wrap up soon. I have to figure out...
AARON: Yeah, I'm getting tired. Honestly, I'm so impressed by people who do podcasts for... wait, has this actually been two hours? That's a long time we've been...
SARAH: We've been going for... This is long-form content.
AARON: I think it's insane that people do podcasts. I would just fizzle out of existence.
MAX: Yeah.
SARAH: I feel like Aaron is right. No, in the sense that...
AARON: Or maybe.
SARAH: I don't know if it's for the reasons he's said. I feel like... I think you're right, but for a more basic reason than you gave. It seems intuitively true that suffering can be worse than pleasure can be good. Like, the floor is lower than the ceiling is high, if that makes sense.
AARON: No, I think that's a much more eloquent way of saying exactly what I mean.
MAX: No, but it's not. That's not what you mean, right?
AARON: No, it's not what I said, but it's isomorphic to what I said, because it implies you have two worlds, and one is an arbitrary conceivable world. So... okay, actually, with a caveat: I'm not sure there is a floor and a ceiling. That's a way you could be wrong. But if there is a floor and a ceiling, and in fact there is once we step out of the range of conceivability and talk about limits in the real world, where there's a finite number of molecules and only so many ways things can be arranged, then there actually is a floor and a ceiling to the badness and goodness of experience. And that's an empirical point, whereas mine is about conceivability. But I think it's not exactly empirical, yet still factually correct, that a world containing the worst possible experience and the best possible experience is much worse than nothing existing. And that is, I think, pretty much it.
MAX: That's importantly different from your actual claim.
AARON: So what I know is, it's a separate thing that I believe is similar, but that's...
MAX: What Sarah was saying. That's the claim Sarah made. And that's not what you think.
AARON: Okay, in terms of vibes, not as a super technical thing. It's what you'd get if you had to round my argument to something a normal human could understand.
MAX: No, I think that's important because.
AARON: This is a fact about psychology, and about philosophy as a sociological thing. Just, like, approximately. Like the wavy equals sign: my argument wavy-equals what Sarah just said. It rounds to what Sarah just said.
MAX: Then I'm going to accuse you of something that is terrible among philosophers.
AARON: Oh, no. What's that?
MAX: I actually don't know if that's true, but: what you've said isn't very interesting unless your view meaningfully implies something different from just saying that the way you model pleasure and pain is not one-to-one, that they're not actually mirrored curves or something. That's approximately your view. And if that's...
AARON: You're actually right about this. I changed my mind. Yes, you're right that it's maybe one-way: the approximation, the kind of vibe, associates them, but they're not the same. It really depends on how you want to technically interpret floor, ceiling, and better and worse. You can make those terms mean technically different things, and then I think one particular configuration actually is at least isomorphic, for lack of a better word, to my claim. But not in general, not just given how these words are used in English.
MAX: Sure.
AARON: I'm ready to wrap up. Honestly, I liked how Sarah was wrapping things up.
MAX: Up, and then we went on a whole day.
SARAH: I just wanted to bring it back around. I'm pretty sure I agree with Aaron, but I only understood about 60% of the conversation.
AARON: You shouldn't worry about that. You should just update your prior, just do the Bayesian thing, a Bayesian update, whatever.
MAX: Importantly, I just think you have similar intuitions about the trade-offs, that pain is pretty bad or something, but you don't have Aaron's full view. I deny that you agree with Aaron. I win this one.
SARAH: Well, I agree with him that there exists some suffering which can't be traded off against by some arbitrarily high amount of pleasure. I agree with that part. I don't know if I agree with all the other stuff.
AARON: I do think that's my core argument, my core claim.
SARAH: But that was the main thing, right? So I feel like that seems...
AARON: I mean, look, I should.
MAX: As an ending thing: do you know of anything that you think might have met this bar, something that exists in the world? Do you think factory farming meets it? Or maybe something like World War Two? Or do you think it hasn't happened yet?
SARAH: That seems like a separate question. I don't know how to quantify this, or even where the threshold is. I just feel like I have a sense that it's true.
AARON: Yeah.
MAX: Yeah.
SARAH: I don't know.
AARON: Independently, I think these count, although it's hard to compare them against each other; not impossible, but empirically hard. Cluster headaches, factory farming, wild animal suffering.
SARAH: I don't think we...
AARON: The Holocaust. I think these all qualify.
SARAH: But then again, there's also a timing issue. If someone had preempted the Holocaust and decided to destroy the world to stop it, that still seems like it would have been bad, because we wouldn't have had the future that happened after. But maybe that just means the Holocaust doesn't meet this bar. I actually don't know, but it seems like there's a bar somewhere. Because I'm pretty sure it is the case, experientially, that suffering can be a lot worse than pleasure can be good. That just feels true. And if that's true, then I don't have any more...
AARON: Yeah. Then you can talk about how this is just a peculiarity of evolution and biology, and how things shake out in terms of reproductive fitness. But we should be nuanced. I kind of want to dip after this. But wait, okay, so you think I'm right. I think I'm 75% likely to be right. What do you guys think?
MAX: Importantly, and maybe you can disagree with what I think you think, Sarah: you agree with some broad grouping of views on which there is some amount of suffering that can never be offset. But that's importantly different from Aaron, who has a very specific take on what this threshold is.
AARON: Yeah.
SARAH: Yeah.
MAX: 70% confidence in this? Where are you?
AARON: Okay, somewhere around there. Let me say 80% confidence in my core philosophical thesis. That sets aside the question of which actual events would constitute something so bad that it counts as unjustifiable. Once you're talking about specific events, sure, yeah, that's a different question.
MAX: I will say, though, then in that.
SARAH: I basically agree with Aaron.
AARON: Yeah.
MAX: I will say, though, that we haven't really talked about that point at all. We mostly didn't talk about it.
AARON: Yeah.
MAX: So maybe I could convince you otherwise. Maybe you wouldn't be convinced in the end, but we haven't discussed those points.
AARON: Yeah, if you want to do a brief, we can.
MAX: No, we should just call it.
AARON: And also, one thing about this is that one intuitive move is to bring up a really bad event. But then it might be empirically or historically the case that the counterfactual was also really bad. So you could talk about, for example, the bombing of Nagasaki, but a land invasion of Japan would also have been really bad. It's not obvious which one is worse; I actually think the invasion would probably have been worse, but it's not obvious a priori, just based on what I know about those events. So there can just be two really bad options, and both of them could involve something so bad that it's not justifiable with happiness.
MAX: Bring it to a close.
AARON: Thank you.
MAX: I will say that if Aaron hadn't been free today, the other proposal I had was to talk much more about theories of welfare and stuff within them, which is what we were getting into at the end. So, however many months it's been between the last podcast and now, sometime that far in the future we could do a third episode, and then we've moved down the chain.
AARON: If you see what I'm saying.
MAX: We've done a series or something.
AARON: I feel like that one's going to be.
SARAH: Yeah, that sounds fun.
AARON: Oh, man, that's so boring, because the answer is obviously hedonism. That's the right answer. Come on. Okay, not obviously, but maybe you two can just do that one. I would be bored out of my mind.
SARAH: We talked about this a little bit in the first episode, didn't we?
MAX: Yeah.
AARON: Yeah.
MAX: Now that we've done a bunch of stuff, we can circle back.
AARON: I think in my episode with just Max, we talked about it.
MAX: I don't remember. That is.
SARAH: Yeah. Okay, well, let's wrap. As the totally unqualified adjudicator, I decide that Aaron's right. Also, you guys are now at one-one.
AARON: One-one.
MAX: That's.
SARAH: That's my best outcome.
AARON: Right?
SARAH: And the third one can be a tiebreaker.
MAX: I want to defend myself and say...
SARAH: ...see who has the best takes.
MAX: I haven't been sleeping as much lately.
AARON: I didn't sleep all last night, and so...
MAX: And so any flummox I've made in this episode can be attributed to that. I don't know.
SARAH: I don't think you.
MAX: But don't go back to the last episode I did and compare my performance or what I was like.
AARON: Sarah is by far the most eloquent person here, so I hope that's not too offensive for you, Max.
SARAH: That definitely isn't true. Also, if that is true, it's just because I'm saying things that are easier to say coherently because they're not as complicated.
AARON: Okay, well, maybe I'm.
SARAH: I'm just gonna say the dumb thing, and it's not that difficult to phrase.
MAX: Well, I said this last time, and you have really good follow-up questions.
AARON: Yeah.
MAX: Philosophy student or whatever. If we had this conversation in front of a bunch of average philosophy students, they probably would just...
AARON: Oh, totally. My intro to ethics class. Oh my God, the takes were so bad. Just way worse than Max's takes.
MAX: Yeah, exactly. I was in his class too.
SARAH: That's a lot. Cool. Yeah, well, we had to round off with me being mean to myself. Because what would it be if I didn't?
AARON: Great.
SARAH: Thanks, guys. I think we should leave it there.
MAX: Yeah, always.