#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse

*Or something like that

Intro

At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast “Consistently Candid” to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.

What I believe

In spite of the episode's name, and contrary to the best of my knowledge/understanding as of a few days ago, it turns out my stance may not be ~genuine~ moral realism.

Here’s my basic meta-ethical take:

  1. Descriptive statements that concern objective relative goodness or badness (e.g., “it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but

  2. Genuinely normative claims like “Sam should (!) donate $20 and should not buy that fancy meal” are never objectively true.

Of course the label per se doesn’t really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.

Some definitions

Stanford Encyclopedia of Philosophy:

Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).

Wikipedia:

The robust model of moral realism commits moral realists to three theses:

  • The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).

  • The alethic thesis: Some moral propositions are in fact true.

  • The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.

And to settle the matter, here’s what some LLMs have to say.

My prompt:

Suppose a person believes the following:

1: Descriptive statements that concern objective relative goodness or badness (e.g., “it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal”) can be and sometimes are true; but

2: Genuinely normative claims like “Sam should donate $20 and should not buy that fancy meal” are never objectively true.

Is this person best categorized as a moral realist?

ChatGPT

Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do). However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.

One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.

This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.

Great, so much for trying to fit things into nice little buckets.

Claude, fortunately, is more definitive:

No, the view described is not best categorized as moral realism. Here's why:

Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.

The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2).

By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.

So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.

In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.

I’m gonna disagree with Claude here, at least in that its claim above that objective normative facts are a key part of moral realism just seems…wrong? Like, all of the definitions could have used the terms “normative fact” and “normative claim” in place of “moral fact” and “moral claim,” but they do not!

So I really don’t have a great answer to which label works best.

One of the two ~human-level chatbots giving me a definitive-sounding “anti-realism” answer is too much evidence for me (someone whose only formal philosophy training is an undergrad minor) to rule that label out. There are also good arguments, I think, for the “realist” label, as well as for “neither” (i.e., ‘secret third thing’). In fact, all of these seem pretty similar in terms of argument convincingness/correctness.

So, in sum, 🤷‍♂️.

Some tweets on the matter, both of which are images that link to the original:

Ok now you can vote:


Anyway, long story short, I believe something kinda sorta like moral realism, maybe, and that is what we discuss. Without further ado, here’s the (very imperfect) transcript of the episode.

Please note that some of the long blocks of text have been cleaned up a bit via minor rewording. I am too lazy to do that for everything.

Transcript

Sarah: Hello, and welcome to my first experiment in low effort podcasting. In this episode of the podcast, which I'm now calling Consistently Candid because some people thought that was funny, I talked to Aaron Bergman and Max Alexander about moral realism.

Sarah: They kind of debate it. And I, having read the Wikipedia page about five minutes previously, occasionally chime in with some opinions that I hadn't thought out very well. So enjoy!

Sarah: Anyway, I guess this is my podcast now, but I don't have a name for it yet.

Max: That's a good podcast name.

Sarah: Introduce it.

Aaron: Can I broadcast this on Pigeon Hour as well?

Sarah: Yeah, sure.

Max: Okay, cool.

Aaron: Sweet.

Sarah: But I also want to make my own thing because people.

Aaron: No, totally. But yeah, you can say no, you can copyright it and then sue me.

Sarah: No. Well, that's fine. This is totally, like, anyone can broadcast it anywhere they want.

Max: You can text or whatever, get on the Trump website.

Sarah: Yeah. So you guys have a disagreement, apparently, about moral realism. I have briefly skimmed the Wikipedia page, and I don't have an opinion, but I thought we have it out.

Aaron: No, I feel like the format should be that we try to convince you…

Sarah: So, yeah, you try and convince me that you each try and convince me you're right, and I will come to a conclusion and let you know who I'm persuaded by. And if at any point I have, like, a thought that's worth articulating, I'll weigh in with that. But I think that's kind of unlikely because I don't really know anything; I'm playing a moderating role here.

Max: Well, confusion is worth pointing out or something like that, right?

Sarah: Yeah, I can do that at regular intervals. I can tell you how confused I am. That's definitely doable.

Aaron: Maybe you should start with, like, do you have an initial take at all, or are you really 50/50?

Sarah: I mean, from very briefly reading the Wikipedia, it, like, doesn't sound true to me.

Max: Oh, hell yeah!

Aaron: No. Okay, podcast over.

Max: Going over the Wikipedia just to see what it says. Did you read actual Wikipedia?

Sarah: Wikipedia? Yeah, it says “moral realism (also ethical realism) is the position that ethical sentences express propositions that refer to objective features of the world. That is, features independent of subjective opinion.”

Aaron: Yeah, facts.

Max: Good summary from Wikipedia.

Sarah: Fake.

Max: My job is going to be easy.

Sarah: Then, but I'm totally open to be persuaded.

Aaron: Okay. The first thing is that I recognize that it sounds fake, it sounds very sus, but then it actually surprisingly checks out. So I just want to get that on the table.

Sarah: Okay, what about if each of you do, like, a little opening spiel about why you think you're right, and then you can yell at each other about it afterwards.

Aaron: Yeah. Max, do you want to go first or second?

Max: I'll go second.

Sarah: Okay.

Aaron: Well, the first thing, the thing I always say, is that I simply defer to Sharon Hewitt Rawlette, who was on the 80,000 Hours Podcast. They had a whole episode about this, and she's an actual philosopher who made the actual case for this.

Aaron: And so everything I say is just basically, like…the actual case is that you just listen to her. Well, I guess one thing is that, okay, what's the base rate of people being correct given that they're on the 80,000 Hours podcast? Pretty high. Probably, like, 99%.

Max: I don't know if that's right.

Aaron: Not that.

Sarah: Has no one ever been on the 80,000 Hours podcast and argued the opposite?

Max: Sam Bankman-Fried was on the 80,000 Hours podcast.

Sarah: Oh, yeah, that's true. That was embarrassing.

Aaron: Well, that's why I said 99%, not 100%.

Max: Yeah, that was their one episode.

Aaron: Yeah, everything else - wait, I'm sorry. I was mostly joking about that, but no, I am serious: maybe I'll find a way to reference the URL in the show description or something, or just like Google “80,000 Hours podcast moral realism.”

Aaron: First of all, my actual point of view is a weak version of moral realism. I believe that truly normative statements, such as "a person should do X or Y," are not objectively true or false. However, I do think that sometimes, at least occasionally, statements that objectively order worlds or actions can be true or false. For example, saying "world A is objectively better than world B."

Aaron: The most intuitive argument, or perhaps intuition pump, that I can gesture to in favor of my point of view is this idea of comparing and objectively ordering worlds or actions. It's just like, okay, so you have one world and then another world, which is exactly the same, except it also creates a person who's feeling a lot of pain, and that's the only difference. And I want to say that this world is objectively worse, and the reason why it's objectively worse is partly because it's built into, sort of, semantically, the meaning of what we call pain or suffering, but also not merely that.

Aaron: But another perspective on this: it's sort of like a brute fact of the universe, in the same way that facts about physics are, that suffering and pain are bad. And so if you just add some of this bad stuff, or on the other side, add some objectively good stuff, you get an objectively better state of the world. And so I will leave it there for Max to tell me why I'm wrong.

Sarah: All right, well, okay, can I ask a question first? Max, before you go. How do I phrase this? Sorry, I'm just clarifying. So you're basically saying that you can't make truth claims about what people ought to do, but you can about which states of affairs are better or worse than others.

Sarah: But if you can definitely say this circumstance is better than this one, objectively speaking, then if you could find some way of empirically determining which actions brought about more pleasure or pain, even if, I mean, maybe we can never actually determine which actions would do that. But say, if you could, then would those things not be like, would you not be able to make a claim about what you should do?

Aaron: I think you can make the claim. In fact, I think I actually would make the claim. But then what I wouldn't be able to say, at least what I currently think, is that those would be objectively true or false in the same way. I'm less sure about this, for what it's worth.

Sarah: How can it be objectively the case that one situation could be better than the other, but it's not objectively true that you should do the thing that is most likely to bring about the better one?

Aaron: No, this is a good question. I actually just had this debate on Twitter, sort of.

Sarah: Okay.

Aaron: Although I think the person ended up agreeing with me. One thing is, I think some people just have the sense that those two statements are basically just rewordings of the same thing. And that's just not my sense. But maybe I'm the weird one, and everybody else has the sense that when they say, oh, X is better than Y, and the statement, oh, you should act so as to bring about X, that these are just exactly the same thing, just reworded. Is that your sense?

Sarah: I think they're exactly the same thing, but. Well, actually, no, I kind of do. I don't really understand how it can be true that X is better than Y, and at the same time, it's not true that you should try and make X happen.

Aaron: Yeah, if they're not semantically equivalent and don't mean the exact same thing, then there's the question of what else you need to get from one to the other. If you've established one, what else do you need?

Aaron: For instance, if you're unsure, my somewhat unconfident perspective is that statements that are normative, like "you should do X," are their own thing. We might just not live in a universe where those have meaning outside of the social realm or above it. We can use them as useful fictions in the social world, but they're not fundamental in the same way that physics is or something like that.

Max: You're saying moral claims are. Sorry, moral claims are like this.

Aaron: True normative claims. So you ought to do x. Yeah.

Max: Well, I mean, depending on what you mean by them being different from air quotes, physics or something like this, kind of sounds like you might be an anti realist, maybe.

Sarah: I was going to say it does kind of sound like you.

Aaron: No, I know in some ways I'm like, maybe I am, but then if so, I just want to defend my actual position, which is that, okay, fine, you can call me whatever you want. But then I still think that we have objective ordering of states of the world, and that's kind of what really matters.

Max: As Sarah introduced, moral realism is about the truth of evaluative propositions. This is the core of it. But then there's the actual core, which is something like "murder is wrong" evaluates to true sometimes. So, if I said murder is wrong and I'm talking about Sarah shooting me with a gun, that's true; it's bad for her to shoot me with a gun, all things considered.

Max: This is what people really focus on. It doesn't matter if your position is that objectively, conscious states imply an ordering of preferability or something to that effect. Stubbing your toe is preferable to being shot with a gun. Objectively, this is just what it is to experience suffering. And the same thing applies to positive experiences. It's better to have a birthday party than it is to eat one cookie. This is just what it is to experience things, and it applies to ordering states of affairs.

Max: I can accept this under some definitions. It's not objective per se, but it is true based on my preference function. Objectively, I prefer these things this way. And maybe it's true more broadly than this. But if you don't have that bit about being shot by a gun being wrong for Sarah to do, then you're not practically a realist, in my opinion.

Aaron: There's this sort of meta debate, which may be important. I kind of think it actually ends up mattering in a couple different ways. There's the meta debate, and there's the meta-meta debate of, like, okay, is this actually an important question or not? Do you think it's not important to establish? Do you think it's sort of irrelevant unless you have true normativity?

Aaron: Sorry, honestly, I got distracted and kind of lost my train of thought in the last few seconds, so I'll let you take it wherever.

Max: Yeah, I mean, I guess, Sarah, having heard these very basic things for it, I'd say Aaron hasn't done the best job defending moral realism. Or, like, defending standard moral realism or something like that. I should say defending the best version of it.

Sarah: He hasn't defended.

Aaron: I'm not trying to make the best argument, the most convincing argument, for moral realism. I'm just trying to claim or defend what I think is true.

Sarah: Yeah, no, I guess I still don't really understand how believing that there are states which are objectively more preferable than others is compatible with believing that there aren't actions that are objectively right and objectively wrong. I just don't really understand how those two things fit together.

Aaron: Okay, I feel like maybe we should just table that and set aside the question of whether normativity is legit or not. We can say, okay, maybe objective ordering implies normativity. Maybe it doesn't. But then we can settle this debate about objective ordering, which I think is our most core disagreement: Max does not think objective ordering is a thing.

Max: I actually. Sorry. You go, Sarah.

Sarah: No, that's a good clarification. So carry on.

Max: I'm not sure that's our core disagreement, but I am happy to debate that. Just in that, I would say that if I bought Aaron's thing about objective ordering, this does not make me a moral realist or something, or at least not in a really strong sense of the term. But also if I can convince Aaron that the objective ordering thing isn't the case, I guess this also works.

Aaron: Actually, I want to jump in and say what I feel like the actual core is. The reason why moral realism, or the question about it, is sort of important, is that people want to be able to say, "Oh, these are just like my values or whatever. I can just assert them as the fundamental truth of the matter." And that's sort of like the core. I don't want to say that's the core, but that is certainly a core, an important part of the debate, which is like, I want to say no.

Aaron: If we're discussing, once we've decided that we want to try to do what's right or whatever, then it's like an investigative, not empirical, but like quasi-empirical, sort of similar-ish to an empirical question. We have to uncover the truth, not just, like, assert it or whatever. And that's like a core reason why realism is important.

Aaron: But if you have someone who's arguing like, "No, I just don't care what I ought to do or whether I'm just going to reject any claims about whether I ought to act morally or not," I feel like it doesn't actually matter. Maybe in some sense it does, but in a practical sense, it doesn't matter whether they're objectively acting wrong or not. There's no good way to respond regardless of whether realism is true. You know what I mean?

Max: Well, I suppose that's a slightly different thing. But in defending that or something, you kind of referenced ought statements, right?

Aaron: Yeah.

Max: Which I think is a bit to my point that, like, the relevant thing here is normativity; that's the thing people disagree about. Assuming somebody says, I will do what I ought to morally or something like that, right, it's not the case that an ordering of preferable states matters, unless that then implies that they need to take certain actions. Okay, yeah, there's this ordering of pain versus suffering, like different states of that. But why shouldn't I just blow up a city or something like that? You need to be able to tell me that, or the reason.

Aaron: Do I?

Max: If you don't want me to blow up the city.

Aaron: No, but you haven't. Right. And so neither has anybody else.

Max: This debate has an action relevant thing.

Aaron: No. Okay. So I disagree there, because I think, evidently, it is like a lucky, I mean, not exactly lucky, but I'll just say lucky, fact of the matter that a lot of people are, in fact, inclined to try to act morally. The relevant hard question is figuring out what that means, not trying to convince people whether they ought to or not.

Max: Sure. But what I'm saying is that you're saying, here's this objective ordering thing or whatever. Basically, kind of what you're saying is... actually, what you're not saying is, here's this thing that leads to actions you should take. But what you're kind of trying to say is, this is the thing for you. This is the thing that leads to what actions should be taken. Like this sort of objective ordering of pain.

Aaron: What I'm saying is, let's just take it as given. Or if you happen to want to act morally, whatever that means, then I can lay out at least, like, a framework, or not like a framework, but criteria.

Max: There's a question or something which is like, why shouldn't I be a. I guess it seems like you're a hedonist to some degree like that. Right? Why shouldn't I be a preference-ist? Like a preference utilitarian.

Aaron: Oh, yeah. So this is another different.

Max: But I can objectively.

Sarah: What's a preference utilitarian?

Aaron: Please somebody who's wrong.

Sarah: Okay.

Aaron: Go ahead.

Max: Aaron thinks something like: for a world state, so, like, just a way the world could be, to be more preferable, or to be better, I guess I should say, is to have more happiness in it, which is roughly like the sort of doing-drugs happiness. I don't know if that's unfair to Aaron, but that is a type of it.

Aaron: No, that totally, yeah.

Max: Okay. Doing more drugs would make it more preferable, and being stabbed would make it less preferable, that kind of sharp pain. And a preference theory of welfare, you don't have to be utilitarian, just says what it is for welfare to be welfare: for a preference theory, when your preferences are satisfied, that is better. So if I have a preference for being covered in chocolate, jumping into a bathtub full of chocolate is like a really good state of affairs. Whereas for Aaron it would be that doing a bunch of weed or something is the better thing.

Aaron: I mean, that's like the worst. That's like a very inefficient pharmacological route.

Max: Yeah, but it is legal where I am.

Aaron: Okay.

Sarah: So, Aaron, you think that people.

Aaron: Sorry, go ahead. Okay.

Sarah: Surely it's just the case that different people can achieve different states of welfare by doing different things. Not everyone wants to be high all the time.

Aaron: No, totally.

Sarah: What's your point?

Aaron: My thing is that preferences don't intrinsically matter. They certainly matter in the real world as how we actually do things or whatever, but there's no intrinsic value. Or insofar as preferences matter, they only matter because, at the end of the day, they increase happiness and decrease suffering.

Max: I mean, Aaron might not be committed to this in various ways, but a very naive hedonic theory of welfare, the kind that's in the drug style of it, like, dopamine is pleasure or something like that, might say that you shouldn't go around force-feeding people drugs, but if it was the case that everybody magically became on drugs, this would actually be quite good, assuming they don't die as a result. If you could just bring that state of affairs about by pushing a button, you should do that. Even though it's the case that many people have a preference not for that.

Sarah: This is like the AGI comes and plugs us all into weird endorphin machines, forever.

Aaron: Adding in, if you could just stipulate that also, maybe this inhibits a population growth, that would be good. And maybe this is also, like an x risk in some ways, because, okay, then we can't defend against asteroids or whatever, but if you just set those aside, then, yeah.

Max: And then, like, a preference theory would say, since people don't have a preference for this, this actually means it's not as good.

Sarah: Okay, I get that. So if you're not a preference utilitarian, then you think that people might not actually understand what states they would experience the most happiness in, and therefore their preferences might not be the best metric for, like, people might think, oh, I would be happier if I did this thing. But they don't know what would actually make them happier. So it isn't necessarily right to try and bring about their preference.

Aaron: I'm getting really tempted to decide to walk, maybe. Should I walk back?

Sarah: I don't know. What's your preference?

Max: I think, as, like, a vague quibble, probably a preference theorist would say, it's not so much about happiness, it's about welfare, because those are different. Whereas high welfare is like, high preference satisfaction for a preference theory. Whereas happiness might not be like, you might have a preference for making lots of paintings, but it's not the case that this makes you feel the emotion happy, or something like that.

Aaron: I think that scenario Max presented was like, I bite the bullet on that and say, yeah, if everybody could be magically happy, even if they didn't want to be, that would be good. But the other side of the coin. The other side of the coin is like, okay, you can also imagine just, like, a being that really wants to suffer and be tortured, in fact. And there's nothing incoherent about that.

Aaron: And so should we, in fact, cause this immense pain if we invent this creature? And I think, no, we shouldn't. And I think that's a pretty compelling objection to preference theories of welfare.

Max: Wait, sorry, what's your objection again?

Aaron: Wait, sorry. What was that?

Max: What's the compelling objection?

Aaron: If there's, like, a pig that wants to be tortured, should it be tortured? Should we torture the pig?

Max: I see what you're saying. Yeah. And a preference theory might say you should.

Aaron: Yeah. And I think, pretty obviously, no. I think it becomes more clear if you imagine that, okay, we have genetic control over the pig. We can choose whether to create the pig. Like, I want to create the pig, and then I do. It seems bad.

Max: Yeah. I mean, for what it's worth, if I was a moral realist or whatever, I would not be a preference theorist, probably, but I might be a preference theorist because I'm not a realist.

Aaron: This is interesting. Also, in a minute, I'm going to have to maybe not actually go, but stop being a good conversationalist or a bad conversationalist. I'm going to become an even worse conversationalist.

Max: Nice.

Aaron: For, like, I don't know, ten minutes at least.

Max: Yeah.

Aaron: So you guys can talk about. You can gossip about me.

Max: Just in the middle of the podcast is gossip.

Aaron: I mean, we can rearrange it.

Max: Right. That's.

Sarah: I can use those ten minutes to speed run developing some actual opinions. I didn't have any coming in. I still kind of don't.

Max: If you want, I can give you a vague overview of the other prominent welfare theories.

Sarah: Okay. That would be useful. And then I'll sort of pick one with it.

Max: The third one is often something like objective list theory, which probably the easiest way to explain it is something like Aristotle's concept of eudaimonia, I think, is…

Sarah: How you say it.

Max: So, something that is good for humans to live can be thought of as living a very fulfilling life, which is the height of welfare. Some other intuitions about this are related to the ideas of higher and lower level pleasures, as proposed by Bentham or another utilitarian philosopher. The concept suggests that it's better to be a dissatisfied human than a very happy pig.

Max: The reason for this is that although rolling around in the mud is pleasurable for a pig, there's something about being human and experiencing a wide range of emotions and experiences that might be more intense than what a pig can feel. Of course, someone might argue that pigs have their own unique experiences, but humans can do things like math and create TV shows, which are fulfilling on a higher order level. Engaging in these higher order pleasures contributes to a good human life.

Max: Living a fulfilling life involves engaging in objectively welfare-enhancing activities, such as loving well and embodying virtues like compassion and honor. Instead of welfare being limited by how many drugs one can consume or how many preferences one can fulfill, it's about living an "objectively good" human life. Many people have intuitions about this concept, and it resonates with their understanding of what it means to lead a meaningful existence.

Sarah: So the thing that makes us able to live more fulfilling lives than pigs, does that hinge on intelligence?

Max: Kind of depends who you ask. The thing about the pigs is not from Aristotle. As I was taught Aristotle, he'd say something like, there is this thing that it is to live a very good life as a pig. And this is different from what it's…

Sarah: Like to live a very good life as a human.

Max: I don't know where he goes on, like, is it more preferable to be a pig or a human? But most people think it's more preferable to be a human than a pig. And by most people, I mean most humans, not most pigs.

Sarah: Yeah. That seems like a ridiculously difficult thing to ascertain. Fair enough. Okay.

Max: And I guess the reason you might doubt this a bit is that it can be a bit elitist. Well, this isn't necessarily a reason to doubt it, but consider this: you could make an objective list theory that says it's really important to listen to opera. To be a good human and live a good human life is to kind of go to the opera and really get into it.

Max: It's like, I don't know, why would that be the case? That's kind of weird, right? You might be like, oh, to live a really good human life is to have a monogamous, straight relationship with four children or something. It's like, I don't know, why would that be the case?

Max: I mean, maybe I'm kind of just making up obviously wrong objective lists, but why is it the case that what it is to be good is to have these very specific sorts of things? Like, how do you actually go about making a list? Because a hedonist or a preference theorist can say, oh, it's by having lots of utility, like joy, or it's by having lots of fulfilling preferences. And I'm not quite sure how an objective list theorist goes about justifying their list outside of appeals to intuition or something like that.

Sarah: Okay, got you. Okay, so we've got pump yourself full of drugs, follow your preferences, and make a random, arbitrary list of things that you think a good person or a fulfilled person might do, and then do those and also project that onto everyone else in a sort of, like, controlling, elitist type way.

Max: I like how you've described all of them unfairly. That's not really the right word. Caricature. You've, like, caricatured them all. That's how I describe them in private.

Sarah: Cool. Now I've got to pick one. Which one of these resonates with me? I mean, I guess the preference one seems like the way that most people actually go about life. Maybe it's that one.

Max: To be honest, I think it's probably all of them or something like what people think. That's why they all have obvious issues if you take them to the extreme. At the baseline, they might seem appealing. For example, what does it mean to live a good life? I don't know. Fulfilling my preferences seems pretty good. But wait, I have some bad preferences sometimes.

Max: Maybe it's to be really happy all the time. However, I don't want to be on drugs constantly. Well, perhaps it's to live a fulfilling human life. But now I don't know how to define this, and it's kind of just about defining it by my preferences. And now I'm back there, and you can go in a circle like this, around and around.

Sarah: Yeah. Difficult being a person, isn't it?

Max: Yeah.

Sarah: What a.

Max: You know, Aaron would say, tough, but what's the alternative?

Sarah: Being a pig.

Max: Well, being a person.

Sarah: Being a person is preferable to being a pig.

Max: No, I meant to like not being.

Sarah: To not being a person.

Max: To just being. Not existing.

Sarah: Dead.

Max: Yeah.

Sarah: Right. Yeah.

Max: I mean, I think it's preferable to be me than it is to be dead, but not objectively, I guess.

Sarah: Yeah, this reminds me of a conversation I had with someone on Twitter when I made a poll about whether people thought that the world was net negative or not. A surprising percentage of people did, by the way, which is depressing. Someone made a point about people having a survival instinct, and if people want to survive, then surviving is good.

Sarah: Someone else responded by saying that people have preferences that don't accurately reflect the best state of the world. So, people have a survival instinct, but actually, they ought not to have a survival instinct because they think it's better to exist. But actually, they're wrong about that. That's kind of what I thought they were saying. I feel like if you've backed yourself into that corner, something deep in my gut tells me that you've gone wrong somewhere.

Max: What I would say is something like what they're running into is the fact that moral realism might be fake. Right. Well, I guess this isn't strictly the case. Maybe the issue you're pointing out is what I'm describing here, though.

Max: If moral realism is correct, then there's a fact of the matter about some of this stuff. Like there's a right theory of welfare, if there's realism about welfare anyway. But if there's not a right theory of welfare, then you might be like, "Oh, the world is kind of worth it because people have this survival instinct," and someone's like, "Well, no, because of this other thing," and you're like, "Oh, that's kind of convincing."

Max: But then someone can just say, "Oh, but this other convincing thing," and you just go around in a circle forever because there's nothing to really ground yourself on since there's no actual fact of the matter.

Sarah: Well, that sounds like all of philosophy.

Max: I mean, that's what I'd say.

Sarah: I'm starting to get that unmoored feeling right now. It's kind of disorientating, though.

Max: It is the case that you can get better at articulating and maybe even thinking what you think or something like that. Maybe you just have to be kind of careful about it or something. Maybe if you thought about a lot about what welfare is or something, you kind of get a sense about the life you want to lead or something like that. And then you just have to be a bit careful about people coming in and gaslighting isn't the right word, but maybe gaslighting you about what you want to do with your life or something like that.

Sarah: Open to being gaslit as long as someone's telling me what to do.

Max: I wouldn't have said that online if I were you.

Sarah: Yeah, probably not. Please, I didn't mean that. Nobody gaslight me. I don't want to be gaslit. Do you want to give me any more philosophy crash courses?

Max: I mean, I can. I do think it's interesting how people think about realism, and the number of people who are "right" is higher than I'd expect. I say "right" because I'm an antirealist. So, you might think that people would have a default towards realism.

Max: However, this gets technical. I think religion is a subjectivist realist theory. By subjectivist, it means that God is the one telling you what to do, and it's realist because he's real. Some philosophers may argue that religion is also antirealist, but that's debatable. Many people are religious and believe it's objectively bad to kill your mom, for example.

Max: It's interesting that I encounter people my age who think antirealism seems right, or that realism seems false after some thought. This is especially strange considering many philosophers are realists, so it might be a bit weird.

Sarah: So realism is like a well-subscribed position amongst professional philosophers.

Aaron: Let me google that again back.

Max: Hello. There is some data on this. Is this the right thing? Okay, so meta-ethics, moral realism or moral antirealism? 62% accept or lean towards moral realism: 24% lean towards it, 37% accept it. And then 11.5% accept antirealism and 14.5% lean towards antirealism, and then, like, 12% do some other stuff, whatever that means. So, like, agnostic.

Sarah: That's pretty surprising.

Max: Yeah. I mean, one reason would be, like, why are you doing moral philosophy as your job if you are an antirealist? Right. Is like a relatively compelling question, especially when you could be paid more to do something else with your life. So there might be, like a selection effect to some degree.

Sarah: Yeah. I feel like I wish I had done philosophy at uni instead of doing English. That probably would have been more fun, and then I would know more things and I could hold my own in a conversation. Well, okay.

Max: So part of it is like, this is my hot philosophy take, which I'm going to say comes a bit from antirealism. So if you're an antirealist, right, you think there's kind of no fact of the matter about whatever you're an antirealist about. So you could be, like, an antirealist about art or something. So there's no objectively good art. Right.

Sarah: Yeah.

Max: So, whatever domain you're an anti-realist about, there's no fact of the matter. It comes down to who can squabble best or something like that. This really advantages people who have advantages, like me. I can squabble best, I think, because I've thought about this stuff a lot, know the terms, and can be confusing if I need to. Maybe I can talk somewhat well or something, so I can outmaneuver others.

Max: If you've ever seen Ben Shapiro debate, like in "Ben Shapiro owns liberals," what he's really doing isn't winning an argument. He's stumping them and making it difficult for them to respond. If you can do this in a conversation, you can just kind of win. A lot of philosophy, or conversations about philosophy, is actually about that.

Sarah: Yeah. Although the Ben Shapiro thing just depends what your algorithm is giving you, because if you're on the other side of TikTok, then you get the liberals owning Ben Shapiro. It's always just like the same interviews with different clips cut out of them or whatever.

Max: I'm glad to know there are clips of that happening.

Sarah: I've seen at least a few. I don't know. There was one where he went to the Cambridge Union or something, and some girl yelled at him about how he's wrong about being pro-life or something, and then everyone cheered. I don't know. To be honest, I obviously agreed with her. I don't think her arguments were that good. I think she was mostly just shouting at him. But given that she was right, it was still kind of satisfying to watch anyway. Yeah. I don't know. Sometimes I'm good at arguing with people, other times not. It just depends whether I'm in the zone or not. I'm kind of not right now, really.

Max: What I'm saying is you should be more confident or something, and not because you're really good or something like that. I mean, that's probably true as well, but more because if there's no fact of the matter, it's better to stand your ground, usually, and rather than kind of get somebody to override what you think by just talking louder at you or something. If you kind of find it to be the case where it's like, oh, I disagree with this, but I don't know how to say it. You should just be like. You should kind of make a mental note to be like, I'm going to stick to my position, or something like that.

Sarah: So you just always double down on everything, is what you're saying? Yeah, because nobody's right anyway.

Max: Well, so this isn't true about some empirical things. Kind of like, if you were like, I think it's the case empirically that dogs weigh twelve pounds on average. I don't know, maybe you don't have to double down on that, if somebody's like, actually, the National Dog Institute studied this, and then hands you the study. But otherwise you can double down, I give you permission.

Sarah: Okay, excellent. I will be doing that.

Max: It makes convincing people harder, actually.

Sarah: As long as you're fine with me referring people back to you. When I'm a stubborn, belligerent pain in the ass, I can be like, well, Max told me that I was allowed to double down on every stupid take that I just came up with five minutes ago. So you can take it up with.

Max: Him. For business inquiries, please see my Twitter.

Aaron: I'm sort of back. I'm only sort of back.

Max: Hello?

Sarah: Okay.

Aaron: Honestly, I should be banned from any podcast forever.

Sarah: No, this is really funny.

Max: They should make you, like, the 80k host, but you never go to the studio. You just do your errands.

Aaron: No, I applied to be the CEO.

Max: Did they reject you already?

Aaron: Not yet. I don't think they should choose me, but I wasn't joking. You never know.

Max: Nice.

Sarah: It's a shame that this wasn't live when you submitted your application because it would have been such excellent evidence.

Aaron: Well, I did interview Rob Wiblin and their producer Keiran Harris, like, a year ago.

Sarah: You did?

Aaron: Yeah. It's like the coolest thing I've ever done. And so I'm always, like, smug when I tell people.

Sarah: That's so then follow you on Twitter.

Max: Brag about it more, or put it on, like, a resume.

Aaron: It's a line on my resume.

Sarah: That should be more prominent in your online presence. That's so cool.

Aaron: Maybe I should put it on Pigeon Hour. Yeah, I'll send you guys. No, it's weird because it's not that big of a deal. But the other shit, it's cool. I'm not going to deny that.

Sarah: That is a massive deal. What are you talking about? How did you swing that?

Max: You don't get to know.

Sarah: Well, I don't know. I don't know who won this argument. I don't know if you guys finished having your argument, but I would say we really did.

Max: But I think you started in favor of what I thought, and then Aaron didn't convince you. So that's kind of…

Sarah: I don't think he has convinced me. I'm confused about how the objective ordering thing can exist independently of there being things that it is right or wrong to do. I still don't get that.

Max: I actually can. Well, I don't know if I can give Aaron's answer for him.

Aaron: Hello? Yeah, I don't know. I'll just be a fly on the wall, I guess. Keep going.

Max: Yeah, so think about it like this: you can rank every person based on height. That's something you can actually do. I can give you the ordering of heights of all people alive right now. However, there are two things that just giving you this ordered set of elements doesn't provide.

Max: One, it doesn't tell you what tallness is. It tells you who is taller than other people, like five foot nine is taller than five foot eight. You get that information, but you don't know who's tall because is tall 6ft? Is it 4ft? You don't know. That's an additional thing.

Max: It also doesn't tell you how tall you should want to be. It just tells you how tall you are. So, it's entirely possible to have this ordering of states of the world based on suffering or something like that, right? And it just doesn't have the next thing, which is what states of affairs are good. You can rank them, but you just don't get that information from that.

Sarah: If you were, like, having to choose between two actions, and each of them would bring about a different state of the world, and it was objectively true that one was better than the other and you knew that, then the tallness analogy doesn't map onto that, because you wouldn't have to determine the cutoff point of where things stop being bad and when they start being good. If you had two states of affairs and you knew one was better than the other, then surely you would be able to say, oh, it is right for me to do the one that brings about state B as opposed to…

Aaron: State A. I think that makes a lot of sense. But if somebody was just committed to not arguing with you, or just saying, you know what, I really don't care, then I don't know. That's where you get into the question of whether you have any response, or whether there is a legitimate response to… well, maybe Max disagrees.

Max: So I guess, first, to Sarah's point, you might say this is a bit of like, so really what Aaron might be claiming. I guess it depends on what he thinks he's claiming. He could be claiming, you can objectively rank states based on a betterness relationship, and that betterness relationship is, like, the objective one you ought to use. Or what he's saying is you can rank states based on a betterness relationship of suffering. You could also rank them based on a betterness relationship of preference function as well. Right. And so there's kind of this question about which is the right ordering.

Aaron: I think there's, like, one true betterness function.

Max: Okay, but why is it the one true one? I guess, say more about that.

Aaron: Oh, it's just, like, built into the meaning of… like, wait, is there one true one? So that's actually something I don't have. I think it's possible there's, like, some indeterminate ones, but there's, like, approximately one or something like that. Okay, so why is that? Yeah, it's just, like, the fact of the matter that suffering is bad, happiness is good, and nothing else is either intrinsically good or intrinsically bad. That is the short answer.

Max: Um, yeah. I think this makes Sarah's question really important, though. Like, why this one?

Aaron: I feel like this is sort of a bad thing for me to be arguing about, because, hey, I think it's great if anybody wants to take this as a normative thing. I think it makes a lot of sense. It's like, okay, two thumbs up. You know what I mean? At some level, maybe, I don't think I can defend it at some intense metaphysical, fundamental level. Kind of, who cares?

Max: Well, you should care, probably, because it's probably important for your worldview or something. You'd be like, okay, darn.

Sarah: Okay. I feel like I'm still confused, but that's what I expected.

Max: Just know that your starting intuitions were right.

Sarah: I think they were right.

Aaron: No, I think your starting intuitions are, like, sensible and wrong.

Sarah: Sensible and wrong.

Max: I remembered your thing, Aaron. The question was something like, if someone's like, oh, I don't care about what's objectively moral or good, what do you say to them? And so I guess what a philosopher would say is, you can make mistakes, right? Like, I could say, if you want to earn $100, invest in bitcoin or whatever. That's a horrible example. Sorry. If you want to earn $100, invest in US Treasury bonds.

Max: And I've done all the math or whatever, and this is just objectively the case. This is like the way you could earn $100 and you could just be like, I don't give a shit, I'm going to go invest in bitcoin instead, and then lose all your money. It's like, sure, whatever, but you've made a mistake, right? It's the case you did the wrong action.

Aaron: If your goal was to bring whatever.

Max: And you only have these two options available to you, this is the set of things. And I kind of, like, supposed that also investing in treasury bonds actually does net you $100.

Sarah: But then isn't that, like, in that situation, the person has the goal of making more money, and in Aaron's scenario, they don't care about whether they're bringing about the best state of affairs.

Max: Yeah, that's fair. I think kind of what you say there is like, what would we say about a murderer or something, I guess. Especially if you're like a realist. They don't care about the fact they shouldn't murder. They're murdering, but they're still making a mistake or something.

Sarah: You might say it's different if, like, someone murdered someone under the impression that by doing that they were, I don't know, say, saving five other people. That would be different to the person that murdered someone just because they didn't care whether or not that person got to live.

Max: The way to say this would be that a realist theory is kind of necessarily paternalistic or something. It's saying you need to be a certain way or something like that. And that makes it sound bad or something like that. But there are times when parents are right. Like parents are paternalistic in saying you should eat your vegetables. And in terms of health outcomes, this is true or something.

Max: And so moral theories are paternalistic about what actions you should take to better the world. And you don't have to want to better the world, I guess, but you ought to want to better the world or something. Just like we often kind of think.

Aaron: Like, well, I mean, then you get into just recursion. I do feel like that's why realism.

Max: Doesn't work or whatever.

Aaron: I do think we're going off or not going off. But this particular question about whether the relation.

Max: Sorry. One final thing is something like, I think what philosophers will often say is something like what it means to be rational or to be a rational agent, or to be a certain, whatever synonym or qualifying word, type of person, like a moral agent. Or whatever.

Max: To be, like, a person is to do this sort of stuff. Like, a sort of Kantian view of ethics or whatever says that by being a properly rational agent, you kind of have to care about this sort of stuff. I mean, yeah, but that's the answer, right? You could say, if you're Aristotle, it'd be like, to be a proper human or whatever, you kind of have to care about these sorts of things. But, yeah, I mean, you're right. You do just kind of get into a recursion thing, or you can.

Aaron: I know this isn't a direct response. I just keep thinking that this sub-question about whether objective ordering implies, like, normativity is interesting. I just don't think it's the main thing; I feel like it's a little bit of a sideshow. And also one where I feel like I'm trying to argue no, but also I'm sort of, like, thumbs up; I'm trying to make my position maximally defensible or something. Whereas if the answer was yes, that would sort of be convenient or something; that would make me a full-throated moral realist. You know what I mean?

Sarah: But I feel like the answer just is yes. I don't get how it isn't. It just seems, like, incoherent for the answer to not be yes. So maybe you should just take the strong version, because it kind of seems like, well, that's the logical conclusion.

Aaron: I just want to ask where you think - okay, yeah - where you think... I think, remembering back, you said you don't equate those two statements: "X would be better" and "you ought to do X." So, it seems like there's a gap between the former and the latter. The question is, how do you bridge that gap? Where do you get the extra bit of information from?

Aaron: Honestly, I look around and I don't see it. That's sort of my view. I'm just curious. They vibe-associate; they don't merely vibe-associate, but associate very strongly or something. However, there's still a little gap or something. I'm not doing a good job of explicating this.

Sarah: Yeah. So how do you get from “X would be better than Y” to “you should bring about X”? Maybe you're right and I am, in fact, wrong.

Aaron: That was directed at Max.

Max: Nice.

Sarah: No, but Max, can you say something smart, please?

Max: Yeah, sometimes I can.

Sarah: Can you do it now, to respond to that? I don't have a response. I'm getting…

Max: I mean, it's kind of the case that some orderings imply normativity and some don't, I guess.

Aaron: Like what?

Max: Utility functions, in the preference theory sense of that, imply normativity.

Aaron: That's like what we're talking about.

Max: Yeah, but not all orderings imply normativity. You can order a set of numbers, and that doesn't have any normativity. And you could order colors in the world based on wavelength or something, and that doesn't tell you which colors you ought to bring about or something. Right. But an ordering of preferences based on your preference function or, sorry, states of the world based on your preference function does tell you what you should bring about or something.

Aaron: No, maybe. But then there's the question of how do you get the really substantive should bit? You're an anti realist. I feel like the natural thing in an anti realist world, I think, is to say, it's like, where does this normativity come from? It's like, is there a God? Like, maybe. But setting that aside, what I would.

Max: Say is there's normativity insofar as you want there to be some, like what ought you to do or want to do, like whatever you want to ought to do. Was that sufficiently smart, Sarah?

Sarah: That sounded very sus, because, hang on, I'm confused again. Maybe my question was more like, I don't know. I don't have the technical language to express this, but it seems like, Aaron, what you're saying is there's, like, this unbridgeable gap between making a claim about states that are better than others and making a normative claim about what people ought to do. And they're, like, two fundamentally different categories, and it's impossible to kind of bridge the gap between them.

Sarah: But then I don't understand how if you believe that, then that seems like that would make you definitely not a moral realist, because you're basically saying that you can't use the. What's the opposite of normative? There's a word for that, right?

Max: Somebody give me the word.

Aaron: Sometimes it's positive.

Sarah: But yeah, you can't use the descriptive claims to make the normative claims. So if you think that, then doesn't that just make you not a realist?

Aaron: Well, I'm claiming that there are descriptive moral claims, and, for example, X is morally better than Y.

Sarah: It's like a descriptive claim. “X is better than Y” doesn't really seem like a moral claim. But “you should do X” seems like…

Aaron: A moral claim, right?

Sarah: If you're saying one thing is better than the other, but that doesn't imply anything about what you should do, to me, that's not a moral claim.

Aaron: This is like a semantic question. Yeah, I guess I honestly don't really know how most people use the words. If I want to say, all things considered, you have these two worlds, and one is better than the other, one is morally better, I think it would be great if people brought about the better one, but I don't think, in some ultimate fundamental sense, there's this normativity thing.

Aaron: I think it's a very useful social thing to invoke. I think it would be great or whatever. Personally, I prefer that. But it's not like some fundamental truth of the universe. Still, substantively, it seems like I'm talking about something that's at least related to morality.

Sarah: I feel like I'm getting more and more confused the further into this conversation.

Aaron: Welcome to philosophy.

Max: Yeah.

Sarah: This is horrible. Why do you guys do this?

Aaron: It's part of getting a PhD?

Max: Not actually. That's not literal; I mean, I'm not currently enrolled in a PhD program. But if you're on a hiring committee, feel free to hit me up.

Aaron: Yes. If Will MacAskill listens to this, and I feel like there's at least a 0.01% chance he will, then that's plausible, I guess.

Sarah: Come on. It's at least 0.2.

Aaron: Yeah. If we have a campaign to get them to listen.

Max: On average.

Aaron: No, there's definitely a couple of philosophy people who will be listening.

Sarah: Wait, Will, if you're listening, I have a copy of What We Owe the Future under my TV, right on top of my High School Musical 2 DVD.

Max: Is that you ranking the content of them by quality?

Sarah: No, I would have put the DVD on top, obviously, if I was objectively ordering them.

Max: Okay, yeah, inversely ordered.

Aaron: I want to debate Will MacAskill. If you can get him on the podcast, that would be so sweet.

Sarah: Maybe I should finish his book first. I only read…

Aaron: I actually have critiques of What We Owe the Future. And I feel like, for non-EA people that critique Will MacAskill, I am, like, the ultimate Will MacAskill defender. But I also think he's wrong about stuff in What We Owe the Future. So if you can get him on the pod, that would be so sweet.

Max: Did you ever put that on the blog?

Aaron: No, I only have 16 million drafts that will never see the light of day.

Max: Well, if you put one of them up, there's a chance.

Aaron: Okay. It's in my list of, like, miscellaneous stuff. I have a Twitter thread that's, like, EA takes, and it's, like, in there.

Sarah: That's it?

Aaron: Yeah.

Max: At some point, maybe you could just DM him the Twitter thread and say, "Want to go on a podcast to debate this?" Or the next time he's on 80K, send it to Rob Wiblin and just have him read out the takes.

Aaron: Yeah, no, I definitely will.

Max: Okay, good. I'll hold you to that. It's in writing now, but audibly written.

Sarah: Cool. Well, I feel like we should wrap up soon. Do I have to decide who I agree with? Actually, I thought the whole point was for me to decide.

Aaron: Oh, wait. Yes, you do. Sorry.

Sarah: I thought we were going to solve this whole question.

Aaron: Yes.

Sarah: Just get on this.

Aaron: I was wrong. You're objectively right, obviously.

Sarah: I'm always right. Never been wrong. Not even once.

Max: Okay.

Sarah: Who do I think is right?

Max: I don't know.

Sarah: I'm really confused. I don't know who's right.

Aaron: Do you think Max has made, like, a more compelling case? Although I do feel like I've been mildly handicapped by my traveling circumstances.

Max: Yeah, you probably did that because you knew I'd win no matter what.

Sarah: Yeah, it seems like you guys don't really disagree about that much. So it seems like, Aaron, you think there's, like, this objective ordering thing that is true, but it doesn't have any practical implications because it doesn't actually tell anyone anything about what they should do.

Aaron: No, no. Can I defend my view? Okay, you've got to add the normativity thing. I'm sorry to be annoying, but just to jump in: I think it has practical implications. Whenever somebody says, "oh, I just assert that I'm a moral subjectivist, and I think I just intrinsically value, say, human lives over other animal lives, for reasons that go above and beyond sentience or something like that," I want to be able to pull this out and say, "you're objectively wrong." That's the kind of thing I'm interested in.

Sarah: Okay, I didn't mean to say that it didn't have any. It clearly does have practical implications, like the one you just named, so I'm sorry for misrepresenting you. And then, Max, you think that there is no objective ordering of worlds, and also that you can't make any normative claims about what people should or shouldn't do, objectively speaking. So who do I agree with? I don't know. I don't feel qualified to make the call, because my brain feels a bit scrambled.

Aaron: As a reminder, unfortunately, the world doesn't have to obey what you declare to be correct, so the stakes might be lower than you think.

Sarah: Oh, I thought I was solving this issue once and for all. Now you're telling me it doesn't even matter what I say?

Aaron: You were like, "I'm not qualified," and I was like, well, maybe, but luckily, or unfortunately, Oxford philosophy isn't going to stamp it "solved," for better and for worse.

Sarah: Unbelievable.

Aaron: I know it's bad because you have less power, but it does lower the stakes.

Sarah: Okay, well, no, that's a relief. I feel a little bit less pressure now. I guess we never even really discussed how you do the world ordering or how you justify which worlds are better than others. Maybe that would have been important to get into, but let's not do that right now. I think I agree with Max.

Max: No.

Sarah: Sorry.

Aaron: No. Can we edit in, I don't know, some bad sound effects of me, like, falling or something?

Max: Oh, yeah.

Sarah: I don't know how to do that. If you can teach me how to…

Aaron: …do that, then sure, yeah. That is one thing. Well, I can do it on my end. I don't know what you're going to do on your Chromebook.

Sarah: Yeah, I didn't think that through, really. There's got to be something I can do. Oh, I'll figure it out. I'll figure it out.

Max: It'll all work out.

Sarah: Yeah. Okay, cool. Thanks for arguing about moral realism.

Aaron: It's been really fun. Anytime, literally; just wake me up in the middle of the night. I'm, like, barely even joking.

Sarah: I also think what I might have to do is sort of timestamp all the moments where I said a coherent sentence because there weren't that many of them.

Max: At least four or five.

Sarah: I actually don't know if I said a single coherent thing this whole time.

Aaron: No, you definitely did.

Sarah: Yeah.

Aaron: Awesome. Not more than I was expecting, but more than I would have expected from an arbitrary person. If we just picked a random person, this was, like, much better than that.

Sarah: You did better than I thought you would.

Aaron: No, you did better than the fake median human.

Sarah: Thank you.

Aaron: In fact, probably. Definitely better than the 75th percentile. Probably, like, better than that, in fact.

Sarah: Wow. High praise.

Aaron: Once we're getting into, like, 99.99, we can debate that.

Sarah: Yeah. We can quibble over whether it's 99.8 or 99.9, but I'll take that. Okay, cool. Thanks, guys.

Aaron: Lovely. All right. Pip pip cheerio.

Sarah: Have a good rest of your days. I'm going to go to bed. Yeah.

Aaron: Okay, cool. Adios.

Sarah: Cool. Bye.

1

Top tier name fr. In case you’re less online and/or personally invested in AI drama than me, it’s an allusion to the recent Earth-shattering firing of Sam Altman as CEO of OpenAI. From the vague, pre-backtrack announcement:

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
