Aaron's Blog
Pigeon Hour
#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense)


Note: the core discussion on ethics begins at 7:58 and moves into philosophy of language at ~1:12:19

Daniel’s stuff:


Blurb and bulleted summary from Clong

This wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics, and legal theory. The two debate major ethical concepts like utilitarianism and moral realism. Thought experiments around rational beings choosing to undergo suffering feature prominently. Meandering tangents explore the semantics of names and references.

  • Aaron asserts that total utilitarianism does not imply that any amount of suffering can be morally justified by creating more happiness. His argument is that the affirmative case for this offsetting ability has not been clearly made.

  • He proposes a thought experiment: if offered the chance to experience the suffering of all factory-farmed animals in exchange for unlimited happiness, even a perfectly rational being would refuse. This suggests there are some levels of suffering that cannot be offset.

  • Aaron links this to experiences like extreme hunger, where you realize suffering can be worse than you normally appreciate. This underlies his intuition that some suffering can't be outweighed.

  • Daniel disagrees, believing with the right probabilities and magnitudes of suffering versus happiness, rational beings would take that gamble.

  • For example, Daniel thinks the atomic bombing of Japan could be offset by reducing more suffering. Aaron is less sure given the pain inflicted.

  • Daniel also proposes offsets for animal farming, but Aaron doesn't think factory farming harm is offsettable by any amount of enjoyment of meat.

  • They discuss definitions of rationality and whether evolutionary pressure against suicide bears on the rationality of not killing oneself.

  • Aaron ties his argument to siding with what a perfectly rational being would choose to experience, not necessarily what they would prefer.

  • They debate whether hypothetical aliens pursuing "schmorality" could point to a concept truly analogous to human morality. Aaron believes not.

Transcript

(Very imperfect)

AARON

Oh, how's it going? It's going all right.

DANIEL

Yeah, so yesterday I saw Barbie and today I saw Oppenheimer, so it's going well. Oh, cool. That cultural...

AARON

Nice, nice.

DANIEL

Do you have takes? Yeah, I thought it was all right. It was a decent view of Oppenheimer as a person. It was, like... I don't know. I feel like the public tends to be taken in by these physicist figures. You get this with quotes, right? Like, the guy was just very good at having fun with journalists, and now we get these amazing nuggets of wisdom from Einstein. I don't know, I think that guy was just having fun. The thing I'm coming away with is: I only watched Barbie because it was coming out on the same day as Oppenheimer, right? Otherwise it wouldn't have occurred to me to watch it. I was like, yeah, whatever, Barbie is, like, along for the ride, and Oppenheimer is going to be amazing. But in the end, maybe Oppenheimer was a bit better than Barbie, and I'm not even sure of that, actually.

AARON

Yeah, I've been seeing people say that on Twitter. I haven't seen either, but several people I'm following have said, like, Barbie was exceptional. And that kind of makes sense, because I'm following all these EA people who probably care more about the subject matter of the latter one. So I kind of believe that Barbie is, like, aesthetically better or something. That's my take. Right.

DANIEL

I guess. Well, if you haven't seen them, I guess I don't want to spoil them for you. They're trying to do different things aesthetically, right? Like, I'm not quite sure I'd want to say one is aesthetically better. Probably in some ways, I think Barbie probably has more aesthetic blunders than Oppenheimer does. Okay. But yeah, I don't know, if you haven't seen it, I feel like I don't want to spoil it for you.

AARON

Okay. No, that's fine. This isn't supposed to be, like... wait, is "the most interesting thing we could be talking about" the bar?

DANIEL

Oh, jeez.

AARON

Oh, no, that's a terrible bar. That was, like, an overstatement. That would be a very high bar. It would also be, like, kind of paralyzing. I don't actually know what that would be, honestly. Probably some juicy social gossip thing. Not that we necessarily have any.

DANIEL

Yeah, I think your interestingness... yeah, I don't have... the closest-to-gossip thing I saw was, like, did you see this bit of Caroline Ellison's diaries and letters to SBF that was leaked to The New York Times?

AARON

No, I don't. Was this like today or recently? How recently?

DANIEL

This was like a few days ago.

AARON

I've been seeing her face on Twitter, but I don't actually think I know anything about this. And no, I would not have.

DANIEL

Background of who she is and stuff.

AARON

Yeah, hold on. Let the audience know that I am on a beach family vacation against my will. Just kidding. Not against my will. And I have to text my sister back. Okay, there we go. I mean, I broadly know the FTX story. I know that she was... wait, I'm literally blanking on the... Alameda.

DANIEL

Alameda Research. That's the name.

AARON

Okay. Yeah. So she was CEO, right? Yeah. Or like some sort of like I think I know the basics.

DANIEL

Yeah, she was one of the OG Stanford EA people and was around.

AARON

Yeah, that's like a generation. Not an actual generation, like an EA generation. Which is what, like, six years or something?

DANIEL

Yeah, I don't know, I feel like there's this gap between pre-COVID people and post-COVID people. No one left their house. Partly people moved away, but also you were inside for a while and never saw anyone in person. So it felt like, oh, there's this crop of new people or something. Whereas in previous years, there'd be some number of new people per year and they'd get gradually integrated in. Anyway, all that is to say that, I don't know, I think SBF's side of the legal battle leaked some documents to The New York Times, which were honestly just, like, her saying, oh, I feel very stressed and I don't like my job, and I'm sort of glad that the thing has blown up now. I don't know. It honestly wasn't that salacious. But I think that's, like, the way I get in the loop on gossip, like, through The New York Times.

AARON

I love how this particular piece of gossip is, like, running through the most famous and prestigious news organization in the world. Or, like, one of them or something. Yeah. Instead of just being like, oh yeah, these two people are dating or whatever. Anyway, okay, I will maybe check that out.

DANIEL

Yeah, I mean, honestly, it's not even that interesting.

AARON

The whole thing is pretty... I am pretty... this is maybe bad, but I can't wait to watch the Michael Lewis documentary, pseudo-documentary or whatever.

DANIEL

Yeah, it'll be good to read the book. Yeah, it's very surreal. I don't know. I was watching Oppenheimer, right, and I have to admit, part of what I'm thinking is, if humanity survives, there's going to be this style of movie about OpenAI, presumably, right? And I'm like, oh man, it'll be amazing to see my friend group depicted on film. But that is going to happen. It's just going to be about FTX and about how they're all criminals. So that's not great.

AARON

Yeah, actually, everybody dunks on crypto now, and it's like low status now or whatever. I still think it's really cool. I never had more than maybe $2,000 or whatever, which is not a trivial I mean, it's not a large amount of my money either, but it's not like, nothing. But I don't know, if it wasn't for all the cultural baggage, I feel like I would be a crypto bro or I would be predisposed to being a crypto bro or something.

DANIEL

Yeah. I should say I was, like, joking about the greedy crypto people who want their money to not be stolen. I currently have a Monero sticker on the back of my... I don't know, I'm a fan of the crypto space. It seems cool. Yeah. I guess especially the bit that is less about running weird scams. The bit that's running weird scams I'm less of a fan of.

AARON

Yeah. Yes. I'm also anti-scam. Right, thank you. Okay, so I think the thing that we were talking about last time we talked, which is, like, the thing I think we actually both know stuff about instead of just, like, repeating New York Times articles, is my nuanced ethics takes and why you think I'm wrong. We could talk about that and then branch off from there.

DANIEL

Yeah, we can talk about that.

AARON

Maybe see where that goes. Luckily, I have a split screen up, so I can pull up things. Maybe this is kind of egotistical or something to center my particular view, but you've definitely given me some of the better pushback or whatever; I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position, or the thing that I think is true, which I think is not the default view, I think most people think this is wrong, is that total utilitarianism does not imply that for any amount of suffering that could be created, there exists some other extremely large, arbitrarily large amount of happiness that could also be created which would morally justify the former. Basically.

DANIEL

So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.

AARON

Yeah, and I'd highlight the word "inflicted": if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I thought about it quite a bit. Yeah.
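
Stated loosely in symbols, with V standing for total hedonic value and S and H for amounts of newly created suffering and happiness (notation that is purely illustrative, not anything either speaker uses), the two positions under discussion come out roughly as:

Offsettability (the view Daniel defends): \forall S \; \exists H : \; V(\text{inflict } S \text{ and create } H) > V(\text{create neither})

Aaron's thesis: \exists S^{*} \; \forall H : \; V(\text{inflict } S^{*} \text{ and create } H) \le V(\text{create neither})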

DANIEL

Maybe I should say my contrary view, I guess, when you say that, I don't know, does total utilitarianism imply something or not? I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that yeah. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.

AARON

Okay. I do think that position is, like, the much more common and, I'd say, even the default view. Do you agree with that? It's sort of like the implicit position of self-described total utilitarians who haven't thought a ton about this particular question.

DANIEL

Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian, I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know. I do think there's some instinct to be like, is there any way we can get around this?

AARON

Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. I think what it looks like in practice, and I will throw in caveats as I see necessary, is, like, spreading either wild animals or humans or even sentient digital life through the universe, in a way that's not risky, but still, say, making multiple copies of humanity or something like that. That would be an example of what creating suffering could look like: for example, just creating another duplicate of Earth.

DANIEL

And would that be, like, so much suffering that even the pleasures of Earth wouldn't outweigh it?

AARON

Not necessarily, which is kind of a cop-out. But my inclination is that if you include wild animals, the answer is yes, creating another Earth especially. But I'm much more committed to "there is some amount" than to the claim that this particular time and place in human history is like that, or whatever.

DANIEL

Okay, can I run through some other concrete cases to get a feel?

AARON

Yeah.

DANIEL

So one example that's on my mind is, like, the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is, like, yeah, what, a hundred-odd thousand people died, quite terrible, quite awful. And I guess some of them were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the countercase is like, well, the alternative to that would have been an incredibly grueling land invasion of Japan, where many more people would have died. Or, you know, regardless of what the actual alternatives were: if you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make it okay?

AARON

My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the alternative, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to make or something. And my claim doesn't really pertain to that, at least not directly.

DANIEL

Right. Sorry. But when you said you thought your answer was no, you think you can't offset that with pleasure?

AARON

My intuition is that you can, but I know very little about how painful those deaths were and how long they lasted.

DANIEL

Yeah, so the non-offsettable stuff is, like, further out than the atomic bombing.

AARON

That's my guess, but I'm not sure.

DANIEL

Okay, sure, that's your guess. You're not super confident. That's fine. I guess another thing would be, like, the animal farming system. So, as you're aware, tons of animals get kept in farms for humans to eat, and by many accounts, many of them live extremely horrible lives. Is there some amount that humans could enjoy meat such that that would be okay?

AARON

No. So the only reason I'm hesitating is because the question is, like, what the actual alternative is here. But if it's, like, people enjoying meat a normal amount, then basically the answer is no. Although what I would actually endorse doing depends on what the alternative is.

DANIEL

Okay, but you think that factory farming is so bad that it's not offsettable by pleasure.

AARON

Yeah, that's right. I'm maybe somewhat more confident than in the atomic bombing case, but again, I don't know what it's like to be a factory-farmed pig. I wouldn't say I'm, like, 99% sure. Probably more than 70% or something, conditional on me being right about this thesis, I guess. Something like that. I don't know, not 99% sure, but more than 60, probably more than 70% sure or something.

DANIEL

All right. Yeah. So I guess, maybe, can you tell us a little bit about why you believe there's some threshold past which you can no longer compensate by adding pleasure?

AARON

Yes. Let me run through my argument and sort of a motivation, and the motivation is actually more of a direct answer to what you just said. So the actual argument that I have, and I have a blog post about this that I'll link, it was part of an EA Forum post that will also be linked in the show description, is that the affirmative case for offsettability doesn't seem to actually be made anywhere. That's not the complete argument, but it's a core piece of it: it seems to be, like, the default received view, which doesn't mean it's wrong, but does mean that we should be skeptical. If you accept that I'm right that the affirmative case hasn't been made, and we can talk about that, then you should default to some other heuristic. And the heuristic that I assert, and sort of argue but kind of just assert, is a good one is this: you do the following thought experiment. If I were a maximally or perfectly rational being, would I personally choose to undergo this amount of suffering in exchange for later (or earlier) undergoing some arbitrarily large amount of happiness? And I personally have the intuition that there are events or states, certainly conceivable and almost certainly possible, that I could be in such that even as a maximally rational being, I would choose to just disappear and not exist rather than undergo both of those things.

DANIEL

Okay.

AARON

Yeah.

DANIEL

Why do you think that?

AARON

Yeah, so good question. I think the answer comes at a couple of different levels. So there's a question of why I'm saying it, and why I'm saying it is because I'm pretty sure this is the answer I would actually give if credibly offered this option. But that just pushes the question back: okay, why do I feel that...

DANIEL

Wait, what option are we talking about here? There exists a thing such that for...

AARON

All pleasures, basically, for example, let's just run with the fact, the assumption that a genie God descends. And I think it's credible, and he offers that I can live the life of every factory, farmed animal in exchange for whatever I want for any amount of time or something like that. Literally, I don't have to give the answer now. It can just be like an arbitrarily good state for an arbitrarily long period of time.

DANIEL

Oh, yeah.

AARON

And not only would I say the words "no, I don't want to do that," I think that the words "no, I don't want to do that" are, selfishly, in a non-pejorative sense, correct. And then there's a question of why I have that intuition, and now I'm introspecting, which is maybe not super reliable. I think part of the intuition that I can kind of access via introspection just comes from... basically, I'm very fortunate to have had a mostly, relatively comfortable life, as a Westerner with access to painkillers living in the 21st century. Even still, there have definitely been times when I've suffered, maybe not in a relative sense, but in an absolute sense, in a pretty bad way. One example I can give: I was on a backpacking trip, and this is the example I give in another blog post I can link. I was on a backpacking trip, and we didn't have enough food, and I was basically very hungry for, like, five days. And I actually think this is a good example. I'm rambling on, but I'll finish up; I think it's illustrative. I think there's some level of suffering where you're still able to, at least for me, I'm still able to do something like reasoning and intentionally storing memories. One of the memories I tried to intentionally codify via language or something was like, yeah, this is really bad, this really sucks, or something like that.

DANIEL

Sucked about it, you were just like, really hungry yeah.

AARON

For five days.

DANIEL

Okay. And you codified the thought, like, feeling of this hunger I'm feeling, this really sucks.

AARON

Something like that. Right. I could probably explicate it more, but that's basically it. Actually, hold on. All right, let me add: not just "it really sucks," but "it sucks in a way that I can't normally appreciate," so I don't normally have access to how bad it sucks, and I don't want to forget about this later, or something.

DANIEL

Yeah. The fact that there are pains that are really bad, where you don't normally appreciate how bad they are... it's not clear how that implies non-offsettability.

AARON

Right, I agree. It doesn't.

DANIEL

Okay.

AARON

I do think that's causally responsible for my intuition, which I then link to a heuristic that I argue does constitute an argument in the absence of other arguments for offsettability.

DANIEL

Yeah. Okay. So that causes this intuition, and then you give some arguments, and the argument is, like: you think that if a genie offered you the chance to live the lives of all factory-farmed animals in exchange for whatever you wanted, you wouldn't go for that.

AARON

Yes. And furthermore, I also wouldn't go for it if I was much more rational.

DANIEL

If you were rational, yeah. Okay. Yeah. What do I think about this? One thing I think is that the case of "live through this suffering and then experience this pleasure" is kind of the wrong way to go about this. Because the thing about experiencing suffering is that we don't live in this totally dualistic world where suffering affects only your immaterial mind or something, in a way where afterwards you could just be the same. In the real world, suffering actually affects you, right? Perhaps indelibly. I think instead, maybe the thing I'd want to say is: suppose you're offered a gamble where there's, like, a 1% chance that you're going to have to undergo excruciating suffering and a 99% chance that you get extremely awesome pleasures or something.

AARON

Yeah.

DANIEL

And this is meant to model a situation in which you do some action where one person is going to undergo really bad suffering and 99 other people are going to undergo really great pleasure. And to me, I guess my intuition is that for any bad thing, you could make the probability small enough, and you could make the rest of the probability mass good enough, that I want to do that. I feel like that's worth it for me. And now it feels a little bit unsatisfying that we're both just drilling down to, like, well, this is the choice I would make, and then maybe you can disagree that it's the choice you would make. But yeah, I guess about the gambling case, what do you think about that? Let's say it's literally a one-in-a-million chance that you would have to undergo, let's say, the life of one factory-farmed animal.

AARON

Yeah.

DANIEL

Or is that not enough? Do you want it to be like, more?

AARON

Well, I guess it would have to be, like, one of the worst factory-farmed animals' lives, I think, to make that...

DANIEL

Yeah, okay, let's say it's like, maybe literally one in a billion chance.

AARON

First of all, I do agree that these are basically isomorphic or morally equivalent, or if anything, the time ordering in my example messes things up a little bit, so I'd be happy to reverse them, or to instead compare one person to 1,000 people. So, yeah, you can make the probability small enough that my intuition changes. At 1%, I'm very much like, no, definitely not doing that. At one in a million, I don't know, I'm kind of 50-50, I don't have a strong intuition either way. At one in 100 trillion, I have the intuition, you know what, that's just not going to happen. That's my first-order intuition. I do think that considering the case where one being lives both lives, or where you have, say, one being undergoing the suffering and, like, 100 trillion undergoing the pleasure, makes small probabilities, if you agree that they're sort of isomorphic, more complete or more real in some... tangible is not the right word, but more...

DANIEL

You're less tempted to round it to zero.

AARON

Yeah. And so I tend to trust my intuitions about reasoning through "okay, there's one person undergoing suffering and, like, 100 trillion undergoing happiness," as it pertains to the question of offsettability, more than I trust my intuitions about small probabilities.
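
As a rough point of reference, the standard expected-value way of scoring the gamble Daniel describes (symbols and numbers purely illustrative) says to accept whenever

p \cdot (-S) + (1 - p) \cdot H > 0, \quad \text{i.e.} \quad \frac{H}{S} > \frac{p}{1-p},

where p is the probability of undergoing suffering of magnitude S and H is the happiness otherwise. With p = 10^{-6}, any happiness worth more than roughly a millionth of the suffering clears the bar, which is why Daniel expects a rational agent to take it; Aaron's view amounts to denying that the worst suffering gets a finite -S that can be traded off in this kind of sum at all.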

DANIEL

That strikes me as strange, because I feel like you're regularly in situations where you make choices that have some probability of causing you quite bad suffering but a large probability of being fun. Like going to the beach: there could be a shark there. I guess this one is maybe against your will. Or you can go to a restaurant and maybe get food poisoning. But how often are you like, oh man, if I flip this switch, one person will be poisoned, but 99 people will...?

AARON

Well, then you'd have to think that, okay, staying home would actually be safer for some reason, which I don't affirmatively think is true. But this actually does come up for the question of whether you should kill yourself, and hopefully this doesn't get censored by Apple or whatever, so nobody do that. But there I just think that, with my lizard brain, there's enough evolutionary pressure that I don't trust that I would be rational when it comes to the question of whether to avoid a small chance of suffering by unaliving myself, as they say on TikTok.

DANIEL

Hang on. So there's some evolutionary pressure to make sure you really don't want to kill yourself, but you think that's, like, irrational?

AARON

I haven't actually given this a ton of thought. It gets hard when you loop in altruism, and also there's, like, some chance of sentience after death, it's not literally zero, or something like that. Yeah, I guess those are kind of cop-outs. So I don't know, I feel like it certainly could be. And I agree this is sort of a strike against my argument or something. You can set up a situation where I have no potential to improve the lives of others and can be absolutely sure that I'm not going to experience any sentience after death, and then I feel like my argument does kind of imply that, yeah, that's the rational thing to do. I wouldn't do it, right? So I agree, this is a strike against me.

DANIEL

Yeah. I guess I just want to make two points. The first point is methodological: if we're talking about which you're likely to be more rational about, gambles with small probabilities of risk versus large rewards, as opposed to situations where you can do a thing that affects a large number of people one way and a small number of people another way, I think the gambles are more like decisions that you make a bunch and should be rational about. And then the second thing: I took you to be making some sort of argument along the lines of, there's evolutionary pressure to want to not kill yourself, therefore that's, like, a debunking explanation. The fact that there was evolutionary pressure to not kill ourselves means that our instinct that we shouldn't kill ourselves is irrational. Whereas I would tend to look at it and say the fact that there was very strong evolutionary pressure to not kill ourselves is an explanation of why I don't want to kill myself. And I see that as affirming the choice to not kill myself, actually.

AARON

Well, I just want to say, I don't think it's an affirmative argument that it is irrational. I think it opens up the question. I guess not even necessarily for other reasons, but it just makes it more plausible that it is irrational.

DANIEL

Yeah, I take exactly the opposite view. Okay. I think that if I'm thinking about, like, oh, what do I really want? If I consider my true preferences, do I really want to kill myself or something? And then I learn that, oh, evolution has shaped me to not kill myself, I think the inference I should make is like, oh, I guess probably the way evolution did that is that it made it such that my true desires are to not kill myself.

AARON

Yeah. So one thing is I just don't think preferences have any intrinsic value. So I don't know, we might just... I guess I should ask, do you agree with that or disagree with that?

DANIEL

Do I think preferences have intrinsic value? No. But I think, like, the whole game here is, what do I prefer? Or, like, what would I prefer if I understood things really clearly?

AARON

Yes. And this is something I didn't really highlight, or maybe I didn't say it at all, and I forget if I really argue it or kind of just assert it, but I at least assert that what you should do under hedonic utilitarianism is, maybe not identical to, but exactly the same as, what a rational agent would do, or what a rational agent would prefer if they were to experience everything that this agent would cause, or something like that. And so these should give you the exact same answers, is something I believe. Because I do think we're built to understand, or sort of intuit and reason about, our own preferences.

DANIEL

Kind of, yeah. But broadly, I guess the point I'm making at a high level is just like if we're talking about what's ethical or what's good or whatever, I take this to ultimately be a question about what should I understand myself as preferring? Or to the extent that it's not a question of that, then it's like, I don't know, then I'm a bit less interested in the exercise.

AARON

Yeah. It's not ideal that I appeal to this fake, ideally rational being or something. But here's a reason you might think it's worth thinking about this. Maybe you've heard this: I think Tomasik makes an argument that, at least in principle, you can have a pig that's in extreme pain but really doesn't want to be killed, or doesn't want to be taken out of its suffering, as its true ultimate preference or whatever. And I think this is pretty convincing evidence that you can have a being that's just wrong about what would be good for it, you know what I mean?

DANIEL

Yeah, sorry, I'm not talking about preference versus hedonic utilitarianism or anything. I'm talking about what do I want or what do I want for living things or something. That's what I'm talking about.

AARON

Yeah. That language elicits preferences to me, and I guess the analogous...

DANIEL

But the idea is that the answer to what I want for living things could be, like, hedonic utilitarianism, if you see what I mean.

AARON

Or it could be... by that, do you mean what hedonic utilitarianism prescribes?

DANIEL

Yeah, it could be that what I want is just whatever maximizes beings' pleasure, no matter what they want.

AARON

Yeah. Okay. Yeah, so I agree with that.

DANIEL

Yeah. So anyway, heading back just to the suicide case right. If I learn that evolution has shaped me to not want to kill myself, then that makes me think that I'm being rational in my choice to not kill myself.

AARON

Why?

DANIEL

Because being rational is something like optimally achieving your goals. And I'm a little bit like I sort of roughly know the results of killing myself, right? There might be some question about like, but what are my goals? And if I learned that evolution has shaped my goals such that I would hate killing myself right, then I'm like, oh, I guess killing myself probably ranks really low on the list of states ordered by how much I like them.

AARON

Yeah, I guess then it seems like you have two mutually incompatible goals. Like, one is staying alive and one is hedonic utilitarianism and then you have to choose which of these predominates or whatever.

DANIEL

Yeah, well, I think that to the extent that evolution is shaping me to not want to commit suicide, it looks like the not killing myself one is winning. I think it's evidence. I don't think it's conclusive. Right. Because there could be multiple things going on. But I take evolutionary explanations for why somebody would want X. I think that's evidence that they are rational in pursuing X rather than evidence that they are irrational in pursuing X.

AARON

Sometimes that's true, but not always. Yeah, in general it is. But I feel like moral anti-realists, and we can also get into that, are going to think this is, like, woo. Or, as Joe Carlsmith says when he's making fun of moral realists, I don't know, in a tongue-in-cheek way, in one of his posts explicating his stance on anti-realism, he basically says moral realists want to say that evolution is not sensitive to moral reasons and therefore evolutionary arguments... actually, I don't want to quote him from memory. I'll just assert that evolution is sensitive to a lot of things, but one of them is not moral reasons, and therefore evolutionary arguments are not good evidence when it comes to purely, maybe not even purely, philosophical claims, or object-level moral claims, I guess. They can be evidence of something, but not that.

DANIEL

Yeah, I think that's wrong because... why do I think it's wrong? I think it's wrong because, what are we talking about when we talk about morality? We're talking about some logical object that's, like, the completion of a bunch of intuitions we have, right? And those intuitions are the product of evolution. The reason we care about morality at all is because of evolution, under the standard theory that evolution is the reason our brains are the way they are.

AARON

Yeah, I think this is a very strange coincidence and I am kind of weirded out by this, but yes, I...

DANIEL

I don't think it's a coincidence. Or, like... it's not a coincidence.

AARON

So it's not a coincidence conditional on our evolutionary history. It is, like, extremely lucky or something that we... like, of course we'd find that earthlings wound up with morality and stuff. Well, of course you would.

DANIEL

Wait, have you read the metaethics sequence by Eliezer Yudkowsky?

AARON

I don't think so. And I respect Eliezer a ton, except I think he's really wrong about ethics and metaethics in a lot of ways. I don't even know if I... but I have not read it, so I'm not really giving it a full hearing.

DANIEL

Okay. I don't know. I basically take this from my understanding of the metaethics sequence, which I recommend people read. But I don't think it's a coincidence; I don't think we got lucky. I think it's like this: there are some species that evolve, right, and they end up caring about schmorality, right?

AARON

Yeah.

DANIEL

And there are some species that evolve, right, and they end up caring about the prime numbers or whatever, and we evolved and we ended up caring about morality. And it's not, like, a total... so, okay, partly I'm just like, yeah, each one of them is really glad they didn't turn out to be the other things. The ones that care about...

AARON

Two of them are wrong, though. Two of them are wrong.

DANIEL

Well, they're morally wrong. Two of them do morally wrong things all the time. Right?

AARON

I want to say that I hate when people say that. Sorry. So what I am saying is that you can call those by different names, but if I'm understanding this argument right, they all think that they're getting at the same core concept, which is, like, no, really, what should we do? Okay, so does schmorality have any sort of normativity?

DANIEL

No, it has schmormativity.

AARON

Okay, well, I don't know what schmormativity is.

DANIEL

You know how normativity is about promoting the good? Schmormativity is about promoting the schmud.

AARON

Okay, so it sounds like that's just normativity, except it's normativity about different propositions. That's what it sounds like.

DANIEL

Well, basically, I don't know, instead of these schmalians wait, no, they're aliens. They're not shmalians. They're aliens. They just do a bunch of schmud things, right? They engage in projects, they try and figure out what the schmud is. They pursue a schmud and then they look at humans, they're like, oh, these humans are doing morally good things. That's horrible. I'm so glad that we pursue the schmood instead.

AARON

Yeah, I don't know if it's incoherent. I don't think they're being incoherent. Your description of a hypothetical, let's just take for granted that whatever is in the thought experiment is in fact happening, I think your description is not correct. And the reason it's not correct is because there is, like... what's a good analogy? So when it comes to abstract concepts in general, it is very possible for... okay, I feel like it's hard to explain directly, but here's an analogy: you can have two different people who have very different conceptions of justice but fundamentally are earnestly trying to get at the same thing. Maybe justice isn't well defined, or isn't... actually, I should probably have come up with a good example here. But you know what, I'm happy to change the word I use for morality or whatever, but it has the same core meaning, which is, like, okay, really, what should you do at the end of the day?

DANIEL

Yeah.

AARON

What should you do?

DANIEL

Whereas they care about schmorality, which is what they schmould do, which is a different thing. They have strong desires to do what they schmould do.

AARON

I don't think it is coherent to say that there are multiple meanings of the word should or multiple kinds. Yeah.

DANIEL

No, there aren't.

AARON

Sorry. There aren't multiple meanings of the word should. Fine.

DANIEL

There's just a different word, which is schmood, which means something different, and that's what their desires are pegged to.

AARON

I don't think it's coherent given what you've already said. The entire picture, I think, is incoherent. Given everything else besides the word schmud, it is incoherent to assert that there is a separate thing that's broadly analogous, like maybe isomorphic, to normativity or, like, the word "should." There is only... yeah, I feel like I'm not gonna be able to verbalize it super well. Yeah. Can you pick...

DANIEL

A sentence that I said that was wrong or that was incoherent?

AARON

Well, it's all wrong because these aliens don't exist.

DANIEL

Suppose the aliens existed.

AARON

Okay, well, then we're debating... I actually don't know. It depends. You're asserting something about their culture and psychology, and then the question is, are you right or wrong about that? If we just take for granted that you're right, then you're right. I'm saying, no, you can't be sure. So conditional on being right, you're right. Then there's a question of, okay, what is the probability? So, conditional on aliens with something broadly... are you willing to accept this phrase, "something broadly analogous to morality"? Is that okay?

DANIEL

Yeah, sure.

AARON

Okay. So if we accept that there's aliens with something broadly analogous to morality, then you want to say that they can have not only a different word, but truly a pointer to a different concept. And I think that's false.

DANIEL

So you think that in conceptual space, there's morality and that there's, like, nothing near it for miles.

AARON

Yeah, basically. At least when we're talking about the pre-conclusion stage. So, like, before you get to the point where you're like, oh yeah, I'm certain that the answer is just that we need to make as many tennis balls as possible or whatever, the general thing of, okay, broadly, what is the right thing to do, what should I do, would it be good for me to do this, that cluster of things is, like, miles from everything else.

DANIEL

Okay. I think there's something true to that. I think I agree with that in some ways. And on the other hand, my other response is, I think it's not a total coincidence that humans ended up caring about morality. I think if you look at these evolutionary arguments for why humans would be motivated to pursue morality, they rely on very high-level facts. Like, there are a bunch of humans around, there's not one human who's, like, a billion times more powerful than everyone else, we have language, we talk through things, we reason, we need to make decisions, we need to cooperate in certain ways to produce stuff. And it's not about the fact that we're bipedal or something. So in that sense, I think it's not a total coincidence that we ended up caring about morality. And so in some sense, because that's true, you could maybe say you couldn't slightly tweak our species so that it cared about something other than morality, which is kind of like saying that there's nothing that close to morality in concept space.

AARON

But I think I misspoke earlier. What I should have said is that it's very weird that most people at least partially care about suffering and happiness. I think that's just a true statement. Sorry, that is the weird thing. Why is it weird? The weird thing is that it happens to be correct, even though I only...

DANIEL

What do you mean, it's correct?

AARON

Now we have to get okay, so this is going into moral realism. I think moral realism is true, at least.

DANIEL

Sorry, what do you mean by moral realism? People mean different things by moral realism.

AARON

Yes. So I actually hold sort of a weak version of moral realism, which is, like, not that normative statements are true, but that there is, like, an objective ranking: you can rank hypothetical states of the world in an ordinal way such that one is objectively better than another.

DANIEL

Yes. Okay. I agree with that, by the way. I think that's true. Okay.

AARON

It sounds like you're a moral realist.

DANIEL

Yeah, I am.

AARON

Okay. Oh, really? Okay. I don't know. I thought you weren't. Okay, cool.

DANIEL

Lots of people in my reference class aren't. I think most Bay Area rationalists are not moral realists, but I am.

AARON

Okay. Maybe I was confused. Okay, that's weird. Okay. Sorry about that. Wait, so what do I mean by it happens to be true? It's like it happens to coincide with yeah, sorry, go ahead.

DANIEL

You said it happens to be correct that we care about morality or that we care about suffering and pleasure and something and stuff.

AARON

Maybe that wasn't the ideal terminology. It happens to... so, like, it's not that it's morally correct, the caring about it isn't the morally correct thing. It seems sort of like the caring is instrumentally useful in promoting what happens to be legitimately good, or something like that.

DANIEL

But I think, like, the aliens could say a similar thing, right? They could say, like, oh hey, we've noticed that we all care about schmorality. We all really care about promoting schmeasure and avoiding schmuffering, right? And they'd say, like, yeah, so what's wrong?

AARON

Maybe I'm just missing something, but at least to me, it only adds to the confusion to talk about two different concepts of morality, rather than just, like, okay, this alien thinks that you should tile the universe with paperclips, or something like that. Or even, more plausibly, that justice is like that. Yeah, I guess this gets back to whether there's only one concept anywhere near that vicinity in concept space or something. Maybe we disagree about that. Yeah.

DANIEL

Okay. If I said paperclips instead of schmorality, would you be happy?

AARON

Yes.

DANIEL

I mean, cool, okay.

AARON

Thanks for doing the morally correct thing and making me happy.

DANIEL

I strive to. But take the paperclipper species, right? What they do is they notice, like, hey, we really care about making paperclips, right? And hey, the fact that we care about making paperclips, that's instrumentally useful in making sure that we end up making a bunch of paperclips, right? Isn't that an amazing coincidence, that our desires were structured in this correct way that ends up with us making a bunch of paperclips? And is that, like... no, total coincidence. That's just what you cared about.

AARON

You left out the part where they assert that they're correct about this. That's the weird thing.

DANIEL

What proposition are they correct about?

AARON

Or sorry, I don't think they're correct implicitly.

DANIEL

What proposition do they claim they're correct about?

AARON

They claim that the world in which there is many paperclips is better than the world in which there is fewer paperclips.

DANIEL

Oh, no, they just think it's more paperclipy. They don't think it's better. They don't care about goodness. They care about paperclips.

AARON

So it sounds like we're not talking about anything remotely like morality, then, because I could say, yeah, morality, morality. It's pretty airy. It's a lot of air in here. I don't know, maybe I'm just confused.

DANIEL

No, what I'm saying is, so you're like, oh, it's like this total coincidence that humans we got so lucky. It's so weird that humans ended up caring about morality, and it's like, well, we had to care about something, right? Like anything we don't care about.

AARON

Oh, wow, sorry, I misspoke earlier. And I think that's generating some confusion. I think it's a weird coincidence that we care about happiness and suffering.

DANIEL

Happiness and suffering, sorry. Yeah, but mutatis mutandis, I think you want to say that's, like, a weird coincidence. And I'm like, well, we had to care about something.

AARON

Yeah, but it could have been like, I don't know, could it have been otherwise, right? At least conceivably it could have been otherwise.

DANIEL

Yeah, the paperclip guys, they're like, conceivably, we could have ended up caring about pleasure and suffering. I'm so glad we avoided that.

AARON

Yeah, but they're wrong and we're right.

DANIEL

Right about what?

AARON

And then maybe I don't agree. Maybe this isn't the point you're making. I'm sort of saying that in a blunt way to emphasize it. I feel like people should be skeptical when I say, like okay, I have good reason to think that even though we're in a very similar epistemic position, I have reason to believe that we're right and not the aliens. Right. That's like a hard case to make, but I do think it's true.

DANIEL

There's no proposition that the aliens and us disagree on.

AARON

Yes, there is: the intrinsic value of pleasure and happiness.

DANIEL

Yeah, no, they don't care about value. They care about schmalu, which is just.

AARON

Like, how much paperclips there is. I don't think that's coherent. I don't think they can care about value.

DANIEL

Okay.

AARON

They can, but only insofar as it's a pointer to the exact same not exact, but like, basically the same concept as our value.

DANIEL

So do you reject the orthogonality thesis?

AARON

No.

DANIEL

Okay. I think that is super intelligent.

AARON

Yeah.

DANIEL

So I take the orthogonality thesis to mean that really smart agents can be motivated by approximately any desires. Does that sound right to you?

AARON

Yeah.

DANIEL

So what if the desire is like, produce a ton of paperclips?

AARON

Yeah, it can do that descriptively. It's not morally good.

DANIEL

Oh, no, it's not morally good at all. They're not trying to be morally good. They're just trying to produce a bunch of paperclips.

AARON

Okay, in that case, we don't disagree. Yeah, I agree. This is like a conceivable state of the world.

DANIEL

Yeah. But what I'm trying to say is, when you say it's weird that we got lucky, the reason you think it's weird is that you're one of the humans who cares about pleasure and suffering. Whereas if you were one of the aliens who cared about paperclips, the analogous Schmaaron, instead of Aaron, would be saying, like, oh, it's crazy that we care about paperclips, because that actually causes us to make a ton of paperclips.

AARON

Do they intrinsically care about paperclips, or is it a means to something else?

DANIEL

Intrinsically, like, same as in the orthogonality thesis.

AARON

Do they experience happiness because of the paperclips or is it more of a functional intrinsic value?

DANIEL

I think they probably experience happiness when they create paperclips, but they're not motivated by the happiness. They're motivated by like, they're happy because they succeeded at their goal of making tons of paperclips. If they can make tons of paperclips but not be happy about it, they'd be like, yeah, we should do that. Sorry. No, they wouldn't. They'd say, like, we should do that and then they would do it.

AARON

Would your case still work if we just pretended that they're not sentient?

DANIEL

Yeah, sure.

AARON

Okay. I think this makes it cleaner for both sides. Yeah, in that case, yes. So I think the thing that I reject is that there's an analog term that's anything like morality in their universe. They can use a different word, but it's pointing to the same concept.

DANIEL

When you say "anything like morality"... so the shared properties between morality and paperclip promotion are just that you have a species that is dedicated to promoting it.

AARON

I disagree. I think morality is about goodness and badness.

DANIEL

Yes, that's right.

AARON

Okay. And I think it is totally conceivable. Not even just conceivable. So humans... wait, what's a good example? In some sense I seem to intrinsically value, I don't know if this is a good example, but let's run with it, regulating my heartbeat. It happens to be true that this is conducive to my happiness and at least local non-suffering. But even if it weren't, my brain stem would still try really hard to keep my heart beating, or something like that. I reject that there's any way in which promoting heart-beating-ness is an intrinsic moral or schmoral value, or even that... it could be hypothesized as one, but it is not in fact one, or something like that.

DANIEL

Okay.

AARON

Likewise, these aliens could claim that making paperclips is intrinsically good. They could also just make them and not make that claim. And those are two very different things.

DANIEL

They don't claim it's good. They don't think it's good.

AARON

They think it's... they claim it's schmud.

DANIEL

Which they prefer. Yeah, they prefer.

AARON

They don't. I think that is also incoherent. I think there is, like, one concept in that space. Because, wait, I feel like also, at some point, this has to cash out in the real world, right? Unless we're talking about really speculative, not-even-physics stuff.

DANIEL

What I mean is they just spend all of their time promoting paperclips and then you send them a copy of Jeremy Bentham's collected writings, they read it and they're like all right, cool. And then they just keep on making paperclips because that's what they want to do.

AARON

Yeah. So descriptively.

DANIEL

Sure.

AARON

But they never claim that. It's like we haven't even introduced objectivity to this example. So did they ever claim that it's objectively the right thing to do?

DANIEL

No, they claim that it's objectively the paperclipy thing to do.

AARON

I agree with that. It is the paperclippy thing to do.

DANIEL

Yeah, they're right about stuff. Yeah.

AARON

So they're right about that. They're just not right about... So I do think this all comes back down to the question of whether there are analogous concepts near-ish to morality that an alien species might point at. Because if there's not, then paperclippiness is just, like, a totally radically different type of thing.

DANIEL

But why does it like when did I say that they were closely analogous? This is what I don't understand.

AARON

So it seems to be insinuated by the semantic closeness of the words.

DANIEL

Oh yeah, whatever. When I made it a similar-sounding word, all I meant to say is that it plays a similar role in their culture as morality plays in our culture. Sorry, in terms of their motivations, I should say. Oh, yeah.

AARON

I think there's plenty of human cultures that are getting at morality. Yeah. So I think especially historically, plenty of human cultures that are getting at the same core concept of morality but just are wrong about it.

DANIEL

Yeah, I think that's right.

AARON

Fundamentalist religious communities or whatever, you can't just appeal to like, oh, we're like they have some sort of weird it's kind of similar but very different thing called morality.

DANIEL

Although, I don't know, I actually think that okay, backing up. All I'm saying is that beings have to care about something, and we ended up caring about morality. And I don't think, like I don't know, I don't think that's super surprising or coincidental or whatever. A side point I want to make is that I think if you get super into being religious, you might actually start referring to a different concept by morality. How familiar are you with classical theism?

AARON

That's not a term that I recognize, although I took a couple of theology classes, so maybe I'd recognize even less of this if I hadn't done that.

DANIEL

Yeah, so classical theism, it's a view about the nature of God, and I'm going to do a bad job of describing it. I'm not a classical theist, so you shouldn't take classical theist doctrine from me. But it's basically that God is, sort of... God is the being whose attributes just are his existence, or something like that. It's weird. But anyway, there's, like, some school of philosophy where they're like, yeah, there's this transcendent thing called God, we can know God exists from first principles, and in particular there's their account of goodness. So how do you get around the Euthyphro dilemma, right? Instead of something like divine command theory, what they say is that when we talk about things being good, "good" just refers to the nature of God. And if you really internalize that, then I think you might end up referring to something different than actual goodness. Although I think there's probably no such being as God in the classical theist sense.

AARON

Yeah. So they argue that what we mean by good is this other...

DANIEL

Concept, yeah. They would say that when everyone talks about good, what they actually mean is something pertaining to the divine nature, but we just didn't really know that we meant that, the same way that when we talked about water, we always meant H2O, but we didn't used to know that.

AARON

I'm actually not sure if this is I'm very unconfident, but I kind of want to bite the bullet and say, like, okay, fine, in that case, yeah, I'm talking about the divine nature, but we just have radically different understandings of what the divine nature is.

DANIEL

You think you're talking about the divine nature.

AARON

Right?

DANIEL

Why do you think that?

AARON

Sorry, I think I very slightly was not quite pedantic enough. Sorry, bad cell phone or whatever. Once again, not very confident at all.

DANIEL

But.

AARON

I think that I'm referring to the divine nature, but what I mean by the divine nature is that which these fundamentalist people are referring to. So I want to get around the term and say, like, okay, whatever these fundamentalists are referring to, I am also referring to that.

DANIEL

Yeah, I should say classical theism is slightly different. When people say fundamentalists, they often mean, like, a different corner of Christian space than classical theists. Classical theists are, like, Ed Feser, esoteric Catholics or something. Yeah, they're super into it.

AARON

Okay, anyway yes, just to put it all together, I think that when I say morality, I am referring to the same thing that these people are referring to by the divine nature. That's what it took me like five minutes to actually say.

DANIEL

Oh yeah, so I don't think you are. So when they refer to the divine nature, what they at least think they mean is... they think that the divine is sort of defined by the fact that its existence is logically necessary, that its existence is in some sense identical to its attributes, that it couldn't conceivably not have its various attributes, the fact that it is, like, the primary cause of the world and sustainer of all things. And I just really doubt that the nature of that thing is what you mean by morality.

AARON

No, those are properties that they assert, but I feel like tell me if I'm wrong. But my guess is that if one such person were to just suddenly come to believe that actually all of that's right. Except it's not actually logically necessary that the divine nature exists. It happens to be true, but it's not logically necessary. They would still be sort of pointing to the same concept. And I just think, yeah, it's like that, except all those lists of properties are wrong.

DANIEL

I think if that were true, then classical theism would be false.

AARON

Okay.

DANIEL

So maybe in fact you're referring to the same thing that they actually mean by the divine nature, but what they think they mean is this classical theistic thing. Right. And it seems plausible to me that some people get into it enough that what they actually are trying to get at when they say good is different than what normal people are trying to get at when they say good.

AARON

Yeah, I don't think that's true. Okay, let's set aside the word morality, because, especially in circles that we're in, I feel like it has a strong connotation with a sort of, like, modern-ish analytic philosophy, maybe like some other things that are in that category.

DANIEL

Your video has worsened, but your sound is back.

AARON

Okay, well, okay, I'll just keep talking. All right, so you have the divine nature and morality and maybe other things that are like those two things but still apart from them. So in that class of things, there's the question of, like, okay, maybe everybody, or necessarily anybody, who thinks that there are any true statements about something broadly in the vicinity of goodness in idea space is pointing to the meta level of that, or to whichever one of those is truly correct, or something. This is pretty speculative. I have not thought about this. I'm not super confident.

DANIEL

Yeah, I think I broadly believe this. I think this is right about most people when they talk. But you could imagine, even with utilitarianism, right? Imagine somebody getting super into the weeds of utilitarianism. They lived utilitarianism twenty-four seven. And then maybe at some point they just substitute in utilitarianism for morality. Now when they say morality, they actually just mean utilitarianism, and they're just discarding the broader concepts and intuitions behind it. Such a person might just, I don't know, I think that's the kind of thing that can happen. And then you might just want a.

AARON

Different thing by the word. I don't know if it's a bad thing, but I feel like I do this when I say, oh, x is moral to do or morally good to do. It's like, what's the real semantic relationship between that and it's correct on utilitarianism to do? I feel like they're not defined as the same, but they happen to be the same or something. Now we're just talking about how people use words.

DANIEL

Yeah, they're definitely going to happen to be the same in the case that utilitarianism is like the right theory of morality. But you could imagine that. You could imagine even in the case where utilitarianism was the wrong theory, you might still just mean utilitarianism by the word good because you just forgot the intuitions from which you were building theory of morality and you're just like, okay, look, I'm just going to talk about utilitarianism now.

AARON

Yeah, I think this is, like... yeah, this could happen. I feel like this is a cop-out and like a non-answer, but I feel like getting into the weeds of the philosophy of language, and what people mean by concepts and words, and the true nature of concepts... it's just not actually that useful. Or maybe it's just not that interesting to me, though I'm glad that somebody has thought about it at some point.

DANIEL

I think this can happen, though. I think this is actually a practical concern. Right. Okay. Utilitarianism might be wrong, right? Does that strike you as right? Yeah, I think it's possible for you to use language in such a way that if utilitarianism were wrong, what that would mean is that in ordinary language, goodness, the good thing to do, is not always the utilitarian thing to do, right? Yes, but I think it's possible to go down an ideological rabbit hole. This is not specific to utilitarianism. Right. I think this can happen to tons of things, where when you say goodness, you just mean utilitarianism, and you don't have a word for what everyone else meant by goodness. Then I think that's really hard to recover from. And I think that's the kind of thing that can conceivably happen and maybe sometimes actually happens.

AARON

Yeah, I guess as an empirical matter, like an empirical psychological matter: do people's brains ever operate this way? Yes. I don't really know where that leaves us. Maybe we should move on to a different topic or whatever.

DANIEL

Can I just say one more thing?

AARON

Yeah, totally.

DANIEL

First, I should just give this broad disclaimer that I'm not a philosopher and I don't really know what I'm talking about. But the second thing is that particular final point. I was sort of inspired by a paper I read. I think it's called, like, do Christians and Muslims worship the same god? Which is actually a paper about the philosophy of naming and what it means for proper names to refer to the same thing. And it's pretty interesting, and it has a footnote about why you would want to discourage blasphemy, which is sort of about this. Anyway.

AARON

No, I personally don't find this super interesting. I can sort of see how somebody would and I also think it's potentially important, but I think it's maybe yeah.

DANIEL

Actually it's actually kind of funny. Can I tell you a thing that I'm a little bit confused about?

AARON

Yeah, sure.

DANIEL

So philosophers... there's this branch of philosophy that's the philosophy of language, and in particular the philosophy of reference, right? Like, what does it mean when we say a word refers to something in the real world? And some subsection of this is the philosophy of proper names. Right. So when I say, like, "Aaron is going to the...", whatever, what do I mean by "Aaron"? Who is Aaron? Like, if it turned out that these interactions that I'd been having with you online, all of them were faked, but there was a real human named Aaron Bergman, would that count as making that sentence true, or whatever? Anyway, there's some philosophy on this topic, and apparently we didn't need it to build a really smart AI. No AI person has studied this. Essentially, these theories are not really baked into the way we do AI these days.

AARON

What do you think that implies or suggests?

DANIEL

I think it's a bit confusing. I think naively, you might have thought that AIs would have to refer to things, and naively, you might have thought that in order for us to make that happen, we would have had to understand the philosophy of reference, or of naming, at least on some sort of basic level. But apparently we just didn't have to. Apparently we could just... I don't know.

AARON

In fact, just hearing your description, my initial intuition is like, man, this does not matter for anything.

DANIEL

Okay. Can I try and convince you that it should matter? Yeah, tell me how I fail to convince you.

AARON

Yeah, all right.

DANIEL

Humans are pretty smart, right? We're like the prototypical smart thing. How are humans smart? I think one of the main ingredients of that is that we have language. Right?

AARON

Yes. Oh, and by the way, this gets to the unpublished episode with Nathan Barnard.

DANIEL

Coming out... an unpublished one? I think I've seen an episode with him.

AARON

Oh, yeah. This is the second one because he's.

DANIEL

Been very... oh, exciting. All right, well, maybe all this will be superseded by this unpublished episode.

AARON

I don't think so. We'll see.

DANIEL

But okay, we have language, right. Why is language useful? Well, I think it's probably useful in part because it refers to stuff. When I say stuff, I'm talking about the real world, right?

AARON

Yes.

DANIEL

Now, you might think that in order to build a machine that was smart and wielded language usefully, it would also have to have language. We would have to build it such that its language referred to the real world. Right. And you might further think that in order to build something that uses language in a way that actually succeeds at doing reference, we would have to understand what reference was.

AARON

Yes. I don't think that's right. Because insofar as what we call useful is language in, language out, without any direct interaction, without the AIs directly manipulating the world, or maybe not directly, but without them being language-understanders, or beings that do have this reference property, where that's what their language means to them... then this would be right. But because we have ChatGPT, what the use comes from is, like, giving language to humans, and the humans have reference to the real world. You need some connection to reference, but it doesn't have to be at every level, or something like that.

DANIEL

Okay, so do you think that... suppose we had something that was like ChatGPT, but we gave it access to some robot limbs and it could pick up mice... maybe it could pick up apples and throw the apples into the power furnace powering its data center. We give it these limbs and these actuators, sort of analogous to how humans interact with the world. Do you think in order to make a thing like that that worked, we would need to understand the philosophy of reference?

AARON

No. I'm not sure why.

DANIEL

I also don't know why.

AARON

Okay, well, evolution didn't understand the philosophy of reference. I don't know what that tells us.

DANIEL

I actually think this is, like, my lead answer: we're just making AIs by just randomly tweaking them until they work. That's my rough summary of stochastic gradient descent. In some sense, this does not require you to have a strong sense of how to implement your AIs. Maybe that's why we don't need to.

AARON

Understand philosophy. Or the SGD process is doing the philosophy, in some sense. That's kind of how I think about it, or how I think about it now, I guess. During the SGD process, you're, like, tweaking basically the algorithm, and at the end of the day, probably in order to, say, pick up marbles or something, reference to a particular marble or the concept of marble, not only the concept, but both the concept and probably a particular marble, is going to be encoded. Well, I guess the concept of marble, if that's how it was trained, will be encoded in the weights themselves, you know what I mean? But then maybe a particular marble, the vision to see that marble, will be encoded in a particular layer's activations.

DANIEL

Or something, something like that, maybe. Yeah, I think this is like yeah, I guess what we're getting at is something like look, meaning is like a thing you need in order to make something work, but if you can just directly have a thing that gradually gets itself to work, that will automatically produce meaning, and therefore we don't have to think about it.

AARON

It will have needed to figure out meaning along the way.

DANIEL

Yeah, but we won't have needed to figure it out. That'll just happen in the training process.

AARON

Yeah. I mean, in the same way that everything happens in the training process. Yeah, that's where all the magic happens.
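
[Editor's note: a minimal, purely illustrative sketch of the "randomly tweak the weights until they work" loop being gestured at above, i.e. stochastic gradient descent. The data, model, and numbers are made up for illustration; nothing in the code mentions reference or meaning, which is roughly the point.]

```python
# Toy stochastic gradient descent: start with weights that don't "mean" anything,
# then repeatedly nudge them downhill on a loss until they track the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))                 # hypothetical inputs
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=256)    # hypothetical targets

w = np.zeros(10)                               # initial weights: no "concepts" encoded yet
lr = 0.05
for step in range(2000):
    idx = rng.integers(0, len(X), size=32)     # sample a random minibatch ("stochastic")
    pred = X[idx] @ w
    grad = X[idx].T @ (pred - y[idx]) / len(idx)  # gradient of mean squared error
    w -= lr * grad                             # the "tweak": move weights against the gradient

# After enough tweaks, whatever structure the task requires ends up encoded in w,
# without anyone having written down a theory of what w "refers to".
print(np.allclose(w, true_w, atol=0.1))
```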

DANIEL

All right, so do you want to hear my new philosophy of language proposal?

AARON

Yes.

DANIEL

Yeah. So here's the new proposal. I think the theory of reference is not totally solved to everyone's satisfaction. So what we're going to do is we're going to train ChatGPT to manipulate objects in the physical world, right? And then we're going to give the weights to the philosophers. We're also going to give them, like, a bunch of the training checkpoints, right?

AARON

And then they're going to look at.

DANIEL

This, and then they're going to figure out the philosophy of meaning.

AARON

What are training checkpoints?

DANIEL

Oh, just like the weights at various points during training.
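
[Editor's note: a small hypothetical sketch of what "training checkpoints" means here: copies of the weights saved at intervals during training so they can be inspected later. The function, file names, and interval are invented for illustration, not anyone's actual setup.]

```python
# Save a snapshot of the weights every `every` steps while training continues.
import numpy as np

def train_with_checkpoints(w, update_fn, num_steps=10_000, every=1_000):
    checkpoints = {}
    for step in range(num_steps):
        w = update_fn(w)                          # one SGD-style update, as in the sketch above
        if step % every == 0:
            checkpoints[step] = w.copy()          # in-memory snapshot of the weights at this step
            np.save(f"checkpoint_{step}.npy", w)  # hypothetical on-disk snapshot
    return w, checkpoints
```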

AARON

Okay, and your proposal is that the philosophers are going to... well, we haven't solved mech interpretability anyway, right? Yeah. I feel like this is empirically not possible, but conceptually, maybe the outcome won't be, like, solving meaning, but either solving meaning or deciding that it was a confused question or something, that there was no answer, but something like a resolution.

DANIEL

Yeah. I don't know. I brought this up as, like, a reductio ad absurdum or something, or sort of to troll. But actually, if we get good enough at mechanistic interpretability, maybe this does just shine light on the correct theory of reference.

AARON

I mean, I'm just skeptical that we need a theory of reference. I don't know, it seems kind of like philosopher word games to me or something like that. I mean, I can be convinced otherwise. It's just, like, I haven't seen that.

DANIEL

I'm not sure that we need it. Right. I think we get by fine without an explicit one, but I don't think you can tell.

AARON

Yes. Okay.

DANIEL

Can I tell you my favorite? It's sort of like a joke. It's a sentence that... yeah. All right, so here's the sentence. You know Homer, right? Like, the Greek poet who wrote The Iliad and The Odyssey?

AARON

Oh, is that the.

DANIEL

No, this is the setup, by the way, do you know anything else about Homer?

AARON

Male? I don't know much more than that, I think.

DANIEL

Yeah, okay, all right. This is not going to be funny as a joke, but it's meant to be a brain tickler, right? So The Iliad and The Odyssey, they weren't actually written by Homer. They were written by a different Greek... by a different Greek man who.

AARON

Was also named Homer... I thought I saw somebody tweet this.

DANIEL

I think she got it from me.

AARON

That's my... okay, cool.

DANIEL

She might have got it from the lecture that I watched.

AARON

Maybe you can explain it to me. Other people... I don't think they were rolling on the ground laughing or whatever, but they were like, oh, ha, this is actually very funny after you explain it. And I did not have that intuition at all. I'm like, okay, so there's two guys named Homer... where's the brain-tickly part?

DANIEL

Oh, the brain-tickly part is this. How could that sentence possibly be true when all you knew about Homer was that he was a Greek guy who wrote The Iliad and The Odyssey and that he was named Homer?

AARON

How could that sentence... okay, so I feel like the sentence on its own doesn't have a truth value, but what it implies... If I just heard that in normal conversation, in fact, when I heard it just now, and if I were to hear it in normal conversation, what I would take it to mean is: the famous guy who all the academics talk about, turns out, yes, that is Person A, and there was also this other person who is not him. Somebody else has a better, more solid understanding of Homer beyond defining him as the author of The Iliad and The Odyssey, even though that's really all I know about him. I trust there's other people for whom this is not the case. And implicitly, I'm thinking, okay, so there's some philosophy or history dudes or whatever, who know where he was born, they know his middle name or whatever, and so we're just going to call him Person A. And in fact, there was another guy named Homer, and there's no contradiction there or whatever.

DANIEL

What if nobody alive knows that? What if everything that... so I think this is actually plausible, I think, in terms of what living people know about Homer. I think it's just that he was a guy named Homer, he was Greek, he wrote The Iliad and The Odyssey, or at least is reputed to have. And maybe we know something about the period in which he lived, and maybe you can figure out the part of Greece in which he lived from the language, but I think that's probably all humanity currently knows about him.

AARON

So maybe the statement can be... it feels like it can be false. And the way it could be false is if we took a census of... just suppose we had a census of everybody who ever lived in that period, and there was only one Homer. Well, then we would know that statement is false.

DANIEL

What do you mean, only one Homer?

AARON

I mean, there was not two individuals in the census, this hypothetical census named.

DANIEL

Homer, who were given the name Homer. Gotcha. Yeah, that would make it false.

AARON

And so it seems to be carrying substantive information that, in fact, we have historical evidence of two different individuals, and we have reason to believe there were two different individuals who went by the name Homer, and one of them wrote The Iliad and The Odyssey. And given those two facts, then the statement is true.

DANIEL

Okay, so if the statement were: in the past, there were two different people named Homer, and only one of them wrote The Iliad and The Odyssey. But then why would we not say that The Iliad and The Odyssey were written by Homer? Why would we say they weren't written by Homer if they were written by a different guy who was also named Homer?

AARON

Yeah, so this gets back to the difference between the statement per se and my interpretation. So for the statement per se, it sounds like there's no difference there. Or the phrase "some other guy named Homer", where it's, like, redundant... maybe not wrong, but redundant or something, maybe even wrong, I don't know. The information carried in the statement would be equivalent if you just said: we have good reason to believe there was not merely one Homer, but two, and indeed, one of these people wrote The Odyssey. It's the same statement, basically.

DANIEL

All right, so here's the thing I'm going to hit you up with. I think usually people have, like most people have names that other people also have, right?

AARON

Yes.

DANIEL

Like, there's more than one person named Daniel. There's more than one person named Aaron.

AARON

Right.

DANIEL

There was probably more than one person named Homer around the time when Homer was supposed to have lived. All right, so, yeah, Homer didn't write The Iliad and The Odyssey. They were written by some other guy who was also named Homer.

AARON

Yeah, I think that's a true statement.

DANIEL

Oh, I think it's false. Can I try and convince you that you're wrong to say that's a true statement?

AARON

Yeah.

DANIEL

All right, here's one statement: Homer wrote The Iliad and The Odyssey. Right?

AARON

Yes.

DANIEL

Do you think that's true?

AARON

Okay, so I think it is both true and false, depending on the reference of Homer.

DANIEL

Oh, yeah. So what is the reference?

AARON

Something like... yeah, maybe I'm willing to take back the thing that I previously said, because this feels like more normal language or something. When I say I'm talking to Daniel, right, that feels like a true statement. But maybe my sister has a friend named Daniel, and if I told that to her, right, she would be right to say that it's false... because, you know what, I keep getting back to the fact that, who gives a shit? You know what I mean? I still struggle to see... You can dig down into whether a particular proposition is true or false or indeterminate or something. But in normal language, we have a million psychological, and maybe not psychological, but we have a million ways to figure out what is meant by a particular proposition beyond the information contained in its words. Okay. I don't know. This is not an answer or whatever, but it still seems like it's all fine, even if we never figure it out.

DANIEL

I guess sorry, I'm going to do a little bit of sweeping. Your audience doesn't want to hear that. I'm going to sweep them, then.

AARON

No, that's totally cool. We're pro sweeping.

DANIEL

All right. Finish. All right.

AARON

Yeah.

DANIEL

I'm inclined to agree that it's fine. So when you say there's a million opportunities to understand the content of a sentence other than just the information contained in the words, or to understand what somebody means beyond just the info contained in the words, you might still want to know what the info contained in the words actually is. I should say, broadly, the way I relate to this is as an interesting puzzle.

AARON

Yeah, no, I kind of agree. Maybe I'm just, like... yeah, I think it's, like, I can see why somebody would find it interesting.

DANIEL

Yeah. It gets to a thing where, when you try to think of what we mean by something like Homer, or what we mean by something like Daniel Filan, at least when other people say it, often you'll come up with a candidate definition, and then there'll be some example which you hadn't anticipated, which I think is part of what makes this interesting. So, for instance, you might think that Daniel Filan is the person named Daniel Filan, but here's a sentence: Daniel Filan could have been named Sam. Or actually, here's a better one: Daniel Filan could have been named Patrick. Like, my dad actually sort of wanted to for a while. My dad was thinking of calling me Patrick. Right.

AARON

I was almost, yeah.

DANIEL

Yeah. So if you think about the sentence "Daniel Filan could have been named Patrick"... if Daniel Filan just means, like, a person named Daniel Filan, then that's...

AARON

I mean yeah, but that shouldn't.

DANIEL

So then you might say, like, oh, what Daniel Filan means is... it's actually just an abbreviation of a bunch of things you might know about me. Right. Like, Daniel Filan is this guy who is Australian, but now lives in Berkeley and hosts this podcast and a few other things. And then the trouble is, you could imagine a parallel world, right, where I didn't do any of those things.

AARON

Well, I feel like that's a bad definition. It would be: Daniel Filan is a human being who is both psychologically and genetically continuous with the being who existed before he was named, or something like that.

DANIEL

Okay. But you still have to nail down which being Daniel Filan is supposed to be psychologically and genetically continuous with... wait, what? Sorry. When you say, like, Daniel Filan means just beings that are, like, human beings that are psychologically and genetically continuous with the being before they were named... I think that's what you said.

AARON

Which is an ugly definition, yeah. Well, I'm talking about you. Yeah, beyond that, I don't think there's any other verbal mishmash I can say that will point to that. There's, like, a human being... there's, like, a human being that's... like, the atoms aren't the same, plus not all the memories are the same, there's personal identity issues, but there's a human being with basically your genetics, like, whatever, your age plus a couple months. And that is also Daniel Filan.

DANIEL

Yeah. Can you try and say that without using the word "you"? Imagine it's somebody who you're not talking to, and so you don't get to... wait, what?

AARON

I don't even know... wait, what am I supposed to be doing? Trying to gesture towards what I'm trying to say?

DANIEL

Yeah, give a definition of what you mean by Daniel Filan in a way that's valid. Like, I would still be Daniel Filan... imagine a counterfactual world where, like, I'd grown up to hate EA or something. You would want to still call that guy Daniel Filan. But you're not allowed to use the word "you", okay?

AARON

Yeah. Daniel Filan is the human being who is currently... no, I feel like I kind of mean two different things. Honestly, I don't think there's one definition. One is, like, the actual... the current and actual instantiation of a particular human being. And the other definition or meaning I have is, like, all human beings who either were or will be. I don't know about "could be", honestly... or I think "could be". I don't know about "could have been". Yeah, maybe "could have been". Yes, let's go with "could have been". So throughout the multiverse, if that's a thing, all those beings who either were, will be, could have been, or could be psychologically and genetically continuous with a human being who was conceived or, like... I guess this being started existing when he was a genetic entity, or, like, had his full genome or something, which is hard.

DANIEL

Which beings are.

AARON

The counterfactual alternatives of the current being named Daniel Filan. And this being, in turn, is defined as the current instantiation of an original past self. And that original past self can be delineated in time by the moment that a particular human being had all the genes, or whatever.

DANIEL

So it's things that branch off the current being that is named Daniel Filan, right?

AARON

Or things that branch off the... yeah, branch off, but also retrospectively, I guess. But yeah.

DANIEL

Okay. And the current being... suppose, like... so I haven't actually told you this, but my legal name is actually Steve Schmuckson, not Daniel Filan. Is there anything that the name Daniel Filan refers to?

AARON

Like, there's no fact of the matter.

DANIEL

You think there's no fact of the matter?

AARON

Here's my concern: where is the fact of the matter located, or something like that? Is it in my neurons? Yeah. Is it, like, moral truth? What is it, like, referential truth? Is there any such thing as referential truth?

DANIEL

Oh, I don't know. I guess probably not.

AARON

Okay.

DANIEL

But I guess when you say "the person named Daniel Filan", I think there's still a question of, like, wait, who is that? Like, how do you figure out who the person named Daniel Filan is? Like, I think that gets back to the...

AARON

Probably it's multiple people. Wait, hold on. Pause. Okay, I'll cut this part out. Lindsay, I'm in the middle of a... sorry. Sorry. Bye. Okay, I'm back.

DANIEL

Yeah, but when you say, like, "the person named Daniel Filan", and you're using that in your definition of what you mean by Daniel Filan, that strikes me as kind of circular, because how do we know which person is the one who's named Daniel Filan?

AARON

Yeah, I agree. That's a poor definition. I feel like I very weakly think that I could come up with a more rigorous definition that would be, like, really annoying and non-intuitive.

DANIEL

Okay.

AARON

Not super sure about that.

DANIEL

You should try and then read some phil articles because it's all totally doesn't.

AARON

Matter and it's like a fake question. Oh, yeah, it doesn't matter.

DANIEL

I just think it's a fun puzzle.

AARON

Yeah, but it feels like it's not even... yeah, so there's, like, a lot of things... I feel like there's mathematical questions that don't matter but are more meaningful in some sense than even this. It feels kind of like, maybe not, "how many angels can dance on the head of a pin"... yeah, actually, kind of like that. Yeah. How many angels can dance on the head of a pin?

DANIEL

I think that question is meaningful.

AARON

What's the answer?

DANIEL

What's the answer? I guess it depends what you mean by angel. Normally in the Christian tradition, I think angels are supposed to not be material.

AARON

I think maybe, like, tradition. I'm asking about the actual answer.

DANIEL

Yeah, I mean, the actual answer to how many angels can dance on the... yeah. I think when you use the word angel... okay, the tricky thing here is, when you use the word angel, you might be primarily referring to angels in the Jewish tradition, about which... no.

AARON

I'm referring to real angels.

DANIEL

There aren't any real angels.

AARON

Okay, well, then how many angels can dance on the head of a pin?

DANIEL

Zero. Because there aren't any.

AARON

I'm kind of joking, sort of adopting your stance from earlier, when it came to, whatever, the aliens with the weird word.

DANIEL

I gave you an answer. What do you want?

AARON

Yeah, I'm also going to give you, like, a series of answers. I mean, I'm not actually going to go through them, I think it'll be annoying, but I could give you a series of answers like that or whatever, like, I'm referring.

DANIEL

To... I'm not sure. You could give me another question; that's my answer.

AARON

Oh, okay.

DANIEL

As for how many actual angels, could.

AARON

I feel like I might be trapped here because I thought that was going to trip you up, and it's just like, yeah, it sounds like the right answer. Honestly.

DANIEL

Well, I guess you might think that. Suppose all dogs suddenly died, right. And then later I asked you how many dogs could fit in this room, there would still be an answer to that question that was like greater than zero. Yeah. I think the word angels just, like it just depends on what the word angels refers to. And I'm like, well, if it has to refer to actual angels, then there aren't any actual angels. If we're referring to angels as conceived of in the Christian tradition, then I think infinitely many. If we're referring to angels as conceived of in other traditions, then I think that I don't know the answer.

AARON

Yes, that sounds right. I'm glad you find this... sorry, that took, like, an hour, so that was an annoying way of putting it.

DANIEL

I liked it. That was a fine thing to say.

AARON

At the meta level. At the meta level, I find it interesting that some people find this interesting.

DANIEL

Yeah. Okay, before you go away and try and figure out theory of naming, can I add some side constraints? Some constraints that you might not have thought of?

AARON

Sure.

DANIEL

Okay, so here's a sentence, like: Harry Potter is a wizard. Right.

AARON

There are no wizards.

DANIEL

You think it's false that Harry Potter is a wizard?

AARON

Yes.

DANIEL

All right, but let's just take the... okay, like, you kind of know what that means, right?

AARON

Yes.

DANIEL

Let's take another sentence. Like, Thor is the god of lightning, right?

AARON

Yes.

DANIEL

Now, I take it you don't believe in the literal existence of Thor or of Harry Potter. Right?

AARON

Yeah. Right.

DANIEL

But when I talk about Harry Potter, I'm wielding the name Harry Potter, and I'm doing a sort of similar thing as when I wield the name Aaron Bergman. Right.

AARON

Similar. Not the same, but similar.

DANIEL

Yeah. Okay, cool. So Harry Potter the thing about Harry Potter is it's like an empty name, right? It's a name that doesn't refer to anything that actually exists. Right.

AARON

Doesn't refer to any configuration of actually existing molecules. It refers to some abstractions, and it refers to a common set, a grouping of properties in various people's minds.

DANIEL

Oh, you think it refers to the grouping of properties rather than so if I said, like, Thor actually exists, that would be true, according to you?

AARON

No, I'm trying to figure out why. I think I figured why I totally.

DANIEL

Think this is a solvable problem, by the way.

AARON

Okay.

DANIEL

I'm not trying to say this is some sort of deepity, like, you will never know. I think this is conceivable. Anyway, the point is, Harry Potter and Thor are examples of names that don't refer to actual humans or gods or whatever, but they're different, right?

AARON

Yes. So that's interesting.

DANIEL

You might have thought that names were nailed down by the sets of things they referred to.

AARON

Hold on. I think something can refer to something without... or, sorry, there are things besides... maybe we don't have a good word, but there are thing-like things, for lack of a better term, that exist in some meaningful sense of "exist" and that are not configurations of quarks, or identifiable configurations, or, like... yeah, let's go with configurations.

DANIEL

Quarks and leptons. Sure. And you don't just mean, like, the EM field. You mean, like, things can refer to non-physical stuff.

AARON

I don't think physical is a useful category. This is also a hot take in some circles.

DANIEL

Like, wait, do you think that Harry Potter is like, this non physical being that flies around on a broomstick, or do you think that Harry Potter is like, the concept?

AARON

So I think there's multiple things that that term means, and the way it's actually used depends on...

DANIEL

Do you think "Aaron Bergman" means multiple things?

AARON

No.

DANIEL

What's the difference?

AARON

Well, I can... in fact, Harry Potter might only refer to exactly two things.

DANIEL

What are the two things that Harry Potter refers to?

AARON

Sorry, wait, maybe I'm wrong about that. Okay, hold on. So, like, if I use the... not... I don't know, because what I want to say is, "Harry Potter" refers to what you think it refers to in two different contexts. And one context is where we pretend that he exists, and the other context is when we recognize, or pretend, that he doesn't. And now you're going to say, oh, who's "you" referring to? Am I right?

DANIEL

Yeah.

AARON

Okay, that sounds like what I'm going to say. Okay. No, I feel like there's, like, an ur-Harry Potter, which is like a cluster of traits, like a cluster of things. There's no hardened, well-defined thing, in the same way there's no well-defined notion of what is a bottle of wine. You can keep adding weird tidbits to.

DANIEL

The bottle of wine. But the ur-Harry Potter is like a bundle of traits?

AARON

Characteristics. Traits. Okay.

DANIEL

Is Rishi Sunak a bundle of traits?

AARON

I think there's, like, two levels. There's, like, the meta Rishi Sunak, and the thing that people normally refer to when they refer to Rishi Sunak, which is not merely... which is not a bundle of traits. It is distinguished from others... like, it is a physical, or like a biological, mind-like thing that is individuated, or pointed out in person-space, by the bundle of traits, or something like that.

DANIEL

Yeah, he is that. But I think that when people say Rishi Sunak, I don't think they ever mean the bundle of traits. I think they mean, like, the guy. I think the guy has the bundle of traits, but they don't mean the traits, they mean the guy.

AARON

Yeah, I think that's right. I think the way that their mind-brain lands on that actual meaning is, like, in some sense, recognizing those letters as pointing to characteristics, as pointing to things... to maybe things or characteristics such as the Prime Minister of Britain, or the UK, or whatever, like, things.

DANIEL

That embody the... they don't refer to the characteristics themselves. They refer to the things that embody the characteristics. Right.

AARON

I think as an empirical matter, this is true. I can imagine a world in which it's sometimes the former, the bundle of characteristics.

DANIEL

Yeah, I guess I think that would be people speaking a different language. Right. Like, there are all sorts of different languages. Some of them might have the word Rishi sunak. That happens to mean, like, the property of being the Prime Minister of Great Britain and Northern Ireland.

AARON

Well, like, okay, so let's say in a thousand years or whatever there's still humans or whatever, and there's, like, a mythology about some being. And in the same way that there's mythology about Thor, there's mythology about this being who in various myths plays the role of... not plays the role, but is, in the myths, the Prime Minister of the UK, which is, like, some ancient society, and has these various traits. Then it would be kind of how you thought. But yeah, this is, like, a conceivable thing, in which case there is a reference. I wouldn't say that means that the language people speak isn't English anymore just because they use Rishi Sunak in that way.

DANIEL

But when they said Rishi Sunak, they were actually referring to the traits, not, like, some sort of being.

AARON

Well, maybe there were historians in that society who were referring to the being, but most normal people weren't or something.

DANIEL

I guess I think they would be referring to, like... I guess, to what they would call Rishi Sunak. Like, sorry, what kinds of things do these people believe about Rishi Sunak? And how are they using sentences involving Rishi Sunak?

AARON

So somebody might say, oh, you know, Rishi Sunak isn't actually alive. That would be a true statement. It would also be a true... sorry. Sorry, or, like... wait, yeah.

DANIEL

Sorry, is the idea that these people have myths about him? Right, all right, sorry. That's the question I was asking. Okay, all right, cool. I guess this would be sort of similar to the case of Santa Claus. The phrase Santa Claus comes from St. Nicholas, who was probably a real guy from Turkey named... okay, I, like, vaguely.

AARON

Knew that, I think.

DANIEL

Yeah, but I guess this gets us back to where we started with when we say Santa Claus, do we mean like, the bundle of ideas around Santa Claus or do we mean like a guy who dispenses a bunch of presents.

AARON

On I mean, I want to step back.

DANIEL

Anyway.

AARON

Yeah, I feel like maybe... insofar as I feel like maybe it does matter, or, like, yeah, the question of meaning... or, sorry, it can matter, but it just has a different answer in particular cases. And so the right way to go about it is to just discuss reference in the case of morality, for example, and in the case of Santa Claus, and in another case. And there's no general answer. Or maybe there is a general answer, but it's so abstract that it's not.

DANIEL

Useful in any way. That might be. Well, I think even abstract answers can be pretty... yeah, I think you might have some hope that there's a general answer for the case of proper names. To be even more concrete, I think you might think that there's some theory, that's sort of specific, that unifies the names Aaron Bergman, Santa Claus, and Zeus.

AARON

Yeah. And I guess I think, oh, it'll be a lot easier and quicker just to actually disambiguate case by case. Maybe I'm wrong. Maybe I'm wrong. So if some tenured philosophers at whatever university want to work on this, people.

DANIEL

Can do that, I should say. I've read papers that purport to explain all three of these naming practices that I found somewhat convincing. When I say papers, I mean one paper. It's actually the paper I cited earlier.

AARON

Okay, you can send it to me or, like, send me a link or whatever, if you want.

DANIEL

Yeah, really, what's happening in this conversation is I read one paper and now I'm trolling you about it. I hope it's a good kind of trolling.

AARON

Yeah, it feels like benevolent trolling. But I actually do think this is kind of meaningful in the context of morality, or at least it's actually kind of non-obvious in that case, whereas it generally is obvious what a particular person in real life is referring to, like in the case of Santa Claus, just depending on context. And morality happens to be important. Right. So maybe there's other cases like that. Or I could see legal battles over, like, what does a law refer to? There's, like, two different people... it's like the guy, the state, there's the name itself. Yes, sure. I don't know.

DANIEL

Yeah. This reminds me of various formulations of originalism... which is, you've heard of originalism, I guess? Constitutional originalism.

AARON

Yeah.

DANIEL

So originalism, it's this theory that when you're interpreting laws, you should interpret the original thing going on there, rather than what we currently want it to be, or whatever. And there's this question of, like, wait, what thing that was originally going on should we interpret? And sometimes you occasionally hear people say that it's about the original intent. I think this is definitely false, but more often people will say, oh, they mean the original public meaning. But sometimes people say, oh, no, it's the original meaning. In a legal context, people try to get at what exactly they mean by originalism, and it has some of its flavor.

AARON

Yeah, I could talk about the object level, or at the level we've been talking. I don't think there's, like, a fact of the matter, but at the object level: if you convinced me that originalism was true... maybe you couldn't... what I want to say is, because those people weren't playing by the rules or whatever, we've just got to norm it out or something. Sorry, people writing the Constitution weren't doing it under the pretext of originalism. I don't know. I could be wrong about this.

DANIEL

Okay. Why do you think it.

AARON

Maybe... it looks pretty plausible that I'm wrong. I vaguely feel like this is a thing that was, like, developed in, like, the 20th century by, like, legal scholars.

DANIEL

I think that's sort of right. So they had this notion of strict constructionism in the 19th century that I think is kind of analogous to originalism. I think when people talk about originalism, they mean simple enough concepts that it seems plausible to me that people could have had them back then. I don't know, maybe this is my bias, but it seems very intuitive to me that when people were writing the Constitution, maybe they were thinking, hey, I want this law to mean what it means right now.

AARON

Yeah. There's a question there. Okay, what is "mean" and "means"? Yeah.

DANIEL

I guess everybody thinks yeah, all right. There's one game which is, like, what did the framers think they were doing when they wrote the Constitution? There's a potentially different question, which is, like, what were they actually doing? They could have been wrong about legal theory. Right. That's conceivable. And then there's a third game, which I think is maybe the best game, which is, like, what's the best way to sort of found a system of laws? Should we hope that all the courts do originalism, or should we hope that all the courts do like, I'm not exactly sure what the alternative is supposed to be, but like, yeah, but what.

AARON

Should we ask from an alternative?

DANIEL

Is like, sorry.

AARON

Yeah, I agree. I assume you mean, like, what actually in 2023 should be the answer, or how should judges interpret the Constitution?

DANIEL

That's the game where "should" here means something like: what would cause the most clarity about the laws? Or something like that.

AARON

I don't mean that exact same thing. I think I mean something more, in some sense, ultimately moral. Not that clarity is not... I don't know. There's other values besides clarity.

DANIEL

Yeah, sure. We might want to limit scope a little bit to make it easier to think about. Right.

AARON

Yeah.

DANIEL

When I'm building a house, if I'm building a house, I probably want to think, like, how will this house not fall down?

AARON

I don't know.

DANIEL

I'm going to have a bunch of concrete requirements, and it's probably going to be better to think about that rather than, like, what should I build? Because I don't want to solve philosophy before building my house.

AARON

Yeah, it's not as obvious what those requirements are in the legal case. It's possible that... just because you can have, like, two statements issued by the federal courts, or you can imagine that the last two judgments by the Supreme Court include unambiguous propositions that are just opposites of one another. And I don't think this would mean that the United States of America has fallen. You know what? Okay, like, nobody knows. What should we do? I don't.

DANIEL

Mean yeah. I would tend to take that as saying that legal judgments don't follow the inference rules of classical logic. Seems fine to me.

AARON

Sure. Also, I think I'm going to have to wrap this up pretty soon. Sorry.

DANIEL

Yeah, we can go for ages.

AARON

Do this again. Yeah, this will be the longest one yet.

DANIEL

I feel a bit guilty for just trolling. I don't even properly understand.

AARON

Especially... I do think the morality thing is interesting, because I think there's definitely, like, a strain of rationalist thought that's directionally like where you were coming from, at least in terms of vibes, that's pretty influential, at least in some circles.

DANIEL

Yeah, I guess I'm not sure if I did a good job of articulating it. And also, I've sort of changed my mind a little bit about... I don't know, I feel like when I talk about morality, I don't want to get caught in the weird weeds of the semantics. Rather, I think an important fact about morality is that it's not a weird contingent fact that humans evolved to care about it. I don't know. To me, it's really interesting that evolutionary accounts of why we care about morality don't rely on really fine-grained features. They rely on very broad ones: people talk to each other, and we have common projects, and there's not one guy who's stronger than every other human. I don't know. Yeah, I feel like that's somehow more real and more important than just the weird semantics of it. Anyway, before we close up, can I plug some of my stuff?

AARON

Yes, plug everything that you want.

DANIEL

All right. I have two podcasts. One of my podcasts is called AXRP. It's the AI X-risk Research Podcast, and you can listen to me interview AI x-risk researchers about their work and why they do it. I have another podcast called The Filan Cabinet, where I just talk to whoever about whatever I want. I think if you want to hear some people who strongly... I guess the audience of this podcast is mostly EAs, like young atheist kind of EA types... if you want to hear people who are kind of not like that, I have a few episodes on religion, and one three and a half hour conversation with my local Presbyterian pastor about what he thinks about God. And I have another episode with an objectivist about just, I don't know, I guess everything Ayn Rand thinks, the culmination.

AARON

Oh, no, you cut out at the word objectivist. Sorry, wait, you cut out at the word objectivist.

DANIEL

Oh, yeah, I'll try to say it again. I have one episode where I talk to this objectivist just about a bunch of objectivist thought. So I think we cover objectivists, like, ethics, metaphysics, and a bit of objectivist aesthetics as well. And I don't know, the thing objectivists are most famous for is they're really against altruism. And I ended up thinking that I thought the body of thought was more persuasive than I expected it to be. So maybe I recommend those two episodes to.

AARON

Have been... sort of. Actually, I haven't listened to it in, like, a week, but I was listening to your one with Oliver Habryka. But after I finish that, I will look at the objectivist one. Yeah. Everybody should follow those podcasts. Like me.

DANIEL

Everyone. Even if you don't speak English.

AARON

Everyone. In fact, even if you're not a human, like Santa Claus, including yeah. Okay. So anything else to plug?

DANIEL

If you're considering building AGI don't.

AARON

Hear that? I know you're listening, Sam. Okay, I know you're listening to Pigeon Hour.

DANIEL

Okay, yeah, I guess that's not very persuasive of me to just say, but I think AI could kill everyone, and that would be really bad.

AARON

Yeah, I actually agree with this. All right, well, yeah, there's more... we can cover this in more nuance next time you come on Pigeon Hour. Okay, cool.

DANIEL

I'm glad we have a harmonious ending.

AARON

Yeah. Of conflict. Disagreement is good. I'm pro discourse. Cool. All right, take care. See ya. Bye.
