Aaron's Blog
Pigeon Hour
#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all

Episode #8 of Pigeon Hour

Summary

In this philosophical and reflective episode, hosts Aaron and Max engage in a profound debate over the nature of consciousness, moral realism, and subjective experience. Max, a skeptic of moral realism, challenges Aaron on the objective moral distinction between worlds with varying levels of suffering. They ponder the hard problem of consciousness, discussing the possibility of philosophical zombies and whether computations could account for consciousness. As they delve into the implications of AI on moral frameworks, their conversation extends to the origins of normativity and the nonexistence of free will.

The tone shifts as they discuss practical advice for running an Effective Altruism group, emphasizing the importance of co-organizers and the balance between being hospitable and maintaining normalcy. They exchange views on the potential risks and benefits of being open in community building and the value of transparency and honest feedback.

Transitioning to lighter topics, Max and Aaron share their experiences with social media, the impact of Twitter on communication, and the humorous side of office gossip. They also touch on the role of anonymity in online discussions, pondering its significance against the backdrop of the Effective Altruism community.

As the episode draws to a close, they explore the consequences of public online behavior for employment and personal life, sharing anecdotes and contemplating the broader implications of engaging in sensitive discourses. Despite their digressions into various topics, the duo manages to weave a coherent narrative of their musings, leaving listeners with much to reflect upon.

Transcript

AARON: Without any ado whatsoever. Max Alexander and I discuss a bunch of philosophy things and more.

MAX: I don't think moral realism is true or something.

AARON: Okay, yeah, we can debate this.

MAX: That's actually an issue then, because if it's just the case that utilitarianism, as an axiology, is true or something, then whether or not I'm bothered by it or would make certain trade-offs personally doesn't actually matter. But if you had the godlike AI and I need to give it my axiological system or something, and there's not an objective one, then this becomes more of a problem, because you keep running into these issues or something.

AARON: Okay, yeah, let's debate. Because you think I'm really wrong about this, and I think you're wrong, but I think your position is more plausible than you think. I'm at like 70% that some version of moral realism is true. And I think you're at, like, what? Tell me. Like, I don't know, 90 or something.

MAX: I was going to say probably 99% or something. I've yet to hear a thing that's plausible or something here.

AARON: Okay, well, here, let's figure it out once and for all. So you can press a button that doesn't do anything except one thing: it creates somebody in the world who's experiencing bad pain. There's no other effect in the world. And then you have to order these two worlds. There's no normativity involved. You only have to order them according to how good they are. This is my intuition pump. This isn't like a formal argument. This is my intuition pump that says, okay, the one without that suffering person and no other changes is better. Not subjectively. There's a fact of the matter as to which one is better. I mean, I feel like morally better and better here just are synonyms. All things considered better, morally better, whatever. Do you have a response, or do you just want to say, like, no, give me a formal argument?

MAX: What makes this fact of the matter the case or something like that?

AARON: Okay, I need to get into my headspace where I've done this or had this debate before. I do know. I'll defer to Sharon Hewitt Rawlette, a not-too-long-ago 80K podcast guest who basically made the case for hedonic moral realism and hedonic value being the one thing that intrinsically matters, and a moral realist view based on that. And I basically agree with her. Okay, it's like settling in now. Yeah. So it is just the fact of the matter that pleasure is good, and if you say that's not true, then you're wrong. And pain is bad, and if you say that that's not true, you're just wrong. That's kind of the argument. That's it. And then I can build on top of it: where do you get the ordering of worlds from? But that's the core of the argument here.

MAX: Yeah. I think you need an explanation for why this is the fact of the matter or something.

AARON: Okay. I mean, do I need an explanation for why one equals one or something like that? Do you need an explanation?

MAX: Yes, I think yes. Really? Because we take this to be the case or something, but the symbols one plus one equals two or something is like by itself not true or something. It's like just a bunch of lines, really, or something. Like there's all these axioms and things we build on with the mathematical system and you could do other ones. There are like a bunch of other systems.

AARON: I guess if you're a true epistemological nihilist and you think there are no statements that are true, then I'm probably not going to convince you. Is that the case for you?

MAX: I don't think it's the case that there aren't things that are true or something.

AARON: Do you think there's anything that is true? Can you give me an example? Is there a bed behind you?

MAX: I'll say yes, but that's probably couched. You could probably, I think, represent the universe or something as a matrix of atom positions or subatomic particle positions and these things, and maybe the rules, the equations that govern how they interact or something.

AARON: Yeah, I agree.

MAX: Make claims that are truth-evaluable based on that matrix or something. And then you could be like, we can then draw fuzzy concepts around certain things in this matrix and then say more true and true things or whatever.

AARON: So I think our disagreement here is that maybe, I don't know if it's a disagreement, but the hard problem of consciousness introduces the fact that that description of the world is just not complete. You have subjective experience also. Are you a phenomenal realist?

MAX: What's the definition of that again?

AARON: So you think qualia is like a real legit thing?

MAX: Is qualia just experience or something?

AARON: I'm sorry, I feel like I'm just assuming that you know every single term that has ever been used in philosophy. I feel like the thing that I want to say is qualia is like. Yeah, I'll say it's like subjective experience, basically. But then people will say, like, oh, qualia exists, but not in the sense that people normally think. And. But I want to use the strong version of qualia that people argue about. It's real. It's, like, genuine. There's no illusions going on. There's no such thing as functional qualia. If there's functional pain, it is like a different thing than what people mean when they say pain. Most people, when they say pain, most of the time, mean there's, like, a real, genuine, legit subjective experience going on. Do you think this thing is real?

MAX: I would say yes, but it does seem like what I would say subjective experiences or something is like a type of computation or something.

AARON: I actually lean towards functionalism. Are you familiar with functionalism in the philosophy of mind or whatever?

MAX: Yeah. Wouldn't that just be the. I think that's what I said. Right. It's just computations or whatever.

AARON: So I'm actually not super sure about this. So I apologize to the philosophical community if I'm getting this wrong, but my sense is that when people say functionalism, sometimes they mean that, as in not exactly empirical, but sort of empirical fact of the world, that if you have some computations, you get qualia, and other people mean that they are just identical. They are just the same thing. There's no, like, oh, you get one and you get the other. They're just the same thing. And I think to say that computations are identical to, just mean the same thing as, qualia is just not true, because it's at least conceivable that. Tell me if you disagree with this. I claim that it's at least conceivable that you have computers that do some sort of computation that you hypothesize might be conscious, but they are not, in fact, conscious. And I think this is conceivable.

MAX: Yeah, I agree. Though I would say when it is the case, something, it doesn't seem that means there's something besides just computations going on, or, like, the specific type of computations, like, really, what you said there is. It's conceivable you could have computations that.

AARON: Look like we want. Okay, yeah, sorry. You're right. Actually, what I mean is that. Sorry, not what I mean, but what I should have said is that it is conceivable that functionalism is false. Meaning that you can get. Meaning that you can have two sets of systems doing computations, and one of them has qualia and the other one does not. One of them is conscious, the other one is not. Do you think this is conceivable?

MAX: Well, do you mean like identical computations or something? I think that'd be the necessary thing or something, because my computer is doing computations right now, but I don't think it's conscious. And I'm doing computations right now, and I do think I'm conscious.

AARON: So if you think that you could run your brain's program on an arbitrarily powerful large computer, do you think it's conceivable that that hypothetical computer would not have conscious experience?

MAX: I do, but I think this is like the real question. Or the reason I would say it's conceivable that it's not having conscious experience is because I would think your simulation just isn't doing the right sort of thing. You think it is, but it's not. For whatever reason, carbon atoms have interactions that we didn't realize, but they do.

AARON: Yeah, actually, this is a good point. I'm actually very sympathetic to this. When people say, I feel like functionalism and I forget what the term is, but substance based, at some point you're just going to get down to like, oh, no, it needs to be like, yeah, if you really want the quarks to be doing the exact same things, then you're just getting two identical physical systems. Physical. I don't like that word so much, but whatever, I'll use it. I think we just reinvented the zombies thing. Like, are zombies conceivable? And I claim yes.

MAX: I would think no, or something. Okay, I guess I don't know the P zombie example super well, but I would guess at a certain level of your ability to, like, if you knew all the physical things about the world, like all the physical things that were knowable, p zombies would not be possible for something like you'd be able to tell.

AARON: But are they conceivable? I think that's the crux. I think.

MAX: I think P zombies are like an epistemic problem or something. Right? Is probably what I would say. It's a lack of knowledge or something. Like if you knew all the relevant things and you would be able to.

AARON: Tell, maybe, yeah, I think the fact that it's an epistemic problem is like a demonstration of its conceivability. I don't think it's conceivable that there's like, wait, what's a good example of something that's inconceivable? I feel like they're all very abstract, like a circular square or something. But I don't have a better example at hand. But no, one thing is, you don't know that I'm. I guess you know that you're a conscious, right. But, like, it's really, like, if you just stipulate that there's another version of you that is, like, yeah, in another everettian branch, maybe that's not a good example. I don't know. Because. I don't know. I feel like it's just weird with physics, but as close as you could possibly make it to you in just, like, whatever way, just, like, throw in all the stipulations you want. Do you think there's any chance it's not conscious, but you are?

MAX: I guess I'm pretty confident it would.

AARON: Be conscious or something.

MAX: Like, the Spock teleporter thing. Yeah, I think we're probably both conscious. Or I was wrong about me being conscious in the first place, I guess. Or there is a very small chance, that's incredibly small, probably not worth mentioning, but I'm already mentioning it, that I've lost my mind and the whole universe is just my brain making shit up or whatever. And so it's all just like, I'm.

AARON: The only real thing connecting this back to moral realism. Yes. My original claim was that the matrix of quarks and positions and physical laws is just not a complete description of the universe. And I think that you want to say that it is and that valence. So, like, valence is just encoded in that description somewhere.

MAX: Yeah. Or, like, the thing you're interested in is, like, because you have to zoom in or something. I don't know if you've heard of Spinoza's God or something, but he got excommunicated from Judaism for this.

AARON: I didn't even know you could get excommunicated.

MAX: I didn't know either, but he did, and they still aren't over it. Somebody wrote a letter to someone high up in whatever Jewish thing, for a documentary or something, and they're like, we'll never talk about Spinoza. So his claim is, like, God is the universe. So the totality of all physical interactions and matter is, like, the universe. He probably wouldn't put it that way, but that's his idea. And so if that's true, the rock over there is not conscious, but I am. And so it's like an emergent property of a smaller part of a system, consciousness or something. It's not just like the universe being conscious. Yeah, I would say that it is the smaller thing. It's just a bunch of interactions, is what qualia is or whatever.

AARON: Yeah. I don't think, unfortunately, that we're going to solve this here. I actually do know what we disagree on now, more fundamentally. I don't know what the appropriate term is, but I think that you're wrong about valence just being, I guess, logically implied by the whole mathematical thing, like all the quarks and physical laws. You think it's like a logical necessity that falls out of that, that valence is real and happens, in fact.

MAX: I don't know if logical. I don't know enough logic, I guess, to say this is the case, but it is like given the physical rules governing our universe and all the.

AARON: Oh, no. You know, but they're not given. That was a part of the description.

MAX: What do you mean?

AARON: The description of how quarks interact, or like physical laws or whatever, is part of the model that I'm referring to. And I think you want to say that given this whole picture of the physical world, and I'll just use physical to mean what most people think of as physical, and the thing that science is concerned with, just things that are subject to observation and measurable causality, I guess you think that it's like, okay, we have a whole picture of the universe under this view, and then it just is definitely the case, given all this, that I know that you're sentient, I'm sentient. Both sentience and valence, wherever they're happening, are just directly implied by this whole description of the universe or whatever.

MAX: Yeah. Or like, insofar as it does exist or something.

AARON: Yeah. Okay. Unfortunately, I don't think we're going to resolve this debate, but I do think this is like the crux of disagreement.

MAX: I think probably that sort of approach that I take you to be taking is probably the most convincing way one could go about doing it.

AARON: Yeah, I'm not convinced of it, but.

MAX: I do think this is the case.

AARON: Can I get you to 2%?

MAX: Because I could give you this example. Right. And what you're saying maybe gets you out of it or something, or can get you more out of it than someone else would be able to. And so, like, a typical way you might go about saying moral realism is true or something, there's like this book, or it's like a series of lectures that were turned into a book, by Christine Korsgaard, I think is her name. She's like a, you know.

AARON: I don't believe she's a.

MAX: She is called Kantian. I don't know if she's literally a.

AARON: Okay.

MAX: But it uses whatever and like the idea is sort of the way you get normativity, which is the thing that tells you. You can imagine the space of possible ethical systems or something like, there's a bunch. One tells you to stab everyone. We probably shouldn't do that. And the way you get to pick which one is normativity that tells you what you ought to do. And the way you get normativity is like, it follows from human reason or something like this, right? We kind of have this natural reason, whatever Kant called it. He used a different term, I think. And you can make these arguments, and it follows from this, right? And then the question is like, well, what if there are aliens? Right? And then their reason leads them to pick a different ethical theory out of the available ones they could have picked. And these two ethical theories clash with each other. They say they do different things. Who gets to win that clash or something? Like, who ought to. You need now a meta normative theory or something. And I think what your response would be is like, oh, the answer is in qualia or whatever, or something.

AARON: No, I don't think normativity is. I'm actually not super sure about this, but I think, yeah, when I say moral realism, I mean, as far as I know, I'm, like, the only person who espouses this. I'm sure that's not true. There's definitely, like, 10,000 PhDs, but I don't know who they are. But my claim is that an objective ordering of worlds exists; normativity per se, not necessarily.

MAX: I would say that. Okay, it's not clear what the difference is here or something, because earlier I kind of claimed that ethical theories are just isomorphic to orderings of possible worlds or something.

AARON: More formally, the claim, it would be good if x happened, or even, it would be good if you did x, sometimes has an objective truth value, true or false. But the claim, like, you should do x. I don't know. I'm actually pretty divided. Or not, I don't know. But it seems like there's, like, an extra thing being added there when you go from, it would be good if you did x, objectively, to, you should objectively do x. And I'm kind of not willing to make that jump.

MAX: I guess the problem is you have to or something at a certain point.

AARON: Or it's like.

MAX: That's fair. But what do you get out of being like, there's this objective ordering or something, but it's not the case you have to do anything with it? It's just like, I have this thing here.

AARON: Oh, no, actually, yeah, this actually makes a lot of sense, I guess. I haven't thought about that much. Yeah, you might think, okay, if you're a moral realist, that includes normativity, but you don't think there's any form of divine punishment, then maybe just we're describing the same thing and we're just using different words or something like that. Because. Yeah. It's like, okay, I don't know. At the end of the day, no one's going to make you do it. There's no punishment. You really should or something. And I want to claim you really should. Well, it would be good if you did x. You should. Maybe you should, but I don't know if you objectively should or something. Yeah, maybe this is just the same thing.

MAX: Yeah. It's, like, less clear where we disagree then. Because I might be willing to say you have a preference function. Right. Maybe you could call it objective. It emerges objectively from the fact that there's an Aaron, that there's this betterness ordering of possible worlds based on the.

AARON: Things for me or in my judgment of the world. Okay. Yeah, sure.

MAX: And then I could say that. Right. And if that's what we're calling moral realism, sure. But probably what people really mean by moral realism is like, you have one of those two orderings, whichever one.

AARON: Oh, no, that's not what I mean by moral realism. That's a psychological fact that is very contingent. What I mean is that one of those is true. My claim is that, in fact, my ordering of worlds. Well, there's two claims. One of them is, like, the normative ethical position, which is like, oh, I think it's true. But the other one is that, okay, conditional on it being true, it is objectively true. I feel like I'm kind of lost in word salad now.

MAX: Yeah, I guess, what does objectively true mean here or something, at this point, man.

AARON: Yeah. I don't know.

MAX: Just like, I'm trying to remember my McDowell or whatever and my Mackie. These are like people who wrote a bunch of articles against each other about this sort of objective, subjective thing.

AARON: Maybe we should find another thing to discuss, argue about. I don't know. Do you have any ideas? It can be something like, really not deep.

MAX: I mean, the other thing I'm almost certain is true is like, there's no free will or something.

AARON: Yeah. Nobody thinks that that's not true.

MAX: People really like free will. Even people who don't believe in it, who think free will is false, like free will.

AARON: Oh, yeah. I mean, I'm kind of sympathetic to what's the word for it. Yeah. I think that compatibilism has like some. There's like, no, hold on, hold on. Before you block me on Twitter. I don't think it's like saying anything fundamental about the universe or like the nature of reality in the way that moral realism is. I do think it's like a useful semantic distinction to make between. I don't know if useful is like maybe something like slightly stronger than useful, but the sense in which, okay, I can choose to pick up my phone here, even though there's no libertarian free will that is meaningfully different than somebody like my dad running over here and making me do that physically or something like that.

MAX: I think you would just call that like coerced or uncoerced or something. I mean, the reason I say that is.

AARON: Yeah, sure.

MAX: When you say free will, most people think libertarian free will. And that carries things with it that compatibilism doesn't or something. And people basically just.

AARON: Bailey. Yes, people do this. I agree with this. I think nobody. Yeah, basically nobody. Compatibilism is like a fake philosopher thing. Not entirely fake, but like a basically fake philosopher thing.

MAX: Or it's used to justify things that shouldn't be justifiable or something like that.

AARON: Yeah. Although honestly, I kind of like it as a psychological crutch. So I'm just going to keep on doing that. Well, you can't stop me. Okay. Sorry.

MAX: That's true. I could if I put like a button in your brain or a tumor or whatever.

AARON: Yeah. But you can't do that, at least for now. I hope you'll choose not to if you do get the ability to do that. Yeah. Okay. Are there any other deep philosophy takes before, I don't know, say something else?

MAX: No, I think we can move on.

AARON: No. Okay. There's actually nothing particular on my mind. As usual, I did not prepare for this conversation. So sorry about that. Yeah. So do you want to tell me your life story or like something interesting about your life story that you want to discuss on a podcast? Not necessarily the whole thing, an arbitrary part, but is there. And one of these could be your experience in college, like running an EA group, for example.

MAX: I guess I could give advice to EA group organizers or something.

AARON: Yes, do that.

MAX: One is, get co-organizers, because it's really difficult to do alone, especially if you're a certain type of person, which I am, which is like, I don't like sending emails. I can do it, kind of, but I'll put them off sometimes too much, and sometimes you're just like, I have other things to do, I can't make this presentation. And if you have to do literally all of it yourself and all the tabling and these things, it can get demoralizing and just difficult to do. So if you're like, oh, I should do this high impact thing, start an EA group or something, make sure you have other people with you and that they're, like, value aligned is probably the wrong way to say it, but committed or something. Or committed enough to put in at least the amount of time you're putting in or something.

AARON: No, I totally agree with this. Even though I'm, like, a much less committed community builder, I feel like. I did have my stint in college, and luckily I had a very good Type A co-organizer who's also extroverted, unlike me. So this was very, extremely good.

MAX: Yeah, I think probably so there's kind of this community sentiment now. Maybe, or like maybe where we are, I should say in the community, people should be more normal. Why can't you just be normal or something like this?

AARON: I'll represent the other position here.

MAX: Well, I'm going to say something like, I think that's kind of bad to some degree. I think what you should do is be hospitable or something, which maybe isn't that much different. But I think lots of weird things. I may even say lots of weird things a lot of the time and these sorts of things, but one can be nice and understanding and welcoming while doing this. And that probably does mean there are certain types of weird you can't do. Like, if you're a group organizer, probably don't start a group house with your members or something. I guess, though, if you're in college, maybe it's slightly different, because dorms are kind of like a weird space. And is it bad if a bunch of EAs live in the same dorm or friends live in the same room? I don't know. That's fine. But if you're a group organizer, probably be careful about flirting with your members or something. Don't do power imbalances, I guess, is the thing, but I think it's okay.

AARON: To be like, don't do power. Kids. Do not do power imbalances. Okay. No, I agree with all this. Sorry, keep going.

MAX: And maybe sometimes this does mean you shouldn't always say what you think is the case. Probably at the first EA meeting you don't go like, I think utilitarianism is true and we should donate the US budget to shrimp or something.

AARON: I don't think that's true, actually. Only 90%.

MAX: Okay. Yeah, see, that's reasonable. But most people will need room. And I think it is the case that part of letting people explore things and think for themselves is not weighing them down with what you think or something. And so you can push back when they think things, to help. Basically, you might think there are certain objectively true philosophy things. I don't necessarily think there's objectively true philosophy stuff, but you might take this to be the case. A good philosophy professor will not indoctrinate their students or something, or very strongly convince them of this thing. They'll give them the space to sort of think through things rather than just tell them the right answer or something. And good group organizing looks like this too, I think. Especially when you're onboarding new EAs or something.

AARON: Yeah, I largely agree with this. I do think this can actually, and I'm not sure if we. Tell me what your thoughts are on this. I feel like people are going to say, oh, Aaron's pro indoctrination. I am not pro indoctrination. I love me a good critic. Everybody should write to me and tell me what I'm really wrong about. But I actually think that just being kind of upfront about what you think is actually sort of. Sorry, let me back up. Sometimes people will go really hard on the don't-indoctrinate thing, and what that looks like is kind of refusing to let on what they think is the case on a given subject. And I think this is actually not necessarily, I don't want to say literally in all cases that I can possibly imagine, but basically, yeah, as a rule of thumb, no, you should be okay telling people what you think, and then if it's in fact the case that people disagree, especially in the EA context, in the EA community, say that too. But yeah, I actually think community builders, this is a generalization that I'm not super confident in, but I think community builders are a little too hesitant to just say, like, oh yeah, no, I think this is true. Here's why. These are some of the other plausible views. Tell me why I'm wrong.

MAX: Yeah, I think my guess would be that's maybe true in some sense, but there's like a sort of line you want to walk or something, and being on one side of the line could be very bad or lead to certain issues. I don't read the EA Forum that much, I'm a fake EA, but there was a doing-community-building-better or something post. I might have the title wrong, but it seems like the takeaway of the post I'm thinking of is like, epistemics are sometimes bad in community things or something.

AARON: Was this kind of recent ish?

MAX: Yeah, I think it's like people are way too into AI or something.

AARON: Okay. I think this is like a good faith criticism that's absolutely wrong. And I actually have a twitter thread that I'll like. Maybe I'll actually, I'll see if I can find it right now, but if I can, like 2 seconds. But yeah, wait, keep going.

MAX: Yeah, I mean, like, what I would say is there are failure modes that look like that or something, and you might want to avoid it for various reasons or. Yeah.

AARON: Yes. Also, empirically, I did get like one round of feedback that was like, oh, no, thanks, Aaron, for not being so coy. So maybe I'm going hard. I'm like, the fact that I got ten people or whatever, I think probably.

MAX: It's like, you'd expect it to be a dynamic system or something. Like, the amount of coyness you should have goes down as the number of sessions you've interacted with this person goes up or something like that. There's not a catch-all for how to run a discussion group or something. It's kind of based on the people you have in your group or something. And so I think you do actually want more coyness towards the start. You want to be more normal, more not just saying what you think is the case, or something, like the first session, to get people warmed up, kind of understand where their boundaries are or something like this. Boundaries probably isn't the right word, but how they are, because it can be intimidating. I think if the person, the group organizer, is like, this is what I think is the case, tell me why it's wrong, you just might not want to say why it's wrong, because you're scared or something. You don't want them to dislike you for normal human reasons.

AARON: Yeah. No, to be clear, I very much buy into the thing. Do not say what. Just because you think an arbitrary proposition to be true doesn't mean that it is a good idea to say that. There's trivial examples here. If you go to something I can make up, I don't know. You don't like your aunt's dress at a family gathering. You think that's like a true statement, that it's ugly? No one's going to say, oh yeah, well, maybe you shouldn't lie. If she really asks you, we can argue about that. And I think I'll probably defend that. You probably shouldn't lie, but you should be as nice as possible or whatever, but you don't like it if she in fact asks you point blank. But no, you shouldn't just say, oh, let me raise my hand like, hi, Aunt Mary, your dress is ugly, or whatever. On the other hand, then we could get into, okay, if somebody asks you, okay, what do you think about wild animals or whatever? I don't know. Yeah, you should be as polite and normal sounding as possible. But even if you're not directly lying, you shouldn't just say things that are like people would naturally take to be like a conclusion that you don't believe in or whatever.

MAX: Yeah, I agree with this, though probably there is another thing here. This is not really what you're saying or something, but maybe advice I would say, or something, which I think people often get, is also don't derail conversations or something. Even in the pursuit of true things. Like, if you're doing a discussion on animal welfare or something, and in passing you give like a 90-second spiel, like framing, you're like, yes, and people also kind of care about wild animal welfare here, you could read Brian Tomasik or something, if anyone's interested I can send you links. And someone asks later on, what are your thoughts here? Maybe don't talk about wild animal welfare in that context, because maybe you just kind of derail it, because you have to be like, here's all this stuff or something. It's just more productive to be like, we'll talk about it later or something.

AARON: Yeah, okay. No, I agree. We're sort of debating vibes, right?

MAX: We're like debating.

AARON: I think we're like, no, sorry, I was using that sort of like maybe facetiously is the right word, like sarcastically or whatever. Unlike an analytic philosophy where we get to have propositions and then you say p and I say, not p, and then we yell at each other. Unfortunately, this isn't as conducive to that. Although I do think also for the case of EA fellowships per se, and maybe like the general case of group discussions, you also shouldn't let. There's like a thing where at least I've noticed, and maybe this is just like n equals three or whatever, but there will be people who are well meaning and everything. They just don't know what they're talking about and they'll derail the conversation. And then you're afraid to rerail the conversation because. I don't know, because that would be like indoctrination or something. And I think sometimes rerailing is just, like the right thing to do.

MAX: Yeah. I mean, I haven't experienced it that much, I think. But I see no issues with that or something.

AARON: I think rerailing is also. Maybe this is like a general point. I feel like I'm talking from the position as the leader or whatever, but no, just even in the other side of things, I feel like in general, in college, I feel like that's the biggest thing I could think of, like that category, taking college classes or whatever. I would want to know whether the professor thinks that the thing I said is a good point or a bad point. Right. I feel like a lot of times they would just say good point no matter what, and then it's no evidence as to whether it's actually a good point or not. Well, I'm being slightly overstating this a little bit.

MAX: I think you're conflating good and true or something. Maybe.

AARON: Yeah, sure. But, yeah, I don't remember any specific examples that I can give, but all I'm trying to say is that it's not just from the perspective of somebody who's trying to lead people to the right answer or whatever. No. I don't know. I feel like it's also, well, in one sense, if you really value your epistemics, it might, like, help. You might want people who are, in fact, more knowledgeable and have thought a lot about some subject to be more upfront about what they think. But also, as sort of a matter of just, like, respect is not exactly the right word. I don't think the professors were being disrespectful to me, but it's like, if we're just friends talking, it puts you more on an equal playing field in some way. If the one person who is ostensibly and in fact more knowledgeable, and ostensibly in a position of de facto power or whatever, leading this discussion or whatever, tries to be super coy about what they think and then doesn't give you legible feedback as to whether a point you said is plausible or not, it puts them on a pedestal. You can imagine, I don't know, just making an argument that is just kind of terrible. And then, I don't know, I would kind of want the professor to say no. Being polite about it, probably, but say, like, no, bad, or something.

MAX: Yeah. I mean, I guess there's something where maybe I do think professors should not just say everything's good or something. They should only say it when it's good or something, where good doesn't mean true, in my mind. Yeah. But I think I probably lean towards this, which is, like, risk aversion in sort of community building and classroom building is valuable or something. Because you might take the approach that, I think it is more risky or something, risky probably isn't the right thing to say there, I mean like risk averse versus risk neutral, sort of. Yeah, it's more risky to do the things you're describing, and maybe it's like risk neutral or something, the EV is positive, because you'll get lots of really engaged people as a result, but you'll also push people out or something. I think I tend to favor being risk averse here. So maybe you're losing out on some value at one end, but you are including more people. And I sort of anticipate this on net being better or something, like producing better results, at least in community health sorts of senses.

AARON: Yeah, I actually think. I agree with that. I do think that people in social situations are just, in fact, I don't know if it's exactly a bias formally, but people are just risk averse. Because of the whole evo psych thing, it's like, oh, we used to, if you angered somebody in your tribe, they might kill you or whatever. But now it's like, okay, somebody leaves your club, it's fine. No, maybe it's not fine. Right. That is in fact a loss, potentially. Maybe it's not. But I think we sort of have to adjust, make ourselves adjust, a little bit more in the pro-risk direction.

MAX: Yeah. Though I think probably if you take the project of EA seriously or something, there are probably good reasons to want various types of diversity and that sort of approach of, I guess being more risk averse is more likely to get you diversity or something.

AARON: Yes, good point. Excellent. Good job. Everything you said is true, in fact and good. Any other hot or cold takes, in fact, or medium or lukewarm takes?

MAX: Um, I mean, certainly. Right is the question.

AARON: I don't know. It doesn't have to be related to anything. The space is, like, wider than you probably initially think the space here is. As long as it's not an infohazard and not illegal to say, yeah, you can kind of bring up whatever. And not, like, singling out random people, and not mean, not mean.

MAX: Like Global Priorities gossip. I, like, know what Toby Ord did last.

AARON: Done. Okay. Unfortunately, I'm not hip to that.

MAX: And I'm sure there is gossip, like somebody didn't change the coffee pot or something. But I don't work there, so I don't know.

AARON: Maybe. Hopefully you will soon. CEA, if you're listening to this.

MAX: Well, they're different organizations.

AARON: I don't know. Oxford, every EA org, if you're listening to this.

MAX: Yeah. They're all.

AARON: Wait, maybe. Like, what else? I don't know. What do you think about Twitter, like, in general? I don't know. Because this is how we met.

MAX: Yeah.

AARON: We have not met in real life.

MAX: Worse as a platform than it was two years ago or something.

AARON: Okay.

MAX: Stability wise, and there are small changes that make it worse or something, but largely my experience is unchanged, I think.

AARON: Do you think it's good, bad? I don't know. Do you think people should join Twitter, on the margin?

MAX: I think EAs should join EA Twitter. I'm not sure if you should join Twitter rather than other social media or something. I think the sort of area of social media we're on is uniquely quite good or something.

AARON: I agree.

MAX: And some of this is like, you get interactions with people, which is good, and people are very nice or something, and very civil where we are. And it's less clear whether the sorts of personability or something and niceness that you get where we are exist elsewhere on Twitter, because I don't go elsewhere. But basically you should join Twitter, I guess, if you're going to enter a small community or something. If you're just going to use it to browse memes or something, it's not clear this is better than literally any other social media that has no.

AARON: Yeah, I agree. Well, I guess our audience of, oh, maybe four people is largely from Twitter. But you never know. There's like a non-zero chance that somebody from the wider world will be listening. I think it's at least worth an experiment. Right. Maybe you could tell me something that I should experiment with. Is there anything else like Twitter that we don't have in common, that you think, maybe, that I don't do, where it's like, oh, he's an idiot for not doing it?

MAX: Oh, probably not. I mean, I'm sure you do better things than I do. Probably.

AARON: Well, I mean, probably this is a large. Right? Like, I don't know.

MAX: I think a benefit of using Twitter is like, it kind of opens you up or something. Probably is the case. It probably does literally build your social skills or something. I mean, maybe not in an obviously useful way, because it's like you're probably not necessarily that much better at doing in person stuff or something as a result of these Twitter. Maybe it improves you very slightly or something, but it's a different skill, texting versus talking.

AARON: Actually, here's something I want your thoughts on recently. Maybe this is outing me as a true Twitter addict, but no, I, by and large, have had a really good experience and I stand by that. I think it's, on net. Not just on net, but just in general, added value to my life and stuff. And it's great, especially given the community that I'm in. The communities that I'm in. But yeah, this is going to be kind of embarrassing. I've started thinking in tweets. Not 100% of the time, not like my brain is only stuck on Twitter mode, but I think on the margin there's been a shift toward a thought verbalizing in Aaron's brain as something that could be a tweet. And I'm not sure this is a positive.

MAX: Like, it is the case, I've had my friends open Twitter in front of me, like my Twitter, and go through and read my tweets. Actually, many people in my life do this. I don't know why. I don't really want them to do that. And it does change the way you talk. Certainly part of that is probably the character limit, and part of it is probably like culture or something. So that's the case. I don't know if I experience that, or I do sometimes, if I thought of a really stupid pun. Normally you don't do anything with that, but now I can or something. Right. It's worth holding onto for the six seconds it takes to open my phone. But I think I actually kind of maybe think in tweets already or something. Like, if you read my writing, I've gotten feedback that it's very poetic or something. And poems are short or something. It's like very stanza-like or something, which is kind of how Twitter works also. Right. I think if you looked at the formatting of some of my writing, you would see that it's very Twitter-like or something, in some sense, even though there's no character limit. And so maybe this is just the sort of thing you're experiencing or something. Or maybe it's more intense.

AARON: Yeah, probably not exactly. Honestly, I don't think this is that big of a deal. One thing is, I think this is a causal effect. I've blogged less. And I think it's, like, not a direct replacement. Like, I think Twitter has been like an outlet for my ideas that actually feels less effortful and takes less time. So it's not like a one-for-one thing, so other more worky things have filled in the gap for blogging. But I think it has been a causal reason that I haven't blogged as much as I would like to, or really would have liked to, or something. Yeah, I can see that being a thing, that, like, with ideas, there's no strong signal that a particular tweet is an important idea that's worth considering. Whereas if you've written a whole blog post on it and you have 200 subscribers or whatever, you put in a lot of effort, people are at least going to say, like, oh, this is at least plausibly an important idea, when they're coming into it or something like that.

MAX: Yeah. And if you think something is valuable or something, maybe this is different for you or something. But I get like three likes on all my tweets. It's very rare I get ten likes or something. The number of followers doesn't grow. It's just stuck there forever.

AARON: I feel like that's not true. Should I read all your bangers? I have a split screen going on. Should I search for Max Alexander?

MAX: Search for my bangers. That's 20,000 tweets or posts.

AARON: How many?

MAX: The ten-like ones you'll find. Ones over ten likes are, you know, a very small percentage of the total, though.

AARON: So why is your handle Absurdly Max? Is there a story? You don't have to answer, I mean.

MAX: It's very simple. So I think existentialism is right or something. And absurdism specifically. And my name is.

AARON: Wait, really? Wait, what even is absurdism?

MAX: Humans have this inherent search for meaning, and there's no inherent meaning in the universe.

AARON: This is just moral antirealism in continental flavor.

MAX: Well, and then you have to. So there is no moral truth. Right. And you have to make your own meaning or you could kill yourself.

AARON: I guess this is not true. Okay, whatever, but whatever. This is just continental bullshit.

MAX: If you're a moral antirealist or something, you probably end up being an existentialist or don't care about philosophy, I suppose.

AARON: Oh, this is a good one. If anyone at OpenAI follows me, I just want to say that I'd probably pay $20 a month, maybe even more, for a safely aligned superintelligence. I will actually second that. So, OpenAI, we can promise you $40 if you do that.

MAX: A month, in fact.

AARON: Yes. Yeah. There's lots of bangers here. You guys should all follow Max and look up his bangers.

MAX: I would be surprised if somebody's listening to this and isn't already.

AARON: You never know. I'm going to have to check my listening data after this and we'll see how big our audience is. I kind of forget. Yeah. So once again, the space of. Also, I'm happy to take a break. There's no formalized structure here.

MAX: I mean, I'll go for hours.

AARON: Oh, really? Okay, cool. No. Are there any topics that are, like. I feel like. Yeah, the space is very large. Here's something. Wait, no, I was going to say, is there anything I do that you disagree with?

MAX: That's like a classic.

AARON: Yeah, I find you very unobjectionable. That's a boring compliment. I'm just kidding. Thank you. It's actually not.

MAX: I have, like, I suppose writing takes or something, but I don't know if I can find the book.

AARON: Oh, wait, no. We have those two similar blog posts, but you know what I'm talking about. Okay, can I just give the introduction to this?

MAX: Okay.

AARON: No, I think we just have similar blog posts that I will hopefully remember to link that I think are substantively very similar, except they have totally different vibes. And yours is very positive and mine is very negative. In fact, mine is called on suffering. I don't remember what yours is called.

MAX: It's a Wonderful Life.

AARON: There you go. Those are the vibes, but they're both like, oh, no. Hedonic value is actually super meaningful and important, but then we take it in opposite directions. That's it.

MAX: Yeah.

AARON: I don't know if your title actually refers to anything. I feel like your title is bad, but besides that, the piece is good.

MAX: Well, there's a movie called It's a Wonderful Life, and the post is, like, Christmas themed.

AARON: Oh, I feel like I keep not getting things like that.

MAX: It's okay.

AARON: I feel like I vaguely knew that was like a phrase people said, but wasn't sure where it came from. I remember there's like an EA thing called Non-Trivial that I helped on a little bit, and I didn't realize it was a reference to Trivial Pursuit, which is like a board game or something. No, I think it was actually originally called Non-Trivial Pursuits, I think. I'm sorry, Peter McIntyre, if you're listening to this, I apologize. Like, I don't know, I helped for a reasonable amount of time, and I had no idea. And then the guy who I was working under brought this up and I was like, wait, that's a board game or something. Anyway, this is not an important aside. I'm kind of a bad podcaster, because Dwarkesh Patel, who's about 100,000 times better at podcasting than me, goes for like six or eight hours. That stresses me out so much, even from the outside. Like, not even doing it. Doing it, I would just die. I would simply fizzle out of existence. But even thinking about it is very stressful.

MAX: It depends who the guest is, but I could probably talk to somebody for six continuous hours or something.

AARON: Tell me. We don't have to discuss this at all, but one thing could be the virtues of being anonymous or not anonymous.

MAX: Sure. Okay. I think the virtue is probably like comfortableness or something is the primary one, and maybe some risk aversion or something.

AARON: Yeah.

MAX: Probably. It's not that common or something, but it's probably more common than people think. Being socially ostracized or being very publicly canceled or something, and maybe for bad reasons, one might say, does occur. And you might be the sort of person who wants to avoid this. I mean, in some sense you've kind of been piled on at various points and it seems like you just were fine with this.

AARON: TBD. So I have not been formally hired by anyone since I've discussed this. No, there was like the tweet where I was like, we should bury poor people underground. Just kidding. That's not what I said. That is what a subset of people who piled on me said I said, which is not in fact what I said. I asked a question, which was like, why are we not building underground more? No, but yeah, I feel like this is definitely just like a personal taste thing. I don't know. I'm sure there are, but on, very broadly, I don't even want to say like EA Twitter, but like extended, I don't know, broadly, somewhat intellectual-ish, English-speaking Twitter or something like that, are there examples of people with under, say, 5,000 followers who have said something that is not legitimately indicative, and I mean, this is doing a lot of work here, not legitimately indicative of them being a terrible person, and had something really bad happen as a result? Right. That's doing a lot of work. Right. But I think that neither of us are. So then this is the question.

MAX: My guess would be, like, teachers or something, maybe, that sometimes happens to. Or like, I think probably the sorts of jobs we end up wanting or something like this are more okay with online presence. Because, I don't know, everyone at, like, Rethink Priorities is terminally online, right? And so they're not going to mind if you are too.

AARON: I'm not counting on getting an EA org job.

MAX: Like, corporate and public sector jobs, I think, are generally like, you should make your social media private or something.

AARON: Yeah.

MAX: They tell you that not just the.

AARON: The corporate world is like a big category, right? If you're a software engineer, first of all, I think if you actually have reprehensible opinions, I think it is in your self-interest to be an alt, if you want to say them out loud. Not being a bad person is doing a lot of work here. But for what it's worth, I really do think I'm extending this pretty far: there are communists that I would say are genuinely not bad people. I really think they're wrong about the whole communism thing, but that does not fall into my category of things that automatically make it such that you should be an alt to avoid being exposed.

MAX: I think probably there are good reasons in EA specifically to maybe make an alt that are not the case elsewhere. And this is like, the community is homogeneous and lots of people with lots of power are also on Twitter, basically. Maybe if somebody at Rethink gets a bad vibe of you or something, that's indicative they shouldn't hire you, I guess, maybe. But maybe it doesn't matter how I use Twitter. I don't think this is true, but maybe I use Twitter in a way that's abrasive, or maybe they just don't like my puns or something, and this is information that will make them change their opinion about me in an interview. And maybe it's not that impactful, and maybe it is a little impactful, and maybe sometimes it's positive, it will make them like me more or something. But the community is so insular, such that this is more of a problem or something.

AARON: Yeah, I feel like my sense is just that it's more symmetrical than this is giving it credit for. Yeah, I guess if you have a lot of heterodox views on hot topics especially, I don't know. Yeah. If you disagree with a lot of people in EA who are powerful, not in a weird way, they just have hiring power or whatever, on topics that are really emotionally salient or whatever. Yeah. I would probably say if you want to very frankly discuss your opinions and get hired at the orgs where these people have hiring power, then, like, probably make an alt. I just don't think that's true for that many people.

MAX: My guess is this is more important the smaller the org is or something. My guess would be Open Philanthropy or, I don't know, the Global Priorities Institute, it doesn't matter if you make an alt; really, you're okay. But there are some EA orgs that are like three people or something. Right. And probably your online presence matters, but.

AARON: I mean, that's also. They also have fewer open spots. Yeah. I just feel like the upside is, or maybe the downside. I think we probably disagree on the downside somewhat. I mean, just stepping away from the abstract level, I just personally think that you, in fact, I can take this out, but can we just say that your last name is not in fact Alexander? Yeah, that's okay. I feel like you could just use your real name and your life would.

MAX: Well, it is my real name. It's just my middle name.

AARON: Okay. Wow. Okay. Very, very Slate Star Codex-esque.

MAX: That's why I did it, because.

AARON: Oh, nice.

MAX: But it was like really nice.

AARON: I actually don't know why I didn't connect the dots there. Yeah, I feel like just like you personally in expectation, it would probably not be a dramatic change to your life if I were you and I was suddenly cast into your shoes. But with my beliefs, I would just use my real name or something like that.

MAX: Yeah, I think it's like at the point now it doesn't matter or something. There's no real upside to changing it and there's no real downside.

AARON: Yeah, it's not a huge deal.

MAX: Yeah, I think probably, well, some people like to be horny on Twitter or something and probably if you want to do that, you should be anonymous or.

AARON: I mean, but once again, even then, I feel like it depends what you mean by be horny. Right. If you're going to post nude photos on Twitter, yeah. I actually had a surprisingly good DM conversation with a porn account. I did not follow them, for what it's worth.

MAX: They're presumably run by real.

AARON: No, no, it was actually. No, she was very polite. Yes, I think I'm correctly assuming. Clearly, I won't say the name, but, like, the account name was clearly indicative it was a woman. She basically objected to me suggesting that minimum wage jobs in the US are uncommon. And I think this is actually, in fact, I deleted my tweet because it was, I think, giving a false impression. Yeah, I live in a wealthy area. It's true that service sector jobs here generally pay better than minimum wage, but elsewhere in the US it's not true. It was a very polite interaction. Anyway, sorry, a total side story. Oh yeah, horny on Twitter. Yes, probably. Yeah, I agree. I don't know, but that can mean multiple things. I guess I have participated somewhat in gender discourse, I guess. I don't think I've been extremely horny. Yeah, fair enough. I don't think I've been very horny on main.

MAX: Yeah. I think probably there's maybe something to be the case that, and you see this as prestige increases or something. This isn't totally true, because I think Eliezer Yudkowsky posted about whatever or something, but Will MacAskill and Peter Wildeford are not engaging in gender discourse or whatever. Right. Publicly, anyway.

AARON: Wait, do I want to put my money, or not my money, my social reputation, on this? Will doesn't use Twitter very much, which, I think, I don't know. I would in fact take this question much more seriously if I was him. Like the question of whether to just be more frank and open and just, I don't know, maybe he doesn't want to. I'm totally hypothesizing here. Wait, Peter totally does sometimes, at least in one case I know. No, I have disagreed with Peter on gender discourse and that's just like, somewhat. I don't know. Right.

MAX: I just don't see it. He's not starting gender discourse, I guess I should say.

AARON: Well, I mean, Adam and Eve started gender. No, but I don't know. I don't want to single anybody out here. I feel like, yeah, if you're the one face of a social movement, yeah, you should probably take it pretty seriously, at least the question of, like, yeah, you should be more risk averse. I will buy into that. I don't think my position here is absolute whatsoever. I think, yeah, for people with fewer than 5,000 followers, it's like a gradient, right? I just think on the margin, people in general are maybe more risk averse than they have to be or something like that.

MAX: Yeah. Though I'm not sure there are that many major examples. Because one reason you might be anonymous is not because you are scared about people who follow you finding you elsewhere. It's because you don't want the inverse to be the case or something.

AARON: Wait, what's the, I'm sorry, you don't.

MAX: Want your parents to google your Twitter account and read your tweets? Yeah, everyone I know knows my Twitter. I don't even know how they all found it, but.

AARON: That's interesting. Okay, well, I mean, yeah.

MAX: They read stupid tweets of mine in front of me.

AARON: Well, there you go. That's like a good real life example of like, okay. That's like a real downside, I guess. Or maybe it's not because it's kind of a funny thing, but.

MAX: I think you do get this. People will joke about this sometimes on Twitter or something, or I've seen it somewhere, where it's like, I hope my boss is not using Twitter today, because they'll be like, why weren't you working or something?

AARON: Oh, yeah.

MAX: Literally just tweeting or something.

AARON: Yeah. I know. If you've called in sick to work and you're, like, lying about that, you're.

MAX: Just tweeting a bunch, they might be kind of suspicious or something. If your boss could see you all the time, you open Twitter for like a minute to tweet or something, they'd probably be like a little judgy or something, right? Maybe they wouldn't.

AARON: Yeah, yeah, sure. I mean, it's just like a matter of degree.

MAX: These are probably the most realistic real-world scenarios: somebody you know gets information that isn't really that damaging but slightly inconveniences you or something.

AARON: Yeah. Oh, actually, I talked about this more with Nathan Young, but then I didn't record his audio. I'm sorry. I've apologized for this before, but I'm so bad at this. I really hope that I've solved this for this particular, I think I have for this episode. But, yeah, on my one viral tweet or whatever, one thing is just like, oh, yeah, a lot of people, basically, I think they were earnest. Not earnest, exactly, that's maybe too generous, but they genuinely thought that I was saying something bad, like something immoral. I think they were wrong about that, but I don't think they were lying. But then, in fact, okay, several of my real life friends, it's come up somehow, and none of them actually thought I was saying anything bad. And in fact, I met somebody at a party who was talking about this, and then it was like, oh, that's me. I was the one who posted that. But basically the point here is that, as far as I can tell, maybe I'm wrong, but as far as I can tell, nothing bad in my life has come from a viral tweet that mainly went viral from people quote-tweeting it, saying that I was saying something immoral. N equals one.

MAX: Yeah. I don't know how common it would be or something. I think, like, ContraPoints or something, I believe it's her, has videos about canceling and stuff like this. I think she was canceled at various points. Lindsay Ellis, who you may have heard of, I think, kind of got run off the Internet.

AARON: Actually don't know this person.

MAX: She does, like, video, or she did video essays about media and stuff. She's a fiction author now. They're orders of magnitude more well known than we are or something.

AARON: There's, like, canceling that really genuinely ruins people's lives. And then there's, like, Bari Weiss. I don't know, she has, like, a Substack, or, I don't even know, like, an IDW person, I think. I don't even know if she got canceled. There's people like this. I'm sure there are other academics and professors at universities who don't like them, but they have successful, profitable Substacks. And it seems to me like their lives aren't made a lot worse by being, quote unquote, canceled. No, but then there really are, right? I'm sure that there's definitely cases of normal-ish people. Yeah.

MAX: I don't know, but maybe it's just more psychologically damaging, regardless of consequences, to be piled on when it's your real face or something.

AARON: Yeah. Also, I think at least 90%, I don't know, probably not like 99.9, but something in between those two numbers, of pile-ons just don't have any real-world consequences whatsoever. Canceling is a much less common phenomenon.

MAX: Yeah, that seems right to me.

AARON: Yeah. Let it be known that we are two white. We can also take this part out if you don't want this identifying information. Two 20-something white males discussing being canceled on a podcast, which is possibly the most basic thing that has ever happened in the universe.

MAX: I mean, I want to be known for this. When my children come up to me and say, we found the one podcast interview you did.

AARON: Yeah. I feel like I want to reiterate that there's a core thing here, which is, like, if you really hate jewish people and you think they should die, there's like, a fundamental thing there where if you're open and honest about your opinions, people are going to correctly think that you're a bad person. Whereas I think neither of us hold any views that are like that.

MAX: Agree. I think.

AARON: Yeah, no, maybe I wouldn't know, which is fine. But I'm sort of using an extreme example that I think very few people, at least who I would interact with, hold. But if you're, in fact, a person who has views that I think really indicate immoral behavior or something. Sorry, maybe I'll even take that part out because it could be, like, clipped or whatever. But if you just say, like, oh yeah, I steal things from street vendors or whatever, I don't know, that's a bad thing, and you're being honest, don't do that. And then people are like, yeah. You have to, I guess, have some confidence that you're a person who, I don't know, doesn't do, or have views that legitimately indicate that you either do or would do, very immoral things, things that are regarded to be immoral by a large number of people or whatever. This is doing a lot of the.

MAX: I mean, like, I think this goes back to earlier things or something. I just thought of it or something. I mean, it's not clear the sign of this or something, but I wrote a whole blog post about joining EA or something, which I think has some genuinely very good parts in it that are really good explanations of why someone should be motivated to do EA or something. There's also lots of really embarrassing stuff about my life in high school or something in it.

AARON: Yeah, no, I'm pretty sure everyone in 80K.

MAX: Who works at 80K? Not everyone, but I have good reason to think many people at 80K read this, which is, like, I don't know what to make of that. It's just kind of weird, right?

AARON: Yeah, no, I agree. It's kind of weird. I think, like, object level, I don't know. I don't even know if embarrassing is the right word. Yeah, it's genuinely, I guess, more vulnerable than a lot of people are in their public blogs or whatever. But I don't really think it reflects poorly on you when taken in aggregate, right? I mean, maybe you disagree with that, but I don't think so.

MAX: Or if you think it does, you probably are judgy to a degree I think is unreasonable, about something from when I was, like, 13 or something.

AARON: Yeah, sure.

MAX: That bad? Really?

AARON: What bad things? Should I? No, there's something that I actually don't even think is immoral, but it's somewhat embarrassing. Maybe I'll even say it on the next podcast episode, after I've thought about it for ten minutes instead of one minute or something like that. But I don't know, I think if I said it, it would be okay. Nothing that bad would happen, I guess. You're going to continue being Max Alexander instead of Max.

MAX: I mean, like it's the brand. Like I can't.

AARON: Wait. I feel like this is not a good reason. I feel like pretty quickly. Yeah. I feel like path dependence is a real thing, but in this particular case, it's just not as big as it maybe seems or something like that. I don't know. Yeah.

MAX: I just don't care what the upside is or something.

AARON: Yeah, true. Yeah. I'm thinking maybe we wrap up.
