Aaron's Blog
Pigeon Hour
#12: Arthur Wright and I discuss whether the GiveWell suite of charities is really the best way of helping humans alive today, the value of reading old books, rock climbing, and more

Please follow Arthur on Twitter and check out his blog!

Thank you for just summarizing my point in like 1% of the words

-Aaron, to Arthur, circa 34:45

Real photo of Arthur and I

Summary

(Written by Claude Opus aka Clong)

  • Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse".

  • They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time.

  • Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online.

  • Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy.

  • They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations.

  • Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor.

  • Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing.

  • They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists.

  • Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing.

  • Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it.

  • They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over.

Transcript

Very imperfect - apologies for the errors.

AARON

Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want?

ARTHUR

Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the, maybe, small number of people listening to this who I do not personally already know: I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA-related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled.

AARON

I'm in the process.

ARTHUR

We should solve gender discourse.

AARON

Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. It's, like, not like there's, like, better, more interesting things.

ARTHUR

I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to Max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree, like, one's online persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general.

AARON

Yeah. What do you think?

ARTHUR

Yeah, I don't really.

AARON

Well, your. Your name is on Twitter, and you're like.

ARTHUR

Yeah. You're.

AARON

You're not, like, an alt.

ARTHUR

Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID. And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse.

AARON

Wait, why?

ARTHUR

Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumni of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places. And one of the things that he said for, like, people who are thinking about careers in, like, policy, and I think sort of applies to, like, more EA sort of stuff as well, was that, like, developing a platform on Twitter, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, fuck it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. And I don't actually do a lot of that, to be honest.

AARON

I've, like, thought about. Gotta ramp it up.

ARTHUR

Doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter.

AARON

Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online persona or whatever, I mean, obviously it's gonna be different, but, like, in. Only in the same way that, like, um, you know, like, like, you're, like, in some sense, like, different people to the people, like, your, you know, really close friend and, like, a not so close friend, but, like, sort of a difference of degree. Like, difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if, like, that's not at all the intent, or, like, I didn't even, you know, plan, like, plan on them being.

ARTHUR

Yeah, I think it's, like, sort of similar to the, like, decoupling conversation we had the other night, which is, like, I totally am sympathetic to your sense of, like, oh, it's just nice to just, like, be a person and not have to, like, as consciously think about, like, dividing yourself into these different buckets of, like, what sort of, you know, persona you want to, like, present to different audiences. So, like, I think there's something to that. And I, in some ways, I have a similar intuition when it comes to, like, I try to set a relatively strong principle for myself to not lie. And, like, it's not that I'm, like, a Kantian, but I just, like, I think, like, just as a practical matter, the problem with lying for me at least, is then, like, you have to keep these two sets of books in your head of, like, oh, what did I tell to whom? And, like, how do I now say new things that are, like, consistent with the information that I've already, like, you know, falsely or not, like, divulged to this person. Right. And I think, in a similar way, there's something appealing about just, like, being fully honest and open, like, on the Internet with your real name and that you don't have to, like, I don't know, jump through all of those hoops in your mind before, like, deciding whether or not to say something. But at the same time, to the, like, conversation we had the other night about decoupling and stuff, I think. I think there's, like, it is an unfortunate reality that, like, you will be judged and, like, perhaps unfairly on the things that you say on the Internet, like, in a professional sphere. And, like, I don't know, at some level, you can't just, like, wish your.

AARON

Way out of it. Yeah, no, no, that's, like, a. Okay, so I. This is actually, like, I, like, totally agree. I think, like, one thing is just. I, like, really, honestly, like, don't know how, like, empirically, like, what is the actual relationship between saying, like, say, you get, like, I don't know, like, ten, like, quote tweets, people who are, like, misunderstanding your point, like, and, like, I don't know, say, like, 30 comments or whatever replies or whatever. And, like, it is, like, not at all clear to me, like, what that corresponds to in the real world. And, like, I think I may have erred too much in the direction of, like, oh, that's, like, no evidence at all because, sorry, we should really talk about non-Twitter stuff. But, like, this is actually, like, on my mind. And this is something like, I didn't. Like, I thought about tweeting, like, but didn't, which is that, like, oh, yeah, I had, like, the building underground tweet, which, like, I think that's a good example. Like, anybody who's, like, reasonably charitable can, like, tell that. It's, like, it was, like, I don't know, it was, like, a reasonable question. Like, and we've mentioned this before, like, this is, like, I don't want to just, like, yeah, it's, like, sort of been beaten to death or whatever, but, like, I feel like maybe, like, I came away from that thinking that, like, okay, if people are mad at you on the Internet, that is, like, no evidence whatsoever about, like, how it, like, how a reasonable person will judge you and or, like, what will happen, like. Like, in real life and, like, yeah, maybe I, like, went too hard in that direction or something.

ARTHUR

Yeah, yeah, yeah. I mean, to, like, agree, maybe move on to non-Twitter, but, yeah, like, to close this loop. I think that, like, I agree that any. Any one instance of individuals being mad at you online, like, it's very easy to, like, overreact or extrapolate from that. That, like, oh, people in the real world are gonna, like, judge me negatively because of this. Right. I think in any isolated instance, that's true, but I just. I also get the sense that in the broad world of sort of, like, think tanks and nonprofits and things where, like, your position would, especially if you're, like, in a research position, like, to some degree be, like, representative of the opinions of an employer. Right. That there's a kind of, like, character judgment that goes into someone's overall persona. So, like, the fact that you have, like, one controversial tweet where people are saying, like, oh, you think, you know, like, poor people don't deserve natural light or something like that. Like, that. Any one instance, like, might not matter very much, but if you, like, strongly cultivate a persona online of, like, being a bit of a loose cannon and, like, oh, I'm gonna say, like, whatever controversial thing comes to mind, I can see any organization that has, like, communicating to a broader audience as, like, an important part of their mission, like, being hesitant to, like, take a chance on a young person who, like, is prone to, you know, getting into those kinds of controversies on, like, a regular basis.

AARON

Yeah, yeah. And actually, like, maybe this is, like, sort of meta, but, like, I think that is totally correct. Like, you should 100%, like, update if you're an employer listening to this. And, like, I don't know. Who knows? There's, like, a nonzero chance that, like, I don't know, maybe, like, not low. Lower than, like, 0.1% or something like that. That will be the case. And, like, no, it is totally true that, like, my. I have, like, subpar. Wait, you better. I'm gonna, like. No, no quoting out of context here, please. Or, like, you know, like, clipping the quote out of, like, so it becomes out of context. But, like, it is, like, I have definitely poor judgment about how things will be, um, like, taken, uh, by the people of the Internet, people of the world. I, like, legitimately, I think I'm below not. Probably not first percentile, probably below 50th percentile, at least among broadly western educated, liberal-ish people. And so, yes, as for hiring me for head of communications. I mean, there's a reason I'm not. I wouldn't say that I'm not applying to be a communications person anywhere, but I don't know, it's not crazy that I would. If you want to. Yeah, you should. Like, it is, like, correct information. Like, I'm not trying to trick anybody here. Well, okay. Is there anything else that's on your mind? Like, I don't know, salient or, like.

ARTHUR

That's what I should have done before I came over here, but nothing, like, on the top of my head. But I feel like there's, I don't know, there's all kinds of, well, like, there's something you've, like, wandered into.

AARON

Yeah, like, I think you have bad cause prioritization takes.

ARTHUR

Oh, right.

AARON

Like, maybe we shouldn't just, like, have the AI versus, like, I don't know, it's like my, like, the AI is a big deal. Tribe is like, yeah, not only winning, but, like, pretty obviously and for obvious reasons. So, like, I don't know, I don't, like, really need to have, like, the, you know, the 70th, like, debate ever about, like, oh, it's like, AI.

ARTHUR

Wait, sorry. You mean they're winning for obvious reasons insofar as, like, the victories are apparent or that you think, like, the actual arguments leading to them.

AARON

Oh, yeah.

ARTHUR

Becoming more prominent are obvious.

AARON

Yeah. Setting aside the. In the abstract, what, non, like, empirical or empirical, but, like, only using data, like, pre-ChatGPT release, like, setting aside that whole cluster of arguments, there is the fact that, like, I don't know, it seems very, very apparent to, like, the chattering classes of people who care about this stuff that, like, the AI Overton window has both expanded tremendously and, like, also moved. It seems like AI is as big of a deal, like, as the Internet as, like, the lower bound, and, like, much, much more important than that as, like, the upper bound. And so, like, and, like, that's a. That's like, a significant shift, I guess. One thing is just, like, there have been a lot of conversations, like, in EA spaces, and, like, I'm just, like, thinking about the 80k podcast. I feel like I've heard it multiple times, but maybe I'm making that up, where it's like, one person, like, makes the case for, like, I don't know, taking AI or, like, thinking that, like, yeah, AI broadly is, like, the most important altruistic area, right? And then the other person says no, and then they do the same, like, five discussion points back and forth.

ARTHUR

Yeah.

AARON

So, like, I don't think we should do that.

ARTHUR

Sure.

AARON

That was a really long winded way of saying that.

ARTHUR

I see. So, so, but, but you're, you're trying to emphasize that, like, the kind of, like, reality of the pace of, you know, improvement in artificial intelligence and the fact that it is going to be, like, an incredibly important technology. Like you said, the lower bound being, like, as important as the Internet. I think the upper bound is, like, I don't know, something like electricity, provided we're not gonna, you know, all die or something. Or maybe more transformational than that. But I guess what you're trying to say is that, like, the Overton window has, like, shifted so much that, like, everyone kind of agrees this is a really transformative technology. And, like, you know, therefore.

AARON

Well, I guess I. Sorry, wait, I interrupted. I'm an interrupting person. I'm sorry.

ARTHUR

That's good. It's a natural part of conversation, so I don't feel bad.

AARON

Continue.

ARTHUR

Oh, oh, no, no. I just. I like, like, yeah, maybe we don't need to rehash the, like, whether or not AI is important, but I'm curious, like, what you think. Yeah, like, what do you think is sort of wrong about my.

AARON

No, I was just about to ask that, like, when I interrupted you. I actually don't fully know what you believe. I know we, like, go into different, like, vibe camps or, like, there's another. There's like, a proper noun, vibe camp. This is like a lowercase letters.

ARTHUR

Vibe camp, vibe sphere.

AARON

Yeah, yeah. And, like. But, like, I don't know, do you have, like, a thesis?

ARTHUR

Yeah, see, okay. I don't. I think in many ways, like, maybe just to lay out, like, I think my lack of a thesis is probably the biggest distinction between the two of us when it comes to these kind of cause prioritization things.

AARON

Right.

ARTHUR

Because, like, I think I, like, over the years have, as I became more interested in effective altruism, have sort of changed my views in many different directions and iterations in terms of, like, my basic moral philosophy and, like, what I think the role of EA is. And I think over time, like, I've generally just become, like, more kind of pluralistic. I know it's a bit of a hand wavy word, but, like, I think I have sufficient uncertainty about, like, my basic moral framework towards the world that, like, this is just a guess. Maybe we'll discover this through conversation. But I think, like, perhaps the biggest disagreement between you and I that, like, leads us in different directions is just that I am, like, much more willing to do some kind of, like, worldview diversification sort of move where like, just, you know, going from, like, a set of assumptions, you know, something like hedonistic utilitarianism and, like, viewing EA as, like, how can I as an individual make the greatest, like, marginal contribution to, like, maximizing this global hedonistic welfare function, right. I think, like, I hold that entire project with a little bit of, like, distance and a little bit of uncertainty. So, like, even if, you know, like, granting the assumptions of that project that spits out, like, okay, AI and animals are, like, the only things that we should care about. I think, like, I'm willing to, like, grant that that might follow from those premises. But I think, like, I hold the premise itself about, like, what the kind of EA project is or what I, as an individual who's, like, interested in these ideas should do with my career at, like, sufficient, you know, distance that I'm, like, willing to kind of, like, entertain other sets of assumptions about, like, what is valuable. And, like, therefore, I'm just, like, far less certain in committing to any particular cause area. I think before, before we get deeper into the weeds about this, just to put like, a sharper point on the, like, more meta point that I'm trying to make is that, like, so I think, like, I don't know if there was this 80k episode from like, a long time ago about solutions to the Fermi paradox. And I know this sounds unrelated, but I'm gonna try.

AARON

No, no, that's cool.

ARTHUR

And one of the things he talked about was like, you know, basically, like, the Fermi paradox isn't actually a paradox if you, like, understand the ways that, like, essentially, like, when you have uncertainty in, like, a bunch of different point estimates, those uncertainties, like, when combined, should yield like, a probability distribution rather than just like, the headline is often, like, the point estimate of, like, oh, we should expect there to be like, so many aliens, right? But it's like when you have uncertainties on, like, each decision, you know, like, each assumption that you're making in all of the parameters of the equation, right? Like, so I think, like, I guess to apply that a little bit to kind of my, like, sort of moral philosophy is, like, I think, like, the reason why I just am very kind of, like, waffly on my cause prioritization and I'm, like, open to many different things is just that, like, I start from the basic assumption that, like, the, you know, the grounding principle of the EA project, which is like, we should try to do good in the world and we should try to do, like, you know, good in the world in ways that are, like, effective and actually, like, you know, have the consequences that we, we want. Right. That, like, I am very bought into that, like, broad assumption, but I think, like, I have sufficient uncertainty at, like, every chain of reasoning from, like, what does good mean? Like, what, you know, what is the role of, like, me as an individual? Like, what is my comparative advantage? What does it mean to be cause neutral, like, at all of these points of decision? I feel like I just have, like, sufficiently high level of uncertainty that, like, when you get to the end of that chain of reasoning and you arrive at some answer of, like, what you ought to do. Like, I think I hold it sort of very lightly, and I think I have, like, very low credence on any, like, one, you know, conclusion from that chain of reasoning.
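
To make the Fermi point Arthur is describing concrete, here is a minimal sketch in the spirit of the "Dissolving the Fermi Paradox" argument: propagate uncertainty through each parameter of a Drake-style equation instead of multiplying point estimates. The parameter ranges below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Sample each Drake-style factor log-uniformly over an illustrative
# (made-up) range instead of plugging in a single point estimate.
def loguniform(lo, hi, size):
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = loguniform(1, 100, N)     # star formation rate per year
f_p    = loguniform(0.1, 1, N)     # fraction of stars with planets
n_e    = loguniform(0.1, 1, N)     # habitable planets per such system
f_l    = loguniform(1e-30, 1, N)   # chance life emerges (huge uncertainty)
f_i    = loguniform(1e-3, 1, N)    # life -> intelligence
f_c    = loguniform(1e-2, 1, N)    # intelligence -> detectable civilization
L      = loguniform(1e2, 1e8, N)   # detectable lifetime in years

n_civs = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"mean civilizations: {n_civs.mean():.2e}")   # dragged up by the tail
print(f"median:             {np.median(n_civs):.2e}")
print(f"P(fewer than one):  {(n_civs < 1).mean():.0%}")
```

The mean can come out looking like "expect many aliens" while the median, and the probability of an empty galaxy, tell a very different story, which is the sense in which the paradox dissolves.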

AARON

Yeah, yeah. Sorry, I cut you off too much. But, like, but, like, I think there's, like, a very, like, paradigmatic conversation which is like, oh, like, should we be pluralistic? And it's happened seven bazillion times. And so, like, I know I want to claim something different. So. Sorry. I guess there's two separate claims. One is like. And you can tell if, like, I sort of. I was sort of assuming, like, you would disagree with this, but I'm not sure is, um. Yeah, like, even if you just, like, purely restrict, um, your, like, philosophizing or, like, restrict your ethics just to, like, um, humans who are alive right now and, like, like, basically, like, have the worldview that, like, implies malaria nets. Yeah, um, I, like, think it's, like, very unlikely that, like, actually, like, the best guess intervention right now, like, is the set of, like, standard EA interventions or whatever. And, like, another, like, very related, but, like, some, I guess, distinct claim is, like, I don't know exactly. I don't. Yeah, I really don't know at all what this would look like. But, like, it seems very plausible to me that even under that worldview, so not a long termist worldview at all, like, probably doing something related to, like, artificial intelligence. Like, is, like, checks out under. Yeah, under, like, the most, like, norm, like, normal person version, like, restricted version of EA. And, like, I don't know.

ARTHUR

I think I. Yeah, so I think I am inclined to agree with the first part and disagree with the second part. And that's why I want you to spell this out for me, because I. I actually am sympathetic to the idea that, like, under sort of near termist restricting our class of individuals that we want to help to human beings, like, who are alive today. Right. Under that set of assumptions, I similarly think that there's, like, relatively low likelihood that, like, the standard list of sort of, like, GiveWell interventions are the best. Right.

AARON

Well, not.

ARTHUR

Or.

AARON

Yeah, yeah, or, like, I'm telling you, like, yeah, if you think. Sorry. Um, yeah, my claim was, like, stronger than, like, that. That. Or, like, what one would interpret that as, like, if you just, like, take it like, super literally. So, like, I think that, like, um, not only ex post, like, they're not even our, like, real best guesses, like, like, an actual effort would, like, yield other best guesses. Not only like, oh, yeah. Like, this is our, like, this is like a minority, but like, a plurality of the distribution, if that makes sense.

ARTHUR

Okay, then. Then I do think we disagree because I think where I was going to go from that is that I think to me, like, I'm not as informed on these arguments as I should be. So, like, I will fully admit, like, huge degree of, like, epistemic limitation here, but, like, I think my response was just going to be that I think, like, the case for AI would be sort of even weaker than those GiveWell-style interventions. So even though they're, like, unlikely to be, you know, the best, like, you know, like, ex post in some, like, future where we, like, have more information about other kinds of ways that we could be helping people. Right. They're like, still, you know, better than the existing alternatives and.

AARON

Yeah, yeah, I'm gonna.

ARTHUR

So what is the, like, near termist case for AI? Like, if you could.

AARON

Yeah, yeah. Just to, sorry. I, like, promise I will answer that. But, like, just to clarify. Yeah, so I'm, like, more confident that, like, the GiveWell charities are, like, not the ex ante best guess than I am that, like, one of the best, like, in fact, ways to help only humans alive right now would involve AI. So, like, these are related, but, like, distinct, and the AI one I'm, like, much less confident in, I guess, in some sense, just because it's so much more specific.

ARTHUR

Actually, let's do both parts because I realized earlier also what I meant was not ex ante, but ex post. Like, with much larger amount of information about other potential interventions, we might determine that something is better than GiveWell. Right. But nonetheless, in the world that we actually live in, with the information that we currently have, the evidence is sufficiently strong for impact under the kinds of assumptions we're saying we're operating under. Right. That, like, you know, other, other competing interventions, like, have a very high bar to clear. Like, maybe they're worthwhile in, like, a hits-based giving kind of way. Like, in that, like, it's worth, like, trying a bunch of them to, like, see if one of them would outperform GiveWell. But, like, for the time being, you know, whatever the GiveWell spreadsheet says at any current time, I think is pretty, like, is pretty compelling in terms of, like, you know, higher certainty ways to help individuals.

AARON

Yeah. So, um.

ARTHUR

So, so one, I want to hear, like, why you disagree with that. And then two, I want to hear, like, your case for, like, AI.

AARON

Yeah, okay. I think I'm responding to this. Like, you can cut me off or whatever. Um, so, like, fundamentally, I want to, like, decouple. Haha. Or. Yeah, this is something I like doing and decouple, um, the, like, uh, yeah, who we care about. And, like, um, how, like, how aesthetically normal are we gonna be? So, like, I want to say, like, okay, even, yeah, if you. If you're, like, still in the realm of, like, doing analytic philosophy about the issue. And, like, you just, like, say, like, okay, we're just, like, gonna restrict, like, who we care about to, like, humans alive right now. There's, like, still a lot of weird shit that can, like, come out of that. And so, like, my claim, I think actually, like, what's what. Maybe this is, like, somewhat of a hot take, whatever. But I think, like, actually what's happening is, like, there is, like, a, quote, unquote like, worldview that, like, vibe associates and to some extent, like, explicitly endorses, like, only just like, for whatever reason, like, trying to help humans who are alive right now, or, like, maybe, like, who will become alive in the near future or something. But, like, this is always paired with, like, a default, like, often non explicit assumption that, like, we have to do things that look normal. Or, like. And to some extent you can. Some extent you can, like, formalize this by just, like, saying you, like, care about certainty of impact. I think there's, like, not even that technical, but, like, mildly, like, technical reasons why. Like, if you're still in the realm of, like, doing analytical philosophy about the issue, like, that doesn't check out, like, for example, you don't actually know, like, which specific person you're gonna help. I'm, like, a big fan of, like, the recent Rethink Priorities report. So I spent, like, five minutes, like, rambling and, like, doing a terrible job of explaining what I. What I mean. And so the idea that I'm getting at is that I think there's like a natural, like, tendency to think of risk aversion in like, an EA or just like generally, like, altruistic context. That basically means, like, we like, understand like, a chain of causality. And there are like, professional economists, like, doing RCTs and they like, know what works and what doesn't. And, like, this isn't, like, there's like, something there that is valuable. Like, doing good is hard. And so, like, you know, careful analysis is actually really important. But I think this, like, doesn't, there's a tendency to, like, ignore the fact that, like, these types of, like, GiveWell-style, like, charities and GiveWell-style, like, analysis to identify the top charities basically, as far as I know, almost exclusively, like, look at just, like, one of the most salient or, like, intended, like, basically first-order effects of an intervention. So, like, it's just not true that we know what the impact of, like, giving $3,000 to the Against Malaria Foundation is. And, like, it's like, you know, maybe there are, like, compelling reasons to, like, think that basically it all washes out or whatever. And, like, in fact, like, you know, reducing deaths from malaria and sickness is, like, the absolute, like, the single core effect. But, like, as far as I know, that seems to be mostly just, like, taken as a given. And I don't think this is justified. And so I don't think this, like, really checks out as, like, a type of risk aversion that stands up to scrutiny. And I found this tweet.
Basically, I think this is, like, good wording. The way to formalize this conception is just to have narrow confidence intervals on the magnitude of one first-order effect of an intervention. And that's an awfully specific type of risk aversion. This is not generally what people mean in all walks of life. And then I mentioned this Rethink Priorities report written by Laura Duffy, first Pigeon Hour guest. And she basically lists three different types of risk aversion that she uses in some Rethink Priorities, like, analysis. So, yeah, number one, avoiding the worst. Basically, this is the s-risk style or modality of thinking. The risk. The thing we really, really want to avoid is the worst states of the world happening. To me, and I think to many people, that means a lot of suffering. And then number two, difference-making risk aversion. Basically, we want to avoid not doing anything or causing harm. But this focus is, like, not on the state of the world that results from some action, like, but, like, your causal effect. And then finally, number three, ambiguity aversion. Basically, we don't like uncertain probabilities. And for what it's worth, I think, like, yeah, the GiveWell-style, like, leaning, I think, can be sort of understood as an attempt to, like, address, like, two and three, difference-making and ambiguity aversion. But, like, yeah, for reasons that, like, are not immediately, like, coming to my head in, like, verbalizable form, I, like, don't think. Yeah, basically for the reasons I said before that, like, there's really no comprehensive analysis there. Like, it might seem like there is. And, like, we do have, like, decent point estimates and, like, uncertainty ranges for, like, one effect. One. But, like, I. That doesn't, as far as I can tell, like, that is not, like, the core. The core desire isn't just to have, like, one narrow, like. Nobody. I don't think anyone thinks that we, like, should intrinsically value, like, small confidence intervals. You know what I mean? And this stands in contrast to, as I said before, s-risk research organizations, which are also, in a very real sense, doing risk aversion. In fact, they use the term risk a lot. So it makes sense. The GiveWell vibe and the s-risk research organization vibes are very different, but in a real sense they're at least both attempting to address some kind of risk aversion, although these kinds are very different. And I think the s-risk one is the most legitimate, honestly. Yeah. Okay, so that was sort of, like, a lemma or whatever, and then. Yeah. So, like, the case for AI in, like, near term, only affecting humans. Yes. So, like, here's one example. Like, this is not the actual, like, full claim that I have, but, like, one example of, like, a type of intervention is, like, seeing, basically, like, what institutions need to be in place for, like, world UBI. And let's actually try to get that policy. Let's set up the infrastructure to get that in place. Like, even now, even if you don't care about, like, you think long termism is false, like, don't care about animals, don't care about future people at all, it seems like there is work we can do now, like, within, you know, in, like, the realm of, like, writing PDFs and, like, building. Yeah, building, like, political institutions or, like, at least. Sorry, not building institutions, but, like, affecting political institutions. Like, via, like, I guess, like, both, like, domestic and, like, international politics or whatever that, like, still.
And sorry, I, like, kind of lost, like, the grammatical structure of that sentence, but it seems plausible that, like, this actually is, like, better than the GiveWell interventions, just, like, if you actually do, like, an earnest, like, best guess, like, point estimate. But the reason that I think this is plausible is that all the people who are willing to do that kind of analysis, like, aren't restricting themselves to, like, only helping humans in, like, the near future. They're, like, I don't know. So there's, like, a weird, like, missing middle of sorts, which, depending on what the counterfactual is, maybe bad or good. But I'm claiming that it exists and there's at least a plausible gap that hasn't really been ruled out in any explicit sense.
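
For what it's worth, here is a toy numerical sketch of the distinction Aaron is drawing, using the risk-aversion types from the Rethink Priorities report; every number is invented for illustration. It shows how a plain expected-value maximizer and a difference-making risk-averse donor can rank the same two interventions differently, and why a narrow confidence interval on one first-order effect is not the same thing as either.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Invented outcome distributions, in "lives improved per $10k donated".
# A: GiveWell-style -- one well-measured first-order effect, narrow spread.
# B: speculative policy work -- almost always nothing, small chance of a lot.
a = rng.normal(3.0, 0.5, N)
b = np.where(rng.random(N) < 0.01, 1000.0, 0.0)

print(f"E[A] = {a.mean():.1f}, E[B] = {b.mean():.1f}")  # B wins on expected value

# Difference-making risk aversion: concave utility over *your own* impact,
# so a small chance of a huge win counts for less than a sure modest win.
def dm_utility(x):
    return np.sign(x) * np.log1p(np.abs(x))

print(f"U[A] = {dm_utility(a).mean():.2f}, U[B] = {dm_utility(b).mean():.2f}")
```

Here B dominates on expected value while A dominates for the difference-making risk-averse donor; ambiguity aversion, the third type, would penalize B further because its 1% figure is itself poorly known, and none of this is captured by simply demanding a narrow confidence interval on one effect.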

ARTHUR

Okay, yeah, great. No, no, that's all very useful. So I think, I guess setting x-risky things aside, because I think this is a useful way to get at the crux of our disagreement. Like, it's funny, on the one hand, I'm very sympathetic to your claim that sort of like the kinds of things that GiveWell sort of interventions and, you know, RCTs coming out of, like, development economics are interested in, like, I'm sympathetic to the idea that that's not implied by the kind of, like, basic near termist EA philosophical presupposition.

AARON

Thank you for just summarizing my point in like 1% of the words.

ARTHUR

Yeah, yeah. So I'm like, I actually strongly agree with that. And it's precisely why, like, I'm more open to things that aren't like GiveWell-style interventions, why I'm very sympathetic to the economic growth side of the growth versus RCT perennial debate, all that. That's maybe an interesting side discussion. But to stay on the AI point, I guess putting existential risk aside, I want to make the standard economist argument for AI optimism and against what you were just trying to say. So like, to me, like, I think it is like plausible enough that we should be concerned that, like, increasing AI progress and dissemination of AI technologies decreases returns to labor in the global economy. I think it's plausible enough that we should care about that and not dismiss it out of hand. But, like, I think it's far less likely that, like, or sorry, not. Well, I want to be careful with. I think it's potentially more likely that almost exactly the opposite is true. So, like, if I look at like the big picture history of like global economic growth, like the classic, you know, hockey stick graph where like GDP per capita for the world is like totally flat until, you know, like about 200 years ago. Right? Like, I think the standard, like this is a super interesting rich topic that I've been like learning a lot more about over the last few years. And I think, like the devil is very much in the details. But nonetheless, I think the kind of like classic, you know, postcard-length summary is basically correct that like, why did that happen? That happened because like productivity of individual workers, like dramatically increased, like orders of magnitude due to technological progress, right? And like, whether, or to what degree, that technological progress is sort of like political institutional technologies versus like direct, like labor-augmenting technologies is like, you know, whatever, way too deep to get into in this discussion. I don't have like good informed takes on that. But like, nonetheless, I think that, like, the basic like, sort of lump of labor fallacy, like, is strongly at play in these worries that AI is going to displace workers. Like, I think if like, you look at all these, you know, previous technologies, like the, you know, Luddites destroying the power looms, or they weren't really power looms, but they were, like, these more, like, you know, better kinds of hand looms or whatever. Right, right. Like, I think the worry that people have always had, and again, I get, I'm giving the standard economist's soapbox thing that everyone has heard before, but like, I just don't see why AI is categorically different from these other technological advancements. And that, like, at a glance, like, for me as an individual, like trying to build a research career and like get a job and stuff, my ability to access GPT-4 and Claude, like, has, I think, like, dramatically increased my marginal productivity and like would presumably also increase my wage in the long term because I can just do a lot more in the same amount of time. So, like, it seems to me like just as if not more likely that the better AI technology gets, you have people that are able to produce more in economic value with the same amount of labor and therefore are going to increase economic growth and increase their wages rather than just somehow displace them out of the labor market. And I think there is something that I think EA should maybe be paying more attention to, but, like, maybe they're too concerned with existential risk.
There is some interesting experimental economics research already, like looking at this question, which is like having people who work in kind of like standard sort of like, you know, operations and middle management sort of office jobs, like using AI in their work. And I think one of the interesting findings seems to be like a lot of these experiments are finding that it has sort of an equalizing effect, which is like for the most productive employees at a given task, their productivity is like only very modestly improved by having access to large language models. But like, the least productive employees see like very large improvement in their productivity from these technologies. So, like, in my opinion, it seems plausible that like, you know, better access to these sorts of technologies would, if anything, make your, like, standard, you know, employee in the global economy, like, you know, not only more productive, but have this sort of like leveling of the playing field effect. Right. Where like people who, who do not have the current capacities to like produce a lot of value are sort of, you know, brought up to the same level as like.

AARON

Yeah. So, like, I think these are all reasonable points. I also think, um, sorry, I think I have like three, like points, I guess. Yeah. On the object level, I like, don't think I have anything to like, add to this discussion. The one thing I would point out is that it seems like there's, as far as I can tell, like no disagreement that like in principle you can imagine a system that is better than all humans at all tasks that does not have the effect you're talking about in principle, better than humans, better than, better and cheaper than all humans at all tasks.

ARTHUR

Right. With no human input required.

AARON

Yeah, in principle, yeah.

ARTHUR

Okay.

AARON

Yeah, yeah. Like, I don't think this is a radical claim. So like, then there's like the now moving away from the object level. Like, okay, so we've like set this now. Like the normal, like default thing is to like have a debate where, oh, you make some more points in the direction you just said, and I make some more points in the direction I just said. But like, the thing I want to point out is that like this discussion is like absent from near termist EA because all the people who are taking ideas seriously have already moved on to other areas. And there was one more, but just.

ARTHUR

To jump on that for a second. But I think I totally take your point that then maybe a lot more people should be thinking about this. Right. But to me, like, whether that's possible in principle, like, like, and I think you're obviously going to agree with me on this. Like, to what degree that's relevant depends on, like, whether we are living in a world where, like, those systems are on the horizon or are going to exist in the near term future. Right. And, like, to what degree that, you know, in-principle possibility, like, represents the actual path we're heading on is, like, sort of the real crux of the issue.

AARON

Oh, yeah. Okay. Maybe I actually wasn't sure. Yes, because we're living in a more.

ARTHUR

Standard story where like this just increases the marginal product of labor because everyone gets more productive when they, like, learn how to use these technologies, and it doesn't mean it's not going to be disruptive, because I think there's a lot of interesting IO research on how, with the implementation of computer technologies in a lot of workplaces, it was very difficult to train older employees to use the new systems. So really, the only solution for a lot of firms was essentially just, like, fire all of their old employees and hire people who actually knew how to use these technologies. But presuming we get past the disruptive transition where the old people get screwed or have to learn how to adapt, and then the young people who grew up learning how to use AI technologies enter the workforce, it seems very possible to me that those people are just going to be the most productive generation of workers ever. Accordingly.

AARON

Yeah. Yeah. Again, I think there's, like, sorry, I, like, don't. I guess I was about to just, like, make the same point that I. That I was before. I guess, like, put a little bit more, like, yeah, be a little bit clearer about, like, what I mean by, like, this debate isn't happening. It is, like, it doesn't seem. Maybe I'm wrong, but, like, like, reasonably confident that, um, GiveWell isn't doing the thing that, like, the long termist team at Open Philanthropy is, where they, like, try to answer this question because it's really fucking important and really informs what kind of, like, what the best near term interventions are. And, like, maybe that's, like, I don't want to pick on GiveWell because, like, maybe it's in, you know, GiveWell is, like, maybe it's, like, in their charter or, like, in some sense, just like, everybody assumes that, like, yeah, they're going to do, like, the econ RCT stuff or whatever, but, like, well, but there'd be value.

ARTHUR

That, like, that would be my defense of GiveWell, like, is that, like, you know, like, comparative advantage is real, and, like, you know, having an organization that's like, we're just not gonna worry about these. Like, they don't even do animal stuff, you know? And I think that's a good decision. Like, I care a lot about animal stuff, but I'm glad that there's an organization that's, like, defined their mission narrowly enough such that they're like, we are going to, like, do the best sort of econ development RCT kind of stuff. And if you're, like, into this project, like, we're gonna tell you the best way to use your.

AARON

Yeah, I think that like, I don't know, in the abstract. Like, I think. I guess I'm, like, pretty, pretty 50 50 on, like, whether I think it's good. I don't think they should, like, if anybody's deciding, like, whether to, like, give a dollar to GiveWell or, like, not GiveWell, within EA, I think, like, yeah, it's like, don't give a dollar to GiveWell. Like, I don't think they should get any funding, EA funding or whatever. And I can defend that, but, like, so, yeah, maybe that particular organization, but I. Insofar as we're willing to treat, like, near termist EA as, like, an institution, like, my stronger claim is, like, it's not happening anywhere.

ARTHUR

Yeah, well, I mean, I, like, you're right. At one level, I think I more or less agree with you that it should be happening within that institution. But I think what, at least to me, like, your broad sketch of this sort of near termist case for AI, like, where that discussion and debate is really happening, is in, like, labor economics. You know what I mean? Like, it's not that there aren't people interested in this. I just think the people who are interested in this, like, and I don't think this is a coincidence, are the people that, like, don't think, you know, the paperclip bots are going to kill us all, right? They're like, the people who are just, like, have a much more, like, normie set of priors about, like, what this technology is going to look like.

AARON

Yeah, I do.

ARTHUR

And, like, they're the ones who are, like, having the debate about, like, what is the impact of AI going to be on the workforce, on inequality, on, you know, global economic growth, like, and I think, like, but, like, I guess in a funny way, it seems like what you're advocating for is, like, actually a much more, like, normie research project. Like, where you just have, like, a bunch of economists, like, being funded by Open Philanthropy or something to, like, answer these questions.

AARON

I think the answer is, like, sort of, um. To some extent. Yeah, actually, I think, like, I. Like, I don't know. I'm, like, not. Yeah, I actually just, like, don't know. Like, I don't, like, follow econ, like, as a discipline. Like, enough to, like, I, like, believe you or whatever. And, like, obviously it's, like, it's, like, pretty clearly, like, both. I guess I've seen, like, examples I've thrown around of, like, papers or whatever. Yeah, there's, like, clearly, like, some, like, empirical research. I, like, don't know how much research is, like, dedicated to the question, like, yeah, I guess there's a question that's like, if you know, like, yeah. Is anybody, like, trying to. With, like, reasonable. With, like, reasonable parameters, estimate, like, how the share of, or the returns to, like, labor or whatever will, like, change in, like, the next, like, ten years or five? Not. Not only. Not only, like, with GPT-3 or, like, not. Not assuming that, like, GPT-4 is going to be, like, the status quo.

ARTHUR

Yeah, I mean, to my knowledge, like, I have no idea. Like, basically I don't have an. All the stuff that I'm thinking of is from, like, you know, shout out Eric Brynjolfsson. Everyone should follow him on Twitter. But, like, like, there's some economists who are in the kind of, like, IO and, like, labor econ space that are doing, like, much more, like, micro level stuff about, like, existing LLM technologies. Like, what are their effects on, sort of, like, the, like, you know, I don't know, knowledge work, for lack of a better word, like, workforce. But that, yeah, I grant that, like, that is a very much more, like, narrow and tangible project than, like, trying to have some kind of macroeconomic model that, like, makes certain assumptions about, like, the future of artificial.

AARON

Yeah, and, like, which maybe someone is doing.

ARTHUR

And, I mean, I.

AARON

No, yeah, I'm interested. People should comment slash DM me on Twitter or whatever. Like, yeah, I mean, I think we're just, like, in agreement that. I mean, I mean, like, I think I have some, like, pretty standard concerns about, like, academic, like, academia incentives, which have, like, also been, like, rehashed everywhere. But, like, I mean, it's an empirical question that we, like, both just, like, agree is an empirical question that we don't know the answer to. Like, I would be pretty surprised if, like, labor economics has, like, a lot to say about. About fundamentally non-empirical questions because, like, it doesn't. Yeah, I guess, like, the claim I'm making is, like, that class of research where you, like, look at, like, yeah. Like, how does ChatGPT, like, affect the productivity of workers in 2023, 2024? Really, just, like, I mean, it's not zero evidence, but it's really not very strong evidence for, like, what the share of labor income will be in, like, five to ten years. Like, yeah, and it's, like, relevant. I think it's, like, relevant. The people who are actually building this technology think it's going to be, like, true, at least as far as I can tell. Broadly, it is a consensus opinion among people working on building frontier AI systems that it is going to be more transformative or substantially more transformative than the Internet, probably beyond electricity as well. And if you take that assumption, like, premised on that assumption, it seems like the current. I would be very surprised if there's much academic, like, labor economics that, like, really has a lot to say about, like, what the world would be like in five to ten years.

ARTHUR

Yeah, I think I was just gonna say that I'm, like, sufficiently skeptical that people, like, working on these technologies directly are, like, well positioned to, like, make those kinds of forecasts. I'm not saying the labor econ people are, like, better positioned than them to make those forecasts, but, like, I think.

AARON

No, that's totally fair. Yeah, that is really to be fair.

ARTHUR

Also that, like. Like, I think some of this is coming from, like, coming from a prior that I, like, definitely should, like, you know, completely change with, like, the recent, you know, post-GPT-3, like, explosion of these technologies. But I just think, like, for, like, just if you look at the history, like, I'm not. I'm not saying I endorse this, but, like, if you look at the history of, like, you know, sort of AI, like, not, like, optimism per se, but, like, enthusiasm about, like, the pace of progress and all this, like, historically, like, it had a, like, many, many decade track record of, like, promising a lot and failing, that, like, was only, like, very recently falsified by, like, GPT-3 and.

AARON

I mean, like, I think this is basically just, like, wrong. It's, like, a common misconception. Not like you're. I think this is, like, totally reasonable. This is, like, what I would have. Like, it seems like the kind of thing that happened. I'm pretty sure, like, there have been some, like, actually, like, looking-back analyses. It's not like there's zero instances, but, like, it is not, like, the same level of AI enthusiasm, like, persisting forever. And, like, now we're like, um. Yeah, now it seems like. Oh. Like we're getting some, like, you know, results that, like, that, like, maybe justify it. It seems like, um. Yeah, the consensus, like, people are way. Hmm. What am I. Sorry. The actual thing that I'm trying to say here is I basically think this is just not true.

ARTHUR

Meaning, like, the consensus was like, that.

AARON

Like, people didn't think AGI was ten years away in 1970 or 1990.

ARTHUR

Well, I mean, some people did. Come on.

AARON

Yeah. So I can't.

ARTHUR

You mean, just like, the consensus of the field as a whole was not as I like.

AARON

So all I, like, I have. This is, like, this is the problem with, like, arguing a cached opinion. Like, my cached opinion is, like, I've seen good, convincing evidence that, like, the very common sense thing, which is like, oh, but AI, there's always been AI hype, is, like, at least misleading and, like, more or less, like, wrong, and that, like, there's been a, like, a, yeah. And, like, I don't actually remember the object level evidence for this. So, like, I can try to, like, yeah, that's fine.

ARTHUR

And I also, like, to be clear, like, I don't have a strong, like, strongly informed take, like, for the, like, AI hype is overblown thing. But, like, putting that aside, I think the other thing that I would wonder is, like, even if individuals, like, who work on these technologies, like, correctly have certain predictions about the future that are pretty outside the window, or that people aren't sufficiently taking seriously in terms of what they think progress is going to be. And maybe this is some lingering, more credentialist intuitions or whatever, but I think that. I am skeptical that those people would also be in a good position to make kinds of economic forecasts about what the impacts of those technologies will be.

AARON

Yeah, I basically agree. I guess, like, the weak claim I want to make is you don't have to have that high a percentage on, like, oh, maybe these people are broadly right. You don't have to think it's above 50% to think that. Like, I think the original claim I was making is, like, why probably standard labor economics, as a subfield, isn't really doing a ton to answer the core questions that would inform my original thing of, like, oh, is UBI, like, a better use of money than, like, the Against Malaria Foundation or whatever? Yeah, maybe I'll be pleasantly surprised. But, like, yeah, we could also, I don't know. Do you want to move on to.

ARTHUR

Yeah, yeah, sure.

AARON

Sorry, I didn't mean to interrupt. You can have the last word on this.

ARTHUR

No, no, I don't think I need the last word. I mean, I think it's funny how this has progressed, in that, like, I don't completely disagree, but I also don't feel like my mind has been, like, changed in a big way, if that makes sense. It's just like, maybe we're in one of these weird situations where we kind of do broadly agree on the actual object-level questions or whatever, but then there's just some slight difference in, like, almost personality or disposition, or some background beliefs that we haven't fully fleshed out, such that, at least in terms of how we present and emphasize our positions, we end up still being in different places, even if we're not actually that far apart.

AARON

No, something I was thinking about bringing up earlier was basically this point. And, like, my version of your defense of the, I guess, GiveWell class of charities is, like, my defense of donating to the Humane League or whatever, and maybe it doesn't check out. And, like, I don't know. Sorry, I just did a bunch of episodic jumps in my head, and I always forget, like, oh, they can't see my thought patterns on the podcast. Yeah, it seems pretty possible that a formal analysis would say that even under a suffering-focused worldview, donating to s-risk prevention organizations beats, or at least beats, like, the Humane League or the Animal Welfare Fund, which we recently raised funds for.

ARTHUR

Do you want to talk? So, there are many things we could talk about. One potential thing that comes to mind is, like, I have a not very well worked out, but just sort of lingering skepticism of long-termism in general, which, like, I think doesn't actually come from any philosophical objection to long-termist premises. So, like, I think the.

AARON

Yeah, I think.

ARTHUR

I don't know what you want to talk about.

AARON

I mean, if you really want. If you're, like, really enthusiastic about it.

ARTHUR

I'm not.

AARON

Honestly, I feel like this has been beaten to death on, like, 80K, there's Cold Takes. Like, there have been a bunch. Sorry. I feel like we're not gonna add anything. Like, I'm not gonna add anything either.

ARTHUR

Okay. I don't feel like I would.

AARON

I mean, we can come back to it. Another thing is, like, yeah, this doesn't have to be super intellectual. Talk about climbing. We talked about having a whole episode on climbing, so, like, maybe we should do that. Like, also anything. Like, I don't know. It doesn't have to be, like, these super, like, totally.

ARTHUR

No, no. That was something that came to mind, too, and I was like, oh, the long-termism thing. But, like, it would be fun to just, like, talk about something that's, like, much less related to any of these topics. And in some ways, given both of our limitations in terms of contributing to these object-level EA things, that's not a criticism of either of us, but just in terms of our knowledge and expertise, it could be fun to talk about something more personal.

AARON

Yeah. Yeah. I don't know what is interesting to you.

ARTHUR

I'm trying to think if we should talk about some other area of disagreement, because, like, this is random and maybe we'll cut this from the podcast, this is a weird thing to say, but I feel like Laura Duffy is one of the few people that I've met where we just have, like, a weird amount of the same opinions on many different topics that wouldn't seem to correlate with one another whatsoever. And it's funny, I remember ages ago listening to y'all's discussion on this podcast and just being like, God, Laura is so right. What the fuck does Aaron believe about all these things?

AARON

And I'm willing to relitigate some of it, if it's, like, something that hasn't been beaten to death elsewhere.

ARTHUR

So I think we should either talk about, like, something more personal, like we should talk about, like, rock climbing or something, or we should, like.

AARON

Now I have to defend myself. You can't just say that, you know. Yeah. Was it the, like, oh, old philosophy is bad?

ARTHUR

Old philosophy.

AARON

Old philosophy is fucking terrible. And I'm guessing you don't like this take.

ARTHUR

I do not. Well, I find this take entertaining, and, like, this totally sounds like a huge backhanded compliment, but I actually think it's, like, super useful to hear something that you just think is, like, so deeply wrong, when you take for granted that it's wrong because you surround yourself with people who would also think it's so deeply wrong. So I think it's actually very useful and interesting for me to understand why one would hold this opinion.

AARON

Also, I guess I should clarify. So, like, this is the kind of thing that, like, kind of is in my vibe disposition or whatever. Yeah. And, like, also, it is not, like, the most high-stakes thing in the world, talking in the abstract. So when I said, like, oh, it's fucking terrible, I was being hyperbolic.

ARTHUR

Oh, I know.

AARON

I know. No, but, like, not in all seriousness, but just, like, without using any figurative language at all: there are definitely things that I'm, like, much more confident about than this. So, like, I wouldn't say I'm, like, 90% on it.

ARTHUR

I'm, like, pretty open to being wrong on this. Like, I don't think I have, like, a deep personal vested stake.

AARON

Yeah, no, I don't think.

ARTHUR

It's just, I think. Okay, so maybe an interesting topic that we are by no means experts on, but could be interesting to get into, is that I think a lot of the debates about, like, what the role of higher education is in general are somewhat hard to separate from these questions of, like, oh, old texts. Because I'm sort of of two minds on this. On the one hand, I think I buy a lot of the criticisms of the higher-ed model, and this general story, which is not novel to me in any way, shape, or form: we have this weird system where the American university system comes in a lot of ways from the British university system, which, if you look at it historically, is sort of like a finishing school for elites, right? Like, you have this elite class of society, and you go to these institutions because you have a certain social position, where you learn how to be this educated, erudite member of the elite class in your society. And there's no pretense that it's any kind of practical, skills-based education that'll help prepare you for the labor force. You're just learning how to be a good member of the upper class, essentially. Right? And then that model was, like, very successful and, I think, in many ways actually important to the development of lots of institutions and ideas that matter today. So it's, like, worth taking seriously, I suppose. But I think there's some truth to, like, why the hell is this now how we certify and credential in a more meritocratic sort of world with more social mobility and stuff? Like, why is this liberal arts model of, you go to learn how to be this erudite person that knows about the world and the great texts of the Western tradition or whatever? Like, I think there's something to the, like, this whole thing is weird. And if what college is now supposed to do is train one to be a skilled worker in the labor force, we ought to seriously rethink this. But at the same time, I think I do have some emotional attachment to the more flowery ideals of, like, the liberal arts.

AARON

Oh, you know, I'm sorry.

ARTHUR

You're good. You're good. So, I don't know, that could maybe be interesting to talk about, because I think in some ways that's very related to the old-text thing. Which is that, like, some of my attachment to the, like, oh, we should read these great works from the past, is very much difficult to cash out in terms of some kind of EA, or otherwise more practical, concrete, like, oh, this is the value that one gains from this. More of it is, like, my lingering intuition that there's something inherently enriching about engaging in this tradition of knowledge.

AARON

Oh, yeah, I think we probably agree on this way more than, given all the things you just said, maybe more than you suspect. Yeah, like, relative to, I don't know, maybe what you would guess knowing my other views, I am, like, anti-anti-flowery-liberal-arts, like, on a bunch of ideas in that space. But just to be concrete about the philosophy thing: my claim is, it's bad if you personally, like, take as a given that what you want to do is read and think about philosophy, like, for the object-level content, which is, you know, ideas and stuff. Then, on the merits, it's fucking terrible.

ARTHUR

So this we totally disagree about.

AARON

Okay.

ARTHUR

Yeah, okay.

AARON

Yeah, so, I don't know. But, like, we can, like, double-team convincing the audience that, like, flowery liberal arts stuff is, like, probably good.

ARTHUR

I think it might be more interesting to talk about what we disagree about, then.

AARON

Yeah.

ARTHUR

Which is, like.

AARON

I mean, yeah. So, do you have an example? Like, I don't know, it seems like plausibly we just disagree on the merits about philosophy, and it's being hashed out in this weird way about old philosophy in particular. I don't know. Like, Aristotle seems, like, wrong about everything, and his reasoning doesn't seem very good. And, oh, I mean, this is a hobby horse of mine, I don't need to beat a dead horse too much, but, like, Kantian ethics is just, like, totally fucking incoherent. Not incoherent. It is just so obviously wrong. And, yeah, again, this is, like, a hobby horse of mine on Twitter, but I am truly willing to say that I have not identified a single reasonable person who will, like, bite the bullets on what Kantianism actually implies. People, like, say that they're Kantians, say they have, like, Kantian intuitions, but Kantian ethics is just absolutely batshit insane. Right? Sorry.

ARTHUR

Okay. Well, I'm not going to defend Kantian ethics, but I think, well, a few things. One, just since Kant is maybe an interesting example, right: the fact that, like, Kantian ethics is weird and incoherent and no one is actually willing to bite these bullets. It's kind of funny to me that you chose this example, because, like, you know, Kant has this incredibly large and rich body of work that goes, like, far beyond.

AARON

Yeah, totally. His ethical views.

ARTHUR

Right. And I think, you know, reading, like, the Critique of Pure Reason, or the lesser-known Critique of the Power of Judgment, his stuff on aesthetics, and, like, his metaphysics. Like, I don't know, a lot of those texts, as utterly frustrating as it is to read Kant. Like, what I'm trying to say is Kant is a bad choice for, like, "old philosophy is bad because object level they're wrong," when I think a lot of Kant's work is, like, not obviously wrong at all.

AARON

Okay. Yes. So part of this is that I'm really not familiar with most of Kant. So one thing is, I am totally not claiming that. Like, I just don't have an opinion on most things that Kant wrote. What I am willing to claim, though, is that, even if he's, like, right or broadly right or whatever, I guess there are a couple claims here. One is the more object-level one, which is, like, I think broadly there's a correlation between time and takes getting more correct. But, like, also, the whole idea of reading foundational texts just doesn't make a lot of sense if what you care about is, like, the content of philosophy.

ARTHUR

Got it. Okay, good. So, yes, I think this is where we really disagree. And I think this actually still does end up being related to the flowery justifications for a liberal arts education. Because, on the one hand, I totally agree when it comes to a question of philosophical pedagogy for intro-level courses, right? Because the vast majority of people taking an intro to philosophy course are not going to be professional philosophers. They're probably not even going to take a second philosophy course. So I'm sympathetic to the idea that secondary sources that more succinctly and accessibly summarize the ideas of old philosophers are maybe better suited than this sort of weird valorization of the primary text. Right? Like, I think that is probably true. Nonetheless, I think if you buy into these liberal arts ideals to any degree, an inextricable part of that project is engagement with a historical tradition. And, well, maybe let me separate two things. So claim one is that, insofar as you're interested in ideas at all, you should be interested in the history of ideas. And if you're interested in the history of ideas, then reading primary sources actually does matter, because it's very hard to understand how ideas are situated within a long-running historical conversation, and are products of the cultural, social, historical context that they came out of, without reading them in their original, or, like, their original translation. So that would be claim one, the history point. But even object level, I think, if you want to go past a, like, Philosophy 101 introduction to certain historical ideas. Like, I think Laura tried to make this point in the podcast, and this is where I was like, yes, Laura, yes.

AARON

Like, get him.

ARTHUR

You know, like, I think she was trying to make a point about translation, and she said something about, I think it was, like, wisdom or eudaimonia, or one of these concepts from Aristotelian ethics. I think she made the point, which I want to reiterate, that when you abstract these ideas from the context of their original text, it's much harder to understand particular concepts that aren't easily amenable to translation, either into English or into contemporary vernacular ideas. And maybe part of why this seems more obvious or more true to me is that a lot of what I studied as an undergraduate was the Buddhist tradition. And I think, especially when you step outside of ideas that super directly influenced a lot of later developments in Western philosophical thought, and you go to a very different culture and tradition, it's obvious to me that if you want to understand Buddhism and Buddhist ideas in English translation, you have to leave certain terms untranslated, because there are certain things that just have a very complicated semantic range that does not neatly map onto English concepts. And when that is the case, it's much harder to understand those terms and concepts in an accessible secondary source, and it's much easier to get a sense for what the true semantic range of a term is when you read it in its original context. Yeah, so I can give an example.

AARON

If you want. But, I mean, a couple things. One is, like, there is a self-reinforcing. Sorry. Yeah, so I think some of the justification you just laid out is true, but it's sort of, like, begging the question. Or, like. I think that's a terrible phrase, but actually, yeah, this is sort of relevant. I think we should change "begging the question" to just mean what it actually sounds like. Okay.

ARTHUR

Yeah, yeah, because it's a useful concept. Yeah, but it's so easy to misunderstand if you don't.

AARON

Yeah, okay. No. So what I actually mean is, like, it's somewhat circular. It should be.

ARTHUR

Called just, like, assuming the conclusion or something like that.

AARON

Yeah. So, like, if the state of the world is such that you will be expected to know what Aristotle's ideas are in, like, a meaningful, content-sensitive way, then, yeah. I guess what I mean is, once you've accepted the premise that there is virtue in reading old stuff, then sometimes the original words are indeed good at helping you understand the old stuff, or whatever structure is there. And, separately, I am not against philosophers coining terms. Like, I am pretty happy for a contemporary analytic philosopher to, not even out of respect, but out of tradition, say, okay, I'm gonna sort of co-opt or recycle the term eudaimonia. And then, here, I'm gonna give as good of a definition as I possibly can, and it's not even gonna be a fully explicit definition. It's also going to be, like, I'll point at some examples and say, that's a central example of eudaimonia, et cetera. But I don't want to repeat this entire chapter where I try to define this word every time I use it, so I'm gonna use it as a variable. Yeah, I'm, like, not against analytic philosophy coining terms and using them that way. And if we want to do a nod to history by using, you know, old terms from another language, whatever, Latin here and there, that's cool. Yeah, those are my points. Wait, what was your first point?

ARTHUR

Well, I was just saying, okay, to maybe offer a more full-throated defense then, because you were saying it's to some degree begging the question. Yeah, I think that's true. But, like, I guess to me, part of the, like, ideas-for-their-own-sake kind of vibe or inclination or whatever is that history kind of ought to matter, right? Because, like, I take your point that it's begging the question that, if history matters, then you have to read the historical texts or whatever to understand the history. Right.

AARON

But, like. No, no, but there's a separate point here.

ARTHUR

Okay.

AARON

Yeah, sorry. Brief interruption. I think this is, like, substantively different. Yes. And I think I just, like, disagree. Sorry, I think we're both giving, like, lip service to the term, like, oh, flowery liberal arts. But my conception of that is, like, seemingly different, I think, than yours. And yours is more history-based. So, sorry, keep going.

ARTHUR

Yeah. Okay, well, then maybe I'll defend that a little bit. And this is one of the ways in which I feel, like, not vibe-associated with EA, even though I love EA in terms of the merits of its ideas: I think a lot of people have the kind of, like, EA, you know, SBF "every book should be a six-paragraph blog post" sort of orientation. And I think there's something really, really mistaken about that, which is just that people cloak themselves in this rhetoric of epistemic humility that's all about confidence intervals and, you know, credences and how sure you are about your beliefs. But I think people miss the meta-level point, which is that that entire way of thinking is totally alien to how humans have thought about many topics for most of human history. And it would be really weird if, like, yeah, weird in the social psychology sense, we Western, educated, industrialized, democratic citizens who went through the hoops of getting super educated or whatever in the 2020s happened across the correct higher-level framework for investigating questions of value and morality and ethics, right? And I think people should just be a lot more humble about whether that project itself is justified. And I think that leads me to this: history matters not just for its own sake, but for doing philosophy in the present, right? Which is, if you think that we ought to have a lot of humility about whether our current frames of thinking are correct, then I think doing the kind of, like, almost postmodern Foucault deconstruction thing of just pointing out that those frames are really historically contingent and weird, and have a certain lineage of how we arrived at this set of assumptions. Like, I think people should respect, or take seriously, that move, right? That move being exposing that a lot of what we think is natural and correct, and just the best way to come about our views of the world, is, right, a product of our time and era and circumstance and culture. Right? And if you take that really seriously, then you become much more interested in history, and then you become much more interested in what is the actual genesis of these ideas that we take for granted. I know I've been rambling for a little bit, but to put a sharper point on this with a particular issue, and I think we don't have to talk about this object level, though maybe it would be interesting, would be the whole debate over free will. A lot of the discussion about free will takes for granted this idea that we intuitively view ourselves as this solitary, unified self that can act in the world and make these free, agential decisions.
And if our scientific image of ourselves is as naturally evolved creatures that are the product of evolution and are governed by the same laws of physics as everything else, then there's this fundamental metaphysical problem where it seems like we are these determined beings, but in our personal experience it feels like we're these choosing beings that are somehow outside the causal nexus of scientific processes, right? And I use this as an example because there's some interesting work in, like, cross-cultural philosophy that suggests this is actually the product of a Christian intellectual heritage that is trying to solve a certain theodicy problem, which is: how can we be viewed as moral agents that can make decisions worthy of salvation and damnation when, you know, we live in a world governed by a benevolent God, right? So, sort of reconciling God's benevolence and omnipotence with the idea that we as humans are free agents who can make good and bad decisions, right? And if God were truly just, why wouldn't he just make us make all of the right decisions? I'm not presenting the problem very well, but there's this philosopher, Jay Garfield, who does a lot of work in Buddhism, who basically argues that this whole framing of the free will debate comes from a Christian cultural intellectual legacy. If we actually want to understand the problem of free will, I think it's useful to know where this whole framing of the problem came from, and how other cultures and traditions have thought about this historically in ways that are very different. The Buddhist tradition being the example in this case, which just doesn't seem to think there is a "there there," so to speak. There's not really some kind of problem to be solved, because they start from a different set of metaphysical assumptions. I think that is interesting. And that should matter when we're trying to answer questions about what we should value and how to be good and all these things.

AARON

Okay, you're really taxing my working memory because I. Wait, sorry, sorry. That was supposed to be a lighthearted thing. Yeah, there's a lot there. Um, I think the most fundamental point is, I disagree. So I reject that primary texts are generally a good way of doing the good thing which you just argued for, which is learning about intellectual history. I think intellectual history is great, and that a terrible way to do it, at least if you're a finite being, like all beings are, with finite time and mental energy, et cetera, is reading primary texts. Like, I'm not gonna say there's never a reason. My claim is, for intellectual history, you're better off reading a book about the texts, where the texts, or I guess the ideas, are the object of study. It's just not very efficient or whatever to read primary texts. Now, if you're, for whatever reason, really interested in one specific sub-area of something. Yeah, so if you're interested in a very specific slice of intellectual history, the history of ideas, then yes, I agree, that is a good reason to read primary texts. That is very uncommon.

ARTHUR

Sure. Yeah. So I want to respond to that, but I realized, hey, maybe you can edit this in or something: I slipped out of conversation-in-a-podcast mode and was like, fuck, I did a terrible job of explaining the whole theodicy-problem origin of free will. The theodicy problem is the problem of evil, right? It's, like, how do we reconcile God's omnipotence and benevolence with the problem of evil? But the specific free will move is then postulating this libertarian, not in the political sense, free will, which comes from Christian theologians who are saying, oh, this is how we get around the problem: humans have been endowed by this omnipotent God to be able to make choices. And why God would want us to make choices if he's benevolent is obviously still a deep problem that I don't really understand how you would possibly arrive at that place. But something about we don't fully understand God's plans, it's all very mysterious, yada yada. But anyways, going back to what you just said: I think, again, I'm inclined to agree at some level that, for many people in many circumstances, reading the primary text is a very inefficient way to learn about these ideas. I totally agree. But I think in some ways, the farther back you go in history, in a weird way, the less that that is true. And the reason why I think that is the language translation thing that I mentioned earlier. Just to bring up Buddhism again, because I think it's useful here: in Buddhism, one of the central ideas is this idea of dukkha, which often gets translated as suffering. So you hear the first noble truth translated as "life is suffering," or "is pervaded by suffering." But dukkha doesn't really mean suffering, because, surprise, surprise, ancient Pali words aren't easily translatable into English. Maybe something like "unsatisfactoriness" would be a better translation, but it's just much more subtle and multifaceted than that, right? There isn't a single English word that it maps well onto. And I understand that some really good scholar who's a good writer could maybe give a few paragraphs about what dukkha really means. But to me, over years of study, I got a much better sense of that through just reading a lot of original texts and seeing where.

AARON

The word came up in English or in.

ARTHUR

Yeah, in English translation, but with, like, technical terms left untranslated, right? And it was only through understanding that context that I was able to feel like I could wield the word appropriately, right? And I think this comes back to my more sort of Wittgensteinian view of what language is, right? I don't think every term has some kind of correct set of necessary and sufficient conditions. My account of language would be, excuse me, my throat, would be much more social-practice and usage based, right? Meaning is more, like, defined by its use in actual language. And once you get there, right, if you grant that kind of premise, then it's hard to see, given how much language has changed and evolved over the years, how a skillful secondary-source interpreter is going to be able to just clearly lay out, "this is what eudaimonia means," in three paragraphs. And it's going to be much more necessary, if you have this philosophical view about what language is, to actually understand how a term is used in its original context in order to grasp its meaning. Yeah, I just think there are limits to how much these secondary sources can really tell you.

AARON

I somewhat agree. The thing that I agree about is that, yes, if your goal is to get the most genuine, unbiased, and I'm actually thinking unbiased in an econometric sense, where you want to get a true point estimate of the real meaning at the very heart of the Wittgensteinian multidimensional linguistic use-space, like, you know, a hyperspace or whatever, and you want to accurately target that, then yes, it's probably best to read the original texts. But, so, one thing is, there's the question of what would a really good attempt look like, not at defining it in three paragraphs, but maybe defining it in half the length of the total corpus you just described, in the terms of analytic philosophy. Like, how close could you get to an unbiased point estimate of what dukkha means or whatever? And I actually don't really know the answer. I, like, suspect that, yeah, you don't quite get there.

ARTHUR

I like you putting this in econometric terms, because I actually think it gives me an easy way to conceptualize how we can converge on agreement, which is: I think we both agree that to get the least biased estimate possible, you have to have the largest sample size, which in this case is just going to be reading the entire text. But there's going to be some optimal trade-off between how biased your estimate of what a particular term means is, and how much time it takes to actually read those primary texts. So, yeah, maybe there's some optimal mix of extended excerpts and secondary exegesis or whatever that would find the correct balance between those two for any given person at a certain level of interest and time commitment. So, yeah, it's not a hard and fast rule.
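(For readers who want the estimator framing made concrete: here's a minimal toy sketch, purely illustrative and not from the episode, of the trade-off being gestured at. It models each primary-text usage of a term as a noisy draw around the term's "true" meaning, and a secondary-source summary as a fixed estimate with zero variance but some bias. Every number and name in it is hypothetical.)

```python
import random

# Toy sketch of the bias/variance framing above. Each primary-text usage
# of a term is a noisy draw around its "true" meaning (reduced to a single
# number for simplicity); a secondary-source summary is a fixed estimate
# with zero variance but some bias. All values are made up.

random.seed(0)

TRUE_MEANING = 10.0        # hypothetical "true" semantic location of the term
SECONDARY_SUMMARY = 8.5    # a three-paragraph gloss: no variance, some bias
USAGE_NOISE = 3.0          # how scattered individual usages are

def estimate_from_excerpts(n: int) -> float:
    """Average n usages sampled from the primary texts."""
    return sum(random.gauss(TRUE_MEANING, USAGE_NOISE) for _ in range(n)) / n

def mse(estimator, trials: int = 2000) -> float:
    """Mean squared error of an estimator over repeated trials."""
    return sum((estimator() - TRUE_MEANING) ** 2 for _ in range(trials)) / trials

# The summary's error is pure bias and never shrinks...
print(f"secondary summary squared error: {(SECONDARY_SUMMARY - TRUE_MEANING) ** 2:.2f}")

# ...while reading more excerpts drives the error toward zero (variance ~ 1/n),
# at the cost of more reading time -- the trade-off described above.
for n in (1, 5, 25, 125):
    print(f"{n:3d} excerpts -> MSE {mse(lambda: estimate_from_excerpts(n)):.2f}")
```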

AARON

Yeah, no, mint tea is perfect. It's, like, lukewarm. Do you want it hot or cold or lukewarm?

ARTHUR

Lukewarm is fine.

AARON

Okay.

ARTHUR

I've already, like, way overdone it on, like, caffeine for the day, so I was, like, so tempted by the tea, but, like, mint tea. That's perfect.

AARON

Yeah.

ARTHUR

I think a funny thing about, like, um. Or, I don't know, also on the, like, vibes front.

AARON

Yeah.

ARTHUR

Like, how EA might be. I think this is loosely related to the other, like, vibe-based way in which I feel, like, different from a lot of other, more rat-type EA people.

AARON

Yeah.

ARTHUR

Is I feel like there's, like, a certain, like, philistine. How the fuck do you say that word? Philistinism. Philistine vibe.

AARON

I don't actually know what that means.

ARTHUR

A philistine is, like, one who eschews arts and aesthetics and, like, doesn't care about, sort of, you know, literature, music, cultural stuff like that. And I feel like that's another sort of vibe-aligned way in which I'm like, what the hell are these people doing? I just feel like there's a lot of philistinism in, like, kind of, rat culture, which is just, sort.

AARON

Of. I feel like that's more a stereotype, I think. There is, like, a. Sorry, sorry. Keep going.

ARTHUR

No, it's good. Because, I don't know, this all sounds, like, so incredibly pretentious to say like this, but I do think there is a grain of truth there. Just, like, when people really, really strongly believe that, I don't know, the world is going to end in ten years, or that the goal of their life ought to be to maximize their personal contribution to global utility, I think that naturally leads you to care a lot less about these, like, other things.

AARON

Yeah, maybe. I mean, I think that's, like, a stereotype. It's what you would think if the only thing you knew is, like, oh, there's a crazy group of people who call themselves the rationalists. I think, by and large, it's like, no, people are mostly pretty normal. On the other hand, it's true. So this is actually something I've been thinking about about myself, which is that I am not a total philistine. In fact, I think music is, like, a pretty important part of my life. Not in an interesting way, but in the sense that, oh, I like music and I listen to it a fair amount. I think Spotify is great, even though I have, like, very basic tastes. But besides that, I think I have really weak aesthetic intuitions. And I was thinking about whether that means that I'm, like, slightly autistic or whatever.

ARTHUR

Yeah, this just reminded me. We can cut this out or whatever, but this fucking Liam Bright tweet was so awesome. I just thought of it because I think one of the things that I find funny about rationalists calling themselves rationalists is that it's kind of.

AARON

Kind of pretentious or something.

ARTHUR

Yeah, I mean, it's pretentious, but also the philosopher in me is, like, so annoyed that they've co-opted this term that has a very specific meaning in epistemology. You know what I mean? But this tweet is so funny. It's like: the argument for rationalism is that its practitioners have easily the best track record of success as scientists, whereas the argument for empiricism is that, the problem is, when you think about it, rationalism is kind of wacky bullshit.

AARON

Like, I need to be, like, way more philosophical to truly appreciate that.

ARTHUR

Yeah, yeah. Well, the joke is just that, if you look historically, a lot of the, like, early modern, sort of Renaissance-era people, like, Descartes or whatever, it was the rationalists who did all of the good empirical science.

AARON

I didn't know.

ARTHUR

And then, like, the empiricists, most of their critiques of rationalism are, like, well, from first principles, rationalism doesn't really make a lot of sense, you know? So, I don't know, it was just a good joke. I like the irony of that.

AARON

Honestly, I think you just know, like, way more about intellectual history than me. For better or worse. I mean, the thing I was sort of half planning to say was, like, there was probably some opportunity cost there, but maybe not. Some people just know way more than me about everything. That's, like, totally possible. Were you a philosophy. You were a philosophy major, right? Okay, so that's my one out: I wasn't. I was a philosophy minor. Big difference, I know.

ARTHUR

Nice.

AARON

In fact, actually, that's sort of endogenous. Ha, ha. Because one of the reasons that I chose not to do a major was because I didn't want to do, like, the old philosophy requirements. Like, you had to.

ARTHUR

Do, like, some history requirement or.

AARON

Yeah, a couple things that were like that. Maybe, like, two or three classes of, yeah, basically old bullshit. Sorry. I don't mean to reopen that. Also, you don't have to, like. I'm happy to go, actually. No, I'm not happy to go for literally.

ARTHUR

As long as you want.

AARON

Like, I'll conk out sooner than that, I guess. But, like, yeah, just saying, you don't have to. You're not, like, a slave here.

ARTHUR

I'm having a good time. Okay.

AARON

Yeah. Okay. I feel like I actually don't really want to talk about climbing, in part because, honestly, like, I made this clear to you, and maybe I'll cut this part out, but I'm not, like, doing amazing climbing-wise, and there's, like, a lot of baggage there. It's not like there's anything, like, deep there, or like, oh no, I'm gonna handicap myself for life by having a conversation about climbing. I just, like, honestly don't really feel like it right now. That's okay. Yeah.

ARTHUR

We can call it. Or if there's some, like, I don't know, some, like, very light hearted topic we could conclude on.

AARON

Um. I don't know. I'm trying to think. I'm sort of drawing a blank. Or, the opposite: there are, like, too many options. Like, one thing that ran through my head was, like, I don't know, what's your, like, life story? Slash, is there anything in your. We've been talking about ideas, like, not ourselves.

ARTHUR

Like, right.

AARON

Yeah. I don't know. Is there anything interesting? Do you want to tell your whole life story, or maybe a sub-part, or just, like, a single episode? Yeah, I don't know. We can also just. Yeah, this is just, like, one of, I don't know, n thoughts.

ARTHUR

Yeah. Yeah, I think, like, how do I even summarize? It's always funny when people. This is so mundane, we can just cut all of this out if it's really boring, but one of the things I've noticed is, when, like, I talk to people and people ask, where are you from? Like, I moved when I was in middle school, so I never know exactly how to answer the question of where I'm from. Because it's not like I'm some, like, you know, diplomat's kid who moved around my whole life. It's more just, like, I grew up in the San Francisco Bay Area until I was twelve and then moved to Denver. So it's like. But yeah.

AARON

I feel like I don't have an answer for you, because I'm really boring. I basically lived in DC forever.

ARTHUR

Yeah. But, you know, I think Denver was a really nice place to grow up. I feel like this inherently gets me down the climbing path.

AARON

No, that's fine. There's no, like, big trauma there or whatever.

ARTHUR

But I feel very lucky that, like, my parents are very outdoorsy people. Like, my mom was a professional ski racer for some time. She grew up skiing a lot. And then I had an uncle. Both of my parents grew up in the Denver area and have a bunch of siblings, so almost all of our extended family lives there. And one of them, my dad's brother, loved rock climbing. Their parents weren't really that outdoorsy, or rock climbers or anything, but when he was 18 he went to CU Boulder and started climbing and stuff. So, yeah, for my life story, the big thing for me was having a sort of mentor who had been rock climbing for a long time, well before the age of gyms and stuff, and was just very experienced in the outdoors. And, you know, after a few times of going to the climbing gym when I was twelve, I went to Eldorado Canyon and he took me up some big multi-pitch routes and stuff. And I feel like having that experience outdoors was so formative for me, because then, when I got a certain level of experience and competence, I was able to go do that on my own. Like, I had a friend whose dad was a rock climber, and he had a somewhat similar background of doing it when he was little, and we were both at the level where we could safely sport climb on our own. And I got quickdraws for, like, my birthday, and then, you know, just started going rock climbing with him. And I feel like a lot of my. In high school. I don't know if this comes off as surprising or unsurprising, but I was actually a fucking terrible student for, like.

AARON

No, that's surprising. That's surprising because you're, like, obviously smart, and usually being smart correlates with not being a terrible student.

ARTHUR

Right. Yeah. I feel like, to be honest, I was kind of the classic underachieving smart kid for most of my life. Like, something about the discipline and having to learn all this stuff that wasn't that interesting to me. I feel like I had the very stereotypical story of just kind of blowing off school a little bit. And I kind of got my shit together halfway through high school, because I, you know, very much had the expectation, like, I'm gonna go to college, and realized, oh, shit, if I want to get into a good college, I have to do well in high school. So I got my shit together and got good grades for two years. But for most of my life, I did not get very good grades. And I think, throughout all of that, the much more important thing, the thing I was focused a lot more on, was literally just going climbing with my friend Eric.

AARON

Yeah, no, fuck it. Let's talk about it. Although, yeah, I don't know, maybe we'll have to supplement or whatever with, like, another half episode or something. No, no, because I remember on the hike, we. Yeah, I don't know, have we mentioned this? We've met in person before. Yeah, that is true. On a hike. Which was fun, and which we should do again. Not the same thing, but, like, another hike. Yeah, if you are so inclined. Agreed. Oh, yeah. So I was just like. That's interesting. I feel like I'm bad. I don't want to be like, oh, wow, thanks for sharing your answer, now let me share mine. But one thing that, I don't know, maybe people will find interesting is that climbing was also on a very similar timeline, and of similar importance, for me. Like, I think I literally started when I was twelve, just like you or whatever, but I was a total gym rat. And so, I don't know. I feel like I've been in the modality of, like, saying, oh, convince me that climbing outside is better. But, I don't know, like, what are you. Yeah. Like, um. Oh, I.

ARTHUR

Certainly don't think it's better or worse, I think.

AARON

No, no.

ARTHUR

Very different experiences.

AARON

Yeah, yeah, totally. Like, I don't know. I don't even know if I have an actual question, though. Like, do you know why you were, like, drawn to climbing outside?

ARTHUR

Yeah, I think for me I do. Which was just that. It's always hard to disentangle, you know. I'm sure you've thought, too, about the, like, how much does parenting matter versus genes and environment and things like that. So it's hard to know how much of it was literally my parents' influence versus just the fact that I am my parents' child, genetically. But I think I was always drawn to kind of, like, outdoor adventuring just generally, because even prior to climbing, I grew up, like, skiing and hiking. Was I in the Boy Scouts, by chance? No, I was not. I think part of my bad-student-ness was wrapped up in the same reason why I never did Boy Scouts or team sports, which was that I was just very much a stubborn kind of individualist. Like, I want to do my own thing, yada, yada, yada, you know, from a young age. So organized things like the Boy Scouts were not that appealing to me. And that was part of what was appealing about climbing outside in particular: that sense of incredible, like, freedom and independence, you know? So, yeah, at first I just fell in love with the movement of rock climbing in the gym, just like you did, the first few times I went. But then when my uncle took me trad climbing, I was like, oh no, this is the thing. You know what I mean? Getting to the top of these giant walls. I think it's super cliche whenever people talk about this stuff, but there's something about that experience of, when you climb a big route outside and you're standing halfway up a 400-foot cliff, you have this deep sense of, like, humans are, one, not supposed to be here in some sense. And that, without this modern equipment and shit, you would never be in this position. Like, this is so dangerous and crazy, even though it's not necessarily dangerous and crazy if you're doing it correctly. And that sense of, oh, very few people are able to be in these kinds of positions. There was something very aesthetically appealing to me about that, honestly. So that was a big aspect of it: the actual places that you get to see and go were really inspiring. I think I love just being in nature in that way, in a way that's very interactive, you know? It's not just that you're looking at the pretty trees, but you're really getting to understand, especially in trad climbing, oh, this part of the rock is something that, with my gear and abilities and skills, I can safely travel. And it gives you this whole interactive sense of understanding this part of nature, which is a rock wall, you know? And that was quite beautiful to me.

AARON

That's awesome. We're totally the opposite people in every possible way. Not every possible way, but.

ARTHUR

The other thing that I was gonna say: so there's the aesthetic side of things, and then there was also a big part of it for me that was this, like, risk sort of thing. Especially whenever you tell anyone you're into rock climbing, they're like, oh, have you seen Free Solo? You know, it's, like, the meme or whatever. But I think when people think about rock climbing, they just think of, like, sort of a reckless, kind of adrenaline-junkie sort of pursuit.

AARON

Yeah.

ARTHUR

I think what was really beautiful about rock climbing to me, and, like, spoke to me on both an intellectual and aesthetic level, was that, like, something that's interesting about it is: gym climbing, right, is extremely safe, right? Like, way safer than driving to the rock climbing gym, right? But there's this whole spectrum in climbing. From slightly more dangerous but still relatively safe, and probably safer than the drive to and from, which is sport climbing outside on modern bolted routes where the falls are safe, that's also very safe. All the way to free soloing hard routes on the far other end. And then somewhere in the middle would be, like, trad climbing well-traveled, established routes that are within your ability but are, like, scary. And I think what I loved about it was, there's this whole spectrum of, like, risk level, right?

AARON

Yeah.

ARTHUR

And a lot of what becoming a better rock climber in this outdoor context is, is learning how to rationally analyze and assess risk, like, on the fly, right? Because you have to learn, on a mental level, to overcome this inbuilt fear of falling and fear of heights. You have to look at it and be like, okay, my little lizard brain is like, this is absolutely insane, I'm 200 feet off the ground looking straight down to the river, I am totally freaked out right now. You have to learn how to override that and think more rationally and be like, how far below me is my last piece of gear? Is that piece of gear solid? Do I trust the cam that I placed in that crack, or whatever, you know? And I love that mental skill of learning how to work with your mind in that way, learning how to overcome these instinctual feelings and put them in this rational context, and do that on the fly, you know? And one of those parameters is your physical ability, right? Like, that's something that happens in trad climbing: sure, maybe some routes are kind of sketchy and not well protected, but they're, you know, 5.7, and you can climb 5.7 with your eyes closed, in your sleep. And you have to learn to trust that physical ability. And, I don't know, I'm just rambling, but I think at an aesthetic level, it's both the beauty of where you are and the pursuit itself being something where your safety is very much in your own hands. Like, you can totally be safe and secure, but no one can draw that line for you. You have to decide what risk you're willing to tolerate and how you're willing to manage it. Like, I was so addicted.

AARON

That's so. That's awesome. I can't say it all resonates, but I don't have anything interesting to say. That's just really cool. I'm glad you got to experience that. Yeah, I mean, this is sort of a dumb conversational move, just, like, doing the same thing and saying all that back. But for me. Well, I interjected before and said that we were, like, very different. I forget what prompted that exactly. I mean, for one thing, I don't want to overplay the difference. I liked climbing outside. I would always. Definitely, like, much more bouldering, though. Um, yeah.

ARTHUR

Side note: I think bouldering is a lot more dangerous than people realize.

AARON

Oh, yeah. Outdoors at least. Yeah, yeah.

ARTHUR

Just, like, I think the most common climbing injuries are bouldering injuries, because it's just like.

AARON

And there's a huge range from, like, you know, if you're. It's like a nine foot tall thing and you have seven crash pads, you're fine.

ARTHUR

Yeah. Like a very flat.

AARON

Yeah, yeah. Straightforward landing. No, you're also just, like, evoking memories. Yeah. One thing I never engaged in is what you were just talking about: dealing with fear in the context of something where you actually do have to evaluate safety. I was, like, a scaredy-cat in the sense of, like, getting used to climbing in the gym, where it's really safe. Actually, I was looking at my leg. At one point I did get injured, which is, at one point I fell with a rope wrapped around my leg. And that was really fucking painful. But I can't really see it anymore. At one point it was a scar for a while. It was wrapped around, like, the bottom of my leg or whatever. I was fine.

ARTHUR

Don't put your leg behind the rope.

AARON

Yeah, yeah. Back-stepping the rope: don't do that. No, but there was some mental challenge there. And the hardest actual (as in outdoor) route I did was called... no one out of the nine people listening is going to recognize this, but Buckets of Blood, which is local, actually. It's a local boulder; I forget if I actually finished it. It's, like, a V10, near the...

ARTHUR

Holy shit.

AARON

In Carderock, yeah. It's a couple of hard moves. The hard moves are short, but then there's a V5 top-out, which was honestly one of the more dangerous things I did. Or, I guess, not dangerous exactly, but, like, danger in terms of expected value. And this was literally only one time; I don't want to overplay it. It was not a life-or-death thing, and I had friends spotting me, who told me afterward... sorry, let me get back to the point, which is that after the hard part, doing the V5 top-out, which was not a super highball, meaning, I don't know, on the order of 15 or 20 feet (I could just go and measure), but not over a super solid landing, or I don't remember the exact circumstances. Anyway, that was a one-time thing. Maybe this is also my signaling brain flexing my credentials or whatever. But for the most part, being scared just went on the cost side of the equation for me. I definitely did like pushing physical boundaries, though. And maybe this is the cynical side of my brain, which I think is actually correct in this case: as much as you can always put things in high-minded terms (I'm not saying this about you, just me), and sometimes you really do enjoy stuff, a lot of it is derived from your brain trying to do status-y things and signaling things or whatever. With that said: one thing I found, especially in indoor rope climbing and pushing my boundaries, is that I had never previously been really good at anything physical. I was pretty average. I played baseball and was okay. I was kind of short, so I don't think I ever had big basketball prospects; I was on, like, a rec basketball team. I wasn't super overweight; I was actually tiny at some point, but that's a whole other story. By the time I was 14 or 15, I was pretty average. This is sort of beside the point, but I guess I was the opposite of you: always sort of a try-hard kid in school, did pretty well, so I was used to, for lack of a better word, success in that domain. But one thing I finally figured out is that, through a combination of really liking it and working pretty hard...
And also, I'm pretty sure (you can get into the genetic aspect or whatever, but to some extent it's just being lucky) that I was really good at endurance, especially if I trained it. I think I was naturally good at it. I don't have a ton of insight here; it was just cool to be actually quite good at even this one subtype of climbing, which was basically indoor climbing. Not only competitions (I did do competitions) but also just being pretty good in general on the 40-foot indoor routes or whatever. And maybe I'll just take this part out, because the last thing I'm trying to do is brag (at this point I am not in nearly as good shape as I was), but when I was going to competitions, there were a couple of years where I was able to make the national-level competition, which is a couple of levels up: two qualifying competitions, which decompose into, like, three cutoffs or whatever that you have to get through. And no, I don't think there was anything super deep here. I could make something up, but it was just cool.

ARTHUR

Yeah, I think that resonates with me too, in that in some ways that status-y sort of stuff probably explains part of why...

AARON

Oh, I mean, who cares?

ARTHUR

No, no, no. But I'm saying it maybe explains part of why I wasn't that interested in competition climbing. One, there was the contrarian, whatever, individualist streak I talked about earlier: I think very early on I had this much more high-minded, aesthetic, climbing-is-beautiful kind of notion, and I think that made competitions less interesting to me. But the other thing was that, similar to you, I was like, holy shit, this is a sport I'm actually really good at. But I wasn't insanely good at competition climbing. I was, you know...

AARON

Did you do any competitions, or anything like that?

ARTHUR

Yeah, I did some competitions, but very little. I was on the competitive team at my gym for a little while, and then I was like, nah, and noped out of that. But, you know, I climbed solidly 5.12+, some low 5.13s, in the gym.

AARON

Yeah, that's seriously talented.

ARTHUR

Yeah, but I wasn't... I think, certainly in part influenced by some of those status things as well, part of why I decided competitions weren't for me was that everything I was rambling about earlier, that cognitive skill of managing risk, was actually the skill where I was on the much farther right tail of the distribution. And, you know, there's so much luck and privilege in having the parents and mentors and friends and connections to get into that kind of outdoor climbing at a young age, which a lot of people just don't have access to, so it's a very limited sample. But of the gym kids I knew my age, a good chunk of them were stronger than me climbing in the gym, and I was like, sure, but can you guys, you know, climb 5.11 trad? You know what I mean?

AARON

An example is me: there was never a point in time when I was able to climb 5.11 trad.

ARTHUR

Yeah. So some of what motivated me was certainly that feeling of, oh, I'm actually really good at this sport. Not compared to people who make it their lives, but compared to casual hobbyists, teenagers, people in their early twenties. I think that was really appealing to me about it as well, no doubt.

AARON

Yeah. Maybe I'll just keep talking until I've said everything there is to be said, but I feel like I have a lot to say about climbing and my climbing history. And I really don't mean this as a woe-is-me thing, but another interesting thing (at least I find it interesting; maybe you will too, Pigeon Hour listeners) is what I just said. This sounds so melodramatic, but I think I learned for the first time, without hedging, that you can't just hard-work your way to relative success. Something very close to that is in my ongoing hot-takes thread on Twitter. And I don't want to come across as cynical. I am a skeptical person, but I don't think I'm broadly a cynical person, and I actually don't think this is cynical. Maybe it reads that way, and it's the kind of thing a lot of people implicitly believe; it sounds very pessimist-coded to say. But I think it's very important in a lot of domains. And maybe (this is going way back) it had to do with baseball being largely a skill sport in some sense. For most healthy adult humans, there's no hardcore, obvious physical constraint stopping an arbitrary person from figuring out how to swing a bat in such a way as to be the best baseball player in the world. That's not literally true, of course: hand-eye coordination, reaction time, willpower, and a lot of things like that are in fact largely genetically determined, even if you set the willpower part aside. But it's not as salient. And I remember getting frustrated with not being as good as I wanted to be. To some extent, if you push it to the limit, if a random, mediocre twelve- or fourteen-year-old tries insanely hard to be a really good baseball player, either they can do it or they're getting wrong information somehow. Climbing made the physical constraints a lot more salient, especially from watching other people (again, this sounds melodramatic) not work nearly as hard.
And just really naturally being able to do a lot harder climbs. Again, this sounds melodramatic, but maybe there was a time when it was kind of hard in some emotional sense. It was non-obvious, so I was learning something new, but it also wasn't super obviously true yet. There was maybe a one-to-three-year phase where I was watching other people get bigger and stronger while I was doing everything I could to make myself stronger, and at the end of the day they were still just a lot better than me. I keep saying I don't want to come across as cynical, and, I don't know. But that's the whole story. Or not the whole story, but there's no bigger lesson in there, right?

ARTHUR

Part of what's fun about it to me as a sport... I mean, being lucky enough to have natural talent is one thing, and clearly both of us did to some degree. But as a naturally very skinny, not super athletically built person, part of what I loved about climbing was that, despite all of what you're saying, there's a reasonable diversity of body types that can be very successful, because it's all about strength relative to your body weight and all that. And I think a much higher level of achievement is possible than people realize with very little physical training. An enormous amount of climbing at the typical hobbyist skill level is technique-based.

AARON

Oh, yeah, I totally disagree.

ARTHUR

Really? I mean, I want to hear what you have to say, but I'm talking about a particular range of the distribution. To put it in concrete terms, in typical gym grades, I think you can boulder up to, like, V6, maybe even V7, with very little serious climbing strength. It's only when you get into that upper echelon of serious competitors and strong amateurs that you just need to hangboard, you know what I mean? Below that, you can get much farther than people realize on technique alone. And this is at least obvious to me because over the last two and a half years or so I have basically not rock climbed at all. Then, a month ago, I realized that I now live a very physically inactive lifestyle and should start being physically active again, and I've gone on probably five gym sessions in the last month. I've finally, like...

AARON

Oh, at, like, Crystal City?

ARTHUR

Yeah, yeah. Because I was like, oh, I'll just start going to the climbing gym again. And I almost had this ego thing where I was a bit embarrassed to start climbing again, because I was like...

AARON

Oh, man...

ARTHUR

I'm gonna be so bad, and people aren't gonna know that, really, I'm good. You know what I mean? It was so stupid. But I got over it, and to be honest, I've done no serious training. I literally just go and boulder until I'm tired, and then I go home. And already, on my fifth session, I'm climbing at, like, 85% of my maximum-ever bouldering level, and I'm not in good climbing shape. So it's just surprising how far you can get when you don't forget your technique, right? At the end of the day, I've been climbing for 13 years now. I've built up a lot of skill and experience that's still there.

AARON

Yeah. Man, you're saying the perfect things to get me to keep talking. So maybe (I know I'm sort of inviting myself here) I'll join you one of these days; it's been on my mind on and off. I don't know how much of this I'll cut, but long story short, a bunch of connected thoughts. One thing I want to say is, I don't think that will be true for me. It's an empirical question, right? And I think that's largely because of, connecting this back to the body-type thing, one of the reasons other people were more talented, and also something I have, I guess, "struggled with", which is such a therapy term. Sorry, I'm dancing around just saying it: there was a lot of pressure, not from anyone explicitly, but at least in my case from myself, to be good, and being good means being light. And I think there's been a fair amount of hand-wringing, especially in youth competitive circles, about eating disorders and stuff. I don't think I was ever full-on anorexic, but I was definitely, pretty consciously, trying to limit my weight at a time when I really wasn't supposed to be. Combine that with a couple of other situational things (which I can just mention as other factors) and it was probably a pretty bad idea for me. All that is to say, it would have been nice to be one of the people who is naturally very, very skinny, frankly, and to some extent I was jealous of them. And, super matter-of-factly: I'm probably a good almost double the weight I was when I was climbing at my best. I've grown vertically in that time period, but still, almost double. I mean, one thing that, like...

ARTHUR

I feel bad now, because I realize that I think my general point about technique being really important is totally true. But when I put that point estimate on it...

AARON

No, no, no.

ARTHUR

...that should have come with the enormous asterisk that I am naturally an extremely skinny person. So much so that (and this is something I struggled with a lot as a teenager, given our body expectations for men) it became obvious to me that I literally could not gain weight by eating unless I severely overate, to the point of discomfort, at every meal. And because of that, when I go through periods where I don't climb or do any other form of fitness, all that happens is I lose weight, because I lose muscle. So every time I take time off from climbing and come back, I'm starting from a position of actually weighing less than I did when I was really into climbing and building muscle on top of that. So, enormous asterisk on my "a few weeks back and I'm at 85% of my max" claim. Endurance is going to take longer to build back, so I don't think I could sport climb very hard. But a huge part of bouldering strength or whatever is that I stay very skinny, and thankfully not because I have eating problems, but just because I don't gain weight.

AARON

Yeah, yeah, sure, sure. No, I think you're totally right. You didn't need to add the disclaimer, but the disclaimer is definitely true. And, whatever, at first I said I didn't really want to talk about this, so let's just dive in. I can't remember specific examples, but I'm pretty sure that either media sources or specific people have basically said something along the lines of "no, actually, being skinny doesn't help." And that's basically just gaslighting. It totally does.

ARTHUR

I mean, the way...

AARON

Well, not just being skinny. To some extent it's like the Laffer curve or whatever: the ideal amount of muscle and weight isn't literally zero.

ARTHUR

Right.

AARON

You know, but the ideal is, like, pretty skinny.

ARTHUR

And the way I think about it... thankfully (I was never super deep in the competition world, but I think a lot of prominent people have spoken out about this in the last few years) there's more attention on the risk of eating disorders now. The problem is by no means solved; I'm sure it's still a huge issue. But part of what's so tricky about it, and why it'll continue to be an issue, is that it just is a fact that, at a physical level, one of the most important things in rock climbing is how strong you are in some very particular modalities versus how much you weigh. And like you're saying, there's some curve, and the optimum isn't zero. But starting from a baseline of being relatively fit, in good climbing shape, with strong muscles in the particular muscle groups and modalities that climbing requires, when you're looking at that equation, it's much easier to just... or not "easier", again, I'm like...

AARON

No, yeah: easier to lose weight.

ARTHUR

Yeah.

AARON

Oh, yeah.

ARTHUR

At a very practical level, for virtually everyone, especially once you're pretty far into the diminishing returns of training hard, right?

AARON

Yeah, yeah.

ARTHUR

Once you get well into that diminishing-returns part of the curve of training climbing strength, it's just going to be much easier for people to lose weight than to get stronger. And again, "easier" in a very circumscribed, like...

AARON

I mean, for most people, though.

ARTHUR

Yeah. The reason I'm adding that caveat is that I want to be sensitive to the fact that there are very real psychological and other health costs to that. So it's not "easy".

AARON

Like, oh, yeah.

ARTHUR

But in terms of: if you are single-mindedly focused on getting better at rock climbing, there comes a point where it's much easier to lose weight than to get stronger. And people are going to have to figure out how to grapple with that fundamental fact for this sport not to be incredibly unhealthy for a lot of the people who take it really seriously, you know? To me, that's why there's, sadly, no mystery as to why there's this eating disorder problem.

AARON

Yeah. And something you said before: that you were maybe embarrassed when you went back to the gym for the first time, and then you were pleasantly surprised. There have been a lot of reasons why I haven't gone climbing since, I want to say, October 2022, or 2021, or something.

ARTHUR

Yeah.

AARON

Could be 2021, something like that. And there was a proximate cause, which was that I was also starting college. Actually, it's worth mentioning (you never know the counterfactual) that one very fortunate thing was that I wound up at a college where, at least coming in as a freshman, I was so far and away the best person on the very casual climbing team that I was able to detach a little bit without it incurring such a social-psychological cost. And then, eventually (this is relevant), I went to EA Global, came back, got really sick, and was very, very sick for a week or two. Between EA Global and being sick, at some point I hadn't climbed for three weeks, or maybe a whole month, which was by far the longest I had been away from climbing in years and years. And then I just never went back. Honest to God, I'm sounding so melodramatic, but the dynamic you mentioned, being pleasantly surprised: it's not impossible that I will be too, but it's not going to be as fun, you know? Going from being, like, the best person in the gym to being really, really, really not, and much worse than I used to be. People like improvement, psychologically, I mean.

ARTHUR

But that is the flip side, right? Well, one, it's interesting how much my personal story mirrors yours in some ways. I started taking climbing a lot less seriously when I was in college. And similarly, I wasn't the strongest person on the casual climbing team (there was one guy who was a super strong boulderer), but I was definitely the most experienced rock climber, and we would do outdoor excursions and stuff. I just had a lot of other things going on; I was focusing on life, and I was casual about it. It was something I still did regularly, but I wasn't really, really serious about training and fitness and all that. And then I went and studied abroad the fall of my junior year, in northeastern India, where there are no rock climbing gyms and no rock climbing to be found, and I didn't climb at all for months. Then I came back, probably went to the gym a couple of times, and then COVID happened.

AARON

Yeah. Yeah.

ARTHUR

And with all these things, I just sort of stopped doing it and kind of never came back, you know? Then I tried to come back to rock climbing after my kidney surgery, but I was, like, too...

AARON

Arthur donated his kidney. How did we not talk about that? Sorry, man, we're gonna have to cut this part from the podcast.

ARTHUR

But I came back too soon after that and sort of injured my abdomen, and then I took a long time off, because I wanted to make sure that when I started climbing again I wouldn't injure myself.

AARON

Yeah.

ARTHUR

Thankfully, that hasn't happened. I think I'm fine now. But I will say, on this psychological point: if you really internalize it, if you really try to let go of that self-comparison...

AARON

Yeah.

ARTHUR

Like how you used to be.

AARON

Yeah.

ARTHUR

...I think part of what's been fun about coming back to climbing is that I'm already noticeably stronger on the fifth trip to the gym than I was on the first, right? When you're starting from a baseline of no rock climbing fitness, especially when you already know roughly what to do to get it back, you progress very quickly at the beginning. So if I were to lightly encourage you, I'd say: if you can let go of that comparison to your previous self and just have fun being a beginner again, at least in terms of fitness, starting from that zero baseline, it's pretty cool how fast you improve. And I see that with my friends who've gotten into it: they get strong really fucking quickly, because climbing uses these very weird, specific muscles that you just don't have if you don't do it.

AARON

Yeah.

ARTHUR

Anyways, I think, TL;DR: we have solved everything. All of the problems of ethics and AI and reading old philosophy. Indeed.
