<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Aaron's Blog]]></title><description><![CDATA[
]]></description><link>https://www.aaronbergman.net</link><image><url>https://substackcdn.com/image/fetch/$s_!VTqg!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png</url><title>Aaron&apos;s Blog</title><link>https://www.aaronbergman.net</link></image><generator>Substack</generator><lastBuildDate>Fri, 01 May 2026 07:45:35 GMT</lastBuildDate><atom:link href="https://www.aaronbergman.net/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Aaron Bergman]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aaronb50@gmail.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aaronb50@gmail.com]]></itunes:email><itunes:name><![CDATA[Aaron Bergman]]></itunes:name></itunes:owner><itunes:author><![CDATA[Aaron Bergman]]></itunes:author><googleplay:owner><![CDATA[aaronb50@gmail.com]]></googleplay:owner><googleplay:email><![CDATA[aaronb50@gmail.com]]></googleplay:email><googleplay:author><![CDATA[Aaron Bergman]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[#15: Robi Rahman and Aaron tackle donation diversification, decision procedures under moral uncertainty, and other spicy topics ]]></title><description><![CDATA[In response to the previous episode, Vegan Hot Ones]]></description><link>https://www.aaronbergman.net/p/15-robi-and-aaron-tackle-donation</link><guid isPermaLink="false">https://www.aaronbergman.net/p/15-robi-and-aaron-tackle-donation</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sun, 25 Jan 2026 02:24:07 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185687554/34d79747aad2560592b98f3492ecb2a2.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Summary</h1><p>In this episode, Aaron and <a href="https://www.robirahman.com/">Robi</a> reunite to dissect the nuances of effective charitable giving. The central debate revolves around a common intuition: should a donor diversify their contributions across multiple organizations, or go &#8220;all in&#8221; on the single best option? Robi breaks down standard economic arguments against splitting donations for individual donors, while Aaron sorta kinda defends the &#8220;normie intuition&#8221; of diversification.</p><p>The conversation spirals into deep philosophical territory, exploring the &#8220;Moral Parliament&#8221; simulator by Rethink Priorities and various decision procedures for handling moral uncertainty&#8212;including the controversial &#8220;Moral Marketplace&#8221; and &#8220;Maximize Minimum&#8221; rules. They also debate the validity of Evidential Decision Theory as applied to voting and donating, discuss moral realism, and grapple with &#8220;Unique Entity Ethics&#8221; via a thought experiment involving pigeons, apples, and 3D-printed silicon brains.</p><h2>Topics Discussed</h2><ul><li><p><strong>The Diversification Debate:</strong> Why economists and Effective Altruists generally advise against splitting donations for small donors versus the intuitive appeal of a diversified portfolio.</p></li><li><p><strong>The Moral Parliament:</strong> Using a parliamentary metaphor to resolve internal conflicts between different moral frameworks (e.g., Utilitarianism vs. 
Deontology).</p></li><li><p><strong>Decision Rules:</strong> An analysis of different voting methods for one&#8217;s internal moral parliament, including the &#8220;Moral Marketplace,&#8221; &#8220;Random Dictator,&#8221; and the &#8220;Maximize Minimum&#8221; rule.</p></li><li><p><strong>Pascal&#8217;s Mugging &amp; &#8220;Shrimpology&#8221;:</strong> Robi&#8217;s counter-argument to the &#8220;Maximize Minimum&#8221; rule using an absurd hypothetical deity.</p></li><li><p><strong>Moral vs. Empirical Uncertainty:</strong> Distinguishing between not knowing which charity is effective (empirical) and not knowing which moral theory is true (moral), and how that changes donation strategies.</p></li><li><p><strong>Voting Theory &amp; EDT:</strong> Comparing donation logic to voting logic, specifically regarding Causal Decision Theory vs. Evidential Decision Theory (EDT).</p></li><li><p><strong>Donation Timing:</strong> Why the ability to coordinate and see neglectedness over time makes donation markets different from simultaneous elections.</p></li><li><p><strong>Moral Realism:</strong> A debate on whether subjective suffering translates to objective moral facts.</p></li><li><p><strong>The Repugnant Conclusion:</strong> Briefly touching on population ethics and &#8220;Pigeon Hours.&#8221;</p></li><li><p><strong>Unique Entity Ethics:</strong> A thought experiment regarding computational functionalism: Does a silicon chip simulation of a brain double its moral value if you make the chip twice as thick?</p></li></ul><h1>Transcript</h1><p><strong>AI generated, likely imperfect</strong></p><p>AARON</p><p>Cool. So we are reporting live from Washington DC and New York. You&#8217;re New York, right?</p><p>ROBI</p><p>Mm-hmm.</p><p>AARON</p><p>Yes. Uh, I have strep throat, so I&#8217;m not actually feeling 100%, but we&#8217;re still gonna make a banger podcast episode.</p><p>ROBI</p><p>Um, I might also, yeah.</p><p>AARON</p><p>Oh, that&#8217;s very exciting. So this is&#8212; hope you&#8217;re doing okay. It was&#8212; I hope you&#8217;re&#8212; if you, if you, like, it was surprisingly easy to get, to get, uh, tested and prescribed antibiotics. So that might be a thing to consider if you have, uh, you think you might have something. Um, mm-hmm. So we, like, a while ago&#8212; should we just jump in? I mean, you know, we can cut.</p><p>ROBI</p><p>Stuff or whatever, but&#8212; Yeah, um, you can explain, uh, so I talked to Max, uh, like 13 months ago.</p><p>AARON</p><p>It&#8217;s been a little while. Yeah, yeah. Oh yeah, yeah. And so this is, um, I just had takes. So actually, this is for, for the, uh, you guys talked for the as an incentive for the 2024, uh, holiday season EA Twitter/online giving fundraiser. Um, and I listened to the&#8212; it was a good&#8212; it was a surprisingly good conversation, uh, like totally podcast-worthy. Um, I actually don&#8217;t re&#8212; wait, did I ever put that on? I&#8217;m actually not sure if I ever put that on, um, the Pigeon Hour podcast feed, but I think I will with&#8212; I think I got you guys&#8217; permission, but obviously I&#8217;ll check again. And then if so, then I, I will. Um, and I just had takes because some of your takes are good, some of your takes are bad. And so that&#8217;s what we have to&#8212;.</p><p>ROBI</p><p>Oh, um, I think your takes about my takes being bad are themselves bad takes. Uh, at least the first 4 in a weird doc that I went through. Um, yeah, I saw you published it somewhere on YouTube, I think. 
I don&#8217;t know if it also went on Pigeon Hour, but it&#8217;s up somewhere.</p><p>AARON</p><p>Yes, yes. So that we will&#8212; I will link that. Uh, people can watch it. There&#8217;s a chance I&#8217;ll even just like edit these together or something. I&#8217;m not really sure. Figure that out later. Um, yeah. Yes. So it&#8217;s, yeah, definitely on you. Um, so let me pull up the&#8212; no, I, I think at least two of&#8212; so I only glanced at what you said. Um, so two of the four points I just agree with. I just like concede because at least one of them. So I just like dumped a ramble into, into some LLM.</p><p>ROBI</p><p>Yeah.</p><p>AARON</p><p>Like, These aren&#8217;t necessarily like the faithful, um, uh, things of what I believe, but like the first one was just, um, so like I have this normie intuition, and I don&#8217;t have that many normie intuitions, so like it&#8217;s, it&#8217;s like a little suspicious that like maybe there&#8217;s a, a reason that we should actually diversify donations instead of just maximizing by giving to the one. Mm-hmm. Like just like, yeah, every dollar you just like give to the best place. And that like quite popular smaller donors say people giving less than like $100,000 or quite possibly much more than that, up to like, say, a million or more than that. Um, that just works out as, as donating to like a single organization or project.</p><p>ROBI</p><p>Yeah. Okay. Um, I, I think we should explain, uh, what was previously said on this. So there&#8217;s some argument over&#8212; okay. So like, um, normal people donate some amount of money to charity and they just give like, I don&#8217;t know, $50 here and there to every charity that like pitches them and sounds cute or sympathetic or whatever. Um, And then EAs want to, um, first of all, they, I don&#8217;t know, strive to give at least 10% or, I don&#8217;t know, at least some amount that&#8217;s significant to them and, uh, give it to charities that are highly effective, uh, and they try to optimize the impact of those dollars that they donate. Whatever amount you donate, they want to, like, do the most good with it. Um, so the, like, standard economist take on this is, um, So every charity has, uh, or every intervention has diminishing marginal returns, right, to the&#8212; or every cause area or every charity, um, possibly every intervention, um, or like at the level of an individual intervention, maybe it&#8217;s like flat and then goes to zero if you can&#8217;t do any more. Anyway, um, so cause areas or charities have diminishing marginal returns. If you like donate so much money to them, they&#8217;re no longer, um, they&#8217;ve like done the most high priority thing they can do with that money. And then they move on to other lower priority things. Um, so generally the more money a charity gets, the less, um, the less effective it is per dollar. This is all else equal, so this is not like&#8212; like, actually, if you know you&#8217;re going to get billions of dollars, you can like do some planning and then like use economies of scale. Uh, so it&#8217;s like not strictly decreasing in that way with like higher-order effects, but for like Time held constant, if you&#8217;re just like donating dollars now, there&#8217;s diminishing marginal returns. 
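</p><p><em>A toy numerical sketch of the diminishing-returns point (the charity names, funding levels, quality multipliers, and the square-root effectiveness curve below are all made up for illustration): at the scale of a few thousand dollars each curve is locally almost flat, so whichever option has the higher marginal value now keeps it for the whole budget, and a 50/50 split does strictly worse than giving everything to that option.</em></p><pre><code>import math

# Toy model: total good done as a function of a charity's total funding.
# A square-root curve is a stand-in for diminishing marginal returns.
def good(total_funding):
    return math.sqrt(total_funding)

# Hypothetical (funding already received, per-dollar quality multiplier).
charities = {"CharityA": (5_000_000, 1.0), "CharityB": (20_000_000, 1.8)}

def marginal_value(name, extra_dollars):
    already, quality = charities[name]
    return quality * (good(already + extra_dollars) - good(already))

budget = 10_000  # a small individual donor

all_in = {name: marginal_value(name, budget) for name in charities}
split = sum(marginal_value(name, budget / 2) for name in charities)
best = max(all_in, key=all_in.get)

print("all-in values:", all_in)               # CharityA ~2.23, CharityB ~2.01
print("50/50 split value:", round(split, 3))  # ~2.12, below the best all-in
print("best single option:", best)
</code></pre><p>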
Okay, so, uh, it is&#8212; the economist&#8217;s take is like, it is almost always the case that the level of an individual donor who donates something like, let&#8217;s say, 10% of $100K, like, the, the world&#8217;s best charity is not going to become like no longer the world&#8217;s best charity after you donate $10,000. And most people donate like much less than that. So the, uh, like standard advice here is, um, if you are an individual donor, not a like, um, institutional donor or grantmaker or someone directing a ton of funds, um, you should just like take your best guess at the best charity and then donate to that. And then there are ways to optimize this for like bigger amounts. So you&#8217;ve probably heard of donor lotteries, which is like 100 or 1,000 people who want to save time all pool their money and then, then someone is picked at random and then they do research and maybe they split those donations 3 ways. Or like it all goes to something, or like&#8212; Yeah.</p><p>AARON</p><p>Hmm.</p><p>ROBI</p><p>$10,000 times, uh, 100 or 1,000 is like a million or $10 million. At that level, it&#8217;s plausible that you should donate to multiple things. Um, so in that case, maybe it makes sense.</p><p>AARON</p><p>Um, so I don&#8217;t, I don&#8217;t&#8212; oh, sorry, go ahead.</p><p>ROBI</p><p>Uh, so that&#8217;s the standard argument. Um, and, um, I, I&#8217;m happy to, um, explain why this still holds, uh, to anyone who is like engaged at least this far. Um, most people haven&#8217;t even heard of it and they&#8217;re like, um, well, but what if I&#8217;m not sure about which of these two things, then I should like donate 50/50 to them.</p><p>AARON</p><p>Um, uh.</p><p>ROBI</p><p>I&#8217;ll let you go on, but I just want to say this is a really lucky time to record this podcast because yesterday someone replied to me on the EA forum linking to some, um, uh, have you heard of, uh, Rethink Priorities, um, Moral Parliament simulator?</p><p>AARON</p><p>Yes.</p><p>ROBI</p><p>Okay, so it has some, um, pretty wacky and out-there decision rules, and, um, so I was&#8212; I was arguing with someone on the EA forum about this, like, um, saying, uh, it doesn&#8217;t make sense to, um, to, to split your donations, uh, at the level of an individual donor, um, even moral uncertain&#8212; and they said, but what about moral uncertainty? What if I&#8217;m not sure, like, if animals even matter? Um, uh, and I said, well, even then you should take your, like, probability estimate that animals matter and then get your, like, EV of a dollar to each and then give all of your dollars to whichever is better. Um, and they said, well, but I, I, I plugged this into the Rethink Priorities moral parliament simulator and there&#8217;s a bunch of these like different worldviews and different decision procedures here. And then these two decision procedures say you should like split your donations. Um, it was like, um, wait, that can&#8217;t be right. And I went and looked at it and actually, um, that is a correct outcome, which I can tell you about later. But, um, yeah, uh, you were gonna.</p><p>AARON</p><p>Say&#8212; wait, wait, hold on, we have to&#8212; I feel like I just got cut off at the best part. So there are coherent worldviews where you.</p><p>ROBI</p><p>Think you said&#8212; um, not, not worldviews. I think the, the worldviews decide what you value, but then there are, there are decision rules. 
So the two that this person gave me on the forum as examples are, um, Okay, so the&#8212; maybe the, the standard one, uh, under which I was arguing is like&#8212; or like a simple one is, um, uh, I guess I&#8217;ll explain what a moral parliament is. So if you have moral uncertainty, you&#8217;re like, let&#8217;s say you&#8217;re not sure whether deontology is true or consequentialism is true or virtue ethics is true, um, instead of So if you are sure, it&#8212; like, let&#8217;s say you&#8217;re just a consequentialist, then you simply decide according to consequentialist rules, uh, like what you want to do. Um, maybe you just like max the&#8212; or like you try to&#8212; you pick the action that will maximize your expected value based on your, uh, best guess as the consequences. Um, or if you&#8217;re a deontologist, you just like follow the rules that tell you what is right. Uh, in a moral&#8212; so if you have moral uncertainty, the moral parliament is one way to, um, decide what to do when you&#8217;re not sure which of these views is right. And in the metaphor, so if you&#8217;re, uh, you give them seats in the parliament proportional to, um, how likely you think it is that that view is correct, or you&#8217;re proportional to your credence in each view. So if you&#8217;re completely uncertain, uh, like consequentialism, deontology, and virtue ethics, you&#8217;re like equally sure those are&#8212; or like you find those equally plausible, um, and you, you&#8217;ve ruled out everything else, then you would give them each like 33 seats in the parliament or 33% of the votes, um, and then they would vote on what to do.</p><p>AARON</p><p>Um.</p><p>ROBI</p><p>So one simple decision rule Although maybe non-consequentialists will argue this favors consequentialism or so. I don&#8217;t know, it&#8217;s unclear. It&#8217;s like, do the decision that maxes the total value according to all of the representatives. So under each view, there&#8217;s like different values of the different possible outcomes. And then you add up the possible value over all the representatives or like take the integral with respect to the probability, um, uh, sum over all the probabilities. The, the weighted sum of value and probability. Anyway, um, uh, so that&#8217;s one way. Um, but there&#8217;s like some criticism of this, which is like, um, well, maybe this is, uh, this is bad because Uh, it gives extra weight to views that believe in, like, that more value is possible or something. Um, which I&#8217;m not sure is like really a flaw, but I mean, I guess that&#8217;s true. Um, okay, so the, the two rules which are somewhat exotic and unintuitive, uh, the, the first one I had heard of, the first one Rethink Priorities calls it moral marketplace. And this is, um, uh, this is maybe the least fancy decision procedure. This is just, um, your total allocation&#8212; your overall allocation is, um, each of the factions in the parliament gets a fraction of the funds proportional to their credence or proportional to their representation, and then they all independently decide Um, what they want to do with their money, and then your overall action is you just do, uh, you just add all those together. So if the consequentialists want to do option A with their 33%, and the.</p><p>AARON</p><p>Um.</p><p>ROBI</p><p>Deontologists want to do option B with their 33%, and the virtue ethicists want to do, uh, option C with their 33%, you, you give like 33% of your donations each to option A, B, and C. 
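</p><p><em>A minimal sketch of the allocation rule just described, next to the credence-weighted expected-value rule it is being contrasted with (the worldview names, credences, and per-dollar values are invented for illustration): the marketplace rule splits the budget in proportion to credence, while the expected-value rule sends the whole budget to a single option.</em></p><pre><code># Hypothetical credences over three worldviews.
credences = {"consequentialism": 0.34, "deontology": 0.33, "virtue_ethics": 0.33}

# Value each worldview assigns to a dollar at each option (made-up numbers).
values = {
    "consequentialism": {"A": 1.0, "B": 0.2, "C": 0.1},
    "deontology":       {"A": 0.3, "B": 1.0, "C": 0.4},
    "virtue_ethics":    {"A": 0.2, "B": 0.5, "C": 1.0},
}

budget = 10_000

def moral_marketplace():
    # Each worldview controls a share of the budget equal to its credence
    # and spends that share on its own top pick.
    allocation = {"A": 0.0, "B": 0.0, "C": 0.0}
    for view, credence in credences.items():
        favorite = max(values[view], key=values[view].get)
        allocation[favorite] += credence * budget
    return allocation

def credence_weighted_ev():
    # Score each option by credence-weighted value per dollar and go all-in.
    def score(option):
        return sum(credences[view] * values[view][option] for view in credences)
    best = max(["A", "B", "C"], key=score)
    return {best: float(budget)}

print("moral marketplace:", moral_marketplace())        # splits ~3400/3300/3300
print("credence-weighted EV:", credence_weighted_ev())  # everything to one option
</code></pre><p>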
Okay, uh, I think this is&#8212; this is pretty straightforward, and it&#8217;s very plausible as a strategy for.</p><p>AARON</p><p>Um.</p><p>ROBI</p><p>What you do if you&#8217;re, um, allocating a large portfolio. Uh, so like, if you rethink priorities and your staff, uh, collectively are uncertain or can&#8217;t agree on what is the right worldview and you have a large amount of funding, then this would make sense to me. Um, I&#8217;m not sure this is reasonable to&#8212; like, I&#8217;m not sure if it would be reasonable to let this guide your actions as an individual, but I just heard of this yesterday, so&#8212; or like, I, I&#8217;ve heard of this before, but I hadn&#8217;t heard of it used in this context until yesterday. So I haven&#8217;t fully thought through whether this makes sense for individuals. I will&#8212; let me just tell you the last, the other example they gave. So the other example they gave was, it was, uh, this is&#8212; I think this is completely ridiculous. This is maximize minimum. And so the strategy is, uh, it completely disregards um, like, share of the parliament, and it&#8217;s just, uh, you do the option of all your possible actions that, um, that maximizes the satisfaction of the most unhappy party. So this is like minimax over all of the different possible actions and all the different, uh, people with any seats at the table. Um, and this is complete and utter nonsense. So, um, First of all, this gives equal weight to, uh, a view that you&#8217;re like 99% sure of, a weight that you&#8217;re&#8212; a view that you&#8217;re 80% sure of, a view that you&#8217;re 30% sure of, and a view that you&#8217;re 0.000000 like 10 to the minus 20% sure of. Um, those are all like weighted equally, and then the 10 to the minus 20 view is like infinitely more, uh, has infinitely more consideration than the 0% view. Um, you can&#8212; so this is, this is just like completely unworkable. I don&#8217;t know. I, I, okay, I just heard of this yesterday.</p><p>AARON</p><p>I&#8217;m not gonna&#8212; I agree with you, by the way.</p><p>ROBI</p><p>Yeah, so, okay, look, uh, I just heard of this yesterday, so I don&#8217;t know if anyone has like published a philosophy paper refuting this yet, but if they haven&#8217;t, I will. Um, you can mug anyone who believes in this nonsense by just proposing&#8212; okay, so the example I gave was, um, Uh, if someone believes this or like follows this rule, you can mug them into doing literally anything at any time. You just, uh, say, hey, I heard of this, uh, I I saw a preacher on the street corner and he was preaching to me the, uh, religion of shrimpology. Um, it turns out the universe is created by an omnipotent shrimp deity, uh, who will torture, uh, 10 tetraded to the 20 sentient beings for, uh, 3 octillion years unless you donate all of your money to shrimp welfare. Um, and they can be like&#8212; you can be like 1 to a googolplex, uh, confident this isn&#8217;t true. But&#8212; or like, it&#8212; maybe the probability of this is 1 in a googolplex because it&#8217;s so ridiculous. Um, you&#8217;re more sure this is not true than anything you&#8217;ve heard in your life, but you can&#8217;t be 100% sure, like, that&#8217;s not a, uh, a real credence you can assign. And so just by me saying this, you would be forced to donate all of your&#8212; like, you could make up any view that wants anything and is the most unhappy, like, view you&#8217;ve ever heard of, and then your action just has to follow that. 
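</p><p><em>A small sketch of that mugging argument, with invented views, credences, and satisfaction scores: a maximize-minimum rule ignores credences entirely, so a view with vanishingly small credence can dictate the action, whereas a credence-weighted rule effectively ignores it.</em></p><pre><code># Hypothetical credences; the second view is the absurd "shrimp deity" pitch.
credences = {"mainstream view": 1 - 1e-100, "shrimp deity cult": 1e-100}

# Satisfaction of each view under each action (made-up numbers).
satisfaction = {
    "donate to best-guess charity":      {"mainstream view": 1.0, "shrimp deity cult": 0.0},
    "give everything to shrimp welfare": {"mainstream view": 0.1, "shrimp deity cult": 1.0},
}

def maximin_choice():
    # Pick the action whose least satisfied view is best off; credences never enter.
    def worst_case(action):
        return min(satisfaction[action].values())
    return max(satisfaction, key=worst_case)

def credence_weighted_choice():
    # Credence-weighted expected satisfaction, for contrast.
    def expected(action):
        return sum(credences[view] * s for view, s in satisfaction[action].items())
    return max(satisfaction, key=expected)

print("maximin picks:", maximin_choice())                      # the shrimp option
print("credence-weighted picks:", credence_weighted_choice())  # the best-guess charity
</code></pre><p>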
Um, okay, so, uh, I think I&#8217;ve ruled that one out. Um, so, um, the upshot is I&#8217;m much less certain than I was 2 days ago that, uh, you should not diversify your donations, uh, at the level of an individual. I still think you shouldn&#8217;t, or like, I don&#8217;t see a real-life case where you would&#8212; where it would make sense to do this. Um, but I retract my, uh, previous perhaps possibly overconfident assertion that it&#8217;s like completely 100% always illogical. Uh, like maybe there is some way where it makes sense to do this.</p><p>AARON</p><p>Yeah. Okay. Interesting. I, um, so the one thing I like want to say the words two envelopes problem, because isn&#8217;t that like where some of this comes from in a, like, if you try to just be ruthlessly, maybe that&#8217;s not a good word, just like, um, what, like full-mindedly, uh, and full-heartedly. Utilitarian and consequentialist, like you run into the problem of one worldview that, um, takes a position of like ants and one that takes a position of elephants. And I, I feel like I&#8217;m just not smart enough to always have this. I would, you know, like think through it a little bit to like load it into my brain. I&#8217;m just not smart enough to like immediately have it booted up. But like, that is like one thing I like wanna&#8212; wanna say. Although I, I almost just like wanna investigate Like, I&#8217;m inclined to disagree with you because, like, like, at an intellectual.</p><p>ROBI</p><p>Level, um.</p><p>AARON</p><p>Although, yeah, we should talk&#8212; we can talk about that. I guess the plausibility of the moral parliament view. I&#8217;m, I am just like almost&#8212; I&#8217;m.</p><p>ROBI</p><p>Actually very on board with moral parliament. Um, I have moral uncertainty. I&#8217;m not like 100% consequentialist or anything like that. Um, I think it makes sense if you&#8217;re like deciding among things. Um, and again, I think it makes a lot of sense for like Rethink or Open Philanthropy to diversify because they have large donations. Um, I do not think it makes sense for me to apply moral parliament with like moral marketplace decision rule to my own donations as a small donor. Um, and then there are, I think, better decision rules that&#8212; so for example, I think&#8212; was it Toby Ord who proposed the, the, um, what&#8217;s the one you, um, I think his rule makes more sense. You, um, select a dictator with probability proportional to, um, to&#8212;.</p><p>AARON</p><p>Wait, that&#8217;s crazy. Why would you do that?</p><p>ROBI</p><p>Oh, &#8216;cause it, uh, this is optimal in various ways. Uh, um, let me, let me propose, let me mention a couple of the, like, improvements I&#8217;ve heard of. Um, so there&#8217;s moral parliament, but instead of everyone just gets a&#8212; instead of everyone votes on something. So that, that has the problem, there&#8217;s like unstable coalitions. Um, you know, Arrow&#8217;s Impossibility Theorem, and like, um, two factions prefer option A over B, but then two of three factions prefer B over C, and then two of three factions prefer C over A. There&#8217;s like a lot of, a lot of voting problems. Um, uh, one is, um, the groups are allowed to bargain with each other. 
So, um, they&#8217;re allowed to say, uh, like, hey, uh, I&#8217;ll&#8212; the utilitarians can say, hey, I&#8217;ll, um, vote to, like, protect the sanctity of the rainforest if you&#8217;ll allocate just, like, 1% of the light cone to, um, hedonium if you&#8217;re&#8212; if you, you end up in charge, and like they will offer different things based on like how much voting power they have, um, and who the other&#8212; the players are and what everyone values. Um, I think this is an improvement. Um, there&#8217;s&#8212; so, uh, one of the rules proposed in one of the moral parliament papers is, um, instead of &#8220;take a vote&#8221;&#8212; so, so one of the problems with just &#8220;take a vote&#8221; is, um, if you have 51% credence in one view and 49% in the other, the 51% will just like completely override the 49%. Bargaining helps this a little bit, but actually no, bargaining doesn&#8217;t help in that case. So the, the 51% might just like override the desires of the 49% and do something that they like slightly more than what the smaller group wanted, but is like completely and utterly horrible to the smaller group. Um, uh, one thing you can do is&#8212; so one proposal is, um, everyone votes as if it&#8217;s like a straight-up majority vote, but instead of simply taking the results of the vote, you pick a voter, uh, with probability proportional to their representation, and then you enact that voter&#8217;s, uh&#8212; this has beneficial properties for the same reason, um, this would be good in like a regular democratic election, uh, which I won&#8217;t get into because.</p><p>AARON</p><p>I was just going to say&#8212; No, I just want to say, um, this sounds crazy to me because, um, I think it&#8212; maybe it makes sense. I like believe you, um, if you like sort of expand the analogy of like an in&#8212; like, um, or expand the situation of an individual person&#8217;s moral uncertainty to like a society where you&#8217;re having multiple people vote. But like, you can actually just do way better than that insofar as you&#8217;re a single person because you can credibly and you can just decide to be honest and say, like, so what&#8212; one problem with eliciting, uh, preferences in democracy is it&#8217;s really hard to elicit, uh, like, strength of various preferences, um, from various people. I can just, like, assert, oh, I am, like, a quadrillion times more certain than you are that, you know, Donald Trump is bad, and maybe I actually am or whatever. But if you&#8217;re just a person, um, yeah, you could just, like, in fact, uh, care about the 49% in, like, uh, in, in, like, proportion to, like, how much, like, um, how strong that preference actually is and like not, not pretend that this is unelicitable information. And so that&#8217;s why, like, it&#8217;s the dictator&#8212; like, I mean, I also just have the strong intuition that like, wait, hold on. Uh, there&#8217;s no way, there&#8217;s no way leaving it up to chance at the end of the day is like the best thing to do. But this is like sort of&#8212; oh, wait, wait. Okay.</p><p>ROBI</p><p>Actually, I think you reminded me of something. So, so, uh, another, another proposed improvement. Gives&#8212; so you, you tell all the representatives in the parliament your decision procedure is you will, you will, you will pick someone randomly and then they vote as if they&#8217;re, as if they believe some&#8212; the dictator is being picked randomly. 
But then you actually, in fact, go with the majority vote.</p><p>AARON</p><p>Um, wait, okay. Like me, like, why would you do that?</p><p>ROBI</p><p>I don&#8217;t know. This has some kind of elegant, like, properties where it, like, I don&#8217;t know. Yeah, actually, I must be misremembering it. That, that doesn&#8217;t make sense because, like, if there&#8217;s not a majority, uh, or if it&#8217;s more than two options, you.</p><p>AARON</p><p>Could just take the plurality, like, in principle. I don&#8217;t know.</p><p>ROBI</p><p>Uh, if there&#8217;s more than&#8212; if there&#8217;s more than two parties voting on more than two options, um, I think it gets complicated. But yeah, it&#8217;s something like that.</p><p>AARON</p><p>I feel like we&#8217;re over&#8212; I feel like&#8212; so there&#8217;s like, not this is like intrinsically like too, too complex or anything, but like the thing that I want to investigate is like, so I in fact donated a little bit of money this year, um, and I donated like part&#8212; I donated to 3 different, um, organizations, uh, pretty sure, and like also maybe some small, like some one-off, like small things for like various reasons, but like basically, uh, Lightcone, um, the EA Animal Welfare Fund, and Alex Bores. And like, I don&#8217;t do this because, like, at a conscious level, it just seems like the right thing to do. Um, it&#8217;s not necessarily a great&#8212; I.</p><p>ROBI</p><p>Think this is&#8212; so, uh, leaving aside Alex Bores, which I think has some special, um, uh, actually, no, maybe that&#8217;s not right.</p><p>AARON</p><p>Yeah, it&#8217;s like, it&#8217;s like you might not&#8212; if you&#8217;re only limited to like $7,000 and you have $8,000, you might donate the $7,000. I didn&#8217;t donate $7,000 to him.</p><p>ROBI</p><p>Oh, okay. Well then I think you&#8217;re&#8212; okay. I think you&#8217;ve definitely done something unreasonable. Although I think you have built&#8212; yeah.</p><p>AARON</p><p>No, for like, for that one, for political stuff, there is like some benefit to just saying, oh, like, um, I have like X number of donors. So it&#8217;s like not totally exactly the same, but we can even just do the EA Animal Welfare Fund or like pretend it was just Animal Welfare Fund and, and Lightcone. I like, so like at a con&#8212; like at one level, like I have this, I just like have a strong intuition that like doesn&#8217;t, and I don&#8217;t normally have strong normal intuitions. Not that strong. It&#8217;s not like as strong as like, oh, like the feeling that I have qualia, but it&#8217;s like, it&#8217;s like somewhat strong that like, um, there&#8217;s like something going wrong if I, uh, um, yeah, I don&#8217;t&#8212; and the problem is that I can&#8217;t actually articulate it, right? I like, I like in fact basically just endorse what you, what you in fact think, and I like don&#8217;t know what you mean.</p><p>ROBI</p><p>So I think your intuition is wrong. I believe that, uh, by your own lights, you have done less good by splitting your donations among Lightcone and EAIF, uh, than you would have if you&#8212; or sorry, uh, you said Animal Welfare Fund?</p><p>AARON</p><p>Yeah, AWF.</p><p>ROBI</p><p>Yeah, AWF. Okay, uh, you split your donations between Lightcone and AWF. Um, I am pretty sure that, uh, you would have done better, or like the world would be better according to your own views, uh, and values if you had given all of the money to one or the other. 
Um, I think we both agree there are like, uh, there are some like small exceptions to this. Like political donations are limited to like X dollars per person. Uh, so I maxed out. I, I donated $7K to Alex Bores. Um, and, uh, yeah, and I do, I do do like really small donations. So like if it&#8217;s someone&#8217;s birthday and they have like, I don&#8217;t know, they ask for donations for their birthday, I&#8217;ll donate like $50 to some charity on GiveWell&#8217;s like top 10 that I think they would like&#8212; yeah, might get them interested or something. Um, or it&#8217;s like valuable to be able to say you&#8217;ve donated to that in case it comes up in a conversation with normies, and then you might be able to get them to like do more charity research or something. Um, apart from that, if you&#8217;re just&#8212; if you&#8217;re just&#8212; if you&#8217;re just looking at the direct first-order, uh, help to the recipients of the charity, of the dollars you&#8217;re donating, of your donation budget, um, I think you have to be.</p><p>AARON</p><p>Uh.</p><p>ROBI</p><p>At least a bigger individual donor than we are, uh, before&#8212; Yeah.</p><p>AARON</p><p>I mean, the problem is that I don&#8217;t actually, I don&#8217;t actually know which of Animal Welfare Fund, like, what is the better use of my&#8212; Oh.</p><p>ROBI</p><p>Well, in that case, uh, in that case, that&#8217;s easy. Then, um, just like spend 5 minutes doing a BOTEC and then just give all the money to the top one.</p><p>AARON</p><p>Or&#8212; I&#8217;m still gonna, so I&#8217;m gonna wind up with, I&#8217;m not exactly 50/50. So I actually did, actually, or like, I need to check back on what I did, but I actually gave, I think, more to&#8212; I think I gave more, more to like John, um, sort of in proportional, but like, I think what I was implicitly doing was something like that. Oh, everybody gets to see that the moral parliament, but almost like not everyone because only the, only like the views that are actually compelling to me or something, which is like sort of like a hybrid 5E decision procedure.</p><p>ROBI</p><p>Um, yeah, you did. Okay. You did like boundedly rational moral marketplace. Like you&#8217;re not, you&#8217;re going to round off any, like, you&#8217;re going to round off any views with like less than 5% like credibility. And then the, the top 2 or 3 views get to spend money according to their, um, yeah.</p><p>AARON</p><p>Although, although, like, I, I, it&#8217;s, my brain doesn&#8217;t clearly, at least like my conscious mind doesn&#8217;t clearly distinguish between like moral uncertainty and empirical uncertainty. Like in this case, it sort of all just mashed together. I don&#8217;t know if that matters.</p><p>ROBI</p><p>Yeah. Okay. So I am 100% sure empirical uncertainty is not a valid reason to split your donations. Moral uncertainty might be.</p><p>AARON</p><p>So wait, but now, so like, I know you commented on this thing about&#8212; so my like first hypothesis is like, maybe there&#8217;s an EDT thing going on, and I&#8217;m actually not entirely sure. 
So like your point is, so basically the, the, the thing, the idea is like, okay, um, like is the con&#8212; so does the world look better from the position where everybody behaves&#8212; Um, yes, yes, yes.</p><p>ROBI</p><p>So if everyone would stop, uh, if everyone would stop splitting their donations and just donate to whatever they think is best, at least if most people are well-informed, uh, then this would improve. Maybe that&#8212; maybe this doesn&#8217;t improve the world for like, because normies are donating like $10 to 100 different charities and 99% of charities are like approximately worthless compared to effective charities. Um, then actually it&#8217;s probably good for them. It&#8217;s probably good that they split coincidentally, just because if they took my advice and gave all the money to their favorite charity, then 100% of the money would go to like church or like homeless shelter for cute kittens. And then, but like, maybe they&#8217;re better.</p><p>AARON</p><p>Than, maybe, maybe they&#8217;re, maybe they&#8217;re better at than, than, than chance at guessing which is the best, right? Isn&#8217;t it? I actually don&#8217;t think that your argument falls down. In the normie case, it&#8217;s like, oh, maybe there&#8217;s like a 2% chance that they donate to the one out of 100 that is like the best.</p><p>ROBI</p><p>Uh, that&#8217;s actually, uh, that, that&#8217;s plausible to me. Um, I won&#8217;t get into that, but okay. So, um, you&#8212; so this argument often comes up for like, okay, so there&#8217;s this argument people sometimes give, uh, that it&#8217;s not worth your time to vote because there&#8217;s already 150 million other people voting in, for example, the US presidential election, there&#8217;s like 140 million voters already voting. Um, and the, uh, margin in almost every state is going to be&#8212; it&#8217;s not even going to be close. Uh, even if you live in a swing state, it&#8217;s pretty unlikely that it comes down to one vote. So like the election outcome is going to be the same, uh, whether or not you vote. So therefore it&#8217;s, uh, it&#8217;s not worth your time to vote. You should just, like, stay home and do something else. Okay, here&#8217;s a really bad counterargument to that. People often say, like, I think the most common argument I&#8217;ve heard is.</p><p>AARON</p><p>Um.</p><p>ROBI</p><p>No, but that, that can&#8217;t be right. If everyone did that, then no one would vote. And then by me voting, I&#8217;ll be the only voter and I&#8217;ll decide the election. So I have to vote. Okay, so, um, my, uh, the straightforward response to that is, well, okay, yeah, if everyone followed this bizarre logic and, uh, if everyone&#8212; if everyone acted in this way and no one voted and you didn&#8217;t, then you would be the&#8212; then you would be the only voter and you would decide the election. However, we know for a fact that is not the case. Like, 80 million people have already voted. You are not going to do anything. Okay. Um, there&#8217;s a rationalist&#8212; yeah, yeah. So, uh, okay. So there&#8217;s a better rationalist version of this, which is, uh, instead of&#8212; so the&#8212; here&#8217;s a rationalist argument, um, in favor of voting. So they concede the causal decision theory outcome, like, like if you subscribe to causal decision theory, then what I&#8217;ve said so far is correct. So 80 million people have already voted. Your marginal vote will do nothing. So therefore you should not vote. 
It&#8217;s not worth the time. However, if you subscribe to evidential decision theory, then you&#8217;re not just casting one vote. You&#8217;re in some sense, maybe, um, uh, your vote is like correlated influenced by, maybe not causally, but somehow correlated with the decisions of everyone else who thinks like you. And there are enough people who are smart and thinking this way that you should vote as if you&#8217;re, like, kind of directing all of their votes. And so you should vote, uh, because, like, you&#8217;re better informed than the average person, and across the multiverse, you, like, improve a lot of election outcomes. Yeah, um, so I basically endorse this, I think. Sure, yeah, okay. So, um, uh, let me not&#8212; let me leave that aside for elections. And so you, you took this argument, which I think is better than the usual argument for voting, uh, for elections, and then you applied this to donations. So, um, you said, so does it make sense to split your donations because EDT&#8212; uh, for&#8212; on EDT grounds, um, should you diversify because then everyone will diversify and then you&#8217;re&#8212; this is correlated with like not just one person&#8217;s donations but many people&#8217;s donations will go to like both of these good charities. Um, and, uh, I think I have like, uh, refuted this with the observation that, um, this might make sense for an election where everyone votes simultaneously. However, this is not what the scenario is for donations. So everyone donates at like a different time throughout the year. You can, uh, you can talk to each other, like the donors can talk to each other. Um, you can look up what all the previous donations were and you can see what is the most neglected charity at the time of your donation. And then you can leave instructions or like talk to people donating later. And then based on whatever is most neglected after you and more people donate, they can change their donation.</p><p>AARON</p><p>So this is not obviously a, like a Like, I think this is a plausibly good point, not like a clear refutation. So like, the first thing that comes to mind is just, um, that you don&#8217;t&#8212; it&#8217;s not&#8212; so in an election, you can have a system where everybody commits to voting once in a given time period, and then, like, you know, for example, every&#8212; the 365 people, everybody in, in the year, and everybody has an assigned date to vote, whereas donations, like, there&#8217;s uncertainty around, like, how much people are going to vote, how much you&#8217;re even going to vote. You might not, like, have decided that yet.</p><p>ROBI</p><p>Um, uh.</p><p>AARON</p><p>When that&#8212; yeah, and like when that happens, and like to some extent this is solvable with communication, but it&#8217;s just communication that empirically doesn&#8217;t actually happen. Um, and like maybe it makes sense to happen.</p><p>ROBI</p><p>That&#8217;s not true. Wait, that&#8217;s not true at all. When&#8212; whenever I donate to a fundraiser, there&#8217;s like&#8212; there&#8217;s like that thermometer that is like, we have raised $5,000 of our $30,000 goal.</p><p>AARON</p><p>Yeah, but like you, you don&#8217;t&#8212; you don&#8217;t have complete information. My point is that you don&#8217;t have complete information about every other&#8212; sure, you.</p><p>ROBI</p><p>You don&#8217;t need complete information. 
The scenario, the argument breaks down as long as there&#8217;s any information, like, um, you can look up like approximately&#8212; you don&#8217;t have to know the exact amounts donated to every charity for you to, um, for it to, uh, not make sense to split. You just have to know the approximate, like, neglectedness of different options. And then it only makes sense to donate to the most neglect&#8212; to the, to your best guess of the most neglected option at that time. And then that will change over time. Like, maybe it doesn&#8217;t&#8212; maybe they don&#8217;t, like, do their financial reporting right away. Maybe there&#8217;s some lag on when the charity donates&#8212; how much&#8212; uh, updates how much money they&#8217;ve raised in donations. But later donors can find out about that and then donate differently, donate to different things, even if they are following the exact same decision procedure as you, and even if they have the exact same values as you. Um, your, your values might be, like, donate to AMF if that&#8217;s the most neglected, and donate to, um, uh, uh, what&#8212; lead, uh, LEEP if that&#8217;s the most neglected. And then you donating first when AMF is most neglected will donate all of your money to AMF, and then this person donating later in the year who has the exact same values and wants the same thing will instead donate to, um, to, uh, LEEP because that&#8217;s the most neglected later in the year. But, uh, and so you&#8217;ve collectively split your donations even though you, uh, want the same thing, and it wouldn&#8217;t have made sense for you to split your donations initially. Um, what was good changed throughout the course of the year with the intervening events as more funding came in. Um, yeah, but so actually, this is.</p><p>AARON</p><p>A good point but might already be accounted for. So it&#8217;s like, there&#8217;s a reason why, like, part of my moral parliament doesn&#8217;t say, like, give to, like, Will MacAskill so he can, like, seed effective altruism as a movement. It&#8217;s like, that&#8217;s already totally funded. I already&#8212; it&#8217;s already, like, accounted for or something. Like, I still&#8212; like, I still don&#8217;t know which is the more neglected one, like, between EA Animal Welfare Fund and Lightcone. And so at some point, like, I still have this genuine uncertainty, you know what I mean? Like.</p><p>ROBI</p><p>Um, I think&#8212; well, this all comes down to worldviews. I think one or the other is the more neglected one based on how much you value animals and how and what your discount rate is for the, the far future. I think once you decide those parameters, one or the other is the one you should donate to.</p><p>AARON</p><p>Well, you might&#8212; maybe, but what if you have&#8212; are you&#8212; is that still true if you have, um, you know, uh, probability distributions over those parameters?</p><p>ROBI</p><p>Oh yes, yes. Then, then at the scale of an individual donor, you just like integrate over those and then one or the other is the one you should donate to.</p><p>AARON</p><p>I want to think about that.</p><p>ROBI</p><p>That&#8217;s for empirical uncertainty. Um, someone raised the point that moral uncertainty is like maybe more ambiguous for this than I realized. So I have retracted my claim that you should not diversify even if there&#8217;s moral uncertainty and you are following like some&#8212; Yeah. 
Strange procedure.</p><p>AARON</p><p>I&#8217;m kind of in the position of the normie that you talked about with the&#8212; like you said you didn&#8217;t want to get into it. Maybe I&#8217;m going to pressure you to. You still don&#8217;t have to. About like the normie who has like, oh, donate $10 to 100 charities. I feel like in some respects I am just in that position where like, I, um, like maybe I can do better than chance, but like my intuition is something like, um, like I am trying to&#8212; like we, like we, we maybe like share the same intuition or like, like in, in some respect, which is like, oh, I have like, and, you know, and charities and one of them is going, going to be the hit and I just don&#8217;t know which the hit is. Um, but like, maybe this doesn&#8217;t necessarily contradict anything you&#8217;ve already said, but, but.</p><p>ROBI</p><p>That doesn&#8217;t&#8212; then, then you should just like donate to whichever one you have the highest probability of is going to be the hit.</p><p>AARON</p><p>Yeah. Um, yeah, I think you&#8217;re probably right. That, to be clear, that&#8217;s my like takeaway.</p><p>ROBI</p><p>Yeah.</p><p>AARON</p><p>And yet, yeah, go ahead.</p><p>ROBI</p><p>Oh, uh, we can talk about the, the normie 100 charities thing if you want. I, I don&#8217;t have a strong opinion on that. I just, it wasn&#8217;t quite relevant to what I was saying before.</p><p>AARON</p><p>No, I, I think, I think it just, it just like, in fact, the same, that&#8217;s like, in fact, just like structurally the same position that, like the actual position that I&#8217;m in, maybe like less extreme. And like maybe I have&#8212; you&#8217;re, you&#8217;re smarter. You can do.</p><p>ROBI</p><p>Calculus.</p><p>AARON</p><p>No, no. So I&#8217;ve, I&#8217;ve a Pareto, I have a Pareto improvement over them by like having more information, maybe being smarter, but like I still, um, I&#8217;m still in the position of like sort of having 100 options and like, even though the worst one is like plausibly, at least in expected terms, like plausibly better than like, um, like even potentially the best that they&#8217;ve identified, it&#8217;s still like by my own lights, like probably like a, like a, something like a power law distribution, um, like ex post. And so like the question is like, how do I, how do you give when like most of the impact comes from basically guessing, comes from the money that you give to the like best single choice or something.</p><p>ROBI</p><p>Yeah, I, I agree with that. Um, in this case, I would either, like, satisfice&#8212; or, like, boundedly rational&#8212; like, decide what amount of time it makes sense to spend, and then, like, spend that amount of time and see if I&#8217;ve made progress. And then, like, maybe just donate all my money to whatever is my best guess at the end of that time. Or, um, maybe it doesn&#8217;t make sense for me to spend time thinking about this, uh, join a donor lottery, or, like, ask a friend you trust to like allocate your money for you, or, um, or yeah, donate to something like EA Animal Welfare Fund or Lightcone or EA Infrastructure Fund where they do more research and like allocate to different projects.</p><p>AARON</p><p>No, I was actually having a conversation with this, uh, uh, with somebody on, uh, wait, who was it? Um, I think Caleb. 
Yeah, I think Caleb on, on Twitter, Caleb Parikh, uh, and about how the thing that I actually want is like a, like a like a fund, like analogous to, like, animal welfare, a long-term future fund that just, like, in fact has my, like, values exactly. But, like, this just doesn&#8217;t exist.</p><p>ROBI</p><p>Right.</p><p>AARON</p><p>Yeah. So it&#8217;s like, and, and in fact, like, if any, like, I&#8217;m actually quite open to just anybody wants to make the case, I should just give them money and they&#8217;re like, they&#8217;re better at making decisions than me, but like we have relevantly, like, very similar values. I&#8217;m actually like pretty open to that. I just like, I, I like, I know a lot of friends and we all have like important, like, disagreements, even though it&#8217;s like sort of like nihilism of small, uh, narcissism, small differences. It&#8217;s like, yeah, these are like actually like quite important disagreements about like what the world is like or something.</p><p>ROBI</p><p>Yeah, I, I agree with you on all that. I liked your approach of, um, um, you made that manifold market according to Aaron&#8217;s views, what should Aaron donate to?</p><p>AARON</p><p>Oh yeah, yeah, yeah.</p><p>ROBI</p><p>And that there were like 20 different options. See, I&#8212; so I think you should make that market and then at the end you should just give all your money to whatever is the highest probability.</p><p>AARON</p><p>Yeah, no, I think that that&#8217;s probably the act, in fact, the right thing to do. Or like, at the end of the day, like, taking that into account as a source of evidence or something. Like, yeah, sure. I, I think I probably&#8212; right.</p><p>ROBI</p><p>Or you could even&#8212; maybe this makes sense. I don&#8217;t think it does. You could even moral marketplace and do your donations in proportional to the percentages on the, the market. Um, but I think&#8212; actually, no, I&#8217;m sure that is just worse than donating all to the top one.</p><p>AARON</p><p>Yeah, yeah, I think I&#8217;d probably agree. And yeah, so maybe&#8212; okay, should we move on?</p><p>ROBI</p><p>Yeah, um, uh, I, uh, don&#8217;t have good new thoughts on the repugnant conclusions things, but, uh, I would be happy to talk about the rest of the doc.</p><p>AARON</p><p>Uh, okay, wait, so&#8212; well, one thing I just want to, like, clear the record. So you&#8217;re totally wrong. Wait, I&#8217;m just like&#8212; to be clear, I, like, say this with love or whatever. But like, you talk about this like purple yellow card thing in the first&#8212; Yeah. Um, interview around 40 minutes for, for, I guess starting at 39:30 for people who are like, want to reference that. Yeah. Um, and like, I actually think in some like extremely autistic, um, interpretation of utilitarianism, like you&#8217;re right, but like nobody actually means that. So nobody, nobody actually thinks that like you should be like considered blameworthy or like suboptimal if you try, if like you absolutely try your best and like the ex ante. The best guess.</p><p>ROBI</p><p>I&#8217;m not claiming that. I, I, I&#8217;m not claiming that. I&#8217;m, I, I still think you should do your ex ante best guess. 
I&#8217;m just saying it&#8217;s very arbitrary based on this, like, fact of what happened to be in someone&#8217;s mind, like, that determines whether you were wrong, whether, whether you were good or bad.</p><p>AARON</p><p>No, but like, ex ante isn&#8217;t the whole point that you can&#8217;t&#8212; wait, so like, the, the thing that you don&#8217;t know is like whether like you&#8217;re this like potential, uh, like, uh, moral patient. Prefers, like, X or Y, and you don&#8217;t know that, but like, that doesn&#8217;t affect what the best choice is, ex ante.</p><p>ROBI</p><p>I agree. Yes.</p><p>AARON</p><p>And so like&#8212; Yes.</p><p>ROBI</p><p>Yes.</p><p>AARON</p><p>Yeah.</p><p>ROBI</p><p>So you, you&#8217;re saying the morality of doing your choice is, um, is based on your ex ante. Um, and I&#8217;m saying the, like, ex post, uh, morality is, like, decided by this, like, unknowable bit or something.</p><p>AARON</p><p>Yeah.</p><p>ROBI</p><p>Um, and this just makes it really, like, Like, really, universe? This is&#8212; this is the moral truth?</p><p>AARON</p><p>I don&#8217;t know, it seems like it&#8217;s like probably just like, yes, like I&#8217;m not surprised at all that this is the case.</p><p>ROBI</p><p>Okay, uh, well, anyway, if you ever come across any evidence for moral realism, let me know. I still haven&#8217;t seen any.</p><p>AARON</p><p>Um, okay, I&#8212; like, my evidence is like, eat a&#8212; eat a like Carolina Reaper pepper and then get back to me. Oh, that&#8217;s really nice.</p><p>ROBI</p><p>I love that.</p><p>AARON</p><p>Oh, okay. That is&#8212; I&#8217;m assuming it was from like a&#8212; like I, I think somebody like asked me like a similar&#8212; I forget if it was exactly this question, but like something similar. And like, actually, like at the end of the day, like you actually like experience sort of is the argument. Wait, I actually sort of stand by&#8212;.</p><p>ROBI</p><p>No, that, that&#8217;s not evidence. That&#8217;s just like, it&#8217;s just unpleasant. It&#8217;s subjectively unpleasant. That&#8217;s not objectively bad. Um.</p><p>AARON</p><p>You&#8217;re part&#8212; I mean, I feel like I&#8217;ve, I feel like I&#8217;ve had this conversation many times. I don&#8217;t want to like beat too much of a dead horse, but like you are part of the universe, right? Like, you can imagine, would it still be that if, like, every molecule had the same&#8212; it was like panpsychism is true, and also every&#8212; like, every molecule&#8212; just pretend that this is coherent or, like, possible&#8212; like, had the same experience. Like, at some point, would you concede that, like, the, the, like, the subjective, like, is just part of the objective? And in, like, when you have, like, sentient beings who are just, like, an aspect of the world.</p><p>ROBI</p><p>Um. Good question. I&#8217;m not sure.</p><p>AARON</p><p>Um, you&#8217;re like, I don&#8217;t think there&#8217;s anything like weird, like sort of like spooky written into the stars. It&#8217;s just like sentient beings are part of the&#8212; are part of the world. And we talk about like bad for one person, um, like versus bad in general. What do you mean? What would we mean by bad in general is like bad for the world and like they&#8217;re just part of that. Yeah, sure.</p><p>ROBI</p><p>Um, maybe you have a good point. I think you&#8217;re moving the goalposts to, like, something less than what, uh, I understand moral realism to be.</p><p>AARON</p><p>Okay. 
I mean, maybe I just have a different conception of moral&#8212; Yeah, I mean, I don&#8217;t know. It&#8217;s, it&#8217;s, um, like, I guess maybe I want to, like, say that, okay, like, I actually, um, there is some additional step, which is like, uh, what is, like, genuinely good or bad. Is just, is like, is like identical. And this is like a substantive claim. It&#8217;s like identical to what is good or bad for the world or something.</p><p>ROBI</p><p>Okay, great.</p><p>AARON</p><p>And what&#8217;s your evidence? My evidence is just like&#8212;.</p><p>ROBI</p><p>Yeah, that&#8217;s right. You don&#8217;t have any. No, because it&#8217;s like, it&#8217;s like&#8212; Source, I made it up.</p><p>AARON</p><p>I made it up, but it&#8217;s, I made it up because it&#8217;s true.</p><p>ROBI</p><p>Okay, well, I can&#8217;t argue with that.</p><p>AARON</p><p>No, but, um, uh, okay, maybe there&#8217;s some like logical, I feel like this is gonna get into like a highly semantic, like a highly, um, yeah, like semantic, uh, discussion about like what we mean by words.</p><p>ROBI</p><p>Um, yeah, uh, maybe, maybe we&#8217;re, uh, maybe we&#8217;re not gonna convince each other. Uh, I&#8217;m happy to talk about it more if you want, but maybe not productive.</p><p>AARON</p><p>Yeah, maybe, maybe we&#8217;ll&#8212; I feel like we&#8217;ve already had this discussion, so yeah. Um, oh yeah, so there&#8217;s the, there&#8217;s the total happiness point. Oh, and then you said&#8212; so I actually totally would think that interpersonal utility comparisons are, are legit. Um, that doesn&#8217;t&#8212; so ignore that part.</p><p>ROBI</p><p>Um, uh, you like ordinal but not cardinal interpersonal utility comparison.</p><p>AARON</p><p>So, um, me, so No, cardinality still exists, just not necessarily, um, it&#8217;s not necessarily well, well modeled by, by addition over the real numbers. And this is like, did you see my&#8212; are you familiar with my EA forum post where I like, I spent way too long on this, but it&#8217;s like, uh, like effective altruism should accept that some suffering cannot be offset. Um, and then this actually&#8212; I, I.</p><p>ROBI</p><p>I decided it was all wrong, and I, I didn&#8217;t read that carefully, but yeah, you&#8217;re okay.</p><p>AARON</p><p>Okay, I think this is like&#8212; okay, I will&#8212; okay, I feel like I.</p><p>ROBI</p><p>Will&#8212; I will, I will engage further if you want, but I, I don&#8217;t&#8212; I think I&#8217;m gonna, uh, actually, I think I just read the first half and found lots of problems with it.</p><p>AARON</p><p>Okay, sure. I like don&#8217;t believe you, or like, I don&#8217;t believe that your problems The problems you identified are legit. Um, or like, I think, I think&#8212;.</p><p>ROBI</p><p>I don&#8217;t think they would convince you.</p><p>AARON</p><p>Oh, okay. Maybe you, if you want to, like, uh, if you like, yeah, maybe we can have another discussion, like, or which do you, do you have them like top of mind enough where like you could pull up the link and then go through them?</p><p>ROBI</p><p>No. Um, I remember this post. I think I read the first one-third or half and decided it wasn&#8217;t worth the time to comment.</p><p>AARON</p><p>Oh, wild. That&#8217;s funny, because I&#8217;m like, I&#8217;m like 99% sure in the first&#8212; the part 1 is true, and like less sure about part 2. 
Okay, 1 part is like, it&#8217;s like the logical thing, which is like, uh, it could be the case, it&#8217;s like not illogical, uh, that under utilitarianism, um, some off suffer&#8212; some suffering cannot be offset. And this part 2 is like, in fact, some suffering cannot be offset.</p><p>ROBI</p><p>Uh, okay, uh, I might have&#8212; it might have been I agreed with that, and then Scroll past it to the second part. It was like the second half.</p><p>AARON</p><p>Okay, that, that makes, that makes more sense. Like, I, I am, I&#8217;ve actually updated against, uh, the, like, not below 50%, but like from like, I don&#8217;t know, like 80 to 65% or something after.</p><p>ROBI</p><p>Yeah, fair enough.</p><p>AARON</p><p>It&#8217;s a, it&#8217;s literally on my to-do list to like go back over the comments. And there&#8217;s like some good points that are like, I, that are just actually like really conceptually hard for me to think through. Um, that I like, it&#8217;s, this is literally on my Google Tasks and I needed to do it.</p><p>ROBI</p><p>Um, what&#8217;s your day job these days?</p><p>AARON</p><p>Oh, so as of now, arb&#8212; I&#8212; arb research. Arb something. Yeah, this is like, as of like a week ago. Thank you.</p><p>ROBI</p><p>Okay, yeah, um, I am not sure, uh, how much time that takes up and whether you should&#8212; I don&#8217;t know. Yeah, um, if you get to it, um, Yeah, if I&#8217;m, um, maybe next time I&#8217;m on the subway or something, I will, um, try to go back to your post and get through it. Uh.</p><p>AARON</p><p>If you want, I don&#8217;t want to, like, pressure you. Um, plausibly, yeah, plausibly the&#8212; like, wait, maybe, maybe I&#8217;ll&#8212; maybe after this, I won&#8217;t try to find it now, but, like, there&#8217;s a&#8212; there&#8217;s, like, a single comment there that is, like, a bunch of&#8212; that is, like, cruxy or something, and, like, maybe I&#8217;ll try to link that so I can, like, make clear, like, what my current uncertainties or something.</p><p>ROBI</p><p>Amazing. Yes, you can send me a link to the comment.</p><p>AARON</p><p>Okay, I will&#8212; let me try to do that. Okay, cool. Uh, what else should we talk about, if anything?</p><p>ROBI</p><p>Uh, is there anything else on the doc before the repugnant conclusion stuff?</p><p>AARON</p><p>Those stipulated happiness&#8212; I forget this one. Paper&#8212; Uh, that&#8217;s like not&#8212; no, so I think, I think the main additional one is, um, the very repugnant conclusion that you just bite the bullet. I say no, don&#8217;t bite the bullet.</p><p>ROBI</p><p>Um, I think I still endorse Bite the Bullet. I would have to read through it.</p><p>AARON</p><p>Yes, so like this one, um, let me just remind myself of what I said when I was originally listening. Oh yeah, so no, this is, this is honestly just my&#8212; this is just like the paper again, or the EA forum post again. Um, okay, not, not just, but like very similar, like very closely related. Um, and like, I think the main point is like, or like a main point is, um, that it&#8217;s not unlike the repugnant conclusion where you could just&#8212; wait, no, no, even the repugnant&#8212; oh, so sorry, this actually relates to number 4. So even though we&#8217;re probably in conclusion, I&#8217;ve come to believe, even if, even though I like tentatively am willing to bite the bullet on that, it doesn&#8217;t actually follow from the premises of utilitarianism. It&#8217;s not like a logical thing. 
Like, you can&#8217;t just&#8212; like, it&#8217;s conceptually possible that there&#8217;s like happiness so great that it, that it like isn&#8217;t consi&#8212; like moral&#8212; it&#8217;s like morally, uh, more important to create like one amount of that than any amount or any like, you know, a number of like time being units of some like more normal happiness. This is like a possibility to consider.</p><p>ROBI</p><p>Um, uh, is this like a finitist objection? Like, there could be that much if the universe were like bigger, but the universe is not big enough to have this much utility.</p><p>AARON</p><p>We don&#8217;t in fact know, like, there&#8217;s, um, we&#8217;re so like the&#8212; in part because of like the radical, I guess mostly because of like our radical like uncertainty about like the nature of like qualia and experience. It&#8217;s just like not&#8212; so like the mapping between like qualia and like moral value or&#8212; um, what am I trying to say? Um, it&#8217;s just like, it&#8217;s like not written anywhere that like the moral value of an arbitrary qualitative state like has to be&#8212;</p><p>ROBI</p><p>Can go higher? Can&#8212; like there might be a ceiling somewhere? You&#8217;re saying?</p><p>AARON</p><p>Um, conceptually there&#8217;s like probably not a ceiling&#8212; wait, I&#8217;m just like trying to remember, like, the right words to say this. So like conceptually there&#8217;s like probably not a ceiling, but it&#8217;s just like nothing breaks if you imagine that like one pigeon, like&#8212; like the state of a pigeon eating an apple, like, just doesn&#8217;t&#8212; like, like, there&#8217;s just, um, morally, like&#8212; no, like, uh, any, any, like, real number, uh, number of, like, hours of pigeon apple hours, like, are, like, morally, um, less important than creating, like, one, like, hour of, like, I don&#8217;t know, like, jhanas or something. And, like, it could&#8212; like, on the, on the merits, like, you can debate this. Like, is this, like, like, okay, like, is it&#8212; are in fact, like, 100,000 pigeon hours or, like, way more than that, like, morally equivalent? But like nothing under, like in the utilitarian core, as I call it, like breaks. If you, if you like, um, if in fact the answer is no, like for any, for any number you pick of pigeon apple hours, like, like the moral value of the, of like the jhanas, like situ&#8212; uh, experience, like still&#8212; Yeah, sure, sure.</p><p>ROBI</p><p>I agree with that. Um, uh, is this where the name pigeon hour came from?</p><p>AARON</p><p>No pigeon. That was, uh, no, I just like pigeons.</p><p>ROBI</p><p>Yeah, okay, um, yeah, uh, have I told you about, um, what&#8217;s it called, unique entity ethics before? Have we talked about this?</p><p>AARON</p><p>I don&#8217;t think so, I don&#8217;t think so. Go, go for it.</p><p>ROBI</p><p>Um, okay, there, uh, this is a&#8212; have you read Unsong by Scott Alexander?</p><p>AARON</p><p>No, sorry.</p><p>ROBI</p><p>Oh, okay. Um, so in Unsong there&#8217;s this, um, uh, do you know what theodicy is? Like attempts to resolve, like, how, uh, there can be suffering in the world even if God is omnipotent. Yeah. Um, so there&#8217;s this, uh, there&#8217;s this theodicy theory, uh, proposed by like some Christian philosopher, uh, which also happens to show up in, um, uh, Scott Alexander&#8217;s fiction novel.
So, um, the question is, uh, how can there be suffering in the, the universe if God is omnipotent and benevolent and omniscient? Surely he would like not let us suffer. He&#8212; because he&#8217;s able to stop it. And he doesn&#8217;t want us to suffer. Um, uh, so, um, in the book, some character talks to God and finds out God created&#8212; like, uh, Job is like cursed with boils and plague, and he asks God, like, why didn&#8217;t you make me happy? Um, and God says, I did make a universe where you&#8217;re happy. Then I made a universe almost exactly like that perfect universe where everyone is happy. Then it made a universe almost exactly like that one, and then like so on and so on. Uh, I made every universe with net positive utility. Uh, and then yours is like the farthest one from the perfect universe, uh, where, where the universe is still good. So your life is filled with suffering, but it will ultimately be worth it. Um, uh, so Now you could imagine God just creates like an infinite, infinite, like uncountably infinite copies of the, um, the, the perfect universe where everyone is having the same experience over and over. Um, but maybe duplicates don&#8217;t count, or like the value, the moral value doesn&#8217;t increase linearly.</p><p>AARON</p><p>Um, so like, I mean, maybe I don&#8217;t know.</p><p>ROBI</p><p>I think&#8212; yeah, this is debated. So let me, let me give you an argument for why it, it doesn&#8217;t scale linearly. Um, although I think this might depend on more, um, physicalism than you&#8217;re willing to endorse. Okay, suppose I take&#8212; suppose I take your, um, your, uh, your brain and I scan it and I imprint it, imprint it onto like a silicon wafer. I&#8217;ve got like a one-atom-thick silicon wafer, and then I add some circuits on it so that when I plug in this chip, uh, it runs neural pro&#8212; like some electrical processes. So it&#8217;s like perfectly simulating your brain. Um, uh, can we assume, or like, bear with me for a sec. Um, can we agree if like the, the mind that is experiencing the things because these like electrons are flowing, has the exact same experiences as you do, it has like the same moral value as you, or like&#8212; [Speaker:HOWIE] Yeah, conditional.</p><p>AARON</p><p>On having the same experience, then it has the same moral value.</p><p>ROBI</p><p>[Speaker:David] Sure, yes, uh, or, or let&#8217;s just say it has some unit of moral, moral value, um, but like it has some experiences, it has some moral value. Okay, it&#8217;s like one atom thick and there are like circuits etched into it. Okay, now, um, uh, now let&#8217;s say I take an anatomically precise 3D printer and I just like stack another atom on top, uh, to double the thickness. So it&#8217;s a 2-atom-thick, uh, chip. Um, like everywhere there was a silicon atom, there&#8217;s now another silicon atom on top of it. And everywhere there was a copper atom, there&#8217;s another copper atom. And now it&#8217;s, it&#8217;s a chip exactly as before, except twice as thick. It still runs the same processes, still has the same experiences. Okay. Um, is this still the same moral value as before?</p><p>AARON</p><p>I have no idea. Like, maybe. It&#8217;s like, it&#8217;s like, it&#8217;s a quasi-empirical question. Like, are there two&#8212; are there two beings now, or are there one? Like, streams of experience or one? And like, I, I just don&#8217;t know the answer.</p><p>ROBI</p><p>That&#8217;s a good point. 
Almost everyone says yes, this is the same as the, the, the previous, like, before I doubled the thickness. Um, okay, now the gotcha is supposed to be here. So like, I take this two-atom-thick thing that&#8217;s having the same experiences as before, and then I separate it by like one angstrom. So now there&#8217;s two copies. I&#8217;ve done nothing except like split it apart. Is this now suddenly twice as valuable.</p><p>AARON</p><p>As&#8212; [Speaker:HOWIE] I mean, the answer is like, the answer is like, maybe it&#8217;s like, it&#8217;s not inconceivable that like, that you have like some version of computational functionalism, just like [UNCLEAR] is just like, it&#8217;s like a brute fact of the universe. Like you have to separate functional, you have to separate like instantiations of an algorithm by like some amount of like air.</p><p>ROBI</p><p>[Speaker:Howie] Okay, great. So, so just to This didn&#8217;t work on you, but a lot&#8212; uh, many people find this like a knockdown argument against, like, linear scaling of identical experiences of, like, moral value. So, um, uh, many people would say, &#8220;Okay, this argument conclusively demonstrates, yeah, 2 pigeons eating 2 apples for 2 hours is not twice as valuable as, um, 1 pigeon eating an apple Uh, yeah, it&#8217;s somewhere between 1x and 2x. Like, there&#8217;s some kind of discount.</p><p>AARON</p><p>Like, the discounting thing is like pretty&#8212; seems pretty implausible. Like, the closer thing to like plausible in this, in this like space is like, um, it&#8217;s like you, you want to be able to say&#8212; it&#8217;s like, uh, this gets into like weird multiverse stuff, but like you want like a genuine&#8212; like you want, um like two copies of the quote unquote, like the exact same thing. And there&#8217;s like sort of metaphysical question, which is like, can you have two copies of the same thing? Like if, if there are just like two copies of, of like, uh, of like a set. So like, I think at the sort of low level physics, &#8216;cause like my understanding is like, it&#8217;s just properties all the way down. And like, can you even have two different, like I quote unquote, like identical pigeons? Or something, because like they&#8217;re not identical with respect to like how their fields are interacting with like, uh, you know, like various, uh, like other gravitational&#8212; [Speaker:HOWIE].</p><p>ROBI</p><p>Yes, wait, okay, um, suppose I just take, uh, I say there&#8217;s this pigeon eating this apple for 1 hour, uh, let&#8217;s suppose I make an atom-for-atom copy, uh, it&#8217;s so far away they&#8217;re like outside of each other&#8217;s visible universe.</p><p>AARON</p><p>[Speaker:Howie] But like, like, so, so like, like, um, I think that I haven&#8217;t thought about this very much because I don&#8217;t think it holds, but like, um, I think the idea is like, oh, like physical position, like, is just a property that matters here. And so like, you can&#8217;t&#8212; basically, like, the idea of the argument is like, you, you can&#8217;t actually&#8212; you can&#8217;t actually get to like two pigeons, like two identical things. Like, in some intuitive sense you can, but it&#8217;s in like genuine metaphysical sense you can&#8217;t, because like, okay, yes, exactly the exact same properties, it&#8217;s just like one item.</p><p>ROBI</p><p>[Speaker] Right. So I was going to say, like, yeah, suppose they&#8217;re so far apart, they&#8217;re like, they can&#8217;t causally interact with each other. 
But then you say, okay, like, yeah, but I don&#8217;t know, bird&#8217;s eye view of the universe, there&#8217;s one bird over here and one bird over here. And like the location in the universe relative to the other bird is like a property that is different between them. Okay. What if I have an infinite grid of birds and like, 80 billion light years apart from each other so that for any individual bird, there&#8217;s an equal number of birds in every direction. And so they are in fact, like, identical in that.</p><p>AARON</p><p>Oh, uh, I love it. Um, you know, I, I, I, some, like, the answer is like, yes, like maybe that makes the difference. And like in this, in this situation, there&#8217;s in fact only&#8212; so I, I think the fundamental thing that I believe is like, um, there is like, it&#8217;s like a fact of the matter about like the number of streams of experience&#8212; or like, there&#8217;s something about, like, how much qualia is going on, or like what qualia exists in the world. Um, and like, physics can, like, pretend to, like, have an answer to this, but like, there&#8217;s either&#8212; either there&#8217;s one stream or there&#8217;s many. And like, maybe it&#8217;s a brute fact of the universe that if you have, like, structurally&#8212; like, the structural perfect, like, lattice of pigeons, there&#8217;s only one stream. But this seems, like, very unlikely to me.</p><p>ROBI</p><p>Uh, okay, yeah. Um, I don&#8217;t know if we can, um&#8212; I don&#8217;t know that there&#8217;s any way to empirically resolve this. No, I don&#8217;t think it comes down to&#8212; I think it comes down to, like, people believe different things about this, uh, because of arguments like that, like, one-atom-thick etched silicon circuit thing. Yeah. And people have different intuitions, uh, and maybe you just can&#8217;t test this.</p><p>AARON</p><p>Yeah. No, I mean, the, um, I, I, it&#8217;s like a little bit&#8212; it&#8217;s like interesting to me that people&#8212; and I think objectionable, I guess, also not just interesting&#8212; that, like, people will go from, uh, the direction of like, oh, that there&#8217;s this like wafer argument to&#8212; and therefore there&#8217;s like less than 2 streams of experience as opposed to the&#8212; or sorry, less than 2 times like the amount of moral value as opposed to the, the like other logical direction, which is like, which is like clear, like there can be 2 identical twins, therefore this, this thought experiment is nonsense. Um, like, or like there, there can be 2 identical twins who are like clearly like have their own&#8212; like conditional on them, on us not being radically wrong about what the world is like, there are just like two&#8212; there are in fact two persons there, like two streams of experience, two sets of qualia. Like they can&#8212; and so like this demonstrates that the thought experiment like doesn&#8217;t&#8212; is like, is like tricking me somehow. Like I think that&#8217;s the correct inference.</p><p>ROBI</p><p>Yeah, some people, some people see this and they&#8217;re like, okay, that seems fine. Yeah. So now there&#8217;s just two&#8212; it&#8217;s just, it&#8217;s, it&#8217;s&#8212; that&#8217;s right. It&#8217;s twice as valuable when you separate them by one Planck length. Um, you&#8217;ve made it twice as valuable.</p><p>AARON</p><p>No, that&#8217;s the other option. Yeah. Um, cool.
I&#8217;ll&#8212; FYI, so I&#8217;m actually happy to keep talking, but I&#8217;m like at 6:30, I&#8217;m probably gonna hop off.</p><p>ROBI</p><p>Or, uh, yeah, I should get going. Um, I can send you some, uh, I&#8217;ll send you two links. I&#8217;ll send you one, uh, I don&#8217;t know if you know Richard Bruns. Um, he works at JHU. He wrote a blog post on unique entity ethics. I&#8217;ll send you that. I&#8217;ll send you an EA forum comment thread about the Moral Parliament tool and point out where in the Moral Parliament tool this&#8212; these decision rules are. Oh, and maybe something about voting mechanics. Send me the transcript. I&#8217;ll listen to it and then figure out what would be the best&#8212;.</p><p>AARON</p><p>Okay. Okay, cool. And I will try to send you&#8212; I just wanna identify like a single comment thread that is like good to identify about from my, from my forum post.</p><p>ROBI</p><p>Great. Uh, and then I will see you again in 13 months.</p><p>AARON</p><p>Yes. Maybe even sooner. Maybe we&#8217;ll run into each other in real life. You never know.</p><p>ROBI</p><p>Uh, likely. Um, are you going to EAG?</p><p>AARON</p><p>No. If you&#8217;re ever in DC, you can look&#8212;</p><p>ROBI</p><p>Uh, likely in March.</p><p>AARON</p><p>Yeah. Okay. Well, you hit me up. Okay, cool. See ya.</p><p>ROBI</p><p>Awesome. See you later.</p><p>AARON</p><p>Take care. See.</p>]]></content:encoded></item><item><title><![CDATA[Vegan Hot Ones | EA Twitter Fundraiser 2024]]></title><description><![CDATA[Featuring Max Alexander and Robi Rahman (and not Aaron)]]></description><link>https://www.aaronbergman.net/p/vegan-hot-ones</link><guid isPermaLink="false">https://www.aaronbergman.net/p/vegan-hot-ones</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sun, 25 Jan 2026 01:56:27 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/185685724/5fc7209421a195ab622cf8a4457c7ccf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>A great discussion between my two friends Max Alexander of <a href="https://scoutingahead.substack.com/">Scouting Ahead</a> and <a href="https://www.robirahman.com/">Robi Rahman</a> (in response to a fundraiser that we wrapped up more than 13 months ago)</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/AaronBergman18/status/1999918243205779864?s=20&quot;,&quot;full_text&quot;:&quot;Ok it has been a little while but goddammit we are doing the things\n\nPresenting: (Vegan) Hot Ones ft. 
<span class=\&quot;tweet-fake-link\&quot;>@robi_rahman</span> and <span class=\&quot;tweet-fake-link\&quot;>@absurdlymax</span>\n\nSpicy vegan nuggets, even spicier takes &#127798;&#65039;&#129397;\n\n(YouTube link below because Twitter doesn't seem to l our video file)&quot;,&quot;username&quot;:&quot;AaronBergman18&quot;,&quot;name&quot;:&quot;Aaron Bergman &#128269; &#9208;&#65039; (in that order)&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1876854317694517248/a4tSFXMr_normal.jpg&quot;,&quot;date&quot;:&quot;2025-12-13T19:04:00.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/G8Ei8T_WIAEyW8X.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/vStpevLuN8&quot;}],&quot;quoted_tweet&quot;:{&quot;full_text&quot;:&quot;It&#8217;s back and it&#8217;s not exactly bigger than ever yet but hopefully it will be!\n\n@SpacedOutMatt @absurdlymax @Laura_k_Duffy and I are once again raising money for the EA Animal Welfare Fund in this year&#8217;s Terminally Online EA Giving Season Fundraiser! \n\n(Link below)&quot;,&quot;username&quot;:&quot;AaronBergman18&quot;,&quot;name&quot;:&quot;Aaron Bergman &#128269; &#9208;&#65039; (in that order)&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1876854317694517248/a4tSFXMr_normal.jpg&quot;},&quot;reply_count&quot;:1,&quot;retweet_count&quot;:3,&quot;like_count&quot;:35,&quot;impression_count&quot;:6572,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><h1>Video</h1><div id="youtube2-yYAs5Xbnwvk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;yYAs5Xbnwvk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/yYAs5Xbnwvk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h1>Transcript </h1><p><em>(AI-generated, likely imperfect)</em></p><p>MAX</p><p>Hello to the internet, maybe.</p><p>ROBI</p><p>Hey internet.</p><p>MAX</p><p>Um, I&#8217;m Max.</p><p>ROBI</p><p>I&#8217;m Robi.</p><p>MAX</p><p>Um, thank you all for donating, especially you. Um, so we&#8217;re gonna do a vegan version of Hot Ones. I actually don&#8217;t know if the camera can properly see. I mean, we took a photo as well, so someone will see it eventually. Um, but I have some very not spicy questions for you, and I hope.</p><p>ROBI</p><p>They get spicy, right?</p><p>MAX</p><p>Do you think it&#8217;s a little spicy?</p><p>ROBI</p><p>Um, I don&#8217;t know. Anyway.</p><p>MAX</p><p>Yeah, they&#8217;re not, you know, I&#8217;m sure someone will judge me greatly for this online.</p><p>ROBI</p><p>Um, yeah, the, uh, the food is spicy at least, or it gets a bit spicy. So, um, we&#8217;ve got, um, uh, we&#8217;ve got field roast, uh, buffalo wings without the buffalo sauce. We&#8217;ve got some spices on them. 
We&#8217;ve got, uh, Jack and Annie jackfruit nuggets and Impossible fake chicken nuggets, uh, with&#8212; my god, Sriracha, um, spicy chili crisp, Calabrian hot chili powder, habanero hot salsa, Scotch bonnet puree, Elijah&#8217;s Extreme Regret Screamin&#8217; Hot, um, Scorpion Reaper hot sauce.</p><p>MAX</p><p>Cool.</p><p>ROBI</p><p>And, um, some, uh, Dave&#8217;s Hot Chicken Reaper seasoning and Carolina&#8212;</p><p>MAX</p><p>I&#8217;m going to have a much worse time than you are.</p><p>ROBI</p><p>I&#8217;m looking forward to this.</p><p>MAX</p><p>Yeah, uh, I guess I think in tradition of hot ones, um, the guest, um, introduces themselves and like says a background. So I don&#8217;t know if you want to&#8212;</p><p>ROBI</p><p>Okay, yeah, um, let&#8217;s see, um, I&#8217;ve been involved in EA for&#8212; well, I think the first meetup I went to was 2017. Um, they, uh, EA was much smaller then and, uh, we didn&#8217;t have our own meetups. They were, um, the DCEA meetup group was, uh, combined with a vegan feminist environmentalist&#8212; [Speaker:MAX] That&#8217;s cool. [Speaker:ROBI] &#8212;something meetup. [Speaker:MAX] Yeah, nice. [Speaker:ROBI] Eventually we, we had enough EAs that we, you know, spun off our own, uh, effective altruism only thing. [Speaker:MAX] Cool. [Speaker:ROBI] Yeah, um, yeah, but, uh, that was fun. Um, that was also the first year I played giving games. Um, and then, uh, I was, I was kind of a global health person back then, but, um, um, Matt Ginsel was way ahead of his time, and he, um, like in the Giving Games, you get to&#8212; you like play all the games like poker or like whatever, whatever, and you win the chips, and then at the end you put the chips in, into the box for whatever charity you think should get the money. And, um, he surprised me by donating to pandemic prevention, which wasn&#8217;t even on my radar then. And then, like, 3 years later, he was totally right.</p><p>MAX</p><p>Yeah, unfortunately.</p><p>ROBI</p><p>Yeah. Uh, yeah.</p><p>MAX</p><p>And now you work at Epoch.</p><p>ROBI</p><p>I work at Epoch. Yeah. Um, I do AI forecasting, basically. My job is kind of to figure out when everyone else&#8217;s job will be automated. Delightful.</p><p>MAX</p><p>You know? Yeah. Cool. Um, yeah, I guess maybe our very lukewarm, uh, question is, uh, which do you think is better, Huel or Soylent?</p><p>ROBI</p><p>Um, I think I prefer Soylent for the drinks.</p><p>MAX</p><p>[Speaker:Robi] Interesting.</p><p>ROBI</p><p>[Speaker:Max] But, um, Huel Hot Savory was great. They&#8217;ve recently rebranded, right? [Speaker:ROBI] I don&#8217;t know. [Speaker:MAX] Hot Savory to, um, Instant Meals or like, something like that? I haven&#8217;t bought it in a while.</p><p>MAX</p><p>[Speaker:Robi] I, yeah, I bought some for the fundraiser.</p><p>ROBI</p><p>[Speaker:Max] Should we eat some lukewarm nuggets to go with the lukewarm questions?</p><p>MAX</p><p>[Speaker:Robi] Yeah, yeah, exactly.</p><p>ROBI</p><p>[Speaker:Max] So let&#8217;s start off with the Chili Crisp, um, uh, buffalo wing. [Speaker:ROBI] Okay. [Speaker:MAX] Cheers.</p><p>MAX</p><p>Yeah, that&#8217;s not that spicy.</p><p>ROBI</p><p>[Speaker:Robi] Eat the whole thing.</p><p>MAX</p><p>[Speaker:Max] Oh no.</p><p>ROBI</p><p>I&#8217;m sorry. It&#8217;s so far. Chicken nugget. [Speaker:ROBI] Yeah, um, yeah, I don&#8217;t think I would&#8212; I don&#8217;t know if I would notice that&#8217;s not chicken.</p><p>MAX</p><p>[Speaker:Max] Oh yeah.
For sure.</p><p>ROBI</p><p>I mean, I&#8217;m not a huge fan of chicken nuggets anyway, but yeah. Um.</p><p>MAX</p><p>Cool. Okay, um, let&#8217;s see.</p><p>ROBI</p><p>Uh.</p><p>MAX</p><p>Okay, well, this one&#8217;s a little spicy at least. Uh, what&#8217;s one thing you think everyone in EA is getting wrong?</p><p>ROBI</p><p>Um, I&#8217;m kind of like very EA orthodox, and I think EA is like basically right about everything. Um, the The thing I think EAs get wrong&#8212; I think the, um, I don&#8217;t believe in the, like, perils of maximizing stuff, or like&#8212; like, maximizing does have the problems that they point out, but like, I don&#8217;t think anyone has a good argument that, like, you should not maximize.</p><p>MAX</p><p>Sure.</p><p>ROBI</p><p>I think all of the, like&#8212; I don&#8217;t know, I just bite the bullet. I&#8217;m taking everything to the&#8212; like, if the principles are right and you have the facts, yeah, the conclusion is what it is.</p><p>MAX</p><p>Okay, well, that&#8217;s good. I think I have a question later that&#8217;s like Is the repugnant conclusion actually repugnant?</p><p>ROBI</p><p>I&#8217;ll have some thoughts on that. Yeah, I think I basically disagree with Holden Karnofsky and Scott Alexander on, like, you should get off the crazy train if it seems too weird. Like, no, if the reasoning checks out, you should do what you should do.</p><p>MAX</p><p>Cool.</p><p>ROBI</p><p>Yeah, I kind of think&#8212; this might be a bit spicy&#8212; Okay. I kind of think, um, they are&#8212; I slightly suspect they&#8217;re just saying that as cover, like after the FTX scandal and whatnot. Like, no, no, no, no, we don&#8217;t really believe in that stuff where you like take it to the extreme and like, yeah, yeah, yeah.</p><p>MAX</p><p>That is plausible. I don&#8217;t know Holden, so I cannot say for sure.</p><p>ROBI</p><p>Neither do I, but I&#8217;d like to think he&#8217;s smarter than to&#8212; sure.</p><p>MAX</p><p>Yeah, yeah. Um, cool. Yeah, though Yeah, I mean EA is a whole big thing, so, you know, um, cool, that&#8217;s a good one. That&#8217;s a&#8212; if you brought that to a party, you know, you would start a 3-hour discussion, sort of.</p><p>ROBI</p><p>No, I think that would be like, um, a 30th percentile EA spicy opinion.</p><p>MAX</p><p>Well, yeah, but then the other people, you like start the whole thing and they, uh, yeah, cool.</p><p>ROBI</p><p>Um, cool.</p><p>MAX</p><p>Oh wait, should we eat another thing first?</p><p>ROBI</p><p>Yeah, how many questions are there?</p><p>MAX</p><p>16? I have 16, but some of them are like not&#8212; Yeah, 2, 3 questions. Okay, cool.</p><p>ROBI</p><p>Um, yeah, uh, so you spoke at UHG once, right? I&#8212; not&#8212; I wasn&#8217;t quite a speaker. I was a, um, I ran a session. Yeah, it was, but it was, um, it was like a forecasting interactive exercise. So it was a, like, short presentation, and then we did a workshop.</p><p>MAX</p><p>Cool.</p><p>ROBI</p><p>Yeah, I think the EAG team has been trying to move away from static content and lectures, because EA has this meme of, like, you don&#8217;t go for the content, you go for the one-on-ones. Or a lot of people say, like, well, why should I watch a talk when my time is scarce and I could just watch it on YouTube anyway at 2x speed, thereby saving all this time? I don&#8217;t think people would&#8212; I don&#8217;t think the counterfactual is actually watching. I think it&#8217;s just never seeing the talk. 
Exactly.</p><p>MAX</p><p>Yeah.</p><p>ROBI</p><p>But, but, um, and there have been some really good talks at the EAGs. Kevin Esvelt at EAGxBoston was incredible. Yeah, very, very good biosecurity presentation. But yeah, so I offered to&#8212; or like was, you know, talking to the content team about like they might have wanted a presentation, but they didn&#8217;t want it to just be a lecture. I could just give an Epoch spiel, but I think it was more fun with, you know, people who are in current views.</p><p>MAX</p><p>[Speaker:Max] Cool. Yeah, I guess if you were to do it now, has anything changed or is it mostly the&#8212;</p><p>ROBI</p><p>Well, I would fix&#8212; one of my forecasting questions had a loophole. I think we were&#8212; so Matthew Barnett is another AI forecasting guy. He has just left Epoch to form a startup. Spicier than anything I&#8217;m doing. I can talk about that later.</p><p>MAX</p><p>[Speaker:Max] Yes, that&#8217;s a good question actually.</p><p>ROBI</p><p>Well, I&#8217;ll finish. Um, Matthew and I, you know, uh, had some questions. We adapted them for the EAG format. Um, I think I made some last-minute changes and then overlooked a loophole, which was&#8212; so the, um, I don&#8217;t remember what it was exactly, but it, it was something like one of the questions ended up being like&#8212; so there were 3 big questions of like different domains. Um, one was like superhuman in math, one was like, um, do all like household tasks by inventing robotics, and one was, um, um, synthetic biology capabilities. And one, uh, the last question was something like, um, when will it be possible to, with the aid of AI, invent a virus at least&#8212; like, synthesize a virus at least as dangerous as COVID or something. But I think I edited it last minute and then left some loophole where someone raised their hand and was like, &#8220;Well, you can already acquire a sample of a virus at least as dangerous as COVID by getting a sample of COVID.&#8221; Simply just have someone sneeze and then deliver it. So AI can already do that. But that&#8217;s not the point of the question. No, it was something like, &#8220;When will a rogue terror&#8212; when will it be possible for a rogue terrorist group with the aid of AI to get a sample of a virus at least as dangerous as COVID?&#8221; And they can already get COVID. Yeah, yeah, yeah, yeah.</p><p>MAX</p><p>Uh, yeah, cool.</p><p>ROBI</p><p>Uh, that wasn&#8217;t the exact question, but something like that. Yeah, nice.</p><p>MAX</p><p>Um, cool, that&#8217;s very fun. Yeah.</p><p>ROBI</p><p>Um.</p><p>MAX</p><p>Let&#8217;s see, uh, I guess, yeah, so if you kind of weren&#8217;t in EA now, is there like a career you would&#8212; do you have like a dream career that you&#8217;re like, ah, it&#8217;s just not impactful enough?</p><p>ROBI</p><p>So, um, that is a great question. I really like data science. Um, this is a little suspicious. Um, like maybe I would do the same thing anyway. But yeah, I mean, I previously had a different job. I was like a construction engineer. But it was kind of boring and I wanted to switch to data science anyway. And then I found out&#8212; well, I was already considering it and then 80K was also a factor. Work on AI and all this stuff. It&#8217;s going to be really impactful. I had an old non-EA job and was just like earning to give.
Um, but it like wasn&#8217;t much direct impact, and I wasn&#8217;t earning that much money, so like, um, and also I was like bored at my job, so I probably would have quit anyway. Um, but like that, uh, I ended up quitting I think in 2020, um, around when the principles came out, and that like influenced me a bit. Like that also spurred me to, you know, get into this.</p><p>MAX</p><p>Yeah, cool. Uh, should we do another one?</p><p>ROBI</p><p>Sure, yeah. Can I&#8212; interest you in, uh, an Italian spicy chicken nugget. Wait, this is, um&#8212; Oh yes, yeah, this is the, uh, field roast nugget but with a Calabrian hot chili powder.</p><p>MAX</p><p>Cool.</p><p>ROBI</p><p>Um, which is, uh, I believe it&#8217;s the spiciest thing from Europe. Um, nope, Scotch bonnets are not from Scotland. They&#8217;re, they&#8217;re named that because the pepper is in the shape of that, like, Scottish hat. Oh, okay. I&#8217;m putting some more spice on mine, but, uh, this stuff tastes really good. Like, well, apart from spiciness, but, uh, cool. Yeah.</p><p>MAX</p><p>Cheers.</p><p>ROBI</p><p>Cheers.</p><p>MAX</p><p>I can see how that&#8217;s the spiciest thing in here. Do you think it&#8217;s spicy, or&#8212; No.</p><p>ROBI</p><p>But yeah, um, This chili tastes so good, I put it on everything. It&#8217;s like great on like risotto, arancini, pizza, pasta.</p><p>MAX</p><p>Yeah, yeah, I mean, I could see why you would do that. And that would&#8212; I don&#8217;t need a lot of spice and it&#8217;s not like super, uh, yeah, but it&#8217;s nice. Cool. Um, let&#8217;s see. Yeah, so, well, this one&#8217;s a little spicy. What do you think a big mistake people in AI safety you&#8217;re making right now?</p><p>ROBI</p><p>[Speaker:Robi] Oh, I don&#8217;t know if I have any. Um, I think it&#8217;s become&#8212; at some point it was like low status to be too doomer. Like, I think AI safety didn&#8217;t&#8212; or like, EAs didn&#8217;t want to be associated with like being, um, like having high P doom. Um, because I don&#8217;t know, maybe government official&#8212; like, maybe it&#8217;s not put into the government, so like policy people didn&#8217;t want to like be one of those rabid doomer people, start to gain credibility, they went with the angle of, &#8220;I think it&#8217;s only 1% or 5%, but even regardless, you should still take it very seriously.&#8221; Which I agree. I totally think even if you have only 1% or 5% be doomed, this is possibly the most important issue. And the government is like sleeping on this and has no plan. But, um, no, but I don&#8217;t think these people&#8212; I don&#8217;t think there&#8217;s enough evidence to be like 95% confident this won&#8217;t cause doom, basically. Yeah.</p><p>MAX</p><p>I guess, do you have a P-Doom?</p><p>ROBI</p><p>It&#8217;s hard to define. I think it really depends what negative outcomes are included. Um, so I guess I, I don&#8217;t see humanity existing in its current form in like centuries from now.</p><p>MAX</p><p>Cool.</p><p>ROBI</p><p>But like, so a lot of people might think if we, um, like upload ourselves to cyborg bodies and then every 10 years like there&#8217;s more and more advantages of like, like having a robot arm instead of a human arm, they just get better and better, and then like people who are old-fashioned are like die out or are competed. 
Even if everyone at every step is happy with, like, &#8220;Oh, I would rather have robot hands instead of regular hands because robot hands are better.&#8221; Some people, if you look 100 years in the future and see that humans have turned into these cyborg monstrosities, might think, &#8220;Oh god, that&#8217;s horrid, that&#8217;s human extinction, like there are no biological humans left.&#8221; We&#8217;ve been destroyed. Even if it happened in a good way, where everyone is happier and happier each year, I think I would not count that as doom. But, um, yeah, if I had to put a number on it, maybe 20-30%.</p><p>MAX</p><p>[Speaker:Max] Okay, well, you know, it never makes me happy to hear anyone&#8217;s numbers. But thankfully I&#8217;m good at processing using, um, what&#8217;s the word, all those things you read when you get into EA where it&#8217;s like, ah, scope insensitivity and stuff.</p><p>ROBI</p><p>So yeah, you know, uh, I&#8217;ve never heard that one before. Luckily I&#8217;m very scope insensitive, so I&#8217;m not as worried. I&#8217;m not freaking out as much as I should be.</p><p>MAX</p><p>Okay, yeah. How many work trials have you done in your life?</p><p>ROBI</p><p>How many work trials have I done in my life? Um, at least one. I mean, I worked for Epoch and&#8212;</p><p>MAX</p><p>Then?</p><p>ROBI</p><p>Uh, um, I mean, I solved the thing we were, we were trying to do in the work trial, so I got, got the job, uh, that way. Um, what other places have I worked out that&#8212; oh, um, I applied to Open Phil. Their, their hiring process is really long. I made it to the third round of work trials, and then I think they, uh, hired someone else for the position.</p><p>MAX</p><p>Yeah, yeah.</p><p>ROBI</p><p>Um, man, EA, uh, EA job market is rough. Yes.</p><p>MAX</p><p>This might be more of a more&#8212; Everyone&#8217;s super overqualified. Yes, yeah, uh, you know, as a young EA, you do a lot of work trials.</p><p>ROBI</p><p>Yeah.</p><p>MAX</p><p>What do you think the optimal number of work trials is?</p><p>ROBI</p><p>Optimal number of work trials is?</p><p>MAX</p><p>Yeah, in a hiring round, I guess. Um, but maybe in your life.</p><p>ROBI</p><p>Oh, for an employer to have?</p><p>MAX</p><p>Yeah.</p><p>ROBI</p><p>Or for you to do before picking a job?</p><p>MAX</p><p>Um, you know, either.</p><p>ROBI</p><p>So I, um, I&#8212; when you said that, I interpreted the question as like, what is the right number of work trials to do before you&#8212; I thought this was like a secretary problem question, like how many jobs should you&#8212; how many job offers should you go through before you settle down on a job? The optimal number of work trials to do is I think 20 or 30, because that&#8217;s how many the guy did when he wrote that famous post. Getting an EA job is really, really, really hard. So, um, for that sweet, sweet forum karma, you should do 20 or 30 work trials.</p><p>MAX</p><p>Well, you gotta do like 10 more or something, see if I keep one up.</p><p>ROBI</p><p>I, I, I, I mean, you can&#8217;t just&#8212; you can&#8217;t just&#8212; that&#8217;s like 2020 talk. Yes, yeah, that was 5 years ago. The standards are, uh, yeah, much higher now. Yeah, cool. Um, should we eat another? Sure, yeah. Um, so this is just some, um, Tostitos Habanero Salsa on a, um, Jack and Annie jackfruit nugget.
What do you think of the jackfruit or the habanero?</p><p>MAX</p><p>It&#8217;s spicier.</p><p>ROBI</p><p>I honestly, I&#8217;m not noticing any spice.</p><p>MAX</p><p>That&#8212; you know, well, um, um.</p><p>ROBI</p><p>Yeah.</p><p>MAX</p><p>I think the nuggets first though.</p><p>ROBI</p><p>I feel like you don&#8217;t like the nugget as much. Yeah, yeah. Um, my favorite is Impossible Nuggets, which are the last two. Um, yeah, um, I didn&#8217;t eat much jackfruit. Um, I think they had it at EAG, maybe in like a, like a salad or something. It was like kind of a meaty option. Um, but I looked at the macros of this. Unfortunately, jackfruit isn&#8217;t like very protein dense, so, um, it&#8217;s not my, uh, chicken replacement of choice.</p><p>MAX</p><p>That makes sense. Yeah. Uh, what do you think the best EAG venue is?</p><p>ROBI</p><p>Best EAG venue. Yeah, I don&#8217;t know how many you&#8217;ve been to. I freaking loved, um, London 2021, which was in&#8212; what&#8217;s the housing project called? Um, can you look up EAG London 2021?</p><p>MAX</p><p>Yeah. Oh goodness.</p><p>ROBI</p><p>[Speaker:Robi] What&#8217;s it called? Uh, the Barbican. [Speaker:MAX] Okay. [Speaker:ROBI] Um, it&#8217;s like a, um, it&#8217;s like public housing, but it&#8217;s like freakishly nice. It&#8217;s like they have a museum and like an opera hall and bookstores and cafes and like an indoor tropical jungle.</p><p>MAX</p><p>[Speaker:Max] Okay, yeah, it seems like it would win, you know.</p><p>ROBI</p><p>And they had a conference, and they had, um, that was my&#8212; was that my first EAG? Yeah, it was. And then I was, I was blown away at the delicious vegan food. Maybe I had low standards for vegan food back then, but yeah, it was so good.</p><p>MAX</p><p>I&#8217;ve heard it&#8217;s gotten much better. I wasn&#8217;t really engaging with it that much in the past. I guess it got better than, like, 20 years ago or something.</p><p>ROBI</p><p>Yeah, sure.</p><p>MAX</p><p>So, you know, on some time horizon.</p><p>ROBI</p><p>I&#8217;m so glad that veggie burgers are good now. They used to be just like bean paste. Like, that&#8217;s not a burger substitute. Anyway.</p><p>MAX</p><p>Yes, yeah, yeah. Maybe we save that. Cool. 80K podcast or Dwarkesh podcast?</p><p>ROBI</p><p>Ooh, I really like both of them. I think I would&#8212; so if you took all of the 80K episodes I haven&#8217;t listened to and all the Dwarkesh episodes I haven&#8217;t listened to and randomly picked one of each without me seeing what they were, I would rather listen to the 80K episode just because&#8212; I think my reasoning is wrong, so I have to reconsider. I was going to say, because Dwarkesh mostly does AI stuff and I hear enough about&#8212; like, I have enough AI in my, uh, podcast ecosystem diet, uh, so I don&#8217;t need any more. Um, but actually Dwarkesh&#8217;s, uh, episodes on, like, history and, like, um, anthropology have been really good. So, um, yeah, now I&#8217;m torn. Um, gotta pick. Okay, I&#8217;m picking&#8212; I&#8217;m actually&#8212; now that I&#8217;ve remembered, he does non-AI stuff that&#8217;s also very good. Like extremely good. Um, no, I&#8217;m gonna say Dwarkesh, actually. Um, partly because I think I&#8217;ve already listened to most of the 80&#8212; like, I&#8217;ve gone through the 80K, like, episode list and listened to all the ones that seemed interesting. Um, so the ones that are left are, like, stuff I don&#8217;t really care about.</p><p>MAX</p><p>And see here and there.
Um, yeah, this one&#8217;s a little spicy, I guess. I was going to say this, but we&#8217;ll just do it because it&#8217;s on theme. Uh, what do you think about the 80K, uh, pivot?</p><p>ROBI</p><p>Kind of&#8212; 80K pivot to AI? Yeah. Um, I I really respect them for doing it. They&#8217;re doing what Holden is too chicken to do.</p><p>MAX</p><p>[Speaker:Max] Well, he&#8217;s an anthropic now, right?</p><p>ROBI</p><p>[Speaker:Robi] Oh, sure. I meant, um, with the maximization. [Speaker:MAX] Ah, yeah, fair. [Speaker:ROBI] If you think&#8212;if you have done the research, and you think you have the best&#8212;like, you have figured out what the best thing to do is, just friggin&#8217; do it. Don&#8217;t waffle about how, &#8220;Oh, but we don&#8217;t want to maximize, we don&#8217;t want to&#8212;&#8221; optimized too hard, it would be too optimal. You can&#8217;t have that. Um, no, they, um, if they&#8212;their mission is to, like, um, maximize impact, and, um, they think AI is just, like, much more pivotal in the next few years than all, like, every other issue that could steer people to or focus on, which I think that&#8217;s correct. Um, it absolutely makes sense to go for it.</p><p>MAX</p><p>Nice.</p><p>ROBI</p><p>Um, yeah, but&#8212; and, and I mean, they&#8217;re leaving the career guides up on the other cause areas, so for people who are like not, you know, AI true believers or are, um, animal welfare fanatics, they&#8217;re still&#8212; yeah, yeah.</p><p>MAX</p><p>Do you, um, what do you think about like, uh, there&#8217;s kind of a thing where like EA is very young, like we have an oversupply of, you know, young 22-year-olds, uh, ambitious. You think we need to like, uh switch our recruiting? Yeah, recruiting, you know, now that&#8212; yeah.</p><p>ROBI</p><p>I think I rubbed spice in my account. Um, switch the recruiting? Uh, I don&#8217;t know. Um, I think I understand and mostly agree with most of EA&#8217;s, like, past decisions. Like, they get criticized for focusing on, like, um, um, like, fancy universities or something, but, like, the, the kids who go here&#8212; there&#8212; like, they&#8217;re the ones who have the most opportunities to go into these, like, uh, like, competitive, like, tech or, like, consulting jobs, which are, like, the kind of things that were needed by the movement. Um, Um, yeah, I think&#8212; I guess the shorter your timelines are, the more important it is to not do stuff like&#8212; so, like, Horizon Fellowship makes sense, right? They&#8217;re incubating people in with maybe technical or policy expertise and get&#8212; getting them placed in influential positions for policy. Um, If you have shorter timelines, maybe this, like, long setup stuff doesn&#8217;t make sense, and maybe you should just, like, directly, like, pitch to&#8212; just, like, try to convince the senator instead of, like, this, like, galaxy brain plan where you, like, train the staffers who will then be in the office of, like, the next candidate who wins, and then, like, when the House switches back to Democrats, then they&#8217;ll have like these people in a position of power, and then there&#8217;ll be the singularity in like 2032, and by then we will have like gotten all the&#8212; yeah.</p><p>MAX</p><p>Nice, cool.</p><p>ROBI</p><p>Yeah.</p><p>MAX</p><p>Um, should we get another one?</p><p>ROBI</p><p>Sure, yep. Um, okay, I&#8217;ve got a spicy, spicy take for you along with a spicy nugget. 
So this is, um, um, the last jackfruit nugget, and it&#8217;s got scotch bonnet puree, um, Yep, enjoy it too. I wasn&#8217;t thinking about this, but I put a lot of Scotch bonnet on mine. It might hurt my stomach later, but it&#8217;s delicious. How are you doing?</p><p>MAX</p><p>Oh, you know, okay. It&#8217;s just spicier than what I would put on my food, but you know. I eat a lot of cereal, so, you know, it&#8217;s like, yeah, all right.</p><p>ROBI</p><p>You know, I haven&#8217;t heard that one in, uh, I mean, like, yeah, not.</p><p>MAX</p><p>Like an insane amount of cereal, but, you know, probably more than average.</p><p>ROBI</p><p>I can&#8217;t remember the last time I ate cereal.</p><p>MAX</p><p>Like, maybe it&#8217;s not surprising. I guess, like, maybe it&#8217;s not. I only have, like, I only know the cereal eating habits of people I&#8217;m around, and like, if you live in a house where people&#8212; it&#8217;s like, you know, no one likes cereal. Yeah, I don&#8217;t know what the market cap is.</p><p>ROBI</p><p>I don&#8217;t know, I guess we&#8217;re just not a cereal household.</p><p>MAX</p><p>Yeah. So what&#8217;s your spicy take?</p><p>ROBI</p><p>Uh, well, I think, um, you know, politics is the mind killer, from LessWrong.</p><p>MAX</p><p>Yeah. Um, I don&#8217;t know if I&#8217;ve actually read it, to be honest.</p><p>ROBI</p><p>Sure, but I&#8217;ve heard people say like political&#8212; like having political opinions biases you. My spicy take is that, um, I think EAs are too Democrat and they&#8217;re like, oh, discriminating against Republicans. Like, we&#8217;re leaving billions of dollars of donation and like tons of political influence on the table because EAs won&#8217;t put aside their like partisanship.</p><p>MAX</p><p>Yeah, that I don&#8217;t know. I feel like you&#8217;re probably right there, though also being liberal-leaning, I&#8217;m like, I don&#8217;t know, like, you know, like, uh, we could get twice as many donations for bed nets. But on the other hand, Republicans are just so&#8212; like, if you met MAGA supporters.</p><p>ROBI</p><p>Yeah, yeah, yeah. But, um, yeah, but I think we&#8217;re seeing, um, maybe some bad effects of this kind of situation in EA. Um, EAs being too left-wing, I guess, where the administration&#8212; there&#8217;s just like no EAs in power right now. And I think we could have, you know, we&#8217;re missing impactful opportunities to have the policy and implementation be less terrible if we just had people in the system who could like, you know, if we just built up the deep state in both parties. Like, if there were any EA Republicans, they would be in the government right now, and they would be, you know, throwing a spanner in the works of the disastrous tariffs and US aid cancellations.</p><p>MAX</p><p>[Speaker:Max] I&#8217;ve heard people say that DOGE is EA, though I don&#8217;t think it is.</p><p>ROBI</p><p>[Speaker:Robi] Okay, there was this&#8212; there was this, uh, what&#8217;s, what&#8217;s a tweet but on Bluesky? [Speaker:MAX] I just rolled a tweet. [Speaker:ROBI] A Bluesky tweet or post.</p><p>MAX</p><p>[Speaker:Max] Yeah.</p><p>ROBI</p><p>[Speaker:Robi] Um, they got like 50,000 likes. And it was like quote tweeting an article about how DOGE had just cut billions of dollars from USAID, and I&#8217;m like, &#8220;Millions of people are dying in Africa without the foreign aid.&#8221; And the tweet or whatever on Blue Sky is like, &#8220;This is horrible. Effective altruism always leads to disasters.
Don&#8217;t you see how horrible these people are?&#8221; [Speaker:ROBI] Yeah. This is the exact opposite of effective altruism. Yeah, like, what do you think effective altruism is?</p><p>MAX</p><p>Yeah, there&#8217;s a&#8212; we&#8217;ll never live down that Elon Musk went to like one EAG one time. Yeah, it&#8217;s gotten worse every year. Oh man.</p><p>ROBI</p><p>Um.</p><p>MAX</p><p>Yeah, uh, what&#8217;s the most embarrassing thing you&#8217;ve donated to? I don&#8217;t know if this is like&#8212;.</p><p>ROBI</p><p>Most embarrassing thing&#8212; I kind of mean.</p><p>MAX</p><p>This is like, you know, like, I don&#8217;t know if you gave to the&#8212; what&#8217;s like a&#8212; the PlayPumps?</p><p>ROBI</p><p>[Speaker:Robi] Right, um, honestly I don&#8217;t think I&#8217;ve got any funny answer here.</p><p>MAX</p><p>[Speaker:Max].</p><p>ROBI</p><p>No. [Speaker:ROBI] Um, no, nothing. Everything I donate to is super effective. [Speaker:MAX] That&#8217;s great to hear. [Speaker:ROBI] Um, well, okay. Okay, maybe a spicy opinion. I think, um, so you know how, like, for example, the average charity is like&#8212; or sorry, an effective charity is like 2+ orders of magnitude as effective as some average charity? So if you just donate to random charities and you look at the bucket of that portfolio and what you&#8217;ve achieved with those&#8212; like, 99% of donations, you&#8217;re just completely wasting your money, basically. And this, like, people get offended if you point this out. Like, how dare you imply that my cute neighborhood kitten shelter is not the most effective thing to do. Yeah, I think the Robin Hanson take of, like, at the scale of an individual donor, for whatever your values are, you should not diversify. You, um, you should spend up&#8212; like, you should research as much as you have time for, or like bounded rationality, and then all of your donations should go to one thing, um, and anything else you do is just like sabotaging your own effectiveness.</p><p>MAX</p><p>So you&#8217;re not a hits-based giving guy?</p><p>ROBI</p><p>No, no, even if you&#8217;re&#8212; oh my God, this is, uh&#8212; yeah, you should just&#8212; you should simply&#8212; okay, you have an assessment of each different charity. You have an expected value and the variance. But at the level of an individual donor who is not saturating&#8212; like you&#8217;re not donating enough that the charity you donate to fulfills its most urgent funding need and then becomes less effective than something else. Um, like, hits-based giving is good, but at the level of an individual, you should not be splitting it up. You should not be trying to, like, do the hits. Um, yeah, uh, my roommate is like&#8212; she, she really, like, intuitively resists this, um, even though we&#8217;ve tried to explain to her many times, but it&#8217;s just like very unintuitive to her. Um, like, she wants to donate to global health and animal welfare and AI. I just like&#8212; but what if I&#8217;m wrong and the AI, like, animals are, like, more important than, um&#8212; well, okay, if you think the animals are more important, with your, like, $10,000 of donation, you should put all that money into animal welfare. Um, and then she said, but no, but what about the, the people? I should at least donate something to&#8212; yeah. [Speaker:MAX] I&#8217;m confident in that. [Speaker:ROBI] It&#8217;s just like&#8212; yeah. But it&#8217;s just like a mathematical fact. 
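</p><p><em>A minimal illustrative sketch, in Python, of the expected-value point above: the charity names (A, B, C) and the cost-effectiveness figures are made up, and the only assumption is the one stated in the conversation, that an individual donor&#8217;s gift is small enough that impact is roughly linear in dollars. Under that assumption, moving any part of a fixed budget away from the single highest expected-value option can only lower total expected impact.</em></p><pre><code># Hypothetical cost-effectiveness estimates (impact units per dollar); purely illustrative.
charities = {"A": 10.0, "B": 6.0, "C": 1.0}
budget = 10_000  # dollars to give away

def expected_impact(allocation):
    """Expected impact of a {charity: dollars} allocation, assuming linearity at small-donor scale."""
    return sum(dollars * charities[name] for name, dollars in allocation.items())

concentrated = {"A": budget}                  # everything to the single best guess
split = {"A": 5_000, "B": 3_000, "C": 2_000}  # a "diversified" split of the same budget

print(expected_impact(concentrated))  # 100000.0
print(expected_impact(split))         # 70000.0 -- strictly lower for any split away from A
</code></pre><p>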
You should not be splitting up.</p><p>MAX</p><p>[Speaker:Max] Have you seen the meme that&#8217;s like, &#8220;I&#8217;m doing a billion calculations a second, but all of them are wrong&#8221;?</p><p>ROBI</p><p>[Speaker:Robi] No. [Speaker:MAX] Okay.</p><p>MAX</p><p>It&#8217;s like&#8212; maybe one day you&#8217;ll see it. Like, people reply with it and this sort of things, but that&#8217;s, you know, maybe. So we have to hedge a little Yeah.</p><p>ROBI</p><p>Another bad argument I hear in favor of diversifying individual donations is like, &#8220;Well, you diversify your portfolio, why wouldn&#8217;t you diversify your donations?&#8221; But this is also wrong. It&#8217;s not like&#8212; going from everything donated to one charity to everything donated to a couple of charities is not Like having a portfolio that consists entirely of like one stock and going to an index fund, it&#8217;s like you already have an index. Let&#8217;s say, uh, stocks on average return 7% per year and bonds return 3% and savings accounts yield 1%. Um, this would be like you already have a diversified account including, um, like 70% stocks and like 20% bonds and 10% cash, um, and then like, &#8220;Oh, but I want to diversify, so I&#8217;m going to take a dollar out of stocks and put it into like something with like&#8212; something with like lower EV.&#8221; Like, that doesn&#8217;t actually help. Um, it&#8217;s just reducing your, um, uh, expected value without improving the, like, risk that balance.</p><p>MAX</p><p>Yeah. Sure.</p><p>ROBI</p><p>Um.</p><p>MAX</p><p>Let&#8217;S see, what remaining questions do I have?</p><p>ROBI</p><p>How many do we have left?</p><p>MAX</p><p>Uh, I haven&#8217;t gone fully in order, so that actually makes it annoying to, uh, count.</p><p>ROBI</p><p>I can make more nuggets, but I don&#8217;t know if we have, um, more hot sauces.</p><p>MAX</p><p>No, I think we have like 4 questions left.</p><p>ROBI</p><p>Uh, um.</p><p>MAX</p><p>Should we do another nugget?</p><p>ROBI</p><p>Uh, sure. Okay, so, um, Oh, I encourage you to, um, eat them like sauce down so you can taste it better. You want&#8212; like, flip it? Yeah, that&#8217;s what I&#8217;m doing. Okay, um, so this is Elijah&#8217;s Extreme Regret.</p><p>MAX</p><p>Okay, fantastic.</p><p>ROBI</p><p>Scorpion and Carolina Reaper. Okay, and, um, it&#8217;s on an Impossible Chicken Nugget, so enjoy it.</p><p>MAX</p><p>Yeah, that&#8217;s much harder, hotter.</p><p>ROBI</p><p>Are you, um, extremely regretting eating this?</p><p>MAX</p><p>Not yet, but we&#8217;ll see how it&#8212; uh, it&#8217;s still, you know, you&#8217;ll get.</p><p>ROBI</p><p>Some hiccups in the podcast increasing.</p><p>MAX</p><p>Yeah, in spice.</p><p>ROBI</p><p>You didn&#8217;t put very much, uh, of the last few sauces on yours. I thought they were supposed to be like, you know, Doused in the sauce.</p><p>MAX</p><p>Well, you know, I&#8217;m not the guest.</p><p>ROBI</p><p>All right, well, yeah, but, um, that was delicious. I love that.</p><p>MAX</p><p>That was a good nugget. I&#8217;m glad it was hot. That is beyond where I would, uh.</p><p>ROBI</p><p>You know&#8212; Well, you&#8217;ve got some oat milk here.</p><p>MAX</p><p>I&#8217;m gonna drink some of this, actually.</p><p>ROBI</p><p>What do we got left?</p><p>MAX</p><p>Okay, um, what&#8217;s an essay or blog&#8212; I guess you&#8217;ve already kind of said one, but what&#8217;s another one? Forum post that you, uh, find really annoying but everyone keeps sharing? 
Ooh, um.</p><p>ROBI</p><p>An essay or forum post that I find really annoying but everyone keeps sharing? Yeah. I don&#8217;t know, um, I don&#8217;t think I have anything like that. Um, closest might be&#8212; so I think, uh, you know, Bentham&#8217;s Bulldog? Um, I mean, he&#8217;s really smart. Um, I respect his, um, moral philosophy takes. Prolific writer, like&#8212; yeah. Um, but he believes in God, and this opinion is just, like, really stupid. Like, um, yeah, and I, I think some of the stuff he&#8217;s written about, like, reasons for God is dumb. Like, and he&#8217;s way more into philosophy than I am. Like, he&#8217;s got lots of arguments where I&#8217;m like, I can&#8217;t refute this specific argument, but I&#8217;m not gonna, like, I don&#8217;t think it&#8217;s worth my time to, like, dig into this. Um, yeah, I guess that&#8217;s the only&#8212;</p><p>MAX</p><p>Um&#8212; yep, that&#8217;s fair.</p><p>ROBI</p><p>Stuff I&#8217;ve been annoyed with among, you know, the EA blogosphere.</p><p>MAX</p><p>Yeah, um, yeah, cool. Let&#8217;s see, um, what&#8217;s the, the best moral philosophy? I mean, I feel like I don&#8217;t know why I put that one down because, I don&#8217;t know, what are you going to say?</p><p>ROBI</p><p>Yeah, um, so I&#8217;m&#8212; I have most of my, like, probability mass on utilitarianism being, like, the best way to act. Or, well, actually, um, I&#8217;m a moral anti-realist. I&#8217;m, like, 95%+ confident. Like, I just don&#8217;t see how moral realism could be true, or, like, what it would even mean if it were true. Um, I&#8217;ve also never even heard an argument for it, or, like, never seen any evidence in favor of it. Like, as far as I&#8217;m aware, the only arguments I know of in favor of moral realism are, uh, if God exists, like Divine Command Theory. Like, if God exists and he set the laws of the universe, fine, uh, in that case, sure, that&#8217;ll make sense. Um, every other argument I&#8217;ve heard is just, like, moral intuition. Like, oh, it seems like there probably are&#8212; like, clearly it&#8217;s wrong to murder because I intuitively know it&#8217;s wrong to murder.</p><p>MAX</p><p>Yeah.</p><p>ROBI</p><p>Um, which is just like, okay, interesting claim. Uh, do you have any evidence for that? Also, like, evolution is a much better, uh, explanation for people having this intuition than, like, moral facts existing in some metaphysical sense that, like, interact with your mind and then cause you to believe this. I don&#8217;t know.</p><p>MAX</p><p>Yeah.</p><p>ROBI</p><p>But, so conditional on&#8212; conditional on moral realism being true, I&#8217;m actually a deontologist. The only moral philosophy that seems viable to me if moral facts exist would be something like the non-aggression principle. I think conditional on moral realism being true, I&#8217;m, like, very deontologically libertarian, like, um, it&#8217;s immoral to, like, harm another sentient being, basically. Um, a lot of other moral philosophies don&#8217;t really make sense to me if moral realism is true. So, like, uh, I think I said this on Aaron&#8217;s podcast, but, um, so suppose moral realism is true and utilitarianism is true. This would be kind of&#8212; actually, like, this would be really weird. So for example, you meet a stranger, and unbeknownst to you, they really, really, really love the color yellow, but they really, really hate the color purple.
In fact, purple was like&#8212; [Speaker:ROBI] Purple killed my father. [Speaker:MAX] Purple&#8212; a guy, a murderer wearing purple, genocided their ancestors or something, and seeing anything in the color purple will cause them vast anguish. You&#8217;re going to meet them, so you bring a gift. It&#8217;s like a thank you card or like a flower or something. If moral realism is true and utilitarianism is true, then you&#8217;re either, like, being extremely moral or committing some heinous atrocity based on the random happenstance of this card you give them being purple or yellow, which strikes me as a bit nonsensical. So, the only viable moral realism kind of ethics I&#8217;ve thought of is like, if someone is a sentient conscious being, just, like, don&#8217;t hurt them. Um, or like, don&#8217;t, like, destroy their atoms or, like, uh, inflict pain on.</p><p>MAX</p><p>Their mind or something.</p><p>ROBI</p><p>I don&#8217;t know.</p><p>MAX</p><p>Yeah, discount utilitarianism.</p><p>ROBI</p><p>Or discount&#8212; no, no, no, but it would be like, hurting people is immoral. Yeah. Um, and then anything else is supererogatory. Great. Thanks. Cool.</p><p>MAX</p><p>Um, do I have another question? Oh yeah, um, I guess maybe let&#8217;s eat the, the final nugget and then you can say those, uh, repugnant conclusion takes from a little bit earlier.</p><p>ROBI</p><p>Um, so, um, we&#8217;ve got another Impossible Nugget, uh, my favorite kind of nugget, and, um, it&#8217;s got some Dave&#8217;s Hot Chicken Reaper seasoning and some&#8212; this is, um, did we put any of the Carolina Reaper batter? Oh, basically it&#8217;s Carolina Reaper batter. Yeah.</p><p>MAX</p><p>I think I&#8217;m going to regret this.</p><p>ROBI</p><p>Cheers. I don&#8217;t think we put that much spice on it.</p><p>MAX</p><p>I don&#8217;t think I did.</p><p>ROBI</p><p>Yeah, the, um, Reaper-flavored chicken or cauliflower at Dave&#8217;s Hot Chicken is, um, that is beyond my spice tolerance. They put, um, I think capsaicin extract on it. It&#8217;s basically like getting pepper sprayed when you take a bite.</p><p>MAX</p><p>Yeah, that doesn&#8217;t really sound fun to me. I know someone will be like, &#8220;Ah!&#8221;</p><p>ROBI</p><p>It is very fun, but, um, I threw up after taking 2 bites.</p><p>MAX</p><p>Okay, yeah, so, you know.</p><p>ROBI</p><p>Delicious though, in my&#8212; [LAUGHTER]</p><p>MAX</p><p>Masochistic opinion.</p><p>ROBI</p><p>Yeah, I, uh&#8212;</p><p>MAX</p><p>Ooh, that does taste good.</p><p>ROBI</p><p>Are you sure you don&#8217;t want some more of&#8212; more of Reaper spice?</p><p>MAX</p><p>[Speaker:Robi] No, I have some on my tongue. I can feel that. [Speaker:MAX] Mmm, that&#8217;s tasty. [Speaker:ROBI] Oh man. I should not be a permanent host for the show. I guess maybe that&#8217;d be funny, but usually I think the conceit is, like, the other person, yeah, the guest, but maybe it&#8217;s funnier if, uh, the host is just dying every time. Yeah, yeah, cool.
Yeah, so what&#8217;s your, uh, repugnant, um, conclusion take?</p><p>ROBI</p><p>Well, isn&#8217;t there the&#8212; do you know the Hilary Greaves paper about, like, um&#8212; [The paper being referred to is called &#8220;Population Axiology&#8221; by Hilary Greaves.]</p><p>MAX</p><p>I feel like she&#8217;s, uh, you know, underrated in EA sort of, uh&#8212;</p><p>ROBI</p><p>I feel like she was everywhere in EA at some point, like every meetup I&#8217;d go to was, like, talking about a Hilary Greaves, um, something something.</p><p>MAX</p><p>Yeah, of course, that was, you know, when I was around.</p><p>ROBI</p><p>The one I was thinking of was, um, is it, um, impossibility theorems for, um&#8212; do you know what I&#8217;m talking about? So like, if you have these assumptions, you get the repugnant conclusion. But if you try to behave any differently or construct some other axiology, it has other worse&#8212;</p><p>MAX</p><p>Yep.</p><p>ROBI</p><p>Yeah. So I agree with this. I think if you assume anything else other than utilitarianism, you end up with even worse problems than the repugnant conclusion. And also, I just bite the bullet. Like, I don&#8217;t think there are&#8212; I disagree with Scott on this. The repugnant conclusion is not repugnant. I think there&#8217;s the classic situation, like, imagine a planet with a million happy people, and then imagine a series of slightly modified planets where you take the people and you replace them with twice as many people who are slightly more than half as happy. And then you go all the way down, uh, a very long series of planets until you get to people who are, like, just infinitesimally happy, and their lives are just barely worth living. Um, they, like, eat boiled potatoes and listen to elevator music. Um, but there are so many of them that, adding up all their epsilon happiness, you have more&#8212; you have a better planet than the million blissfully idyllic happy people. And people say, &#8220;Oh, this is obviously not better. No number of these people could be better than this.&#8221; Yeah, I think they&#8217;re just neglecting the&#8212; or not taking the premises seriously, or each step. If you really are replacing people with twice as many who are more than half as happy, you do have more total happiness. And I think people are underestimating, or like, they have this vision in their head of those lives on the last planet being, like, worthless or negative. Actually, you have to remember, if you set it up like this, as stipulated, they are a little bit happy. Like, they have more total happiness. And I, I don&#8217;t think there&#8217;s anything wrong with, like, many people with a little happiness.</p><p>MAX</p><p>Yeah, fair. Um, you know, I guess if you were behind the veil of ignorance, you&#8217;d prefer to be on the, uh, the smaller, happier planet, but if you were&#8212;</p><p>ROBI</p><p>No, no, no, no, okay.</p><p>MAX</p><p>Well, because if you knew, you wouldn&#8217;t be one of them.</p><p>ROBI</p><p>No, okay, excuse me. You&#8217;re not doing the veil of ignorance properly. So, like, a billion people with&#8212; so, Planet A, a million people with, um, 10 happiness, or Planet B, um, a billion people with 1 happiness each. So Planet B has more total happiness, a billion. Planet A has less, 10 million, but there&#8217;s&#8212; the average happiness is 10.
You&#8217;re saying veil of ignorance would, um, you would choose&#8212;</p><p>MAX</p><p>Well, what I was going to say is that, you know, if you&#8217;re behind the veil of ignorance, you&#8217;ve kind of already assumed that you&#8217;re going to exist.</p><p>ROBI</p><p>That&#8217;s right, yeah, yeah, that&#8217;s where I was going with this. So, um, if you had to choose between being a random person on Planet A and a random person on Planet B, obviously you pick Planet A. But I think the apples-to-apples choice is, um, 1 million chances to draw a ticket where you exist and have 10 happiness, or 999 million chances to just not exist and have no happiness. In that case, if you set it up fairly, actually you should pick Planet B.</p><p>MAX</p><p>Yep, yeah, nice. Um, have you heard of the Very Repugnant Conclusion?</p><p>ROBI</p><p>Uh, yes, but remind me what this is.</p><p>MAX</p><p>Uh, it&#8217;s the&#8212; so there&#8217;s suffering, so it&#8217;s lots of small&#8212; or, small-happiness people, plus suffering people, versus a happy planet, and then the, the one with the suffering is higher total because there&#8217;s just so many.</p><p>ROBI</p><p>Oh yeah.</p><p>MAX</p><p>Do you have, like&#8212; I don&#8217;t know, I feel like for&#8212; so the, the problem here is something that&#8217;s like, ah, like, I can bite the bullet that if you have just a little bit of happiness, you know, like, that&#8217;s fine, but now you&#8217;ve, like, also introduced all these people to suffering.</p><p>ROBI</p><p>This, like, kind of&#8212; yeah, if you were, like, a negative utilitarian. Yeah, I think there&#8217;s a lot of problems with negative utilitarianism. Um, I, I don&#8217;t agree with that. I&#8217;m just, like, a total utilitarian&#8212; yeah.</p><p>MAX</p><p>Uh, I was gonna ask you if you have, uh, hot takes on&#8212;</p><p>ROBI</p><p>Mechanize?</p><p>MAX</p><p>Yeah, yeah.</p><p>ROBI</p><p>Um, yeah, well, um, Tamay and Ege have much longer timelines than I do. Um, Matthew I think has similar timelines to mine. Um, but, um, I think they all have much lower p(doom). And so, um, while I wish they wouldn&#8217;t, like, you know, go do capabilities, um, I can&#8217;t fault them for it. Like, basically, if you think AI is almost certainly going to go well and you see this opportunity to earn trillions of dollars by automating the whole economy, and again, you don&#8217;t expect to&#8212; you think it&#8217;s going to go well, um, yeah, that makes sense to do. And I don&#8217;t disagree with them on factual premises. They are completely right that there is this opportunity here to automate the economy and earn trillions of dollars. I guess hate the game, not the player.</p><p>MAX</p><p>Sure.</p><p>ROBI</p><p>Yeah, um, yeah, but I wish people would stop doing capabilities until we can, like, you know, figure out alignment and whatnot.</p><p>MAX</p><p>Yeah, so true. Uh, cool. Well, yeah, thank you for donating and for, you know, recording.</p><p>ROBI</p><p>Yeah, um, happy to save the chickens.
I hope, um, I hope maybe a few hundred or a few thousand of them are, you know, not, not suffering.</p><p>MAX</p><p>Yeah, cool.</p>]]></content:encoded></item><item><title><![CDATA[Public intellectuals need to say what they actually believe]]></title><description><![CDATA[Intro This Twitter thread from Kelsey Piper has been reverberating around my psyche since its inception, almost six years now.]]></description><link>https://www.aaronbergman.net/p/public-intellectuals</link><guid isPermaLink="false">https://www.aaronbergman.net/p/public-intellectuals</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Wed, 07 Jan 2026 01:06:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NEfZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1>Intro</h1><p><a href="https://x.com/KelseyTuoc/status/1243295699728388096?s=20">This Twitter thread</a> from Kelsey Piper has been reverberating around my psyche since its inception, almost six years now.</p><p>You should read the whole thing for more context, but here are the important tweets:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/KelseyTuoc/status/1243301678301888512?s=20&quot;,&quot;full_text&quot;:&quot;One thing I've struggled with personally - I told my family in early February that we should expect the virus to hit here and should buy what we'd need and plan to soon stop leaving our home. I wasn't that direct in a public article for three more weeks. Why not?&quot;,&quot;username&quot;:&quot;KelseyTuoc&quot;,&quot;name&quot;:&quot;Kelsey Piper&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1957484507730518016/JKtDNrOH_normal.jpg&quot;,&quot;date&quot;:&quot;2020-03-26T22:19:54.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:67,&quot;retweet_count&quot;:69,&quot;like_count&quot;:564,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/KelseyTuoc/status/1243301679228841985?s=20&quot;,&quot;full_text&quot;:&quot;I didn't want to sound alarmist. I didn't want to step out ahead of public health officials, who were still telling us that the risk was low. I wanted to tell readers what scientists were saying, and they too were trying not to sound alarmist.  I don't want to imply I was silent.&quot;,&quot;username&quot;:&quot;KelseyTuoc&quot;,&quot;name&quot;:&quot;Kelsey Piper&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1957484507730518016/JKtDNrOH_normal.jpg&quot;,&quot;date&quot;:&quot;2020-03-26T22:19:54.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:11,&quot;retweet_count&quot;:21,&quot;like_count&quot;:245,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/KelseyTuoc/status/1243303788808564737?s=20&quot;,&quot;full_text&quot;:&quot;And I deeply regret not telling my readers sooner what I told my family. 
I have no idea if you would have believed me, but you deserved to know, and I wish that when the government dropped this ball more people had been there to pick it up.&quot;,&quot;username&quot;:&quot;KelseyTuoc&quot;,&quot;name&quot;:&quot;Kelsey Piper&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1957484507730518016/JKtDNrOH_normal.jpg&quot;,&quot;date&quot;:&quot;2020-03-26T22:28:17.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:18,&quot;retweet_count&quot;:20,&quot;like_count&quot;:341,&quot;impression_count&quot;:0,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>I really like Kelsey Piper. She&#8217;s &#8220;based&#8221; as the kids (and I) say. I think she was trying to do her best by her own lights during this whole episode, and she deserves major props for basically broadcasting her mistakes in clear language to her tens of thousands of followers so people like me can write posts like this. And I deeply respect and admire her and her work.</p><p>But:</p><blockquote><p>I deeply regret not telling my readers sooner what I told my family</p></blockquote><p>Easier said than done, hindsight is 20/20, etc., but I basically agree that she fucked up.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The reason I&#8217;m writing this post now is that it&#8217;s become increasingly apparent to me that this kind of isn&#8217;t a one-off, and it&#8217;s not even an <em>n</em>-off for some modest n. It&#8217;s not out of the norm.</p><h1>Claim</h1><p>Rather, <strong>public intellectuals, </strong><em><strong>including those I respect and admire</strong></em><strong>, regularly communicate information to their audiences and the public that is fundamentally different from their true beliefs, and they should stop doing that.</strong></p><p>I haven&#8217;t interviewed anyone for this post so take this with a large grain of salt, but my impression and suspicion is that, to public intellectuals, broadly, it&#8217;s not even considered a bad thing; rather it&#8217;s the relatively above-board and affirmatively endorsed modus operandi.</p><p>Indeed, PIs have reasonable and plausible (if implicit) reasons for thinking that being less than candid about their genuine beliefs is a good, just, and important part of the job.</p><p>The problem is that they&#8217;re wrong.</p><p>To be clear, this post isn&#8217;t intended as a moral condemnation of past behavior because, again, my sense is that media figures and intellectuals - ~certainly those I reference in this piece - genuinely believe themselves to be doing right by their readers and the world.</p><h1>A few more examples</h1><h3>Will MacAskill</h3><p>Jump forward to 2022 and <a href="https://en.wikipedia.org/wiki/William_MacAskill">Will MacAskill</a>, whom I also greatly respect and admire, is <a href="https://80000hours.org/podcast/episodes/will-macaskill-what-we-owe-the-future/">on the 80,000 Hours podcast</a> for the fourth time. 
During the episode, MacAskill notes that his first book<a href="https://forum.effectivealtruism.org/topics/doing-good-better"> </a><em><a href="https://forum.effectivealtruism.org/topics/doing-good-better">Doing Good Better</a></em> was significantly different from what &#8220;the most accurate book&#8230;fully representing my and colleagues&#8217; EA thought&#8221; would have looked like, in part thanks to<strong> </strong>demands from the publisher (bolding mine):</p><blockquote><p>Rob Wiblin: ...But in 2014 you wrote <em>Doing Good Better</em>, and that somewhat soft pedals longtermism when you&#8217;re introducing effective altruism. So it seems like it was quite a long time before you got fully bought in.</p><p>Will MacAskill: Yeah. <strong>I should say for 2014, writing </strong><em><strong>Doing Good Better</strong></em><strong>, in some sense, the most accurate book that was fully representing my and colleagues&#8217; EA thought would&#8217;ve been broader than the particular focus.</strong> And especially for my first book, there was a lot of equivalent of trade &#8212; like agreement with the publishers about what gets included. <strong>I also wanted to include a lot more on animal issues, but the publishers really didn&#8217;t like that, actually.</strong> Their thought was you just don&#8217;t want to make it too weird.</p><p>Rob Wiblin: I see, OK. They want to sell books and they were like, &#8220;Keep it fairly mainstream.&#8221;</p></blockquote><p><em>Wait what?</em> It was a throwaway line, a minor anecdote, but if I&#8217;m remembering correctly I physically stopped walking when I heard this section.</p><p>The striking thing (to me, at least) wasn&#8217;t that a published book be slightly out of date with respect to the authors&#8217; thinking - the publishing process is long and arduous - or that the publisher forced out consideration of animal welfare.</p><p>It was that, to the best of my knowledge (!), <em>Will never made a significant effort to tell <s>anyone</s> the public about all this </em>until the topic came up <em>eight years </em>after publication.  See the following footnote for more:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>.</p><p>It&#8217;s not like he didn&#8217;t have a platform or the ability to write or thought that nobody was reading the book.</p><p><em>Doing Good Better </em>has almost 8,000 reviews on <a href="https://www.goodreads.com/book/show/23398748-doing-good-better">Goodreads</a> and another 1,300 or so on <a href="https://www.amazon.com/Doing-Good-Better-Effective-Altruism/dp/1592409660">Amazon</a>. The top three LLMs estimate <a href="https://gemini.google.com/share/54d534573e21">75k</a>, <a href="https://chatgpt.com/share/69531acf-0658-8004-8147-4aedaa724125">135k</a>, and <a href="https://claude.ai/share/bad97924-c499-40c5-ab32-d88a3d58a525">185k</a> sales respectively. Between when Doing Good Better was published and when that podcast interview came out, Will published something like <a href="https://forum.effectivealtruism.org/users/william_macaskill?from=post_header">33 EA Forum Posts</a> and 29 <a href="https://scholar.google.com/citations?user=yH7sp5kAAAAJ&amp;hl=en&amp;oi=sra">Google Scholar-recognized publications.</a> Bro is a machine.</p><p>And Will is steeped deeply in the culture of his own founding - EA emphasizes candidness, honesty, and clarity; people put &#8220;epistemic status: [whatever]&#8221; at the top of blog posts. 
I don&#8217;t personally know Will (sad) but my strong overall impression is that he&#8217;s a totally earnest and honest guy. </p><p>Unfortunately I&#8217;m not really advancing an <em>explanation</em> of what I&#8217;m critiquing in this post. As mentioned before, I haven&#8217;t interviewed anyone<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> and can&#8217;t see inside Will&#8217;s brain or anyone else&#8217;s.</p><p>But I can speculate, and my speculation is that clarifying Doing Good Better post-publication (i.e. by writing publicly somewhere that it was a bit out of date with respect to his thinking and that the publisher made him cut important, substantive material on animal welfare) never even registered as the kind of thing he might owe his audience.</p><h3>Dean Ball</h3><p>To beat a dead horse, I really like and respect Piper and MacAskill. </p><p>I just don&#8217;t know <a href="https://www.deanball.com/">Ball&#8217;s</a> work nearly as well, and the little that I do know suggests that we have substantial and fundamental disagreements about AI policy, at the very least. </p><p>But he was <a href="https://80000hours.org/podcast/episodes/dean-ball-ai-policy-governance-white-house/">recently on the 80,000 Hours Podcast</a> (for 3 hours) and I came away basically thinking &#8220;this guy is not insane and (to quote my <a href="https://x.com/AaronBergman18/status/2004019770027659490?s=20">own tweet</a>), &#8220;probably way above replacement level for &#8220;Trump admin-approved intellectual&#8221;&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>All this is to say that I don&#8217;t have it out for the guy, just as I don&#8217;t have it out for Piper or MacAskill.</p><p>But part of the interview drove me insane, to the point of recording a 12 minute <s>rant</s> voice memo that is the proximate reason for me writing this post.</p><p>Here&#8217;s the first bit (bolding mine):</p><blockquote><p>Dean Ball: So let&#8217;s just take the example of open source AI. Very plausibly, a way to mitigate the potential loss of control &#8212; or not even loss of control, but power imbalances that could exist between what we now think of as the AI companies, and maybe we&#8217;ll think of it just as the AIs in the future or maybe we&#8217;ll continue to think of IT companies. I think we&#8217;ll probably continue to think of it as companies versus humans &#8212; you know, if OpenAI has like a $50 trillion market cap, that is a really big problem for us. You can even see examples of this in some countries today, like Korea. In Korea, like 30 families own the companies that are responsible for like 60% of GDP or something like that. It&#8217;s crazy. The Chaebols.</p><p>But if we have open source systems, and the ability to make these kinds of things is widely dispersed, then I think you do actually mitigate against some of these power imbalances in a quite significant way.</p><p>So part of the reason that I originally got into this field was to make a robust defence of open source because I worried about precisely this. 
<strong>In my public writing on the topic, I tended to talk more about how it&#8217;s better for diffusion, it&#8217;s better for innovation &#8212; and all that stuff is also true &#8212; because I was trying to make arguments in the like locally optimal discursive environment, right?</strong></p><p>Rob Wiblin: Say things that make sense to people.</p><p>Dean Ball: <strong>Yeah, say things that make sense to people at that time. But in terms of what was animating for me, it does have to do with this power stuff in the long term.</strong></p></blockquote><p>Ahhhhhh! Hard to get a clearer example than this.</p><p>Ball is, in a purely neutral and descriptive sense, reporting candidly that he not merely wrote in a <em>way</em> or <em>style </em>that his audience could understand but <em>substantively modified his core claims to be different than those which were the true causes of his beliefs and policy positions.</em></p><h5>Not about lying</h5><p>I actually want to pick out the hyphened segment &#8220;and all that stuff is also true&#8221; because it&#8217;s an important feature of both the underlying phenomenon I&#8217;m pointing at and my argument about it.</p><p>As far as I can tell, Ball never lied - just as Piper and MacAskill never lied.</p><p>At one point I was using the term &#8220;lie by omission&#8221; for all this stuff, but I&#8217;ve since decided that&#8217;s not really right either. The point here is just that literally endorsing every claim you write doesn&#8217;t alone imply epistemic candidness (although it might be ~necessary for it).</p><h3>Ball, pt 2</h3><p>Ok, let&#8217;s bring in the second of Ball&#8217;s noteworthy sections. This time Rob does identify Ball&#8217;s takes as at least potentially wrong in some sense.</p><p>Sorry for the long quote but nothing really felt right to cut (again, bolding mine):</p><blockquote><p>Rob Wiblin: <strong>I think you wrote recently that there&#8217;s speculations or expectations you might have about the future that might influence your personal decisions, but you would want to have more confidence before they would affect your public policy recommendations.</strong></p><p>There&#8217;s a sense in which that&#8217;s noble: that you&#8217;re not going to just take your speculation and impose it on other people through laws, through regulations &#8212; especially if they might not agree or might not be requesting you to do that basically.</p><p><strong>There&#8217;s another sense in which to me it feels possibly irresponsible in a way. Because imagine there&#8217;s this cliche of you go to a doctor and they propose some intervention. They&#8217;re like, &#8220;We think that we should do some extra tests for this or that.&#8221; And then you ask them, &#8220;What would you do, if it was you as the patient? What if you were in exactly my shoes?&#8221; And sometimes the thing that they would do for themselves is different than the thing that they would propose to you.</strong> Usually they&#8217;re more defensive with other people, or they&#8217;re more willing to do things in order to cover their butts basically, but they themselves might do nothing.</p><p><strong>I think that goes to show that sometimes what you actually want is the other person to use all of the information that they have in order to just try to help you make the optimal decision, rather than constraining it to what is objectively defensible.</strong></p><p>How do you think about that tradeoff? 
Is there a sense in which maybe you should be using your speculation to inform your policy recommendations, because otherwise it will just be a bit embarrassing in a couple of years&#8217; time when you were like, &#8220;Well, I almost proposed that, but I didn&#8217;t.&#8221;</p><p>Dean Ball: It&#8217;s a really good question. My general sense is that, in intellectual inquiry when you hit the paradox, that&#8217;s when you&#8217;ve struck ore. Like you found the thing. The paradox is usually in some sense weirdly the ground truth. It&#8217;s like the most important thing. This is a very important part of how I approach the world, really.</p><p>So it&#8217;s definitely true that there are things that I would personally do&#8230; Like if I were emperor of the world, I would actually do exactly all the same: I still wouldn&#8217;t do the things that I think, in some sense, I think might be necessary &#8212; because I do just have enough distrust of my own intuitions. And I think everybody should. I think probably you don&#8217;t distrust your own intuitions enough, even me.</p><p>Rob Wiblin: So is it that you think that not taking such decisive action, or not using that information does actually maximise expected value in some sense, because of the risk of you being mistaken?</p><p>Dean Ball: Yeah, exactly.</p><p>Rob Wiblin: So that&#8217;s the issue. It&#8217;s not that you think it&#8217;s maybe a more libertarian thing where you don&#8217;t want to impose your views, like force them on other people against their will?</p><p>Dean Ball: It&#8217;s kind of both. I think you could phrase it both ways. And I would agree with both things, I would say.</p><p>Rob Wiblin: But if it&#8217;s the case that it&#8217;s better not to act on those guesses about the future because of the risk of being mistaken, wouldn&#8217;t you want to not use them in your personal life as well?</p><p>Dean Ball: Well, it depends, right? For certain things&#8230; There are things, especially now &#8212; you know, I&#8217;m having a kid in a few months. So when these decisions start to affect other people, again, it changes.</p><p>I guess what I would say is: <strong>Will I bet in financial markets about this future? Yeah, I will.</strong> Because I do think my version of the future corresponds enough to various predictions you can make about where asset prices will be, that you can do things like that. That&#8217;s a much easier type of prediction to make than the type of prediction that involves emergent consequences of agents being in the world and things like this.</p><p>So it has to do with the scale of the impact and it also has to do with the level of confidence. I think the level of confidence that you need to recommend policies that affect many people is just considerably higher. </p></blockquote><p>Not sure we need much analysis or explanation here; Ball is straightforwardly saying that he neglects to tell his audience about substantive, important, relevant beliefs he has because of&#8230;some notion of confidence or justifiability. Needless to say I don&#8217;t find his explanation/justification very compelling here.</p><p>Of course he has reasons for doing this, but I think those reasons are bad and wrong. 
So without further ado&#8230;</p><h1>The case against the status quo</h1><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NEfZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NEfZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 424w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 848w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 1272w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NEfZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1355325,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aaronbergman.net/i/182827176?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NEfZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 424w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 848w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 1272w, https://substackcdn.com/image/fetch/$s_!NEfZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4a6e1970-ec2b-407b-ba7e-e1a28093fc35_2252x1126.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>It&#8217;s misleading and that&#8217;s bad</h2><p>This isn&#8217;t an especially clever or interesting point, but it&#8217;s the most basic and fundamental reason that &#8220;sub-candidness&#8221; as we might call it is bad. </p><h3>No one else knows they&#8217;re playing a game </h3><p>To a first approximation, podcasts and Substack articles and Vox pieces are just normal-ass person-to-person communication, author to reader.</p><p>As a Substacker or journalist or podcast host or think tank guy or whatever, <em>you</em> can decide to play any game you want - putting on a persona, playing devil&#8217;s advocate, playing the role of authority who doesn&#8217;t make any claims until virtually certain.</p><p>All this is fine, <em>but only if you tell your audience what you&#8217;re doing.</em></p><p>Every instance of sub-candidness I go through above could have been avoided by simply publicly stating somewhere the &#8220;game&#8221; the author chose to play.</p><p>I think Piper should have told the public her genuine thoughts motivating personal behavior vis a vis Covid, but I wouldn&#8217;t be objecting on the grounds I am in this post if she had said something like &#8220;In this article I am making claims that I find to be robustly objectively defensible and withholding information that I believe because it doesn&#8217;t meet that standard.&#8221; </p><p>Part of the point of this kind of disclaimer is that it might encourage readers to go &#8220;wait but what do you actually think&#8221; and then you, Mr(s). Public Intellectual might decide to tell them.</p><h3>What would a reasonable reader infer?</h3><p>Merely saying propositions that you literally endorse in isolation is ~necessary but not at all sufficient for conveying information faithfully and accurately.</p><p>The relevant question public intellectuals need to ask is: <strong>&#8220;What would a reasonable reader infer or believe both (a) about the world and (b) about </strong><em><strong>my</strong></em><strong> beliefs after consuming this media?&#8221; </strong></p><p>Of course sometimes there are going to be edge cases and legitimate disagreements about the answer, but I think in general things are clear enough to be action-guiding in the right way. 
</p><p>I think some (partial) answers to that question, in our example cases, are:</p><ol><li><p>Piper was not actively personally preparing for a serious pandemic.</p></li><li><p>The conception of effective altruism presented in Doing Good Better is essentially MacAskill&#8217;s personal conception of the project at least as of when each page was first written. </p></li><li><p>Dean Ball&#8217;s expressed reasons for supporting open source AI were his actual reasons for doing so, basically in proportion to each reason&#8217;s emphasis.</p></li></ol><p>In each case, I claim, the reader would have been mistaken, and foreseeably so. And &#8220;foreseeably causing a reader to have false beliefs&#8221; seems like a pretty good definition of &#8220;misleading.&#8221;</p><h2>Public intellectuals are often domain experts relative to their audience</h2><p>Again, I can&#8217;t look inside anyone&#8217;s brain, but I suspect that public intellectuals often err by incorrectly modeling their audience.</p><p>If you&#8217;re Matt Yglesias, <em>some</em> of your readers are going to be fellow policy wonk polymaths with a lot of context on the subject matter you&#8217;re tackling today, but the strong majority are going to have way less knowledge and context on whatever you&#8217;re writing about</p><p>This is true in general; by the time <em>I </em>am writing a blog post about something, even if I had no expertise to start with, I am something of an expert in a relative sense now. The same is true of generalist journalists who are covering a specific story, or podcast hosts who have spent a week preparing for an interview.</p><p>This seems trivial when made as an explicit claim, and I don&#8217;t expect anyone to really disagree with it, but really grokking this asymmetry entails a few relevant points that don&#8217;t in fact seem to reflect the state of play:</p><p><strong>1) Your audience doesn&#8217;t know what they don&#8217;t know</strong></p><p>So your decision to not even mention/cover some aspect of the thing isn&#8217;t generally going to come across as &#8220;wink wink look up this one yourself&#8221; - it&#8217;s just a total blind spot far from mere consideration. MacAskill&#8217;s readers didn&#8217;t even know what was on the table to begin with; if you don&#8217;t bring up longtermist ideas and instead mainly talk about global poverty, readers are going to reasonably, implicitly assume that you don&#8217;t think the long-term future is extremely important. That proposition never even crossed their mind for them to evaluate. </p><p><strong>2) Your expertise about some topic gives you </strong><em><strong>genuinely good epistemic reason</strong></em><strong> to share your true beliefs in earnest</strong></p><p>&#8220;Epistemic humility&#8221; is a positively-valenced term, but too much in a given circumstance is just straightforwardly bad and wrong. Kelsey Piper shouldn&#8217;t have deferred to the consensus vibe because she&#8217;s the kind of person who&#8217;s supposed to <em>decide</em> the consensus vibe. </p><p><strong>3) Like it or not, people trust you to have your own takes - that&#8217;s why they&#8217;re reading your thing</strong></p><p>It is substantively relevant that the current media environment (at least in the anglosphere) is ridiculously rich and saturated. There are a lot of sources of information. People can choose to listen/read/vibe with a million other things, and often a thousand about the same topic. 
They chose to read you because for whatever reason they want to know what you think.</p><p>In other words, your take being <em>yours</em> and not like an amalgam of yours + the consensus + the high status thing is already built into the implicit relationship. </p><h3>You&#8217;re (probably) all they&#8217;ve got</h3><p>A partially-overlapping-with-the-above point I want to drive home is that, in general, you (public intellectual) <em>are</em> the means by which your audience can pick up on radical but plausible ideas, or subtle vibes, or whatever collection of vague evidence is informing your intuition, or anything else.</p><p>Insofar as you think other people in some sense &#8220;should&#8221; believe what you believe (ideally at least, or if they had more information and time and energy, or something like that), or at least hear the case for your views, <em>this is it</em>.</p><p>Maybe you&#8217;re part of the EA- or rationalist-sphere and have a bunch of uncommon beliefs about the relatively near future (perhaps along the lines of &#8220;P[most humans die or US GDP increases &gt;10x by 2030] &gt;= 50%&#8221;) and you&#8217;re talking to a &#8220;normie&#8221; about AI (perhaps a very important normie like a member of Congress).</p><p>You can try to use arguments you don&#8217;t really find convincing or important, or moderate your opinions to seem more credible, or anything, but to what end?</p><p><em>So that they can leave the interaction without even the in-principle opportunity of coming closer to sharing your actual beliefs - beliefs you want them to have?</em></p><p><em>And the theory is that somehow this helps get them on the road to having correct takes, somehow, by your own lights?</em></p><p><em>Because maybe someone in the future will do the thing you&#8217;re avoiding - that is, sharing one&#8217;s actual reasons for holding actual views?</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>All of the above mostly holds regardless of who you are, but if you&#8217;re a public intellectual, that is your role in society and the job you have chosen or been cast into, for better or worse.</p><p>Is your theory of change dependent on someone else being just like you but with more chutzpah? If so, is there a good reason you shouldn&#8217;t be the one to have the chutzpah? </p><p>This is it!</p><h1>You can have your cake and eat it too</h1><p>At some level what I&#8217;m calling for involves intellectual courage and selflessness, but in a more substantive and boring sense it&#8217;s not especially demanding.</p><p><strong>That&#8217;s because you don&#8217;t have to choose between candidness and other things you find valuable in communication</strong> like &#8220;conveying the public health consensus&#8221; or &#8220;using concepts and arguments my readers will be familiar with&#8221; or &#8220;not presenting low-confidence intuitions as well-considered theses.&#8221;</p><p><strong>All you have to do is tell your audience what you&#8217;re doing!</strong></p><p>You can explicitly label claims or anecdotes or vibes as based on intuition or speculation or nothing at all. You can present arguments you don&#8217;t endorse but want the reader to hear or consider for whatever reason, or report claims from officials you low key suspect aren&#8217;t really true.</p><p>You can even use explicit numerical probabilities to convey degrees of certainty!
One frustrating element about our example cases is that Piper, MacAskill, and Ball are all exceptionally bright and numerate and comfortable with probabilities - asking them to use probabilities to clarify and highlight the epistemic stance they&#8217;re coming from doesn&#8217;t seem extremely burdensome.</p><p>And more generally, setting aside numerical probabilities for a moment:</p><ul><li><p>Kelsey Piper could have said &#8220;this is what the CDC says and this other thing is my overall take largely based on arguments I can&#8217;t fully explicitly justify as rock-solid.&#8221;</p></li><li><p>Ball could have said &#8220;here is what actually motivates me and here are some other arguments I also endorse.&#8221;</p></li><li><p>MacAskill&#8217;s case is a bit trickier to diagnose and treat from the outside so here&#8217;s a footnote:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> </p></li></ul><p>It will increase the word count of your thing by 2%, fine. That&#8217;s a very small price to pay.</p><h1>Not a trivial ask</h1><p>Let me go back to the &#8220;At some level what I&#8217;m calling for can involve intellectual and social courage&#8221; bit from the previous section.</p><p>The universe makes no guarantees that earnestness will be met with productive, kind, and generous engagement. As a public intellectual, you might find yourself in the position of &#8220;actually believing&#8221; things that are going to sound bad in one way or another.</p><p>To be honest, I don&#8217;t have a totally worked-out theory or principle that seems appropriately action-guiding in almost all situations.</p><p>In some stylized Cultural Revolution scenario with a literal mob at your door waiting to literally torture and kill you if you say the wrong thing, I think you should generally just lie and save your ass and then report that&#8217;s what you did in the future if the situation ever improves.</p><p>But I guess the things I do stand by are that:</p><ol><li><p><strong>Public intellectuals especially, but also people in general, should consider conveying their genuine beliefs an important virtue in their work - a positive good, a praiseworthy thing to do, and an ideal to aim for.</strong></p></li><li><p><strong>This virtue should be valued </strong><em><strong>more</strong></em><strong> highly by individuals and society than it seems to be right now.</strong></p></li></ol><h1>Appendix: transcript of original rant</h1><p>It has a bit of exasperated energy that the above post lacks, so here is a mildly cleaned up version (fewer filler words basically) of my original rant. Enjoy:</p><blockquote><p>Okay, so I&#8217;m listening to the 80,000 Hours podcast with Dean Ball. Broadly, I&#8217;m actually not exactly impressed, but glad this guy is not totally insane.</p><p>There are two parts of it so far&#8212;I&#8217;m not even done&#8212;that stand out that I want to criticize. And I think it actually extends to other people quite a bit.</p><p>First, he talks about how he&#8212;I&#8217;m pretty sure this is in the context of SB 1047 and SB 53&#8212;basically doesn&#8217;t use his real reasons for thinking about why he supports or opposes these bills.
Instead, he basically puts things in terms that he thinks people will understand better.</p><p>Then the second part, which got me fired up a little bit, is that he talks about basically being willing to make bets or predictions in his personal life, but having a sort of higher confidence standard for making policy recommendations. Rob Wiblin aptly pulls out the example of going to a doctor. The doctor says, &#8220;Oh yeah, you should do thing X.&#8221; Then you ask, &#8220;Well, what would you do if you were me in my situation?&#8221; and he says, &#8220;Oh no, I wouldn&#8217;t do X. I would do Y.&#8221;</p><p>I think this is actually deeply important. The thesis of this ramble is that public intellectuals have a moral duty to say what they actually think. To not just... well, let me put out two other examples, because I don&#8217;t think this is a left-versus-right thing at all. It&#8217;s almost unique in <em>*not*</em> being a left-versus-right thing.</p><p>Kelsey Piper, whom I am a huge fan of&#8212;I think she&#8217;s a really good person, totally earnest, great reporter, etc.&#8212;basically wrote early in COVID. She wasn&#8217;t nearly as downplaying as other reporters, but she was, in fact, doing something like making pretty intense preparations on her own for a global pandemic, while basically emphasizing in her writing: &#8220;Oh, there&#8217;s so much uncertainty, don&#8217;t panic,&#8221; and so on. I think she had very similar reasoning. She thought, &#8220;You know, I don&#8217;t have overwhelming confidence, I&#8217;m sort of speculating,&#8221; and so on. That&#8217;s example two.</p><p>Example three is Will MacAskill, who also I am a huge fan of. I think he&#8217;s a great guy, obviously brilliant. Basically, there was a period&#8212;it&#8217;s been a little while since I&#8217;ve looked into this&#8212;but my sense is that when he first published <em>*Doing Good Better*</em>, the publisher didn&#8217;t let him really emphasize animal welfare. Even though he was, I think, pretty convinced that was more important than the global poverty stuff at the top. But reading the book, you wouldn&#8217;t know that. A reasonable person would just think, &#8220;Okay, this is what the person believes as of the time of the writing of the book,&#8221; but that wasn&#8217;t true.</p><p>Likewise for longtermism. There was a period of time when the ideas were developing in elite circles in Oxford, and basically, it was kept on the down-low until there was more of an infrastructure for it.</p><p>I strongly reject all of this. I don&#8217;t think these people were malicious, but I think they acted wrong and poorly. We should say, &#8220;No, this is a thing you should not do.&#8221;</p><p>First off, in a pre-moral sense&#8212;before we decide whether to cast moral judgment, and we don&#8217;t have to&#8212;this is essentially deception. Or, to put it in more neutral terms, let&#8217;s say intentionally conveying information so as to cause another person to believe certain information without actually endorsing the information&#8212;and certainly not endorsing it in full, even if it&#8217;s a half-truth.</p><p>The second thing, and the reason why this is baffling, is that these people&#8212;public intellectuals in general, and certainly the three people that I mentioned&#8212;are very intelligent and sophisticated. They are willing and able to deal with probabilities. 
You can just say, &#8220;This is tentative, I am not sure, but X, Y, and Z.&#8221; Or, &#8220;This is purely going on intuition,&#8221; or &#8220;This is my overall sense but I can&#8217;t justify it.&#8221; Those are just words you can say.</p><p>So, yeah, all of this makes it kind of baffling. I guess I&#8217;m a little bit fired up about this for some reason, so I&#8217;m sort of biased to put it in moralizing terms. But I&#8217;m happy to say let bygones be bygones. I don&#8217;t think anybody is doing anything intentionally bad; I think they are doing what is right by their own lights. And, once again, I strongly admire two of the three people and certainly respect the other one.</p><p>But just to reiterate the thesis: Yes, if you are a public intellectual, even if it&#8217;s early days, even if you&#8217;re not sure about X, Y, and Z, people are going to believe what you say. You really need to internalize that. Maybe <em>*you*</em> know what you&#8217;re excluding, but nobody else does. They don&#8217;t know the bounds, they don&#8217;t know the considerations, they don&#8217;t even know the type of thing that you could be ignoring. They are just quite literally taking you at exactly what you say.</p><p>So just for God&#8217;s sake... I think it&#8217;s a moral responsibility. Once you&#8217;ve been invited to get on the right side of things intellectually, it is a moral obligation to be willing to put yourself out there. Sometimes that is going to involve saying things you personally think that are against the zeitgeist, or simply a little bit outside the Overton window.</p><p>You might be right, or other people might be wrong. They are going to criticize you when you get something wrong. They&#8217;re going to say that you were stirring up panic about the pandemic that never happened&#8212;even if you&#8217;re totally epistemically straightforward about exactly what you believe and how confident you are. I mean, somebody should write a blog post about this. Maybe I&#8217;ll at least tweet it.</p><p>And as sort of a follow-up point: In discussions about honesty, or the degree to which you should be really straightforward and earnest, I feel like people jump to edge cases that are legitimate to consider&#8212;like &#8220;Nazis in the attic.&#8221; But this is sort of a boring axis of honesty that doesn&#8217;t get enough attention: which is, even if you are not literally saying anything you don&#8217;t believe, to what degree should you try to simply state what you think and why, without filtering beyond that?</p><p>I can dream up scenarios where this would be bad. The trivial example is just Nazis in the attic. You do whatever behavior minimizes the likelihood of the Nazi captain getting the Jews that you&#8217;re hiding. That is not the most epistemically legit behavior&#8212;that&#8217;s sort of obscuring the truth&#8212;but it doesn&#8217;t matter. But that is not a salient part of most honesty conversations, at least that I&#8217;ve been a part of.</p><p>I feel like there are adjacent things to what we should call this&#8212;maybe &#8220;pseudo-honesty&#8221; or something&#8212;that are good and fine. That are not violating what I&#8217;m advocating for as a moral duty. One of those is bringing up points that don&#8217;t personally convince <em>*you*</em>, but that you think are not misleading. 
For example, you might bring up theological arguments of the form &#8220;if you believe in God, then X, Y, and Z.&#8221;</p><p>But at some level, you still need to also convey the information that <em>that</em> is not what is convincing you. Likewise, a thing you can do is just present all the arguments you want to make anyway (because you think they are the right arguments as understood in the discourse, and people are going to know what they mean), but then also say, &#8220;By the way, none of this is actually what really motivates me. What motivates me is X.&#8221; And that can be just one sentence.</p><p>You can get into the question of how deep you bury this. Is it in chapter 43 of a thousand-page book? That&#8217;s probably not great. But to a first approximation, your goal is just to convey the information. It&#8217;s not that hard. You can really have your cake and eat it too. You are allowed to use probabilities. You are allowed to do all these disclaimers. The thing that I&#8217;m saying you shouldn&#8217;t do is fundamentally not convey what you actually believe.</p></blockquote><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/p/public-intellectuals/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/p/public-intellectuals/comments"><span>Leave a comment</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://archive.ph/CaglF">Here&#8217;s</a> a relevant post of hers, by the way. To be clear, far better than what other journalists were doing at the time! </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>OK, I still stand by this claim, but &#8220;show that something wasn&#8217;t said&#8221; is a hard thing to do. 
To be clear, I&#8217;m quite sure that <em>I</em> didn&#8217;t - and still don&#8217;t - know of any public statement by MacAskill basically conveying the points that either:</p><ol><li><p>His conception of EA was significantly different from that presented in <em>Doing Good Better</em> as of the time of publication in 2015; and/or</p></li><li><p>The publisher forced significant substantive cuts of animal welfare discussion.</p></li></ol><p>Of course that doesn&#8217;t imply such a statement doesn&#8217;t exist.</p><p>I ran Claude Opus 4.5 thinking research, <a href="https://chatgpt.com/share/695dab19-1e6c-8004-89aa-a49885e0a92c">GPT-5.2-Thinking-Deep Research</a>, and <a href="https://gemini.google.com/share/cf495c948f6a">Gemini-3-Pro-Preview-Research</a> on the topic and initially got some confusing, contradictory vibes, but things seem to ground out as &#8220;no, we can&#8217;t find anything with Will making either of the two points listed above.&#8221;</p><p>Can&#8217;t share the Claude convo directly&#8230;</p><p>[Screenshot]</p><p>&#8230;but in the interest of completeness <a href="https://docs.google.com/document/d/1aNthuF6fjIO4zjme5Nkzbhm6iLxll1uKunvhlyXhJGk/edit?usp=sharing">here&#8217;s</a> a google doc with a bunch of screenshots. My takeaway after pushing Opus on points of confusion is basically:</p><p>[Screenshot]</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Although by all means, if you are mentioned in this post or otherwise have special insight and want to talk, please feel free to email me! 
aaronb50 [at] gmail.com</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>I was vaguely suspicious he might have been basically bending his views and tone to the host and the audience on 80k, but I just ran the transcript and his 30 most popular Substack posts through Gemini-3-Pro and at least according to <a href="https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%5B%221qKHSYUZuC9muvm5vfVX8vl5Pz62-rx2b%22%5D,%22action%22:%22open%22,%22userId%22:%22107980197219344887571%22,%22resourceKeys%22:%7B%7D%7D&amp;usp=sharing">that</a> the guy is pretty consistent. The conclusion of that LLM Message is:</p><blockquote><p>Conclusion</p><p>If you liked Dean Ball on the podcast, <strong>the Substack is the &#8220;director&#8217;s cut.&#8221;</strong></p><p>There are no contradictions between the two. The podcast is a faithful summary of his written positions. However, reading the Substack reveals that his policy positions (like private governance) are not just technocratic fixes, but attempts to preserve &#8220;ordered liberty&#8221; and human dignity in the face of what he views as a spiritual and civilizational transformation.</p></blockquote></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>And in rare circumstances they might reason themselves to the right answer, but this is the epistemic equivalent of planning your retirement around winning the lottery at 65.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>As an author, MacAskill has good reason not to upset or antagonize the publisher, but something along the lines of &#8220;here&#8217;s how my thinking has evolved over the course of writing this book&#8221; or &#8220;bits I didn&#8217;t have space for&#8221; articles on his website or &#8220;go to this url to read more [at the end of the book]&#8221; (like Yudkowsky and Soares did recently with If Anyone Builds It, Everyone Dies and <a href="https://ifanyonebuildsit.com/resources">ifanyonebuildsit.com/resources</a>) seem like they probably would have been fine to ship (I admit I&#8217;m less sure around this case). 
</p></div></div>]]></content:encoded></item><item><title><![CDATA[Post readout: Utilitarians Should Accept that Some Suffering Cannot be “Offset”]]></title><description><![CDATA[This is an audio readout of my recent post Utilitarians Should Accept that Some Suffering Cannot be &#8220;Offset&#8221;, also on the EA Forum]]></description><link>https://www.aaronbergman.net/p/post-readout-utilitarians-should</link><guid isPermaLink="false">https://www.aaronbergman.net/p/post-readout-utilitarians-should</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Mon, 06 Oct 2025 01:48:52 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/175388406/6b5f3caafd5c537b7d874d79e5b125b6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>This is an audio readout of my recent post <a href="https://www.aaronbergman.net/p/utilitarians-should-accept-that-some">Utilitarians Should Accept that Some Suffering Cannot be &#8220;Offset&#8221;</a>, also <a href="https://forum.effectivealtruism.org/posts/je5TiYESSv53tWHC9/utilitarians-should-accept-that-some-suffering-cannot-be-1">on the EA Forum</a></p><p>Enjoy! </p>]]></content:encoded></item><item><title><![CDATA[Utilitarians Should Accept that Some Suffering Cannot be “Offset”]]></title><description><![CDATA[Note: see further discussion on the EA Forum]]></description><link>https://www.aaronbergman.net/p/utilitarians-should-accept-that-some</link><guid isPermaLink="false">https://www.aaronbergman.net/p/utilitarians-should-accept-that-some</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sun, 05 Oct 2025 21:28:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!4nGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Note: see further discussion on the <a href="https://forum.effectivealtruism.org/posts/je5TiYESSv53tWHC9/utilitarians-should-accept-that-some-suffering-cannot-be-1">EA Forum</a></strong></p><div><hr></div><p><em>What follows is the result of my trying to reconcile various beliefs and intuitions I have about the nature of morality, namely why arguments for total utilitarianism seemed so compelling on their own and yet some of the implications seemed not merely weird but morally implausible.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Aaron's Blog is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Intro</h2><p>This post challenges the common assumption that total utilitarianism entails <em>offsetability,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></em><sup> </sup>or that any instance of suffering can, in principle, be offset by sufficient happiness. I make two distinct claims:</p><ol><li><p><strong>Logical</strong>: offsetability does not follow from the five standard premises that constitute total utilitarianism (consequentialism, welfarism, impartiality, summation, and maximization). Instead, it requires an additional, substantive, plausibly false premise.</p></li><li><p><strong>Metaphysical</strong>: some suffering in fact cannot be morally justified (&#8220;offset&#8221;) by any amount of happiness.</p></li></ol><p>While related, the former, weaker claim stands independently of the latter, stronger one.</p><h1>How to read this post</h1><p>Different readers will find different parts most relevant to their concerns:</p><p><strong>If you believe the math or logic of utilitarianism inherently requires offsetability</strong> (that is, if you think &#8220;once we accept utilitarian premises, we&#8217;re logically committed to accepting that torture could be justified by enough happiness&#8221;), <strong>start with Part I</strong>. There I show why this common assumption is mistaken.</p><p><strong>If you&#8217;re primarily interested in whether extreme suffering can actually be offset</strong> (that is, if you already see offsetability as an open philosophical question rather than a logical necessity), <strong>you may wish to skip directly to Part II</strong>, where I argue the more substantive metaphysical claim.</p><h1>Part I: The logical claim</h1><p><em>Offsetability doesn&#8217;t fall out of the math</em></p><h3>A brief aside</h3><p>I&#8217;ve found that two relatively distinct groups tend to be interested in part I:</p><ol><li><p>The <strong>philosophy-brained,</strong> who have taken the implicit &#8220;representation premise&#8221; I discuss below as a given and are primarily interested in conceptual arguments.</p></li><li><p>The <strong>math-brained</strong>, for whom alternatives to the &#8220;representation premise&#8221; are obviously on the table and who are primarily interested in rigorous formalization of my claim.</p></li></ol><p>If it ever feels like I&#8217;m equivocating - perhaps becoming too lax in one sentence and excessively formal in the next, you&#8217;d be right! Sorry. I have tried to put much of the formalization in footnotes, so the math-brained should be encouraged to check those out, but the post isn&#8217;t really optimized for either group.</p><h2>1. 
Introduction: what we take for granted</h2><p>The standard narrative about total utilitarianism goes something like: &#8220;once we accept that rightness depends on consequences, that (for the purpose of this post, hedonic) welfare is what matters, that we should sum welfare impartially across individuals, and that more welfare is better than less, it follows naturally that everything becomes <em>commensurable</em>.&#8221;</p><p>And, more specifically, I mean &#8220;commensurable&#8221; in the sense that all goods and bads fundamentally behave like numbers in the relevant moral calculus: perhaps 15 for a nice day on the beach, -2 for a papercut, and so on.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> If so, it would seem to follow that any instance of suffering can, in principle, be offset by sufficient happiness, and obviously so.</p><p>I think this is false.</p><h2>2. The meaning of utilitarianism and the hidden sixth premise</h2><p><strong>My primary intention here is not to make an argument about how words should be used,</strong> <strong>but rather to make a more substantive claim about what implications follow from certain premises.</strong></p><p>Here I describe what I mean when I talk about total utilitarianism.</p><h3>The Utilitarian Core</h3><p>To the best of my understanding, total utilitarianism is constituted by five necessary and sufficient consensus premises and propositions,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> which I&#8217;ll call the <strong>Utilitarian Core</strong>, or <strong>UC:</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><ol><li><p><strong>Consequentialism</strong>: the rightness of actions depends on their consequences (as opposed to, perhaps, the nature of the acts themselves or adherence to rules).</p></li><li><p><strong>[Hedonic] welfarism</strong>: the only thing that matters morally is the hedonic welfare of sentient beings. Nothing else has intrinsic moral value.</p></li><li><p><strong>Impartiality</strong>: wellbeing matters the same regardless of whose it is, with no special weight for kin relationships, race, gender, species, or other arbitrary characteristics.</p></li><li><p><strong>Aggregation or summation</strong>: the overall value of a state of affairs is determined by aggregating or summing individual wellbeing.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p><strong>Maximization</strong>: the best world is the one with maximum aggregate wellbeing.</p></li></ol><h3>What is left out</h3><p>The UC tells us to maximize the sum of welfare, but remains silent on what exactly is getting summed.</p><p>You can&#8217;t <em>literally</em> add up welfare like apples (i.e., by putting them in a literal or metaphorical basket). In some important sense, then, &#8220;summation&#8221; or &#8220;aggregation&#8221; refers to the claim that the moral state of the world simply <em>is </em>the grouping of the moral states that exist within. 
How exactly to operationalize this via some sort of conceptual/ideal or literal/physical process or model is entirely non-obvious.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><h3>Representation premise.</h3><p>To get universal offsetability, you need more structure than the Utilitarian Core provides. A sufficient additional assumption, if you want offsetability by construction, is to assume that welfare sits on a single number line where we can add people&#8217;s contributions, where every bad has a positive opposite, and where there are no lexical walls that a large enough amount of good could not overcome.</p><p>In practice, I think, this generally looks like <strong>an assumption that all states of hedonic welfare are adequately modeled by the real numbers with standard arithmetic operations.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>The Core itself does not force that choice. At most it motivates a way to combine people&#8217;s welfare that is symmetric across persons and monotone in each person&#8217;s welfare. If you drop either the &#8220;no lexical walls&#8221; condition or the &#8220;every bad has a positive opposite&#8221; condition, offsetability can fail even though you still compare and aggregate.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>Without this additional premise (i.e. of some additional structure such as the one described above), the standard utilitarian framework doesn&#8217;t entail that any amount of suffering can be offset by sufficient happiness.</p><p>The crucial point is that the Representation Premise is not a logical consequence of the Utilitarian Core. <strong>It is a substantive and plausibly false metaphysical claim about the nature of suffering and happiness that typically gets smuggled in without justification.</strong></p><h2>3. Why real-number representation isn&#8217;t obvious</h2><h3>What utilitarianism actually requires</h3><p>The five core premises of utilitarianism establish the need for comparison and aggregation, but they don&#8217;t imply the existence of cardinal units that behave like real numbers. We need only be able to say &#8220;this outcome is better than that one&#8221; and to sum representations of individual welfare into a representation of social welfare.</p><p>One intuitive and a priori plausible operationalization is that any hedonic event corresponds naturally to a real number that accurately represents its moral value. But &#8220;a priori plausible&#8221; doesn&#8217;t mean &#8220;true,&#8221; and indeed the UC does not require this.</p><h3>Where cardinality might hold (and where it might not)</h3><p>To be clear, there are good arguments for <em>partial</em> cardinality in welfare. Setting aside whether they&#8217;re logically implied by UC, I (tentatively) believe that, in a deep and meaningful sense, subjective duration of experience and numbers of relevantly similar persons are cardinally meaningful in utilitarian calculus.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>That is, suffering twice for what feels like as long really is twice as bad. Fifty people enjoying a massage is exactly 25% better than forty people doing so. 
In general, <em>conditioning on some specific hedonic state</em>, person-years (at least when both figures are finite)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a><sup> </sup>really do have properties we associate with real numbers: they are <a href="https://en.wikipedia.org/wiki/Archimedean_property">Archimedean</a>, follow normal rules of arithmetic, and so on.</p><p>But this limited cardinality for duration and population doesn&#8217;t establish that all welfare comparisons map to real numbers. The intensity and qualitative character of different experiences might not admit of the same mathematical treatment. The assumption that they do (e.g., that we can meaningfully say torture is 1,000 or 1,000,000 times worse than a pinprick) is precisely what needs justification.</p><h3>Alternative mathematical structures</h3><p>Many mathematical structures preserve the ordering and aggregation that utilitarianism requires without implying universal offsetability:</p><p><strong>Lexicographically ordered vectors</strong> (R^n with dictionary ordering<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a>) might be the most natural alternative. Here, welfare could have multiple dimensions ordered by priority: catastrophic suffering first, then all forms of wellbeing and lesser suffering. Or perhaps catastrophic suffering, then lexical happiness (&#8220;divine bliss&#8221;), then ordinary hedonic states, or any number of &#8220;levels&#8221; to lexical suffering. This preserves all utilitarian operations while rejecting offsetability between levels.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><p><strong>Hyperreal numbers</strong></p><p>The hyperreal system extends the reals with infinitesimal and unlimited magnitudes. You can map catastrophic suffering to a negative non-finite value, call it &#8722;H_unlimited, and ordinary goods to finite values. Then &#8722;H_unlimited + 1000 is better than &#8722;H_unlimited, so extra happiness still matters, but no finite increase offsets &#8722;H_unlimited. This blocks offsetability while preserving familiar arithmetic.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> <a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a></p><h3>The point</h3><p>I introduce these alternatives not to argue here that any particular mathematical structure is correct, but to illustrate something deeper: <em>there is no special &#8220;math&#8221; constraint above and beyond what the real world permits.</em></p><p>Mathematicians have every right to invent arbitrary exotic, internally consistent systems built on top of their choice of axioms and investigate what follows. But when using math to model reality, axioms are substantive claims about what you think the world is like.</p><p>This matters because in other domains, reality often diverges from our mathematical intuitions. Quantum mechanics requires complex numbers, not just reals. Spacetime intervals don&#8217;t add linearly but combine through curved geometry. The assumption that consciousness and welfare fit neatly on the real number line is a reasonable hypothesis but simply not an obvious truth.</p>
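<p>To make the lexicographic alternative concrete, here&#8217;s a minimal Python sketch. It is purely illustrative (the two-level structure and all of the names are mine, just one way to set things up): welfare is a pair rather than a single real number, aggregation is still just summation, and comparison is dictionary order, so ordinary goods and bads add and compare exactly as usual while no finite amount of ordinary happiness outweighs any catastrophic suffering.</p><pre><code># Illustrative only: welfare as a lexicographically ordered pair
# (catastrophic_suffering, ordinary_welfare) rather than one real number.
from dataclasses import dataclass

@dataclass(frozen=True)
class Welfare:
    catastrophic_suffering: float  # 0 means none; larger is worse
    ordinary_welfare: float        # everyday pleasures and pains, signed

    def __add__(self, other):
        # Aggregation across persons/experiences is componentwise summation.
        return Welfare(
            self.catastrophic_suffering + other.catastrophic_suffering,
            self.ordinary_welfare + other.ordinary_welfare,
        )

    def better_than(self, other):
        # Dictionary order: first minimize catastrophic suffering,
        # then maximize ordinary welfare.
        if self.catastrophic_suffering != other.catastrophic_suffering:
            return self.catastrophic_suffering &lt; other.catastrophic_suffering
        return self.ordinary_welfare &gt; other.ordinary_welfare

torture_plus_bliss = Welfare(1.0, 0.0) + Welfare(0.0, 10**100)
empty_world = Welfare(0.0, 0.0)
print(empty_world.better_than(torture_plus_bliss))       # True: not offset
print(Welfare(0.0, 5.0).better_than(Welfare(0.0, 2.0)))  # True: ordinary goods still compare
</code></pre><p>The ordering and the aggregation operation here are exactly what the UC asks for; what&#8217;s missing is only the extra assumption that the whole thing collapses onto one real number line.</p>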
<p>Perhaps welfare really does map to real numbers with all that entails. Further investigation or compelling philosophical argument may establish this. But, as I wrote in my <a href="https://www.aaronbergman.net/p/my-case-for-suffering-leaning-ethics">original post</a> on this matter, &#8220;if God descends tomorrow to reveal that [all hedonic states correspond to real numbers], we would all be learning something new.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a></p><p>Again, the mathematical framework is just the toolbox. Whether actual experiences can ever map to infinite values within that framework is the separate quasi-empirical and philosophical question that the rest of this post addresses.</p><h2>4. The VNM (non-) problem</h2><p>Defenders of offsetability sometimes invoke the Von Neumann-Morgenstern theorem (&#8220;VNM&#8221;), alleging that VNM proves that rational preferences can be represented by real-valued utility functions. However, this does not hold in our case because non-offsetability implies a rejection of continuity, one of the four conditions required for the theorem to hold.</p><p>I admit this is an extremely understandable error to make, in part because I myself was confused and frankly wrong about the theorem when I first encountered it as an objection. In a <a href="https://benthams.substack.com/p/contra-bergman-on-suffering-focused/comments">reply to me</a> a few years ago, friend and prolific utilitarian blogger Matthew Adelstein (<a href="https://forum.effectivealtruism.org/users/bentham-s-bulldog?mention=user">@Bentham&#8217;s Bulldog</a>) wrote:</p><blockquote><p><em>Well, <a href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#:~:text=In%20decision%20theory%2C%20the%20von,defined%20over%20the%20potential%20outcomes">the vnm formula</a> shows one&#8217;s preferences will be modelable as a [real-valued] utility function if they meet a few basic axioms</em></p></blockquote><p>To which I made the following <a href="https://benthams.substack.com/p/contra-bergman-on-suffering-focused/comment/7633745">incorrect response</a>:</p><blockquote><p><em>VNM shows that preferences have to be modeled by an *ordinal* utility function. You write that&#8230;&#8217;Let&#8217;s say a papercut is -n and torture is -2 billion.&#8217; but this only shows that the torture is worse than the papercut - not that it is any particular amount worse. Afaik there&#8217;s no argument or proof that one state of the world represented by (ordinal) utility u_1 is necessarily some finite number of times better or worse than some other state of the world represented by u_2</em></p></blockquote><p>My first sentence, &#8220;VNM shows that preferences have to be modeled by an *ordinal* utility function,&#8221; was totally incorrect. 
VNM <em>does </em>result in cardinally meaningful utility that respects standard expected value theory, but only <em>conditional</em> on four specific axioms or premises:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a></p><ol><li><p><strong>Completeness</strong>: for any two options, either A is better than B (A &#8827; B), B is better than A (B &#8827; A), or they are of equal moral value (A ~ B)</p></li><li><p><strong>Transitivity</strong>: If A &#8827; B and B &#8827; C, then A &#8827; C</p></li><li><p><strong>Continuity</strong>: If A &#8827; B &#8827; C, there&#8217;s some probability <em>p </em>&#8712; (0, 1) where a guaranteed state of the world B is ex ante morally equivalent to &#8220;lottery <em>p</em>&#183;A + (1-<em>p</em>)&#183;C&#8221; (i.e., a p chance of state of the world A, with the rest of the probability mass on C)</p></li><li><p><strong>Independence</strong>: A &#8827; B if and only if [p&#183;A + (1-p)&#183;C] &#8827; [p&#183;B + (1-p)&#183;C] for any state of the world C and p&#8712;(0,1) (i.e., adding the same chance of the same thing to all world states doesn&#8217;t affect their moral ordering)</p></li></ol><p>The theorem states that <strong>if</strong> these four conditions hold <strong>then </strong>there exists a <em>real-valued</em> utility function <em>u</em> that respects expected value theory<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a>, which implies meaningful cardinality and restriction to the set of real numbers, which in turn implies offsetability.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a></p><p>Quite simply, VNM does not apply in the context of my argument because I reject premise 3, continuity. And, in more general terms, it is not implied by UC.</p><p>More specifically, I claim that there exists no nonzero probability <em>p</em> such that a <em>p</em> chance of some extraordinarily bad outcome (namely, catastrophic suffering) and a (1-<em>p</em>) chance of a good world is morally equivalent to some mediocre alternative. In other words, the value of a state of the world (which includes probability distributions over the future) becomes <em>radically</em> different as you change from &#8220;very small possibility&#8221; of some catastrophic suffering in the future to &#8220;zero.&#8221;</p><p>To be clear, I haven&#8217;t really argued for that conclusion on the merits yet, and reasonable people disagree about this. I will, in Part II. The point here is just that UC does not entail the conditions necessary to imply meaningful cardinality via VNM, at the very least because of the counterexample described just above.</p><h3>Not an epistemic &#8220;red flag&#8221;</h3><p>It&#8217;s worth noting that granting the assumptions needed for VNM to hold is often a good guess. Two of the axioms are essentially entailed by what most people mean by &#8220;rationality,&#8221; three seem on extremely good footing, and all four are decidedly plausible.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a></p><p>But rejecting premise 3, continuity, is perfectly coherent and doesn&#8217;t create the problems often associated with &#8220;irrational&#8221; preferences. An agent with lexical preferences (e.g., one who refuses any gamble involving torture no matter what the potential upside) violates continuity but remains completely coherent and consistent; there are no Dutch books (you can&#8217;t construct a series of trades that leaves them strictly worse off) or money pumps (you can&#8217;t exploit them through repeated transactions). They maintain transitivity and completeness.</p>
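<p>To see concretely what rejecting continuity looks like, here&#8217;s another small, purely illustrative Python sketch in the same spirit as the earlier one (the numbers and the expected-pair valuation rule are just my toy setup, not a formal decision theory): the agent ranks lotteries by expected (catastrophic suffering, ordinary welfare) pairs in dictionary order, so it is complete and transitive, but no probability p strictly less than 1 of the great world makes a gamble that includes catastrophic suffering as good as a mediocre sure thing.</p><pre><code># Illustrative sketch: lexical preferences over lotteries violate continuity
# while remaining complete and transitive. Outcomes are pairs of
# (catastrophic_suffering, ordinary_welfare); lotteries are valued by the
# expected pair and compared in dictionary order.

def lottery_value(p, good, bad):
    # p chance of `good`, (1 - p) chance of `bad`
    return (p * good[0] + (1 - p) * bad[0],
            p * good[1] + (1 - p) * bad[1])

def prefer(a, b):
    # Dictionary order: less catastrophic suffering first, then more welfare.
    if a[0] != b[0]:
        return a[0] &lt; b[0]
    return a[1] &gt; b[1]

great_world = (0.0, 10**9)    # A: wonderful, zero catastrophic suffering
mediocre_world = (0.0, 1.0)   # B: barely worth living
torture_world = (1.0, 0.0)    # C: catastrophic suffering

# Continuity would require some p in (0, 1) at which B is indifferent to the
# lottery p*A + (1-p)*C. For this agent, B strictly beats the lottery for
# every p below 1, so no such p exists.
for p in (0.9, 0.999999, 0.999999999999):
    lottery = lottery_value(p, great_world, torture_world)
    print(prefer(mediocre_world, lottery))  # True every time
</code></pre>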
<h1>Part II: The metaphysical claim</h1><p><em>Some suffering actually can&#8217;t be offset</em></p><p>I now turn to the stronger claim that some suffering actually cannot be offset by any amount of happiness.</p><h2>5. The argument from idealized rational preferences</h2><h3>The setup: you are everyone</h3><p>Imagine that you become an <strong>Idealized Hedonic Egoist (IHE)</strong>. In this state, you are maximally rational:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a><sup> </sup>you make no logical errors, have unlimited information processing capacity, complete information about experiences with perfect introspective access, and full understanding of what any hedonic state would actually feel like. You care only about your own pleasure and suffering, in exact proportion to their hedonic significance.</p><p>Now imagine that as this idealized version of yourself, you will experience <em>everyone&#8217;s</em> life in a given outcome. Under this &#8220;experiential totalization&#8221; (ET), you live through all the suffering and all the happiness that would exist. For a hedonic total utilitarian, this creates a perfect identity: your self-interested calculation becomes the moral calculation. What&#8217;s best for you-who-experiences-everyone is precisely what utilitarianism says is morally best.</p><h3>The question</h3><p>As this idealized being who will experience everything, you face a choice: Would you accept 70 years of the worst conceivable torture in exchange for any amount of happiness afterward?</p><p>Take a moment to really consider what &#8220;worst conceivable torture&#8221; means. Our brains aren&#8217;t built for this, but we can reason by analogy: being boiled alive; the terror of your worst nightmare; the horror and existential regret of a mother watching her son fall to his death after reluctantly telling him he could play near the canyon edge; slowly asphyxiating as your oxygen runs out. All mitigating biological relief systems that sometimes give you a hint of meaning or relief even as you suffer would be entirely absent. All of these at once, somehow, and more. For 70 years.</p><p>Imagine what follows, as well, by all means: falling in love, peak experiences, the <a href="https://asteriskmag.com/issues/06/manufacturing-bliss">jhanas</a>, drowning in unfathomable bliss, love, awe, glory, interest, excitement, gratitude, connection, and wonder. 
Not just for 70 years but for millennia, eons, until the heat death of the universe.</p><p>As an IHE who will experience all of this, knowing exactly what each part would feel like, do you take this deal?</p><p>As a matter of simple descriptive fact, I, Aaron, would not, and I don&#8217;t think I would if I were ideally rational either.</p><p>I also imagine accepting the deal and later being asked, with all the suffering behind me, &#8220;was it worth it?&#8221; And I think I would say &#8220;no, it was a terrible mistake.&#8221;</p><h3>The burden of idealization</h3><p>Some readers might think &#8220;I wouldn&#8217;t personally take this trade, but that&#8217;s just bias. The perfectly rational IHE would, so I would too if I became perfectly rational.&#8221;</p><p>This response deserves scrutiny, particularly if and once you&#8217;ve accepted the argument in part I that offsetability is not logically or mathematically inevitable.</p><p>To claim the IHE would accept what you&#8217;d refuse requires believing that your cognitive biases not only persist in spite of, but essentially circumvent and overcome, a conceptual setup specifically designed to elicit the epistemic clarity that comes with self-interest and conceptually simple trades on offer.</p><p>There is a clear similarity between this thought experiment and the conceptual and empirical use of <a href="https://en.wikipedia.org/wiki/Revealed_preference">revealed preference</a> in social science, especially economics.</p><p>To argue that the revealed hypothetical preference of this thought experiment is fundamentally wrong or misleading by the standard of abstract rationality and hedonic egoism is <strong>not</strong> analogous to arguing that a specific empirical context leads consumers to display behavior that diverges from the predictions of some simplified model of rational behavior; it is analogous to arguing that a specific context leads consumers to behave in a way that is fundamentally contrary to their truest and most ultimate values and preferences. This latter claim is a much stronger one.</p><h3>What this reveals</h3><p>If you share my conviction that you-as-IHE would refuse the torture trade, then you should be deeply suspicious of any moral theory that says creating such trades is not just acceptable but sometimes obligatory. The thought experiment asks you to confront what you actually believe about extreme suffering when you would be the one experiencing all of it. You can&#8217;t hide behind aggregate statistics or philosophical abstractions.</p><h3>Not a proof</h3><p>I recognize that this thought experiment is merely an intuition pump - directional evidence, not a proof.</p><p>I don&#8217;t expect to convince all readers, but I&#8217;d be largely satisfied if someone reads this and says: &#8220;You&#8217;re right about the logic, right about the hidden premise, right about the bridge from IHE preferences to moral facts, but I would personally, both in real life and as an IHE, accept literally anything, including a lifetime of being boiled alive, for sufficient happiness afterward.&#8221;</p><p>This, I claim, should be the real crux of any disagreement.</p><p>To explicitly link this to Part I: what the IHE would choose is a fundamental question about the nature of hedonic states. It doesn&#8217;t &#8220;fall out&#8221; of any axioms or mathematical truths. Any mathematical modeling must be built up from interaction with the territory. 
The IHE thought experiment, I claim, is an especially epistemically productive way of exploring that territory, and indeed of doing moral philosophy more broadly.</p><h2>6. The implications of universal offsetability are <em>especially</em> implausible</h2><p>Most utilitarians I know are deeply motivated by preventing and alleviating suffering. They dedicate their time, money, and sometimes entire careers to reducing factory farming and preventing painful diseases.</p><p>Yet the theory many of them endorse says something quite different. Universal offsetability doesn&#8217;t just permit creating extreme suffering when necessary; it can enthusiastically endorse package deals that contain it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a></p><p>If any suffering can be offset by sufficient happiness, then creating a being to be boiled alive for a trillion years is acceptable not because all alternatives include more or worse suffering, but merely because it&#8217;s part of an all-or-nothing package deal with sufficiently many happy beings along for the ride.</p><p>When I present this trade to utilitarian friends and colleagues, many recoil. They search for reasons why this particular trade might be different, why the theory doesn&#8217;t really imply what it seems to imply. Some bite the bullet (out of what I sense is a belief that such unpalatable conclusions follow from very compelling premises - the thing that part I of this essay directly challenges). Very few genuinely embrace it.</p><p>I think their discomfort is correct and their theory is wrong.</p><h3>The moral difference</h3><p>There&#8217;s a profound difference between these scenarios:</p><ol><li><p><strong>Accepting tragic tradeoffs</strong>: Allowing, or even creating, some suffering because it&#8217;s the only way to prevent more or more intense suffering</p></li><li><p><strong>Creating offsetting packages</strong>: Actively creating torture chambers because you&#8217;ve also created enough pleasure to &#8220;balance the books&#8221;</p></li></ol><p>The former involves minimizing harm in tragic circumstances. Every moral theory faces these dilemmas. But the second involves creating more extreme suffering than would have otherwise existed, justified solely by also creating positive wellbeing. The theory says that while we might regret the suffering component, the overall package is not just acceptable but <em>optimal</em>. We should prefer a world with both the torture and the offsetting happiness to one with neither.</p><p>Scale this up and offsetability doesn&#8217;t reluctantly permit but instead actively recommends creating billions of beings in agony until the heat death of the universe, as long as we create enough happiness to tip the scales. The suffering isn&#8217;t a necessary evil; it&#8217;s part of a package deal the theory endorses as an improvement to the world.</p><p>When your theory tells you to endorse deals that create vast torture chambers (even while regretting the torture component), the problem isn&#8217;t with your intuitions but with the hidden premises that feel from the inside like they&#8217;re forcing your hand.</p><h2>7. The asymptote is the radical part</h2><p>In this section I offer a conceptual reframing that draws attention away from the suffering severe enough to warrant genuine conceptual lexicality and towards the suffering that is slightly less severe. 
I argue that, insofar as my view is radical, the radical part of my view happens <em>before</em> the lexical threshold, in what appears to be the &#8220;normal&#8221; offsetable range.</p><p>To see why, let&#8217;s use a helpful conceptual framework:</p><ul><li><p><strong>Instruments</strong>: measurable proxies that track suffering and happiness.</p><ul><li><p>A suffering instrument (i_s) could be neurons engaged in pain signaling or temperature of an ice bath. A happiness instrument (i_h) might be neurons in reward processing or some measure of endocannabinoid release. For our purposes, these are entirely conceptual devices. These instruments need only be <em>monotonic</em>: more instrument reliably indicates more of what it measures, at least within some relevant range.</p></li></ul></li><li><p><strong>Compensation schedule</strong>: i_h = &#981;(i_s) tells us how much happiness instrument is needed to offset or morally justify a given amount of suffering instrument.</p><ul><li><p>Again, we can invoke the idealized hedonic egoist - the compensation schedule functions as an indifference curve of this agent, passing through the point of neutral or absent experience.</p></li></ul></li></ul><h3>Why instruments?</h3><p>Trying to invoke &#8220;quantities&#8221; of happiness and suffering in the context of a discourse that references specific qualia or experiences, the abstract pre-moral &#8220;ground truth&#8221; intensity of those experiences, the abstract moral value of those experiences, and various discussion participants&#8217; notions of or claims about the relationship between any of these concepts is extraordinarily conducive to miscommunication and lack of conceptual clarity, even under the best of epistemic circumstances.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a></p><p>More concretely, I have observed a natural and understandable failure mode in which one attempts to map &#8220;suffering&#8221; (as a quantitative variable) to something like &#8220;how much that suffering matters&#8221; (another quantitative variable). But such a relationship is, in the context of hedonic utilitarianism, some combination of trivial (because under hedonic utilitarianism, suffering and the moral value of suffering are intrinsically 1:1 if not conceptually identical) and confused.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a></p><p>Instruments break this circularity by grounding discussion in concrete, in-principle-measurable properties that virtually all people and conceptual frameworks can agree on. We define compensation through idealized indifference rather than positing mysterious common units. The moral magnitudes can remain ordinal within each channel; the compensation schedule provides the cross-calibration.</p><h3>The compensation schedule&#8217;s structure</h3><p>I claim that as i_s approaches some threshold from below, &#981;(i_s) grows without bound, creating a vertical asymptote at the threshold. Beyond it, no finite happiness instrument can compensate.</p>
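<p>As a toy illustration, here&#8217;s a short Python sketch of a compensation schedule with exactly this asymptotic structure. The functional form, threshold, and units are placeholders I&#8217;m making up for exposition, not a claim about what the true schedule looks like.</p><pre><code># Illustrative only: a compensation schedule i_h = phi(i_s) with a vertical
# asymptote at a lexical threshold T. Both the hyperbolic form and the
# numbers are placeholders.
import math

T = 5.0  # threshold beyond which no finite happiness compensates

def compensation(i_s, k=1.0):
    """Happiness instrument required to offset suffering instrument i_s."""
    if i_s &lt;= 0:
        return 0.0
    if i_s &gt;= T:
        return math.inf          # past the threshold: not offsetable
    return k * i_s / (T - i_s)   # grows without bound as i_s approaches T

for i_s in (1.0, 4.0, 4.9, 4.999, 5.0):
    print(i_s, compensation(i_s))
# Required compensation explodes smoothly as i_s approaches T, rather than
# jumping discontinuously from a modest finite value straight to infinity.
</code></pre>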
Beyond it, no finite happiness instrument can compensate.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4nGE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4nGE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 424w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 848w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 1272w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4nGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp" width="1456" height="690" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:690,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:58316,&quot;alt&quot;:&quot;This image compares two models of how much happiness is needed to compensate for increasing suffering as it approaches a catastrophic threshold.The Two ModelsLeft (Asymptotic Model - \&quot;Correct View\&quot;):Red curve grows gradually then shoots upward exponentially as suffering approaches the threshold at x=5Compensation requirements grow without bound, approaching infinity smoothlyBeyond the threshold (pink shaded area), no finite compensation is possibleRight (Discontinuous Model - \&quot;Naive View\&quot;):Blue curve shows moderate growth until just before the thresholdThen suddenly jumps from a finite value to infinity at x=5No warning or gradual transition - just an arbitrary leap to non-offsetability (blue shaded area)Key DifferenceThe asymptotic model shows compensation becoming astronomical (10^100, 10^1000...) before the threshold, making the transition to \&quot;infinite badness\&quot; a natural limit rather than an arbitrary jump. 
The discontinuous model treats the threshold as a mysterious bright line where suffering suddenly becomes categorically different.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aaronbergman.net/i/175296598?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="This image compares two models of how much happiness is needed to compensate for increasing suffering as it approaches a catastrophic threshold.The Two ModelsLeft (Asymptotic Model - &quot;Correct View&quot;):Red curve grows gradually then shoots upward exponentially as suffering approaches the threshold at x=5Compensation requirements grow without bound, approaching infinity smoothlyBeyond the threshold (pink shaded area), no finite compensation is possibleRight (Discontinuous Model - &quot;Naive View&quot;):Blue curve shows moderate growth until just before the thresholdThen suddenly jumps from a finite value to infinity at x=5No warning or gradual transition - just an arbitrary leap to non-offsetability (blue shaded area)Key DifferenceThe asymptotic model shows compensation becoming astronomical (10^100, 10^1000...) before the threshold, making the transition to &quot;infinite badness&quot; a natural limit rather than an arbitrary jump. The discontinuous model treats the threshold as a mysterious bright line where suffering suddenly becomes categorically different." title="This image compares two models of how much happiness is needed to compensate for increasing suffering as it approaches a catastrophic threshold.The Two ModelsLeft (Asymptotic Model - &quot;Correct View&quot;):Red curve grows gradually then shoots upward exponentially as suffering approaches the threshold at x=5Compensation requirements grow without bound, approaching infinity smoothlyBeyond the threshold (pink shaded area), no finite compensation is possibleRight (Discontinuous Model - &quot;Naive View&quot;):Blue curve shows moderate growth until just before the thresholdThen suddenly jumps from a finite value to infinity at x=5No warning or gradual transition - just an arbitrary leap to non-offsetability (blue shaded area)Key DifferenceThe asymptotic model shows compensation becoming astronomical (10^100, 10^1000...) before the threshold, making the transition to &quot;infinite badness&quot; a natural limit rather than an arbitrary jump. The discontinuous model treats the threshold as a mysterious bright line where suffering suddenly becomes categorically different." 
srcset="https://substackcdn.com/image/fetch/$s_!4nGE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 424w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 848w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 1272w, https://substackcdn.com/image/fetch/$s_!4nGE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffabd56e-ac1a-4b65-ba1f-ed2767f80b88_1599x758.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Why this Is already radical</h3><p>The radical implications (insofar as you think any of this is radical) aren&#8217;t at the threshold but in the approach to it. The compensation schedule growing without bound (i.e., asymptotically) means that some sub-threshold suffering would require 10^(10^10) happy lives to offset, or 1000^(1000^1000). Pick your favorite unfathomably large number - the real-valued asymptote passes that early on its way to infinity.</p><p>Once you accept that compensation can reach unfathomable heights while remaining not literally infinite, the step from there to &#8220;infinite&#8221; is small in an important sense. See the image above for a graphical comparison between this view and a naive, less plausible view in which there is a sudden discontinuous jump at the point of lexicality.</p><p>Note that my framework leaves quite a bit of room for internal specification. See the following graphic for representations of various models that all fit within the framework I&#8217;m arguing for. 
The actual, specific shapes of the compensation curve and asymptote are hard but tractable questions for science and moral philosophy to make progress on.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o5m2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec71af7c-eb78-4ccd-b563-4aa2daf55c10_1600x1365.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!o5m2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec71af7c-eb78-4ccd-b563-4aa2daf55c10_1600x1365.webp" width="1456" height="1242" class="sizing-normal" loading="lazy" alt="Four graphs showing different mathematical models of how required compensation for suffering increases as it approaches a catastrophic threshold at x=5. All four curves eventually reach infinity at the threshold, but with different rates of increase: 'Extremely Sudden' (purple) stays nearly flat until x=4.5 then shoots up vertically; 'Moderately Sudden' (red) remains low until x=4 then curves sharply upward; 'Gradual' (blue) rises steadily across the entire range; and 'Ultra Gradual' (green) begins climbing early at x=2 with a smooth exponential curve. The area beyond the threshold (x&gt;5) is shaded to indicate the region where no finite compensation is possible."></div></a></figure></div>
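<p>To make the family of models above concrete, here is a minimal numerical sketch in Python (mine, not from the original graphic) of compensation functions that all fit the framework: finite below the threshold, growing without bound as suffering intensity approaches it, and infinite at and beyond it. The functional forms, the constant 5.0 as the threshold, and the sample points are illustrative assumptions only, not claims about the true shape of the curve.</p><pre><code># Illustrative sketch only: a family of compensation functions phi(i_s) that
# stay finite below an assumed threshold i_s* = 5.0 and diverge as i_s
# approaches it. The forms and numbers are stand-ins, not the "true" curve.
import math

THRESHOLD = 5.0  # i_s*: assumed location of the lexical threshold

def make_phi(sharpness):
    """Return a compensation function that diverges at THRESHOLD.

    Larger `sharpness` keeps the curve flatter for longer and makes the
    final blow-up more sudden (cf. 'Ultra Gradual' vs. 'Extremely Sudden')."""
    def phi(i_s):
        if i_s &gt;= THRESHOLD:
            return math.inf  # at or beyond the threshold: no finite compensation
        return (1.0 / (THRESHOLD - i_s)) ** sharpness
    return phi

curves = {"ultra gradual": make_phi(1), "gradual": make_phi(2),
          "moderately sudden": make_phi(8), "extremely sudden": make_phi(32)}

for name, phi in curves.items():
    # Sub-threshold values already blow past any number you could care to name,
    # long before the function actually hits infinity at the threshold itself.
    samples = [phi(x) for x in (3.0, 4.5, 4.999, 4.999999)]
    print(name, [f"{v:.3g}" for v in samples], phi(THRESHOLD))
</code></pre>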
<h2>8. Continuity and the location of the threshold</h2><p>Critics object that lexical thresholds create arbitrary discontinuities where marginal changes flip the moral universe. This misunderstands the mathematical structure. As illustrated in the graphics above, the threshold is the limit point of a continuous process: as suffering intensity i_s approaches the threshold i_s*, the compensation function &#981;(i_s) approaches infinity. Working in the extended reals, this is left-continuous: lim [i_s &#8594; i_s*] &#981;(i_s) = +&#8734; = &#981;(i_s*).</p><p>To be clear, whether we call this behavior &#8216;continuous&#8217; depends on mathematical context and convention. In standard calculus, a function that approaches infinity exhibits an infinite discontinuity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a></p><p>I&#8217;m not arguing about which terminology is correct. The substantive point, which holds regardless of vocabulary, is that the transition to non-offsetability emerges naturally from an asymptotic process where compensation requirements grow without bound.</p><h3>Where the threshold falls</h3><p>The precise location of i_s* admittedly involves <em>some</em> arbitrariness. Why does the compensation function diverge at, say, the intensity of cluster headaches rather than slightly above or below?</p><p>This arbitrariness diminishes somewhat (though, again, not entirely) when viewed through the asymptotic structure. Once we accept that compensation requirements grow without bound as suffering intensifies, <em>some</em> threshold becomes inevitable. The asymptote must diverge somewhere; debates about exactly where are secondary to recognizing the underlying pattern.</p><h2>9. From arbitrarily large to infinite: a small step</h2><p>Many orthodox utilitarians accept that compensation requirements can grow without bound. They&#8217;ll grant that &#8220;for any amount of happiness M, no matter how large, there&#8217;s some conceivable form of suffering that would require more than M to offset.&#8221;</p><p>This is substantial common ground. We share the recognition that there&#8217;s no ceiling on how much compensation suffering might require. This unbounded growth has practical implications even before reaching any theoretical threshold.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a></p><p>Once you&#8217;ve accepted that morally justifying some suffering might, even in principle, require a number of flourishing lives that you could not write down, compute, or physically instantiate, the additional step to &#8220;infinite&#8221; is smaller in some important conceptual sense than it might seem prima facie.
The step to infinity requires accepting something qualitatively new but not especially radical.</p><p>This is <em>not</em> to say that all major disagreement is illusory.</p><p>Rather, my point here is that the important questions and cruxes of substantial disagreement involve the actual moral value of various states of suffering, not the intellectually interesting but sometimes-inconsequential question of whether the required compensation is in-principle representable by an unfathomably large but finite number.</p><p>In other words, let us consider a specific, concrete case of extreme suffering: say a cluster headache lasting for one hour.</p><p>Here, the lexical suffering-oriented utilitarian who claims that this crosses the threshold of in-principle compensability has much more in common with the standard utilitarian who thinks that in principle creating such an event would be morally justified by <a href="https://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem">TREE(3)</a> flourishing human life-years than the latter utilitarian has with the standard utilitarian who claims that the required compensation is merely a single flourishing human life-month.</p><h2>10. The phenomenology of extreme suffering</h2><p>A fundamental epistemic asymmetry underlies this entire discussion: we typically theorize about extreme suffering from positions of relative comfort. This gap between our current experiential state and the phenomena we&#8217;re analyzing may systematically bias our understanding in ways directly relevant to the offsetability debate.</p><p>Both language and memory prove inadequate for conveying or preserving the qualitative character of intense suffering. Language functions through shared experiential reference points, but extreme suffering often lies outside common experience. Even those who have experienced severe pain typically cannot recreate its phenomenological character in memory; the actual quality fades, leaving only abstract knowledge that suffering occurred. When we model suffering as negative numbers in utility calculations, we are operating with fundamentally degraded data about what we&#8217;re actually modeling.</p><p>The testimony of those who have experienced extreme suffering deserves serious epistemic weight here. Cluster headache sufferers describe pain that drives them to <a href="https://www.google.com/books/edition/The_5_Minute_Sports_Medicine_Consult/-LOm9enAxQ8C?hl=en&amp;gbpv=1&amp;pg=PA87&amp;printsec=frontcover">self-harm or suicide</a> for relief. To quote <a href="https://web.archive.org/web/20110922070249/https://www.abc.net.au/rn/talks/8.30/helthrpt/stories/s42434.htm">one patient</a> at length:</p><blockquote><p>It&#8217;s like somebody&#8217;s pushing a finger or a pencil into your eyeball, and not stopping, and they just keep pushing and pushing, because the pain&#8217;s centred in the eyeball, and nothing else has ever been that painful in my life.
I mean I&#8217;ve had days when I&#8217;ve thought &#8216;If this doesn&#8217;t stop, I&#8217;m going to jump off the top floor of my building&#8217;, but I know that they&#8217;re going to end and I won&#8217;t get them again for three or five years<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a></p></blockquote><p>Akathisia victims report states they judge <a href="https://journals.ust.edu/index.php/yjms/article/view/249/226">&#8220;worse than hell,&#8221; driving some to suicide</a>:</p><blockquote><p>I am unable to rest or relax, drive, sleep normally, cook, watch movies, listen to music, do photography, work, or go to school. Every hour that I am awake is devoted to surviving the intense physical and mental torture. Akathisia causes horrific non-stop pain that feels like you are being continually doused with gasoline and lit on fire.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-28" href="#footnote-28" target="_self">28</a></p></blockquote><p>The systematic inaccessibility of extreme suffering from positions of comfort is a profound methodological limitation that moral philosophy must recognize and mitigate with the evidential help of records or testimonies from those who have experienced the extremes.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-29" href="#footnote-29" target="_self">29</a></p><h2>11. Addressing major objections</h2><p>Let me address the most serious objections to the view that I have not already discussed. Some have clean responses while others reveal genuine uncertainties.</p><h3>Time-granularity problem</h3><p><em>Does even a second of extreme suffering pass the lexical threshold? A nanosecond? Far shorter still?</em></p><p>I began writing this post eager to bite the bullet, to insist that any time in a super-lexical state of extreme suffering, however brief, is non-offsetable.</p><p>But I am no longer confident; I don&#8217;t trust my intuitions either way, and I lack a strong sense of what an Idealized Hedonic Egoist would choose when faced with microseconds of otherwise catastrophic suffering.</p><p>To flesh out my uncertainty and some complicating dynamics a bit: it seems plausible to me that the physical states corresponding to intense suffering do not in fact cash out as the &#8220;steady state&#8221; intense suffering one would expect if that situation were to continue; that is, a nanosecond of placing one&#8217;s hand on the frying pan as a psychological and neurological matter isn&#8217;t in fact subjectively like an arbitrary nanosecond from within an hour of keeping one&#8217;s hand there. This may be a sort of distorting bias that complicates communication and conceptual clarity when thinking through short time durations.</p><p>On the other hand, at an intuitive level I can&#8217;t quite shake my sense that even controlling for &#8220;true intensity,&#8221; there is something about very short (subjective) durations that meaningfully bears on the moral value of a particular event.</p><p>Quite simply, this is an open question to me.</p><h3>Extremely small probabilities of terrible outcomes</h3><p><em>Does even a one in a million chance of extreme suffering pass the lexical threshold? One in a trillion? 
Far less likely than that?</em></p><p>I do bite the bullet on this one, and think that morally we ought to pursue <em>any</em> nonzero reduction of the probability of extreme, super-lexical suffering. Let me say more about why.</p><p>I&#8217;ve come to this view only after trying and failing to talk myself out of it (i.e., in the process of coming to the views presented in this post).</p><p>Under standard utilitarian theory, we can multiply both sides of any moral comparison by the same positive constant and preserve the moral relationship. This means that a 10^(-10) chance of extreme torture for life plus one guaranteed blissful life is morally good if and only if one lifetime of extreme torture plus 10^10 blissful lives is morally good. I accept this &#8220;if and only if&#8221; statement as such.</p><p>Presented this way, the second formulation makes the moral horror explicit: we&#8217;re not just accepting risk but actively endorsing the creation of actual extreme torture as part of a positive package deal. And now we&#8217;re back to the same arguments for why extreme suffering does not become morally justifiable in exchange for <em>any</em> amount of wellbeing (the IHE and such).</p>
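<p>To see the scaling move concretely, here is a minimal numerical sketch in Python (mine, not part of the original argument), written under the standard real-valued assumption that the rest of this post questions. The utility numbers are arbitrary stand-ins; the only point is that multiplying both sides of a comparison by the same positive constant can never flip its sign.</p><pre><code># Illustrative only: under real-valued expected-value reasoning, scaling a
# moral comparison by a positive constant preserves the ordering.
# The utility numbers below are arbitrary stand-ins, not considered estimates.
U_TORTURE_LIFE = -1e9   # assumed (dis)value of a lifetime of extreme torture
U_BLISS_LIFE = 1.0      # assumed value of one blissful life

def ev_package(p_torture, n_bliss_lives):
    """Expected value of: a p_torture chance of a torture-life, plus n blissful lives."""
    return p_torture * U_TORTURE_LIFE + n_bliss_lives * U_BLISS_LIFE

small_risk = ev_package(1e-10, 1)    # 10^-10 chance of torture + 1 blissful life
scaled_up  = ev_package(1.0, 1e10)   # 1 torture-life + 10^10 blissful lives

# Both packages are compared against doing nothing (value 0). Scaling by 10^10
# multiplies the expected value by a positive number, so the sign cannot change.
assert (small_risk &gt; 0) == (scaled_up &gt; 0)
print(small_risk, scaled_up)  # 0.9 and 9000000000.0 with these stand-in numbers
</code></pre>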
<p>I am happy to admit my slight discomfort - my brain, it seems, <em>really</em> wants to round astronomically unlikely probabilities to zero. But in a quite literal sense, small probabilities are not zero, and indeed correspond to actual, definite suffering under some theories of quantum mechanics and cosmology (i.e., the Everettian multiverse, to the best of my lay understanding).</p><h3>Evolutionary explanations of intuitive asymmetry</h3><p>The objection is some version of: &#8220;Evolutionary fitness can be essentially entirely lost in seconds but gained only gradually; even sex doesn&#8217;t increase genetic fitness to nearly the same degree that being eaten alive decreases it.&#8221; This offers a plausible alternative to &#8220;moral truth&#8221; as an explanation for why we have the intuition that suffering is especially important.</p><p>I actually agree this has some evidential force; I just don&#8217;t think it is especially strong or overwhelming relative to other, contrary evidence that we have.</p><p>Evolution created many different intuitions, affective states, emotions, etc., that do not <em>directly</em> or <em>intrinsically</em> track deep truths about the universe but can, in combination with our general intelligence and reflective ability, serve as motivation for or be bootstrapped into learning genuine truths about the world.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-30" href="#footnote-30" target="_self">30</a></p><p>Perhaps most notably, we have some sort of moral or quasi-moral intuitions that may have tracked, e.g., game-theoretic dynamics and purely instrumental cooperation in the ancestral environment; but (at least if you&#8217;re not a nihilist) you probably think that these intuitions simply do happen to at least partially track a genuine feature of the world which we call morality.</p><p>Reflection, refinement, debate, and culture can serve to take intuitions given to us by the happenstance of evolution and ascertain whether they correspond to truth entirely, in part, or not at all.</p><p>For example, we might reflect on our kin-oriented intuitions and conclude that it is not in fact the case that strangers far away have less intrinsic moral worth. We might reflect on our intuition about caring for our friends and family and conclude that something like or in the direction of &#8220;caring&#8221; really does matter in a trans-intuitive sense.</p><p>This is what I claim we can and should do in the context of intuitions about the nature of hedonic experience. There&#8217;s no rule that evolution can&#8217;t accidentally stumble upon moral truth.</p><p>The phenomenological evidence, especially, remains almost untouched by this objection. When someone reports that no happiness would be worth the cluster headache they are having <em>right now</em>, that is a hypothesis whose truth value needn&#8217;t change according to how good pleasure can get.</p><h3>&#8220;Doesn&#8217;t this endorse destroying the world?&#8221;</h3><p>This common objection, often presented as a reductio, deserves a careful response.</p><p>First, this isn&#8217;t unique to suffering-focused views. Traditional utilitarianism also endorses world-destruction when all alternatives are worse. If the future holds net negative utility, standard utilitarianism says ending it would be good.</p><p>Second, this isn&#8217;t strong evidence against the underlying truth of suffering-focused views. Consider scenarios where the only options are (1) a thousand people tortured forever with no positive wellbeing whatsoever or (2) painless annihilation of all sentience. Annihilation seems obviously preferable.</p><p>Third, the correct response isn&#8217;t rejecting suffering-focused views but recognizing moderating factors:</p><p><strong>Moral uncertainty</strong></p><p>I don&#8217;t have 100% confidence in any moral view. There might be deontological constraints or considerations I&#8217;m missing, and it&#8217;s worth making explicit that I&#8217;m not literally 100% certain in either thesis of this post.</p><p><strong>Cooperation and moral trade</strong></p><p>I, and other suffering-focused people I know, strongly value cooperation with other value systems, recognizing that moral trade and compromise matter even when you think others are mistaken.</p><p><strong>Virtual impossibility</strong></p><p>This point, I think, is greatly underrated in the context of this objection and related discussions.</p><p><em>Actually destroying all sentience and preventing its re-emergence is essentially impossible with current or foreseeable technology. It is quite literally not an option that anyone has.</em></p><p>This point is suspiciously convenient, I recognize, but it also happens to be true.</p><p>Anti-natalism doesn&#8217;t actually result in human extinction except under the most absurd of assumptions.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-31" href="#footnote-31" target="_self">31</a> Killing all humans leaves wild animals. Killing all life on earth permits novel biogenesis and re-evolution. Destroying Earth doesn&#8217;t eliminate aliens.
AI takeover scenarios involve a different, plausibly morally worse agent in control of the future and digital sentience.</p><p>At the risk of coming across as pompous, the suggestion that anything near my ethical views entails literal, real-life efforts to harm any human falls apart under even the mildest amount of serious and earnest scrutiny and, in my experience, seems almost entirely motivated by the desire to dismiss substantive and plausible ethical claims out-of-hand.</p><p>I want to be entirely intellectually honest here; I can <em>imagine</em> worlds in which a version of my view indeed suggests actions that would result in what most people would recognize as harm or destruction.</p><p>For instance, we can suppose that we had an extremely good understanding of physics and acausal coordination and trade across the Everettian multiverse and also some mechanism of precipitating a hypothetical universe-destroying phenomenon known as &#8220;<a href="https://forum.effectivealtruism.org/posts/CFv82Xt2kuvvjNvP8/vacuum-decay-expert-survey-results-1">vacuum collapse</a>&#8221; and furthermore were quite sure that precipitating vacuum collapse reliably reduces the expected amount of non-offsetable suffering throughout the multiverse. At least a naive unilateralist&#8217;s understanding of my theory might indeed suggest that we should press the vacuum collapse button.</p><p>Fair enough; we can discuss this scenario just like we can discuss the possibility of standard utilitarianism confidently proclaiming that we ought to create a trillion near-eternal lives of unfathomable agony for enough mildly satisfied pigeons.</p><p>In both cases, though, moral discourse needs to recognize that as a matter of empirical fact there is actually no possibility of you or me or anyone doing either of these things in the immediate future. Neither theory is an infohazard, and both need to be discussed in earnest on the merits.</p><p><strong>Irreversibility considerations</strong></p><p>Irreversible actions that can be accomplished by a single entity or group warrant extra caution beyond simple expected value calculations. The permanence of annihilation requires a higher certainty bar than other interventions.</p><p>This is particularly important given the unilateralist&#8217;s curse: when multiple agents independently decide whether to take an irreversible action, the action becomes more likely to occur than is optimal. Even if nine out of ten careful reasoners correctly conclude that annihilation would be net negative, the single most optimistic agent determines the outcome if they can act unilaterally.</p><p>This systematic bias toward action becomes especially dangerous with permanent consequences. The appropriate response isn&#8217;t to abandon moral reasoning but to recognize that irreversible actions accessible to small groups require not just positive expected value by one&#8217;s own lights, but (1) robust consensus among thoughtful observers, (2) explicit coordination mechanisms that prevent unilateral action, and/or (3) confidence levels that account for the selection effect where one is likely the most optimistic evaluator among many.</p><h3>General principle</h3><p>Most fundamentally, it is better to pursue correct ethics, wherever that may lead, and then add extra-theoretical, conservative, cooperation- and consensus-based guardrails than to start with an absolute premise that one&#8217;s actual ethical theory simply cannot have counterintuitive implications.</p><h2>12.
Conclusion</h2><h3>Implications</h3><p>Dozens, hundreds, or thousands of pages could be written about how the claims I&#8217;ve made in this post cash out in the real world, but to gesture at a few intuitive possibilities, I suspect that it implies allocating more resources to preventing and reducing extreme suffering, being more cautious about creating suffering-capable beings, and taking s-risks seriously. These are reasonable and, more importantly, plausibly true conclusions.</p><p>Indeed, more ought to be written on this, and I&#8217;d encourage my future self and others to do just this.</p><h3>We keep what&#8217;s compelling</h3><p>The view I&#8217;ve outlined is a refinement to orthodox total utilitarian thinking; we preserve what&#8217;s compelling while dropping an implausible commitment that was never required or, to my knowledge, explicitly justified.</p><p>The core insights of the Utilitarian Core remain intact:</p><ul><li><p><strong>Consequentialism</strong>: what matters is what happens.</p></li><li><p><strong>Welfarism</strong>: the hedonic wellbeing of sentient beings is the sole source of intrinsic value.</p></li><li><p><strong>Impartiality</strong>: welfare matters regardless of who experiences it.</p></li><li><p><strong>Aggregation or summation:</strong> the moral value of the world is constituted by and equal to the collection of morally relevant states within it - regardless of which symbolic system best represents the actual nature of those states.</p></li><li><p><strong>Maximization</strong>: more aggregate welfare is always better.</p></li></ul><h3>We drop what&#8217;s implausible</h3><p>We abandon the assumption of universal offsetability, which was never a core commitment but rather a mathematical convenience mistaken for a moral principle.</p><p>Specifically, we drop the offsetability of extreme suffering; some experiences are so bad that no amount of happiness elsewhere can make them worthwhile. This isn&#8217;t because suffering and happiness are incomparable in principle, but because the nature of hedonic experience makes some tradeoffs categorically <em>bad deals</em> for the world as a whole.</p><div><hr></div><p>Thank you to Max Alexander, Bruce Tsai, Liv Gorton, Rob Long, Vivian Rogers, and Drake Thomas for a <em>ton</em> of thoughtful and helpful feedback. 
Thanks as well to various LLMs for assistance with every step of this post, especially Claude Opus 4.1 and GPT-5.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Sometimes referred to as &#8220;lexicality&#8221; or &#8220;lexical priority.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See later in this section for a more technical description of what exactly this means</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>In the standard story, so-called &#8220;utils&#8221; are <em>scale-invariant</em>, so we can set 1 equal to a bite of an apple or an amazing first date as long as everything else gets adjusted up or down in proportion.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The <a href="https://plato.stanford.edu/entries/consequentialism/#ClasUtil">Stanford Encyclopedia of Philosophy</a> further subdivides these into what I will call the <em><strong>Extended</strong> [Utilitarian] <strong>Core</strong></em>:</p><ul><li><p><strong>&#8220;Consequentialism</strong> = whether an act is morally right depends only on consequences (as opposed to the circumstances or the intrinsic nature of the act or anything that happens before the act).</p></li><li><p><strong>Actual Consequentialism</strong> = whether an act is morally right depends only on the actual consequences (as opposed to foreseen, foreseeable, intended, or likely consequences).</p></li><li><p><strong>Direct Consequentialism</strong> = whether an act is morally right depends only on the consequences of that act itself (as opposed to the consequences of the agent&#8217;s motive, of a rule or practice that covers other acts of the same kind, and so on).</p></li><li><p><strong>Evaluative Consequentialism</strong> = moral rightness depends only on the value of the consequences (as opposed to non-evaluative features of the consequences).</p></li><li><p><strong>Hedonism</strong> = the value of the consequences depends only on the pleasures and pains in the consequences (as opposed to other supposed goods, such as freedom, knowledge, life, and so on).</p></li><li><p><strong>Maximizing Consequentialism</strong> = moral rightness depends only on which consequences are best (as opposed to merely satisfactory or an improvement over the status quo).</p></li><li><p><strong>Aggregative Consequentialism</strong> = which consequences are best is some function of the values of parts of those consequences (as opposed to rankings of whole worlds or sets of consequences).</p></li><li><p><strong>Total Consequentialism</strong> = moral rightness depends only on the total net good in the consequences (as opposed to the average net good per person).</p></li><li><p><strong>Universal Consequentialism</strong> = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual&#8217;s society, present people, or 
any other limited group).</p></li><li><p><strong>Equal Consideration</strong> = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (as opposed to putting more weight on the worse or worst off).</p></li><li><p><strong>Agent-neutrality</strong> = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).&#8221;</p></li></ul><p>For the remainder of this post, I&#8217;ll use and refer to the simpler five-premise Utilitarian Core rather than the eleven-premise Extended Core, though these are equivalent formulations at different levels of detail.</p><p>The Extended Core expands what is compressed in the five-premise version; &#8220;consequentialism&#8221; subdivides into commitments to actual consequences, direct evaluation, and evaluative assessment, &#8220;impartiality&#8221; into universal scope and equal consideration, and so on. Any argument that applies to one formulation applies to the other. Those who prefer the finer-grained taxonomy should feel free to mentally substitute it throughout.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://utilitarianism.net/introduction-to-utilitarianism/#what-is-utilitarianism">Utilitarianism.net</a> leaves out maximization; as of September 16, 2025, <a href="https://en.wikipedia.org/wiki/Average_and_total_utilitarianism">Wikipedia</a> reads &#8220;Total utilitarianism is a method of applying utilitarianism to a group to work out what the best set of outcomes would be. It assumes that the target utility is the maximum utility across the population based on adding all the separate utilities of each individual together.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>By &#8220;summation&#8221; I mean a symmetric, monotone aggregation operator over persons or events. It need not be real-valued addition. But, conceptually, &#8220;addition&#8221; or &#8220;summation&#8221; does seem to be the right or at least best English term to use. The key point is that this operator needn&#8217;t be inherently restricted to the real numbers or behave <em>precisely</em> like real-valued addition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>See footnote above for elaboration and formalization.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Formal statement: A sufficient package for universal offsetability is an Archimedean ordered abelian group (V, &#8804;, +, 0) that represents welfare on a single scale. Archimedean means: for all a, b &gt; 0 there exists n &#8712; &#8469; with n&#183;a &gt; b. Additive inverses mean: for every x &#8712; V there is &#8722;x with x + (&#8722;x) = 0. Total order and monotonicity tie the order to addition. 
On such a structure, for any finite bad b &lt; 0 and any finite good g &gt; 0 there exists n with b + n&#183;g &#8805; 0. The Utilitarian Core does not by itself entail Archimedeanity, total comparability, or additive inverses. It is compatible with weaker aggregation, for example an ordered commutative monoid that is symmetric and monotone.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Proof that UC doesn&#8217;t entail offsetability by counterexample:</p><p>Represent a world by a pair (S, H), where:</p><ul><li><p>S is a nonnegative integer counting catastrophic-suffering tokens,</p></li><li><p>H is any integer recording ordinary hedonic goods.</p></li></ul><p>Aggregate by componentwise addition:</p><p>(S1, H1) &#8853; (S2, H2) = (S1 + S2, H1 + H2).</p><p>Order lexicographically:</p><p>(S1, H1) is morally better than (S2, H2) if either</p><p>a) S1 &lt; S2, or</p><p>b) S1 = S2 and H1 &gt; H2.</p><p>This structure is an ordered, commutative monoid. It is impartial and additive across individuals. Yet offsetability fails: if S increases by 1, no finite change in H can compensate.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>&#8220;Tentatively&#8221; because I don&#8217;t have a rock-solid understanding or theory of either time or personhood/individuation of qualia/hedonic states.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Though I&#8217;m not familiar with current work in infinite ethics, my argument about representation choices seems relevant to that field. If your model implies punching someone is morally neutral in an infinite universe (because &#8734; + 1 = &#8734;), don&#8217;t conclude &#8216;the math has spoken, punching is fine&#8217;; conclude you&#8217;re using the wrong math.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Words that start with A come before B, those with AA come before AB, and so on.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Here, higher dimensions are analogous to and representative of more highly prioritized kinds of welfare: perhaps the most severe conceivable kind of suffering, and then the category below that, and so on.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Other structures that avoid universal offsetability include ordinal numbers, surreal numbers, Laurent series, and the long line. 
The variety of alternatives underscores that real-number representation is a choice, not a logical necessity.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>This analysis suggests utilitarianism might not entail the repugnant conclusion either. Just as some suffering might be lexically bad (non-offsetable by ordinary goods), perhaps some flourishing is lexically good (worth more than any amount of mild contentment). The five premises don&#8217;t rule this out.</p><p>However, positive lexicality doesn&#8217;t solve negative lexicality; even if divine bliss were worth more than any amount of ordinary happiness, it wouldn&#8217;t follow that it could offset eternal torture. The positive and negative sides might have independent lexical structures, a substantive claim about consciousness rather than a logical requirement.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>I know this isn&#8217;t the technically correct use of &#8220;a priori.&#8221; I mean &#8220;after accepting UC but before investigating beyond that.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Revised from the original agent-based economic formulation to fit the language of moral philosophy. Please see any mainstream economics textbook or lecture slides for the economic formulation with any amount of formalization or explanation. <a href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem">Wikipedia</a> seems good as well!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>I.e., state of the world A is better than B if and only if the expected value of A is greater than the expected value of B, where expected value is defined and determined by that function, <em>u.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>The explanation here is reasonably intuitive; essentially, the fact that all states of the world get assigned a real number means that enough good can surpass the value of any bad, because for any positive real numbers a and b there exists some positive real number n such that n&#183;a &#8722; b &gt; 0.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Rejecting premise 1, completeness, is essentially a nonstarter in the context of morality, where the whole project is premised on figuring out which worlds, actions, beliefs, rules, etc., are better than or equivalent to others.
You can deny this in your heart of hearts - I won&#8217;t say that you literally cannot believe that two things are fundamentally incomparable - but I will say that the world never accommodates your sincerely held belief or conscientious objector petition when it confronts you with the choice to take option A, take option B, or perhaps coin-flip between them.</p><p>Rejecting premise 2, transitivity, gets you so-called &#8220;money-pumped.&#8221; That is, it implies that there is a series of trades you would take that leaves you, or the world in our case, worse off by your own lights at the end of the day.</p><p>Premise 4, independence, is a bit kinder to objectors, and I believe empirically observed insofar as it applies to consumer behavior in behavioral economics. But my sense is that it is very rarely if ever explicitly endorsed, and at least intuitively I see no case for rejecting it in the context of utilitarianism or morality more broadly. In the <a href="https://chatgpt.com/share/68cb87b2-9db0-8004-97a7-e901faf2806e">words</a> of GPT-5 Thinking, &#8220;adding an &#8216;irrelevant background risk&#8217; shouldn&#8217;t flip your ranking.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>I am using this term in a rather colloquial sense. Feel free to substitute in your preferred word; the description later in this paragraph is really what matters.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Wording tweaked in response to a good point from Toby Lightheart <a href="https://x.com/TobyLightheart/status/1972510337800044956">on Twitter</a>, who (quite reasonably) proposed the term &#8220;pragmatically accept&#8221; with respect to the suffering itself.
I maintain that we should note the &#8220;enthusiastic endorsement&#8221; of <em>package deals</em> that contain severe suffering.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>I.e., earnest collaborative truth seeking, plenty of time and energy, etc.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p>For instance, one critic of lexicality <a href="https://scoutingahead.substack.com/p/against-lexical-suffering-focused">argues</a> that lexical views &#8220;result in it being ethically preferable to have a world with substantially more total suffering, because the suffering is of a less important type,&#8221; but this claim is circular; the <em>whole debate</em> concerns which kinds of worlds have &#8220;how much&#8221; suffering in the relevant sense, and in this post I am arguing that some kinds of worlds (namely, those that contain extreme suffering) have &#8220;more suffering&#8221; than other worlds (namely, those that do not).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p>In the extended reals with appropriate topology, such a function can be rigorously called left-continuous.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>The asymptotic structure creates genuine practical constraints in our bounded universe. Feasible happiness is bounded - there are only so many neurons that can fire, years beings can live, resources we can marshal. Call this maximum H_max. When the compensation function &#934;(i_s) exceeds H_max while still below the theoretical threshold, we reach suffering that cannot be offset in practice. At some level i_s_practical where &#934;(i_s_practical) &gt; H_max, offsetting becomes practically impossible even while remaining theoretically finite. 
This creates a zone of &#8220;effective non-offsetability&#8221; below the formal threshold.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p>Before taking this man&#8217;s revealed preference not to commit suicide as strong evidence against my thesis, I urge you to consider the selection effects associated with finding such quotes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-28" href="#footnote-anchor-28" class="footnote-number" contenteditable="false" target="_self">28</a><div class="footnote-content"><p>From <a href="https://akathisiaalliance.org/patient-experiences/">https://akathisiaalliance.org/patient-experiences/</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-29" href="#footnote-anchor-29" class="footnote-number" contenteditable="false" target="_self">29</a><div class="footnote-content"><p>Cluster headaches and torture, yes, but also the heights of joy and subjective wellbeing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-30" href="#footnote-anchor-30" class="footnote-number" contenteditable="false" target="_self">30</a><div class="footnote-content"><p>Or at least influenced; we don&#8217;t need to get into the causal power of qualia and discussions in philosophy of mind here.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-31" href="#footnote-anchor-31" class="footnote-number" contenteditable="false" target="_self">31</a><div class="footnote-content"><p>The practical implementation of anti-natalism faces insurmountable collective action problems that prevent it from achieving human extinction. Even if anti-natalists successfully refrain from reproduction, this merely ensures their values die out through cultural and genetic selection pressures while being replaced by those who reject anti-natalism. The marginal effect of anti-natalist practice runs counter to its purported goal: rather than reducing total population, it simply shifts demographic composition toward those who value reproduction.</p><p>Achieving actual extinction through anti-natalism would require near-universal adoption enforced by an extraordinarily competent global authoritarian regime capable of preventing any group from reproducing. Given human geographical distribution and the ease of small-group survival, even a single community of a thousand individuals escaping such control would be sufficient to repopulate. 
The scenario required for anti-natalism to achieve its ostensible goal is so implausible as to render it irrelevant to practical ethical consideration.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Clarifying some points on "Suffering-focused total utilitarianism"]]></title><description><![CDATA[I'm still right]]></description><link>https://www.aaronbergman.net/p/clarifying-some-points</link><guid isPermaLink="false">https://www.aaronbergman.net/p/clarifying-some-points</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Thu, 11 Sep 2025 19:14:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bf3004eb-907f-4869-88f4-ca924ca987be_1638x1514.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A while back I wrote the post &#8220;<a href="https://www.aaronbergman.net/p/my-case-for-suffering-leaning-ethics">Suffering-focused total utilitarianism: Total utilitarianism doesn't imply that suffering can be offset</a>&#8221; and I basically stand by it.</p><p>The post isn&#8217;t that long, but here&#8217;s an LLM summary which I admit adds a bit of structural clarity even before I introduce some additional substantive points of clarification below:</p><blockquote><h3>Summary of the original post</h3><p><strong>Core Claim:</strong> Even within a utilitarian framework that treats happiness and suffering as commensurable values on a single moral axis, some instances of suffering may be morally incommensurable with any finite or infinite amount of happiness.</p><h3>The Argument</h3><p><strong>P1.</strong> A perfectly rational, self-interested hedonist would refuse certain trades (e.g., one week of maximal torture for any amount of happiness, however large or long-lasting).</p><p><strong>P2.</strong> The preferences of this idealized agent reveal either (a) our own idealized preferences or (b) what is morally valuable, all else equal.</p><p><strong>P3.</strong> There exists no compelling argument that all suffering can in principle be offset by sufficient happiness.</p><p><strong>P4.</strong> In the absence of such arguments, we should defer to the intuition that some suffering cannot be morally offset.</p><p><strong>C.</strong> Therefore, some instances of suffering cannot be ethically outweighed by any amount of happiness.</p><h3>The Mathematical Challenge</h3><p>Standard utilitarianism implicitly assumes hedonic states map to the real numbers, enabling cardinal comparisons and trade-offs. 
However:</p><ol><li><p><strong>Ordinal &#8800; Cardinal:</strong> While hedonic states must be ordinally comparable (we can rank them), this doesn't entail they possess cardinal magnitudes that behave like simple real numbers</p></li><li><p><strong>Alternative Models:</strong> The relationship between suffering and offsetting happiness might be better modeled by a function with a vertical asymptote&#8212;where beyond some threshold of suffering, no amount of happiness can provide equivalent value.</p></li></ol><h3>Against the Standard Defense</h3><p><a href="https://www.cold-takes.com/defending-one-dimensional-ethics/">Karnofsky's argument</a> (that we accept small risks of terrible outcomes for modest benefits, therefore any harm can be offset) fails because:</p><ol><li><p>It conflates <strong>removing goods</strong> (death) with <strong>instantiating suffering</strong> (torture)</p></li><li><p>It extrapolates from weak evidence (our flawed risk-taking behavior) to strong metaphysical claims</p></li><li><p>It assumes without argument that probability calculus applies uniformly across all magnitudes of suffering</p></li></ol><h3>Implications</h3><p>This view preserves total utilitarianism's core structure while rejecting the assumption that all values are mathematically tractable. Some suffering may constitute a moral catastrophe that no amount of flourishing can justify&#8212;not because suffering and happiness are incomparable, but because their relationship is asymptotic rather than linear.</p></blockquote><h2>1) My sin: some conflated claims</h2><p>In the original post, I didn&#8217;t distinguish clearly enough between a couple highly related but distinct claims. It actually gets more granular than this, but here are the two big ones:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>  </p><ol><li><p><strong>The weaker logical/mathematical claim that offsetability isn't implied by standard utilitarianism.</strong></p><ol><li><p>Or really, to be pedantic but a bit more specific: that offsetability isn&#8217;t implied <em>by the set of claims/premises that seem to be the consensus conceptual basis and content *of* utilitarianism. </em>More on this below.</p></li></ol></li><li><p><strong>The stronger</strong> <strong>metaphysical claim that some suffering actually cannot be offset.</strong></p></li></ol><p>The issue is one of argumentative and rhetorical clarity. As the subtitle of this post suggests, I don&#8217;t think this failure undermines the validity and likelihood of the underlying claims themselves.</p><h2>2) Clarifying [total] &#8220;utilitarianism&#8221;</h2><p>To the best of my knowledge, there are five consensus premises that are necessary and sufficient to imply and constitute [total] utilitarianism<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><ol><li><p><strong>Consequentialism</strong>: The rightness of actions depends solely on their consequences (the states of affairs they bring about), not on the nature of the acts themselves or adherence to rules.</p></li><li><p><strong>Welfarism</strong>: The only thing that matters morally in evaluating consequences is the wellbeing (welfare/utility) of sentient beings. 
Nothing else has intrinsic moral value.</p></li><li><p><strong>Impartiality/Equal Consideration</strong>: Each being's wellbeing counts equally - a unit of wellbeing matters the same regardless of whose it is. No special weight for yourself, your family, your species, etc.</p></li><li><p><strong>Aggregation/Sum-ranking</strong>: The overall value of a state of affairs is determined by summing individual wellbeing. More total wellbeing is better.</p></li><li><p><strong>Maximization</strong>: We ought to bring about the state of affairs with the highest total value (maximum aggregate wellbeing).</p></li></ol><p>To be clear, these five things are just what I <em>mean </em>by &#8220;[total] utilitarianism&#8221;.</p><p>I don&#8217;t think this bit is very contentious, but of course please pushback if you think this is wrong.</p><p>Anyway, I think my argument for the <em>logical</em> claim (point 1 above) becomes significantly clearer when you make all this explicit:</p><h3><strong>My claim</strong></h3><p><strong>Claim: the above five premises do not imply offsetability. Full offsetability requires an additional sixth premise that describes what kinds of mathematical model accurately and adequately model the real world. </strong></p><p>The most intuitive and common such premise I believe to be the following:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><ol start="6"><li><p><strong>Real-number representation</strong>: All states of hedonic welfare are adequately modeled by the real numbers (with standard arithmetic operations).</p></li></ol><p><strong>Without such an</strong> <strong>additional premise, the standard utilitarian framework doesn't entail that any amount of suffering can be offset by sufficient happiness. </strong></p><p>It&#8217;s perfectly coherent to argue that premise 6 is in fact true. Maybe it is. To quote myself from the original:</p><blockquote><p>Perhaps hedonic states really <em>are</em> cardinally representable, with each state of the world being placed somewhere on the number line of units of moral value; I wouldn&#8217;t be shocked. But if God descends tomorrow to reveal that it is, we would all be learning something new.</p></blockquote><h4>A note on math</h4><p>I&#8217;ll clarify that I am <em>not</em> objecting to any sort of pure mathematical claim. Rather, I&#8217;m beefing with the usually-implicit substantive metaphysical claim that relates pure mathematics to the real world. </p><h2>On the Von Neumann-Morgenstern utility (VNM) theorem</h2><p>The only <a href="https://benthams.substack.com/p/contra-bergman-on-suffering-focused/comment/7633745">counterarguments</a> I&#8217;ve heard to my logical claim involve the <a href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem">Von Neumann-Morgenstern utility theorem</a>. I do not find these very convincing.</p><p>Check out Wikipedia (linked above) for the formal definition, but here&#8217;s Claude&#8217;s summary in plain-ish English:</p><blockquote><h3>The Setup</h3><p>Consider an agent choosing between <strong>lotteries</strong> - probability distributions over outcomes. 
<h4>A note on math</h4><p>I&#8217;ll clarify that I am <em>not</em> objecting to any sort of pure mathematical claim. Rather, I&#8217;m beefing with the usually-implicit substantive metaphysical claim that relates pure mathematics to the real world. </p><h2>On the Von Neumann-Morgenstern utility (VNM) theorem</h2><p>The only <a href="https://benthams.substack.com/p/contra-bergman-on-suffering-focused/comment/7633745">counterarguments</a> I&#8217;ve heard to my logical claim involve the <a href="https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem">Von Neumann-Morgenstern utility theorem</a>. I do not find these very convincing.</p><p>Check out Wikipedia (linked above) for the formal definition, but here&#8217;s Claude&#8217;s summary in plain-ish English:</p><blockquote><h3>The Setup</h3><p>Consider an agent choosing between <strong>lotteries</strong> - probability distributions over outcomes. A lottery L might be: "30% chance of outcome A, 70% chance of outcome B."</p><p>The agent has preferences (&#8827;) over these lotteries: L &#8827; M means "the agent prefers lottery L to lottery M."</p><h3>The Four Axioms</h3><ol><li><p><strong>Completeness</strong>: For any two lotteries, the agent either prefers one or is indifferent.</p><ol><li><p>Philosophically: The agent has determinate preferences over all options.</p></li></ol></li><li><p><strong>Transitivity</strong>: If L &#8827; M and M &#8827; N, then L &#8827; N.</p><ol><li><p>Philosophically: Preferences are coherent/non-cyclical.</p></li></ol></li><li><p><strong>Continuity</strong>: If L &#8827; M &#8827; N, there's some probability p where the agent is indifferent between M and "lottery p&#183;L + (1-p)&#183;N."<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><ol><li><p><strong>Philosophically: No outcome is lexically superior - everything has a "price" in probability terms.</strong></p></li></ol></li><li><p><strong>Independence</strong>: L &#8827; M if and only if [p&#183;L + (1-p)&#183;N] &#8827; [p&#183;M + (1-p)&#183;N] for any N and p&#8712;(0,1).</p><ol><li><p>Philosophically: Irrelevant alternatives don't affect relative preferences. If you prefer coffee to tea, you still prefer "coffee or death (50-50)" to "tea or death (50-50)."</p></li></ol></li></ol><h3>The Theorem</h3><p><strong>If</strong> an agent's preferences satisfy these four axioms, <strong>then</strong> there exists a utility function u such that:</p><ul><li><p>The agent prefers lottery L to lottery M if and only if E[u(L)] &gt; E[u(M)]</p></li><li><p>This function is unique up to positive affine transformation (multiplying by a positive constant and adding any constant)</p></li></ul></blockquote><h3>Response </h3><p>VNM <em>does</em> imply that the preferences of agents satisfying those four criteria above can be represented by a real-valued (i.e., modeled adequately by the set of real numbers) utility function, but <strong>it&#8217;s totally logically possible, coherent, and plausible to reject continuity (axiom 3). </strong>Indeed<strong>, </strong>that&#8217;s exactly what lexical, non-offsetable preferences are doing.</p><p>And importantly: <strong>not only is it logically possible, but nothing particularly weird or implausible happens if you reject this axiom.</strong> Again, to quote an LLM:</p><blockquote><p><strong>No money pump problems</strong>: If I lexically prefer "not being tortured" over "any amount of money," you can't construct a sequence of trades that leaves me strictly worse off. My preferences remain transitive and complete - I just refuse certain trades categorically.</p><p>&#8230;</p><p><strong>What you keep</strong>:</p><ul><li><p>Immunity to Dutch books</p></li><li><p>Transitivity and completeness</p></li><li><p>Perfectly coherent decision-making</p></li></ul></blockquote><p>In simpler terms, rejecting continuity doesn&#8217;t imply any sort of epistemic red flag. No <a href="https://www.oxfordreference.com/display/10.1093/oi/authority.20110803100205601">money pumps</a>, no loss of coherence, no cognitive dissonance necessary; the world keeps turning.  </p>
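<p>To make that concrete, here&#8217;s a toy example (the particular outcomes and numbers are arbitrary) of preferences that keep completeness and transitivity, and stay money-pump-free, while giving up continuity:</p><pre><code>% Toy lexical preferences over lotteries (illustration only; outcomes are arbitrary).
% An outcome is a pair (c, m): c = 1 if a catastrophic harm (say, torture) occurs,
% c = 0 otherwise; m is money. Lotteries are ranked lexicographically by expected
% values, catastrophe first:
L \succ M \iff E[c_L] &lt; E[c_M], \ \text{or}\ E[c_L] = E[c_M] \ \text{and}\ E[m_L] &gt; E[m_M].
% This is the lexicographic order on pairs of reals: complete, transitive, and
% cycle-free, so there is nothing for a money pump to exploit.
% Now take A = (0, 1), B = (0, 0), C = (1, 0), so A \succ B \succ C. For every
% p in (0, 1), the mixture pA + (1-p)C has expected catastrophe 1 - p &gt; 0, so
% B \succ pA + (1-p)C and the agent is never indifferent: continuity fails, and
% with it the guaranteed real-valued utility representation. Nothing else breaks.</code></pre>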
<h2>Conclusion</h2><p>Standard total utilitarianism, defined by consequentialism, welfarism, impartiality, aggregation, and maximization, does not imply that any instance of suffering can be &#8220;offset&#8221; or morally justified by sufficient happiness. </p><p>That requires an additional sixth premise: that welfare states map cleanly onto the real numbers with standard arithmetic operations. This is a substantive metaphysical claim about the nature of suffering and happiness that usually gets smuggled in without justification. </p><p>Recognizing that offsetability isn't mathematically required by utilitarianism itself opens up conceptual space for suffering-focused views that preserve the structure of, and the very compelling (and I think true) arguments for, total utilitarianism in some form.</p><div><hr></div><p><em>If you find this argument important and compelling and like doing this kind of thing, I&#8217;d be interested in turning it into a proper arXiv PDF - feel free to reach out to aaronb50[at]gmail.com! Bonus points for having more philosophy training/experience than me!</em></p><p><em>Thanks to <a href="https://www.anthropic.com/news/claude-opus-4-1">Claude Opus 4.1</a> for much help with this article and <a href="https://experiencemachines.substack.com/">Rob Long</a> for discussion that led to it.</em></p><div><hr></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s a sort of intermediate claim between (1) and (2) that&#8217;s essentially &#8220;plausibility: not only is non-offsetability logically permitted by utilitarianism, but it is basically plausible given what we know about ethics and what the world is like&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I am using the terms &#8220;utilitarianism&#8221; and &#8220;total utilitarianism&#8221; to mean the same thing. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><strong>Update </strong>a few hours after publication:<br><br>I edited this section in response to a good critique by <a href="https://x.com/absurdlymax">Max Alexander</a>, which is that there are other mathematical systems <em>besides</em> the standard real numbers + arithmetic one that would then also entail offsetability.</p><p>I&#8217;m not smart enough to have a formal proof + formal criteria for what kinds of math/model/ontology do preserve offsetability but I think the (somewhat contrived) example of closed real-valued <em>intervals </em>with interval addition and scalar multiplication works. 
That is:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0JQH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0JQH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 424w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 848w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 1272w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0JQH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png" width="408" height="204.72597864768684" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:282,&quot;width&quot;:562,&quot;resizeWidth&quot;:408,&quot;bytes&quot;:33431,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.aaronbergman.net/i/173368396?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0JQH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 424w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 848w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 1272w, https://substackcdn.com/image/fetch/$s_!0JQH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F168e8fa0-0388-43d1-be91-d7fb5995e784_562x282.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" 
class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Note that p cannot be 0 or 1 here, as this would imply indifference between some of the options but we are taking as a hypothesis that the preference ordering is strict.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#14: Jesse Smith on HVAC, indoor air quality, and generally being an extremely based person ]]></title><description><![CDATA[An actual adult for once]]></description><link>https://www.aaronbergman.net/p/14-jesse-smith-on-hvac-indoor-air</link><guid isPermaLink="false">https://www.aaronbergman.net/p/14-jesse-smith-on-hvac-indoor-air</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Wed, 14 May 2025 03:53:26 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/163506952/f6a02da00079a678d23915d2afb62de0.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h2>Summary</h2><p>Join host Aaron with Jesse Smith, a self-described "unconventional EA" (Effective Altruist) who bridges blue-collar expertise with intellectual insight. Jesse recounts his wild early adventures in Canadian "bush camps," from planting a thousand trees daily as a teen to remote carpentry with helicopter commutes. Now a carpenter, HVAC technician, and business owner (Tay River Builders), he discusses his Asterisk magazine article, "Lies, Damned Lies, and Manometer Readings." </p><p>Discover the HVAC industry's surprising shortcomings, the difficulty of achieving good indoor air quality (even for the affluent!), and the systemic issues impacting public health and climate goals, with practical insights on CO2 and radon monitors like the Airthings View Plus.</p><h2>Jesse&#8217;s links</h2><ul><li><p><a href="https://asteriskmag.com/issues/05/lies-damned-lies-and-manometer-readings">Lies, Damned Lies, and Manometer Readings</a>, the Asterisk magazine article discussed at length</p></li><li><p><a href="http://tayriverbuilders.com/">Tay River Builders</a>, his contracting company</p></li><li><p><a href="https://www.willardbrothers.net/">Willard Brothers Woodcutters</a>, his wood store</p><ul><li><p>And its <a href="https://www.instagram.com/willardbroswoodco/">viral Instagram page</a></p></li></ul></li><li><p><a href="https://x.com/JesseTayRiver">Jesse on Twitter</a></p></li><li><p>The <a href="https://a.co/d/8vTmU4i">Airthings View Plus air quality monitor</a> discussed (currently $239 on Amazon)</p><ul><li><p>No they&#8217;re not paying either of us for this but they should </p></li></ul></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IDcO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IDcO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 424w, https://substackcdn.com/image/fetch/$s_!IDcO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!IDcO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!IDcO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IDcO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg" width="400" height="400" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:400,&quot;width&quot;:400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:&quot;Image&quot;,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!IDcO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 424w, https://substackcdn.com/image/fetch/$s_!IDcO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 848w, https://substackcdn.com/image/fetch/$s_!IDcO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!IDcO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff72685e1-beb4-4bbb-8294-17b2f7c8eca2_400x400.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline 
points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Jesse</figcaption></figure></div><h2><strong>Transcript</strong></h2><p><strong>Aaron</strong>: Okay. First recorded pigeon hour in a while. I'm here with Jesse Smith resident dad of EA Twitter. I don't know if. I don't know if you'll accept that. Accept that honor. Okay, cool. and I actually, we haven't chatted, face to face in, like, a while, but I know you have, like, a really interesting. You're very, like, unconventional EA in some respects. Do you want to, like, give me your whole like, life story? In brief?</p><p><strong>Jesse</strong>: okay. So I guess one thing is that I'm super old for EA, right? like. And so being a dad and owning, like, a kind of normal business, I guess another is kind of more blue collar background, right? So, I was originally a carpenter also, then took on being an HVAC technician. So I, the businesses that I own Kind of like focus on a little bit of both those. yeah. So like my my background. I was raised in Canada. I left school, I didn't go to college. yeah. I went into, like, after a few years of, like a few years after high school, went into the trades basically.</p><p><strong>Aaron</strong>: Okay. Yeah. Nice. Okay. Like. Yes, I think that. Yeah, that definitely, like, makes you at least, at least stereotypically. But I think also like, in real life, like, there just aren't that many, like, carpentry businessmen who are, like happen to, like, hang out on Twitter also. So no, this is like legitimately really interesting. And at one point, I swear, I thought you went to Princeton. You must have mentioned the city, and I must have interpreted it as the town.</p><p><strong>Jesse</strong>: Yeah, my my brother, my brothers and I lived in Princeton for quite a while. Two of my brothers actually.</p><p><strong>Aaron</strong>: Still.</p><p><strong>Jesse</strong>: Okay around Princeton. I'm not far from Princeton. That's kind of the area where we work. it is where my dad went to grad school, so could have been that as well. so. Yeah. Yeah, but I did not attend Princeton.</p><p><strong>Aaron</strong>: I mean.</p><p><strong>Jesse</strong>: I worked on some of their buildings, but I have not attended.</p><p><strong>Aaron</strong>: Maybe, maybe that was where I got the that like. Yeah, like like myth from. So I know I have, I got like a couple at least. Matt. Matt from Twitter sent in a question, but I as usual, I've done a minimal level of preparation. So also we can we can talk about talk about truly whatever, but like, maybe. Yeah. So how did you, how did you, like, find out about Yale's? Like one thing.</p><p><strong>Jesse</strong>: Well, yeah. Okay. So there's some, I guess, some weird stuff. So I was fairly enamored with Peter Singer. Kind of just like starting with the book Animal Liberation. It would have been. I forget when he wrote that. Like it would have been years after he wrote it, right? Because I think he wrote it in even when, like when I was super young. But I probably read that in my late teens. Okay. And so, yeah.</p><p><strong>Aaron</strong>: That's that's the 1975 book.</p><p><strong>Jesse</strong>: So yeah, that sounds right. Yeah. I was going to guess the 70s. Right. So I was like.</p><p><strong>Aaron</strong>: Nice.</p><p><strong>Jesse</strong>: Nice or something. Right. 
So what.</p><p><strong>Aaron</strong>: Year old?</p><p><strong>Jesse</strong>: Yes, exactly. But so so when I was 16, I briefly dropped out of high school and I was working. This is really weird. I was working in these, like, bush camps in Canada. It's somewhat popular to do this. And so, like, I was 16, I celebrated my 17th birthday in a bush camp. That was like a tree planting bush camp. But this. Okay, so this is really weird. It sounds like this is like core blue collar, but it's not quite. The guy who owned the company I was working for was a friend of my dad's, and he was Baha'i and vegetarian. And so he had these vegetarian bush camps that we planted trees and did like some brushing out of. Right. So we ran like brush saws and stuff. And so I sort of I think that's kind of what, like I became a vegetarian out of those camps and was reading kind of Peter Singer's stuff at the time. And I think partly being there made me realize like, oh, this is going to not be that difficult. A lot of guys were really irritated by vegetarian bush camps, right? Like some of it was kind of core blue collar, mediating type guys. But like, I was totally fine. I was actually like super happy because it was kind of my first experience in a full time job. And I was nervous because everybody was like, oh, you know, it's going to be hell. And I actually thought it was great. It was much better than being in high school. I thought at the time, like, they just like everything was squared away, like they just fed you. You just had to go and like, try to put as it was piecework. So it was like $0.22 a tree or something. And after a few days, I think on my third day I put something like a thousand trees into the ground or something. Right. So I was.</p><p><strong>Aaron</strong>: Like, Jesus Christ.</p><p><strong>Jesse</strong>: I was like, oh, this is amazing, right? Like, all I have to do is like. Run as fast as I possibly can with these big bags of trees in the woods in, like, this beautiful setting. Eat the food they give me and then like, go to sleep and, like, read or whatever. Right. So like it was a it was a great experience. I know that's the total effect, right. What's that.</p><p><strong>Aaron</strong>: No no no it's not it's not a digression. One thing is can you just define bush camp like for for us dumb American like, oh yeah, dumb like Americans or whatever.</p><p><strong>Jesse</strong>: Yeah. So I, I don't know, I, I guess maybe I haven't heard the terms. They're in the US, but they must exist for some purpose. Right. So usually it's like somewhere remote that you are basically camped out of. In my case, it was literally like camping. It was tents, which I didn't mind at the time. So you'd be like, you know, in our case, it was big camps. Like, I think sometimes they can be as little as maybe ten people, let's say. Right. And this was like a pretty decent company. So they were running 40. The max I saw was maybe 100 people working out of this camp in the remote wilderness. The first year I did it was around an area called mica, which I understand now is a popular heli skiing destination. Like I have a friend who now skis and mica, which is hilarious to me, but it would maybe take you to the nearest town to mica was probably Revelstoke, which was in this case we could drive there, you know, maybe like a year or two later. There were ones that we were flown into, and in some cases there were even ones where I'm trying to think like I was in one in my like early 20s where they would helicopter. 
You would take a helicopter ride every day to the site, like so like you. So they would like, you'd see this helicopter coming in and like, they'd land in the camp and then they take you. But it was just it wasn't like it didn't feel like special operations. It was like the helicopter was rented from the, like, small towns Weather channel. Right.</p><p><strong>Aaron</strong>: Well, that's so badass as like, I feel like the correct term for all this is, is very based.</p><p><strong>Jesse</strong>: Yeah, I don't know. I mean, I like it seems weird now to describe this to people and it's not in people's experience. But it wasn't it didn't feel the helicopter thing. Maybe did initially felt weird. Right. Because like, I don't know anything about helicopters, right. Like but it didn't feel that weird at the time. And I knew a lot of people growing up who worked out of bush camps and then years later. So like probably around when I was in my early 20s is when I started my carpentry apprenticeship formally, like I had worked in construction a bit and then done the bush camp thing on and off. And so then I ended up doing some some remote wilderness bush camp carpentry work as well, maybe midway through my apprenticeship. So I worked on a it. An Indian reserve building, a water treatment facility that would have been like probably late 90s. Like I'm thinking like probably right before I moved to the US. And that was like that was months and months. That was actually not a good camp. One of the things I hate is that the first camp I went to was incredible, like incredible, like incredible food, like they would haul in saunas like you had you had a trailer with a sauna. And so, like when you're 16, you know, you're just like, oh yeah, this is like normal, right? And I've often thought like. And the food was amazing. Like the, the lead cook would make like she made it for my 17th birthday. She made me a cake. Right. But I was like, and I'm I'm sure I said like, thank you. But it should have been like effusive with praise, right? Because it was just like, yeah, incredible. And then if you, you know. And then I was probably in like over the years maybe 2 or 3 other camps and they suck. Like I remember showing up and being like hey, when is. And like, so this woman would, she would have like Indian night and Mexican night, like themed food nights and like you like they had generators and you could watch movies and like, it was just crazy. And I remember rolling into, like, this next, logging camps and logging camps are legendarily crappy, right? They just feed you food basically out of a warmed up can, right? I remember being like, hey, when's like Mexican night, you know? And they're like, what are you talking like just like nothing, right? Like no amenities. The sleeping was maybe a little better. And that you were you slept in a trailer, but that didn't feel like a quality of life improvement anyway. So that so that is that's Canadian bush camp experience.</p><p><strong>Aaron</strong>: I guess that's. That is nuts. I literally just I yeah, this feels like highly optimized for just sounding like as badass as possible. And you wait. You. So you said this is like normal. Maybe it was like, normal for your. What was your reference class, like a 50 other families or something? Because this sounds incredibly abnormal to me as like a grew up with like a very like sheltered, like upper middle class, like American. 
Like there was zero chance I was ever gonna, like, fly into a bush camp at 16.</p><p><strong>Jesse</strong>: Okay, so to be fair, the first one was not flown in. That was. That was driving.</p><p><strong>Aaron</strong>: Okay. Oh, sorry.</p><p><strong>Jesse</strong>: Subsequent ones were. Or potentially there was one where we took a boat to get to the camp initially, but yeah, it wasn't uncommon to be flown in, I guess. the I don't know, actually, it's a good question. I think if I'd been raised in a Canadian city, it would have been more unusual. But, you know, so some of the kids that I, it was very common at the time and I think it may still be for college kids in Canada, like kids attending university in Canada. Maybe go do this in summers, right? Because it makes you it's piecework. If you get proficient, it's fairly highly paid, right? It was a surprising amount of money at the time and even the remote work for construction was similar. Right. You're you're going to be well paid. You're not going to spend, like any money, right? In some I mean, yeah, there were some where there would be like something nearby that you could spend money on. But the first few camps I was in, we would go to some buffet for the Mica dam once a week and it was like $10 or something, right? And I was like, yeah, you know. Yeah. Like you just didn't spend money. So yeah, it was good for that, I guess. Yeah. I it didn't feel that weird aside from being very young, even by the reference. Like I was very young, I was like, I was very much not wanting to be in high school at the time. So that was a little bit weird. Yeah.</p><p><strong>Aaron</strong>: Let's come let's come back to, actually. Yeah. So what was, so that's like, interesting that you didn't want to be in high school. Was that, because because, I mean, we've known each other sort of. I don't even think we've met in real life, which is very, very sad, but, like, feel like people in general who are, like, as smart and thoughtful as you generally are, the type that wants to stay in high school. So what was that situation like for you?</p><p><strong>Jesse</strong>: Oh, I don't know. I, I yeah I, I just didn't like it, but it wasn't like it was common for kids. Okay. So I was in Ontario and Ontario at the time had a 13th grade as well. So it seemed like insurmountable, right. Like and it just wasn't uncommon. Like it, it just wasn't uncommon for kids that I was around to be like, okay. Like and also I think the US really limit this, limits this somehow in a way that that Canada, or at least Ontario didn't like. You could just leave high school, right?</p><p><strong>Aaron</strong>: And yeah.</p><p><strong>Jesse</strong>: People didn't. You know, you just were like, okay, I'm going to work like, I, I'm kind of tapping out.</p><p><strong>Aaron</strong>: Yeah, yeah.</p><p><strong>Jesse</strong>: And it wasn't like, also, I didn't really grow up in an area where. Not a lot of kids who I knew were even considering going to college, like in my friend group. There were. Maybe it was. It just wasn't common. Like in at least in my core group of friends, like there were other kids, I'm sure, who definitely went on to college, but it just wasn't in the US right now. You know, my kids, it's it's weird to look at, right? Because there is no doubt they'll go to college, right? Like there's there's this established pathway. I don't remember I don't think I talked to my parents about that much. 
I don't think we ever, you know, it wasn't it wasn't established early on in the way that people today and I'm sure it was the same with you, where they just it's just established that you will be attending a four year college.</p><p><strong>Aaron</strong>: Yeah. I mean, I think I looked it up recently and it's like, I think something like half of like high school or like college age students at some point go to college. I forget the exact number. So it's like not universal, but I think it's pretty close to universal in some like urban, like relatively wealthier, like liberal settings. It's like basically, yeah, basically like 100%. yes.</p><p><strong>Jesse</strong>: Like my daughter, her friends and my son, his friends, they will just all go to college. They may get weeded out at some point. I know that's also surprisingly common for kids to sort of struggle and then drop out or something like this, right? But the likelihood of them attending a four year college is is extremely high in that circle of kids. I think so, yeah.</p><p><strong>Aaron</strong>: Oh, yeah. So something I just remembered. Sorry. This is like my ADHD brain is maybe not like the best made for a podcast, but that's okay. So like, yeah, you're making a lot of money in part because as, as is probably, I assume, been consistent through your whole life, you're extraordinarily athletic. I can just apparently, like, run and plant a thousand trees a day while, like, carrying a bag. I feel like that you shouldn't, like, take for granted or something like that is so sorry. I won't I won't ramble on too long. But at 16 I also had a blue collar job for about one week was that I was a, summer camp counselor Outlaw, and at first they didn't know where to put me. So like I, there wasn't any like youth group for me to go to. So they just like there was like this barn to like work on. And I was like, wow, I can't believe I signed up for this. Like, this sucks. So like, that that I guess to some extent like demonstrates. Yeah, I people's like personal preferences or like abilities or whatever.</p><p><strong>Jesse</strong>: Like, so what did they have you do to the barn? What? Like what were you what type of work were you.</p><p><strong>Aaron</strong>: Oh man. So this is so this is like nine years ago, where, like, I remember we were cutting stuff. Cutting wood, like what were we doing? I think I was like, a very small chunk of a project to either to, like, expand the barn or something, and like, what I was doing was, like, literally just, like carrying wood around, but like, wasn't it wasn't fun. It definitely wasn't fun for $2 an hour, just like how much I was making, or like, yeah, maybe someone could have paid me, like, a lot to be, like, very excited. I think I have photos on my phone, maybe I'll like link those in like the show description. And I was also I was I mean, I was an exceptionally tiny 16 year old also, so that might have something to do with it. anyway. 
Yeah.</p><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6deb616e-266e-443c-a747-77443db5feeb_960x1280.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/988df489-a08f-4fcd-b3c0-c987a11d6e1d_2448x3264.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a884b06-984e-435b-a317-c535ae3bb5f4_2448x3264.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/920fdbfc-3024-46ec-9ca8-4d3e81698bcf_2448x3264.jpeg&quot;},{&quot;type&quot;:&quot;image/jpeg&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3770f73-2d15-4a7e-a0f3-15112ede7ffe_2448x3264.jpeg&quot;}],&quot;caption&quot;:&quot;Here they are lol&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f926969a-9939-4bbf-8b3d-085fb4be5b3c_1456x1210.png&quot;}},&quot;isEditorNode&quot;:true}"></div><p><strong>Jesse</strong>: Yeah. That's interesting. Like, like also I can like okay, so I have some exposure to environments like this. And I think also in a lot of cases people sort of feel directionless. Right. Like, so you're probably being supervised by someone who has no clue what they're doing. Right.</p><p><strong>Aaron</strong>: Like, I don't even remember, like something like.</p><p><strong>Jesse</strong>: I find this commonly in construction is that people don't approach. It's like an oversight where people don't approach things like there's a kind of systematic way to engage in the exercise and therefore it becomes sort of more frustrating as a consequence, although like the physical demands are real and I, you know, it could be that it's like, you know like people find it physically demanding. But I also find that people find it, you know, in in bad environments, they find it frustrating and directionless and, and like, this kind of ties in with the stuff I say about HVAC that like, if everybody in the kind of hierarchy, if you're being supervised by people who are just completely ignorant, right, you end up, you know, kind of lost, right? So it could I mean, I'm not saying necessarily it's the case with you, but it could have been that you're like, what am I doing again? I'm moving wood.</p><p><strong>Aaron</strong>: I mean, honestly, I think it was probably largely the fact that, like you, yeah, I would guess the physical demands had like and, like preferences, just like sheer preferences around that had like a lot to do with our differential experience.</p><p><strong>Jesse</strong>: It could be. Yeah, it could be like I don't have. Yeah. So I ended up like, yeah, that ended up athletics and that kind of thing also ended up being kind of a like a big feature, at least by like a little maybe more later on. But yes, it was like the constant thing, like constant. Even my wife says that. Right. Like she finds it difficult sometimes to be my presence because I'm constantly in motion Ocean and doing stuff.</p><p><strong>Aaron</strong>: Thank you for sitting. Thank you for sitting down. Also, I don't know if I can cut this out, but you're in a car right now. 
I should have mentioned that, right? Yes. Okay.</p><p><strong>Jesse</strong>: I'm in my truck.</p><p><strong>Aaron</strong>: Very, very badass. Okay, nice. yeah. No. Just like. Just like, jump forward. Now you run like, a gazillion miles a day and also do jujitsu all the time and also do, like, like carpentry. Like as, like the break in the middle or something.</p><p><strong>Jesse</strong>: But yeah, that's so. Yes, that's pretty accurate. like, but keep in mind. Right. So I own the business and I end up doing a lot there, right. Like, so I'm not doing a ton of field work necessarily. If I'm doing field work for someone else, it often is HVAC. HVAC has, in my opinion, like extremely low physical demands, usually residential in particular. Like you can get into some stuff and like if an HVAC tech hears this, they'll get all bent out of shape about it. But in all, like I have experienced in like a lot of different subsets of carpentry and HVAC is like, yeah, okay. Yes, you had to carry something of a ladder like one time in your day, like. Yes. Okay, great. Yeah. You know what I mean? Like like yeah. Like like I did concrete formwork for years in my apprenticeship and that was, that was probably the most in terms of, like direct physical demands. That was pretty high, maybe the most. Right. So you're just slinging, like, 85-, 90-pound sheets of form ply just all day long. Right. So that was more physically demanding. Framing is maybe like a big step down, but, you know, still moderate I'd say. yeah. So yeah. And so I did that and then started racing. The big thing that kicked that off like was racing Ironman like when I moved to the US, maybe a year or so later, I started training for Ironman and fell in with like a group of, like, mostly Princeton grad students or a lot of Princeton grad students who, who well, or maybe in some cases, older kids who like guys who had run under an undergrad program in Princeton. So like, really fast runners, right? Like so D1 runners. And there was this training group in Princeton, we would go out and just try to murder each other every Thursday night for like, like eight years or something. So that was super fun. And so that was like pretty eye opening. And then being able to do that and do carpentry at the same time.</p><p><strong>Aaron</strong>: Yeah. I mean, we're gonna have to we should put up a notice. You mentioned your wife because all the many thousands of ladies like, listening to this are gonna, are gonna get. Yeah, are going to get a little too optimistic.</p><p><strong>Jesse</strong>: With what? What do you mean?</p><p><strong>Aaron</strong>: No, sorry. That was that was a terrible. That was a terrible joke. I'm saying, like, you're coming across as, like a very like, you know, you mentioned like. Oh, yeah, like running, you know, running like these, like Iron Man triathlons with these like and like doing, like all this, like backcountry carpentry stuff. Like very like very sexy and like manly and so, like, we're gonna have to, like put a flag up just to say that. Like to say that, like, yeah, you're, you're you're a taken man.</p><p><strong>Jesse</strong>: That's really funny. I funny. I don't feel like.</p><p><strong>Aaron</strong>: It's, like.</p><p><strong>Jesse</strong>: Almost.</p><p><strong>Aaron</strong>: Over. Honestly.</p><p><strong>Jesse</strong>: Yeah. It feels like the blue collar thing is more a detriment today, but I'm really uncertain. Like, people talk constantly about social status and things like that. 
And obviously like.</p><p><strong>Aaron</strong>: Yeah.</p><p><strong>Jesse</strong>: Collar jobs are way down on the what people think of as social status, I guess I don't I don't participate much in that discussion. Like, I have a lot of, I mean, aside, like trying to decouple from the fact that I am a carpenter and like, I have many sort of like when you start to unravel it I often think it's not like it's not clear to me like that, like I, that I understand what that means really. But it. Yeah. So but it is my understanding that those blue collar jobs are not like highly skilled people don't highly seek out men, and women don't highly sought out men in blue collar jobs.</p><p><strong>Aaron</strong>: That's interesting. I mean, actually, yeah, it's like maybe like discuss. Maybe not. Not just from like that the like, romantic angle, but like, just like social status in general is like pretty important topic. So like, you're you're a business owner in one respect, right? So, like, you could, get do you have any like, I guess further insight on, yeah. For someone who's like, sort of like grew up or like professionally and literally around like blue collar stuff, but like, now, like owns a literal business, like where whether you think, like, the status stuff has more to do with just like, money or like, like if you were just like, took somebody who's just like, you know working literal construction, but they were suddenly making $100 an hour instead of whatever. Like, would that, like, balance the status scales, do you think? Do you know what I'm getting at?</p><p><strong>Jesse</strong>: Yes, I do, and that's what like people talk about this all the time. They say, oh, well, you know, like a college professor, you know, at an entry level in an entry level role where they may get stuck in some cases permanently is ostensibly higher social status than someone working in construction who could be there are jobs that actually pay towards like a very There are jobs that pay really quite well in that world, right? Or, you know, like moderately there are many. Like there are many. I mean, not a ton. So like, you know, I want to be fair here, right. Like, I think there's this separate trend in society to be like, why don't you go be a plumber where they make $250 an hour? I'm like, no, they do not make 250. Like there's a widespread conflation of, like what people are being billed by the hour in some cases with what people are being compensated. Right. Which is weird because they don't seem to make that mistake when you know they understand, like corporate employment at a deeper level than they understand plumbing employment, let's say. Right.</p><p><strong>Aaron</strong>: Yeah, yeah, yeah.</p><p><strong>Jesse</strong>: But there are definitely jobs in the blue collar world that routinely kind of, pay sort of in the you know, like on the bubble, six figures. Right? Like it's rare to be over 150, but it's not that rare to have like 80 to 120 as a range for people in highly skilled jobs and competitive markets. Right. Yeah. So but yeah like so that is a thing. Right. And so the college professor ostensibly has more social status than the HVAC technician making, you know, double their salary or something. This is where I sort of start to think that, like, I sort of have questions about this, like how people kind of perceive that, I guess, like it seems at least weakly coupled with the amount of money that people make, but not entirely. Yeah. 
Although, you know, there is some domain specific stuff to like. It seems like people play different games with social status, and I'm not entirely sure about social status being a motivation for something like I get the ascendancy part of it. But when Mark Zuckerberg started jiu jitsu, he entered a room in which he was the lowest possible social status imaginable. Right. And so I think that's really interesting, right? Because, like, it can't be that Mark Zuckerberg is strictly motivated by the achievement of the highest possible social status, because he entered a world in which he had no status. Right? Like he was the lowest possible.</p><p><strong>Aaron</strong>: There's got to be some. There's got to be some continuity between general society and Jiu-Jitsu world. No.</p><p><strong>Jesse</strong>: Oh, I well, yeah I mean like people are oh my God, it's Mark Zuckerberg.</p><p><strong>Aaron</strong>: Like yeah yeah.</p><p><strong>Jesse</strong>: Beat the ever loving crap out of that guy. Right or something. Right. Like but I don't think like there are many instances of this. Right. Like people just enter other domains. I don't think Mark Zuckerberg went to jiu jitsu because he was motivated by social status. He would have just done something like he would have continued to be Mark Zuckerberg. Only Zuckerberg harder or something, right? Yeah, yeah. So I don't like I don't think I totally get this and I think people don't. I think there is a domain specificity to status that people kind of like, don't acknowledge. Like if you talk to people also about like how the blue collar world might be structured. Well, it will have its own social status, right? Yeah. Yeah. Right. And so, like, people kind of like it's just a blind spot. Like it's like people go, oh my God, the blue collar world has status too, like, and engages in thinking about this, right?</p><p><strong>Aaron</strong>: Yeah. I mean, so, like one thing that I was maybe intuitively thinking and now, like, explicitly thinking, it's just like a you're like an extremely, like thoughtful and like, interesting person and like, I don't want to I feel like there's no way to say this and like smart person that like, doesn't implicitly like denigrate like people who like, do. Maybe I'll have to like, clean up this section so it doesn't sound bad, but like there's some way to like in general, you're going to find that like the more like intellectual intellectually, like motivated, like people. The people who write for Asterisk magazine, for example, are are generally the type that do like to go to high school and in college or whatever. So I feel like in some sense like the, the combination of like having that extremely like thoughtful and smart disposition, plus like the badass flying out to mercury mines in Canada. I know it's not literally mercury mines. It's like it's like kind of like the best of both worlds.</p><p><strong>Jesse</strong>: I'm sorry, I appreciate that. yeah, I appreciate your thinking. You're saying that. Yeah. Yes. I'm not clear, but. Yeah, I don't know. Yeah. Maybe I should try to go to college or something and round it out.</p><p><strong>Aaron</strong>: Yeah. Okay. So, wait, so, yeah, in your, in terms of your, like, biography, you were so you were like circa 18 as a vegetarian in, a bush camp. What do you want to do? You want to, like, lay out the next story? Like, I don't know a couple of years or something.</p><p><strong>Jesse</strong>: Yeah. So, let me think. So I was working kind of like. 
Okay, so one, I did go finish high school, right? So I had dropped out of high school. I only had one semester left. So I think right before like around when I turned 18 is when I went back to high school, I lived like kind of on my own in a city called Kingston and finished out the semester to kind of wrap it up. then let's see, I was kind of working like a little construction in Eastern Ontario and then out to the West coast for bush camps and stuff for a couple years. I think I shattered my heels in a construction accident when I was 20. And that was pretty like like that was sort of like negatively transformative, I guess. Right? So I was out of work for maybe six months. I had so my right heel was surgically reconstructed. I mean, this is going deep on stuff, right? So, and also like, you know, I was I was like drinking much too much. Right? So I kind of like, ended up drinking heavily probably, you know, that that was just kind of like a period where I'm just drinking way too much. Once I recovered, like, I think I worked a little bit there and then moved back out to the West Coast. And that started my carpentry apprenticeship short, really shortly after moving to Victoria, British Columbia. and then did that. And that's sort of where I lived before I moved to the US. I moved to the US, like 99, I think I came, I moved to the US because my my aunt's house had burnt down in Hightstown, and so I rebuilt my aunt's house for like, oh, like six months. It was a townhome in Twin Rivers, which is actually the site. Well, this is okay. So this is not a building science podcast, but one version of the blower door, which is used to test infiltration, was developed in those townhouses in, in Twin Rivers like Hightstown, New Jersey, by a Princeton grad student. Actually, I forget whose name escapes me right now, but yeah. So so, you know, that's a tool that we now run every day. Pretty much a blower door for testing, house for leakage. But yeah. So I rebuilt my aunt's place, pretty much like by myself too, which was really funny.</p><p><strong>Aaron</strong>: Like, That's that's insane.</p><p><strong>Jesse</strong>: It was like a little bit of subject, but I can remember doing stuff like. Like I was like, back then, I was just a complete maniac. Like a complete maniac. Like things that no one had ever seen. Like, I at one point, like, so I had she had a truss roof all the, all the homes there are like truss roof. Right. So it's like this big pre-assembled roof member that typically gets craned onto the roof. And usually when you do that, you have a crew of maybe like four dudes or something, right? And I sort of and you get this crane rental, right? So I can remember like, oh, I've got to book the crane. And I had this like, it was like a four hour minimum or something. And like, I was super cheap too. So the crane operator showed up and goes, is the rest of the crew inside? And I was like, it's just me. And, and he goes no. And I was like, yeah, it's just going to be me. And we're getting you out of here. Like inside the window. Like, trust me, it's going to happen. And I just like, I think I was like two and a half or three hours or something, and that was it. Like I knocked out an entire truss roof going up and down like. So I had to go down, lash the truss, give the guy like directions as I was like running back up onto the roof, set the truss like, nail it right. You know, like off, off like just this whole package. Yeah. Like I was a complete maniac. Like other, like other things. 
Like I, when I first got there, I filled the 30 yard dumpster every day, like, so I would fill a 30 yard dumpster in a single day with the refuse from the fire. Like, I was just, like, fill the entire dumpster completely by myself. So I would just call for a dumpster every single day for like, I think it was like 3 or 4 days in a row that I just filled an entire 30 yard dumpster, which like those huge, like, roll off dumpsters. Right? There's no way, like, right now, I know for certain there would be. There would be like, no, I think no way I could possibly do that now. Which is crazy to think about. Right? So yeah, I.</p><p><strong>Aaron</strong>: Mean, yeah, this is like feats of physical endurance, not just endurance, everything, whatever. Or like kind of kind of insane here. yeah. Wait. And so, like, not not to, during all this time, were you, like separately having this, like, philosophical track where you were, like, reading Peter Singer or.</p><p><strong>Jesse</strong>: Yeah, for sure, for sure. Yeah.</p><p><strong>Aaron</strong>: Okay, cool.</p><p><strong>Jesse</strong>: Yeah. Like, not just like I'm trying to think of what I was reading mostly. Then, like, Peter Singer really stuck with me. Oh, and there's. And the weird. Okay, so then, like, there's just this kind of weird Forrest Gump thing, so, like, my brother and I end up doing a project. We didn't meet Peter Singer, but we did a Peter Singer project at Princeton, right? For like, some, like I forget it. I don't know if it was even through the university or some other thing. So we ended up like doing some build project for like a community garden that was supposed to be endorsed by Peter Singer or something, right? And oh, and at the time, too, I was running with, I'll just say his name. Simon Keller was, a grad student under Peter Singer, so he was in that same running group. He's still a beast, too. He's in. I think he's in New Zealand now. he's a professor there. And he. And like, he can still like he's probably my age. I'm 51 and he can still throw down these, like, incredible runs on Strava. I follow him and he's just like a monster. Like he'll throw down like a, like a ten miler in the hills at like six minutes flat or something, right? Which is like really impressive. Like super, super impressive. Right. For a masters runner, it's just like. So I was running. But anyway, so I was running with Simon and a bunch of other like grad students too. It was really cool. And talking to him about that kind of stuff, like not all the time. Like someone was just like like it was probably like 90% bro talk. But then it was also like, you know, I had this kind of thing where we would talk about those ideas, right? And then also there was another one who was, at the institute. There was this woman at the Institute. We'd have dinner afterwards and talk about this stuff. And she was with you know, convent. She was with Daniel Kahneman. Right.</p><p><strong>Aaron</strong>: Oh, wow.</p><p><strong>Jesse</strong>: So that was pretty cool. So I was also like reading a, a ton of like at some point there, I started reading a ton of Kahneman. Shiller I was probably more enamored, actually, of like, behavioral economics and, and that kind of stuff for some period there. So yeah, but that was like really that was super cool. Like there were a few people like that who were just really interesting grad students and institute and that kind of stuff. 
Yeah.</p><p><strong>Aaron</strong>: If you're listening to this, you should you should follow Jesse on Strava. I'm sure we'll we'll link him. it's very impressive. I can't say I keep up with you every single or like, I don't like, religiously. Scroll your feed. I'm sorry. I don't know if that's gonna get me canceled, but it's, like, also quite impressive.</p><p><strong>Jesse</strong>: It's not. I mean, it's not crazy, like, it's not. Yeah. So I run like, I run a fair bit, but it's not like, you know I just run.</p><p><strong>Aaron</strong>: Just run. Okay? Yeah. And also, you know, build crazy shit. And you. Jujitsu. okay. Wait. Cool. Wait. Where to? Where to go from here? Do you have any? I don't know. yeah. Is there anything you're, like, jumping out as, like, the interesting direction you want to, like, go in either. Like in terms of, like, keep talking about, like, I don't know how you got from one club to the current situation or like, intellectually or.</p><p><strong>Jesse</strong>: Yeah. So I guess my thing on Twitter in life, right, is like the HVAC thing. And you know that sort of what I like. I wrote that asterisk article so we could talk a little bit about that possibly. So like in part HVAC is sort of it's sort of proxy for two, a couple of catastrophic risks. Right. Like global warming obviously is one. Right. Global warming Just in like basic terms, we end up kind of electrifying everything. And then so converting fossil fuel devices to heat pumps is kind of one of the things, although like recognizing that we I think most people are kind of. It looks like I think most people are kind of in agreement that global warming is not it's probably not likely to be severely catastrophic for humanity and almost definitely not likely to be X risk level. Right. Then the other is like sort of indoor air quality interventions that can mitigate pandemic risk. And there you get into things that are, you know, potentially, you know, definitely catastrophic risks and potentially now X risk level threats. Right.</p><p><strong>Aaron</strong>: So that's you want to do you want to wait. So I have a confession to make which is that I haven't read I haven't read the article. I feel really bad. Do you want it. Okay. I'm going to try to find it. Do you want to, like, give an introduction to like that. Let me just like what it's.</p><p><strong>Jesse</strong>: Yeah.</p><p><strong>Aaron</strong>: What it's about.</p><p><strong>Jesse</strong>: So I, you know, maybe around that time period like going to 2018 2019 okay. So I had this experience. Right. So in 2010 my company, you know, there was the the sort of housing bubble kind of pops. Right. And I, you know, I'd been reading Schiller, so like, I don't want to be like, oh, I'm like a genius, but, like I had the strong sense, like the sense that people had in kind of like February and March. EA had around February and March around Covid right? Which every EA was like, oh, jeez, this is going to be bad. Like, like, but you also had it was also a surreal sense. I think people had where you're like, okay, how much should I talk about this? You know, like my wife was like, you're starting to freak people out right? Like kind of thing.</p><p><strong>Aaron</strong>: And think it's March. March is pretty late. I was like, like, I have to do any individual work myself. But like, it was like I was just like, like going on like other people's takes, mostly on Twitter. And like, March was still pretty early relative to the rest of the world. Yes. 
Yeah.</p><p><strong>Jesse</strong>: Yeah. Like, it's a wild time, right? So I can remember the thing that really cemented it for me was one of the E. Like what? Like one of the 80k podcasts which I think dropped on Valentine's Day. And after that, I was just freaking out, right? Yeah. Like, you know, there it was definitive because I remember I can't remember the woman's name was someone from Hopkins, but, you know, she was very much like, yeah, there's no way at this point to contain this. Like it's just impossible. Like, I'm going to summarize and people should just go listen to the podcast, right? But like, she was like, this is just coming. Like you're going to see a bunch of waves. She laid, laid out kind of what she thought the next few months was going to be like, and it was just dead on. Right. And also it was just like, you know, like like so I think I don't know if it was like my feeling after that was like, oh, this is really bad. Like, this is going to be a big deal. And I probably listened to that the day after it dropped or something. Right. So, so like all of February, I'm like talking about this. And then my wife is like, hey, you can't like, you gotta tone it down. You sound like a crazy person, right? But, like, that's the real feeling. Was the same sort of feeling I had in like 2007 about housing in the United States, which is like, wow, you know, not not an EA thing. But I was like like Shiller had written about this and really called it and laid out a case. And then there were a few guys working in like, you know, in hedge funds who were also laying this out. And I was like, oh, yeah, like, this is going to be like really bad. So I had kind of like set the company up to be like, okay we're going to just segue. We're going to just transition into kind of energy rebate stuff. And that's what we were doing, like strictly carpentry and general contracting. And then we transitioned over to this energy retrofit thing where we would partner with HVAC contractors, and we brought like insulation and air sealing in house. And we still have like crews that do this. and it sucked like it it was we'd always had HVAC problems on job sites and it was like, oh, in the same way that I talk to homeowners all the time and they think they're having these kind of uniquely bad experience right? Rather than just the default experience, which is just overwhelmingly negative. So over time, I just realized like that there was a very specific problem within HVAC where the level of competence was just very, very weak. And I started just going to these training classes and doing things. And and it was just really obvious. And then Covid kind of rolls around. Right. And you're like, oh, okay. You know, somewhere pretty late in Covid, it looks like indoor air quality interventions. Like, I don't think we figured out the indoor air quality interventions or I didn't until it was like, you know, people started talking about going back to school. Harvard reached like they had what's his name? Joe. Something he had like healthy buildings. Right. He wrote that book, but he had, like, this whole blueprint for, like, sending kids back to school. And I was like, hey, that's really cool. Like, I like what you're. I like what these people are saying, but there's no fucking way. Like, there is no way you are going to do this across the current workforce, right? And so people would say things like, oh, training, training. And I'm like, no, you don't get it. Like this is extremely bad. 
This is not something that you can do with the workforce you have. Even the most remedial things that you are proposing are not doable, right? Like, they are not going to happen. You need to figure out other things. And if you look at the interventions that were successful, they were things like HEPA filters and Corsi boxes and that type of thing. And if you look at the interventions that failed, it was anything that involved touching centrally ducted HVAC equipment; it just wasn't able to be made to work, like it didn't feature as a solution. And so I was just telling people that all the time and they were like, oh, training, training. The workers are going to need training. I'm like, no. And then I wrote that article, and I felt like at least the feedback on the article was, oh. By the time I wrote that article it was fairly recent, so Covid was kind of, we had vaccines, we had other interventions. But people were like, oh, this actually makes a ton of sense, right? Like, this is what we experienced during the pandemic. We had things where techs would just say filters can't be installed, the manufacturer doesn't allow it, or something like this. Right. Like, that was the classic example.</p><p><strong>Aaron</strong>: Yeah. I was about to ask you for, like, more specific examples for, like, Luddites, or not Luddites, I don't know if that's the right word, whatever the right analogous term is for not Luddites but, like, ignorant people or whatever. So, like, filters couldn't be installed. Is there some, like, technical thing that people just couldn't do, or. Yeah.</p><p><strong>Jesse</strong>: So there was. Okay, so one thing that I encountered over and over again, and I would actually engage in Twitter conversations about this, right, was the building facilities guys saying that you can't put a MERV 13 filter in our equipment, it's not built for it, right? Well, so I don't know for certain that every single piece of equipment ever made doesn't contain this in the instructions. But I have read the instructions for dozens, possibly hundreds, of pieces of equipment, and it's never been in any of the ones that I've read. Right. It's just something that, like, doesn't exist. And also, I know enough about equipment. You're supposed to do what's called a static pressure measurement, right? So you actually put filters in and then you read the pressure that they're introducing into a system. There are just many things that can be installed into a system, and if the static pressure becomes too high as a consequence, they don't say you can't do this filter. They say the static pressure should be within whatever parameters, right? It's just not a thing. Right?</p><p><strong>Aaron</strong>: Yeah. Yeah.</p><p><strong>Jesse</strong>: But that was a common thing that people were told over and over. Like, I would be willing to bet that that has been falsely claimed for hundreds of systems in public schools.</p><p><strong>Aaron</strong>: So just to be more specific, so, yeah, I don't know what I'm talking about. So there are these filtering things, they're like rectangles, and we're talking about rectangles. Yeah. We're talking about rectangles. Cool. 
And, like, people were saying, okay, we need to stick with one of the worse ones that doesn't filter as well. Yes. Okay. That's kind of silly, prima facie.</p><p><strong>Jesse</strong>: Yes. It is really silly. Right. Like, it is very silly. Right. And the other thing is, I think there's a thing too, where norms around bullshit are much stronger in the white collar world. Right. And so white collar people, like parents of kids going to those schools, they go, okay, I'm going to take this thing, this idea that I read about MERV 13 filters. And the difference between MERV 13 and MERV 11, which is effectively the next step down, is really high, right? Like, MERV 11 is probably, I think, less than 50% effective, or what we think, against droplets. Right.</p><p><strong>Aaron</strong>: And then.</p><p><strong>Jesse</strong>: Okay. Yeah. You end up jumping from like half to 90% or over 90% as you go to MERV 13. Right. You could probably look this up. Yeah. But so that jump was very significant. And most people don't even have MERV 11 filters; they have MERV 8 or some other thing. So essentially the filtration in most systems is like zero, right? So the parents or whoever would go to the facilities people and they'd say, hey, can you put this in? And they'd say, no, you can't, because the equipment can't handle the pressure. Right? And that would be the end of the conversation, basically. And then maybe the parents would go online and, like, bemoan this, and hopefully someone caught it and said, hey. So the other thing was that it's mandatory, when you have a piece of equipment that you've installed, to measure static pressure, right? Like, the manufacturer, most times they'll say you must measure static pressure, unless they have some external thing that measures it for you, which they mostly don't have. Right. So you would say something like, okay, did they measure static pressure? And then whoever it was would say yes. And then you'd say, okay, take a picture and show me the holes you drilled to measure static pressure. Right. I was really used to doing that kind of stuff in bullshit environments, right? Like, you want to do that kind of thing. You'd be like, okay, sure, show me the holes, right? If you can show me the holes where you drilled for static pressure, show me how you did it. In most cases, people making the claim literally don't know how to measure static pressure. Which is kind of a basic HVAC piece of knowledge that you're supposed to have.</p><p><strong>Aaron</strong>: I don't know how to. I don't know how to know.</p><p><strong>Jesse</strong>: Yeah, but if you are given a set of instructions, and one of the final steps of the instructions is to turn on a piece of equipment and measure static pressure, like, this is your job, right? Like, this is.</p><p><strong>Aaron</strong>: Solidly.</p><p><strong>Jesse</strong>: In your bailiwick. Right. So this kind of thing. And also this is totally unsurprising to me, because by the point Covid hits, I've already gone through ten years of bullshit, right? Like, being bullshitted regularly. 
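</p><p><em>(A minimal sketch, in Python, of the static-pressure "budget" check described above. Every number in it, the 0.50 in. w.c. rated maximum and the per-filter pressure drops, is a made-up illustration value rather than a figure from this conversation or any manufacturer; the point is only that manufacturers publish a pressure limit you measure against, not a list of forbidden filters.)</em></p><pre><code># Rough sketch of the static-pressure "budget" logic described above.
# All numbers are made-up illustration values, not manufacturer data.

RATED_MAX_STATIC = 0.50   # in. w.c.: assumed max total external static from the nameplate
OTHER_LOSSES = 0.30       # in. w.c.: assumed measured drop across ducts, coil, grilles, etc.

FILTER_DROP = {           # assumed pressure drop of each filter at design airflow
    "MERV 8": 0.08,
    "MERV 11": 0.12,
    "MERV 13": 0.18,
}

def headroom(filter_name):
    """Remaining static-pressure budget (in. w.c.) with this filter installed."""
    return RATED_MAX_STATIC - (OTHER_LOSSES + FILTER_DROP[filter_name])

for name in FILTER_DROP:
    margin = headroom(name)
    verdict = "within spec" if margin >= 0 else "over the rated static; fix the ducts, not the filter choice"
    print(f"{name}: {margin:+.2f} in. w.c. of headroom ({verdict})")
</code></pre><p><strong>Jesse</strong>: 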
Like I have all these kind of, like, workarounds when I'm in the field for, like, managing this problem either, like making someone, like, not bullshit me anymore or just in most cases, just doing it myself.</p><p><strong>Aaron</strong>: Yeah. Wow. Oh, man. There's so much. So one a lot, a lot of directions to go in. do you have a sense of, like, what? The fun. Any of the, like, more fundamental causes of this are? Because I just, like, imagine like there are some parts of like, the industrial American economy that are just, like, very competent and like, do you, do you like, have a good sense about, like what sets those apart from like your experience?</p><p><strong>Jesse</strong>: Yeah. No, it's a great question. Like so. And then the other thing is like I think people sometimes get this case confused, right? They think that I'm doing this thing that I'm not doing. Like, I agree with you. Like you, the default is almost high levels of confidence. Like you talk to people who whatever like Program. Program stuff, right? They'll they'll inevitably bitch about their colleagues, right? Like they're like, oh, everybody around me is so incompetent. And then it's like, it's like two people or something like that in an office at ten. And like, of course, like this is like and mostly they can fulfill. They're just kind of shirking. They can fulfill like what I'm saying is very different. I'm saying the trades are somewhat bad. I think they are worse at competence than like currently than the white collar world. Right. And you see this reflected like I, I this is not my area of expertise. But you see like productivity numbers I don't think like so construction has no productivity gains on a per worker basis since the 1970s. Right. People interpret this to like they have a regulatory argument here. And I think the regulatory argument is pretty good. But I also think it's partly explained by sort of declining caliber of individuals. Right. So I think partly what's happening is the thing that we talked about earlier where there's this kind of like default expectation. Now, I don't know this empirically. And so like I want to say very specifically, this could be completely wrong. Right. Like I could be completely wrong with this. But my suspicion is that partly a society that's gone from like, you know, in, I think in the same time period of like where you're measuring productivity losses, like 15% of kids to like 40 to 50% of kids attending college, construction like, is damaged by having all the smart kids attend college, in part. Right.</p><p><strong>Aaron</strong>: There's not many. There's not that many. Jesse's.</p><p><strong>Jesse</strong>: I don't know necessarily if that's the case, but yeah, like so I think part of it is that there are some other things though, too. Like I argue that within the trades that HVAC is uniquely bad. Part of that is because the cognitive demands are higher. Right? So there is no doubt that in HVAC the cognitive demands are higher than other trades, right? And I think people find that surprising. I think if you say like, okay, my guess is I hate okay. So like, just to be clear, I hate IQ discourse. Like I think it's detestable. Like, I think it's I think it's a really like I think there's some stuff there that's real. Right. And I yeah. Like I'm not disputing like I don't think people are blank slates. I think there's probably a genetic component to IQ. I think, you know, all these other things like yeah, yeah. 
But, like, the worst people in the world are talking incessantly about IQ. Right. Like the worst.</p><p><strong>Aaron</strong>: I agree with.</p><p><strong>Jesse</strong>: The most racist people are extremely hung up on IQ. But I do think there's probably a thing where it could be that being a good HVAC technician demands an IQ that's, you know, at least around the median, like it could be that it needs a 100 IQ. And there's just, this is horrible, like I should not talk about this, but it could just be that the median entrant has an IQ that's ten points or five points less than what's actually required to fulfill the demands of the job, right?</p><p><strong>Aaron</strong>: You know, for what it's worth, I feel like I'm not the person to judge what the reaction to this is going to be, but this sounds totally reasonable and plausible to me.</p><p><strong>Jesse</strong>: Yeah. Well, yes, but I want to be careful with it. Like.</p><p><strong>Aaron</strong>: Yeah. Like I'm not.</p><p><strong>Jesse</strong>: I'm not broadly endorsing IQ discourse, like the horrible cesspit of IQ discourse. But I do think people end up in these roles, and it's just that they're being outpaced by the work. And then, you know, maybe it would work if you had a kind of institutional discipline where you're like, okay, yes, this is your life now. We're doing this thing no matter what your capacities are, we're going to pull you up. And I think the industry is somewhat indifferent to this. Like, the tolerance for fraud and grifting is extremely high. The tolerance for dishonesty. You could potentially build an industry that was, you know, not great cognitively but had very strong norms around honesty, and you don't end up with that. So it's just really bad. And you don't see that nearly as much with carpentry, right? You don't see that nearly as much with other trades. Right. In my opinion.</p><p><strong>Aaron</strong>: I mean, there's a bunch of things. My first reaction is, okay, this sounds like a problem the market should solve. Like, you should be having some entrepreneur, and maybe at some point I'll get to my case for you trying to be this person, but, like, you should have some entrepreneur saying, okay, I'm going to explain the issue and raise wages such that the most talented, potentially cognitively talented people are going to be in my business, and I'm going to say, oh, are you a rich hedge fund? We'll make your office air quality excellent, or whatever. And so my question is why that dynamic hasn't worked or proliferated.</p><p><strong>Jesse</strong>: Well, I think that's a great question. It's one of the things I talk about. One of the things I talk about is that that guy also doesn't get a good HVAC system, right? And to me, this demonstrates that there's actually an endemic problem, right? So when people do really bougie HVAC, it tends to just expand the grift a little bit. So they end up with, like, UV lights that we know empirically do nothing. Right. 
Not that this is an indictment of UV generally, but the things that we're installing in HVAC systems, we just know that the most common devices right now are not scientifically validated in any way, and airflow is probably much too fast for them to be successful. Right? And it's also noteworthy that most people working in UV and taking it seriously very quickly abandoned in-duct UV as an effective means to end up with a material impact on viral transmission. Right. But those guys will buy that stuff, like, they're buying it hook, line and sinker. They're buying other things that are, like, oh, how can I break it down? The really common thing to see in cold climates was, so there's this bullshit statement that boilers produce wet heat. So a boiler is a device that heats water and then circulates it. It's common in the northeast as you get further up. Right. So in areas that didn't have cooling systems by default, like Vermont. Right. Vermont has a ton of boilers, and they just distribute hot water through baseboards usually. Right. Or maybe in-floor loops or things like this. Right. So people describe these as being a wet heat. Right? Which is just fucking meaningless garbage. It just has no meaning, right? Like, the boiler doesn't release water. It's just.</p><p><strong>Aaron</strong>: That's what it sounds like. Like, as a layperson, I would imagine, oh, it's like hot air that's also humid. That's what I imagine.</p><p><strong>Jesse</strong>: Right? You imagine these characteristics. Right. And so then they would end up, and there are benefits to boiler systems, like having warm feet is an advantage, but when you go into these systems that are super bougie, it's just people start connecting boilers to air handlers and then blowing air across a boiler coil. And this ends up being thousands and thousands of dollars extra. Right. And it has no advantage over a furnace, which is just a device that blows hot air, right? Yeah. You end up with just reams and reams of pseudoscientific garbage, or multiple zoning systems, which in principle should work but then tend to catastrophically fail, like, at a really high rate. So people are like, oh, I'm going to take my air conditioner, I'm going to chop it into these tiny zones so I can achieve whatever temperature I want in every single room individually, right? And those catastrophically fail. Like, I'll be going to a job on Tuesday that is almost definitely one of those systems having catastrophically failed, for a super rich person, right? We end up doing a fair bit of that kind of stuff around Princeton, right?</p><p><strong>Aaron</strong>: Yeah, yeah. So, man, that's. Yeah. That's interesting. So interesting, man. I feel like, I always want to, you're going to say no, but we've had this discussion before, like, I will yell at you to try to make $1 billion.</p><p><strong>Jesse</strong>: Yeah. Like, I don't know, it is interesting. Oh, yeah. Sorry. And the other thing I want to talk about was the effect of, like, that. So it could be true that, if you're smart, right, you end up in the on-the-bubble jobs that just pay a lot more. Like, HVAC sales is actually insane. Like, it's actually insane. So when we're talking about, like, oh, blue collar jobs don't pay that much, right? 
There's also this filter in which the smart blue collar people tend to end up either in training or sales. Right. And salesmen, just, like. So there are some HVAC sales companies where, if you are making less than 150K a year, you're probably fired, because your numbers are just too low. Wow. Right. So they'll be like 7 to 10% commission, and if you're bringing in less than a million and a half, then they're like, I'm sorry, you're on the chopping block. You just have to be bringing in 2 million as a minimum, ideally 3 or 4. Right? At 3 or 4 it starts becoming exceptional, but there are lots of production companies built around this. They don't want people to be less than a million, right? Like, that's just totally unacceptable, you're just on the chopping block. So I think partly what happens is that anybody who's articulate and smart tends to gravitate towards those types of jobs because they're just so much more lucrative. And it's interesting to me that also, like, they're still pretty dumb, right? Like, it's interesting because you could talk to people with graduate degrees in engineering, and I don't know where the ceiling is, but it's, like, not that much higher than the HVAC sales ceiling, right?</p><p><strong>Aaron</strong>: Yeah.</p><p><strong>Jesse</strong>: Like, if you've got a graduate degree from a good college in the United States in engineering, there's a good chance that you might go towards some kind of technical sales. Yeah. And it doesn't appear to be that much better than HVAC sales, which I find kind of interesting. Like, I know people who make, you know, like, 3 to 5, right. That's a shock, to me anyway. Like, I don't make that much money. That's a very high amount of money in my world.</p><p><strong>Aaron</strong>: 300 to 500,000, to be clear. Yeah. Yes. Okay.</p><p><strong>Jesse</strong>: Yeah. To me, that's a lot, right? Yeah.</p><p><strong>Aaron</strong>: Yes. Likewise.</p><p><strong>Jesse</strong>: Right. And, to be honest, some of those guys come from real trades. Some of those guys can't put together a spreadsheet, right? Like, it's insane. So.</p><p><strong>Aaron</strong>: Yeah. Wait. Okay, so this is interesting. So, I guess, from, like, the capitalism perspective, it's like, what are you trying to sell? Like, is one issue that consumers, whether individuals or businesses, can't easily tell the difference? Like, I have an indoor air quality monitor. Would I be able to see the difference between a good HVAC installation in my apartment and a bad one, like, based on. Yes. Okay, okay.</p><p><strong>Jesse</strong>: Yeah, you would, but you're also in the minority of people who are, like, watching this. Like, the Airthings View Plus, I push people on it all the time now. Right. Like, that's an incredible device. And so it also kind of frames things. Yeah. Like, if someone calls me and it's just a call, I'm like, try to get an Airthings before I go to your house. Right. Like, if you have any concerns about indoor air quality, really, they.</p><p><strong>Aaron</strong>: Should sponsor you. They should sponsor Pigeon Hour.</p><p><strong>Jesse</strong>: No, seriously. Right. Like, it's insane. I've probably got people to buy, like, a lot of Airthings off Amazon. 
Like, I don't know, many, many people have bought those devices. And some of them, like, I'm totally Mr. Poverty, I'm like, oh, you know, maybe you can share it with a friend or someone else who's interested, and then they'll buy, like, two just for themselves, you know, so it's like.</p><p><strong>Aaron</strong>: Yeah, I feel like the kind of person who cares a lot about indoor air quality is usually willing to spend $200 to, like, measure it, or it's usually not the constraint or something. No. That's interesting.</p><p><strong>Jesse</strong>: Yeah. And also, I think there's probably a case, right. Like, radon alone looks really bad, and, I don't know for sure, but to me the Airthings View Plus looked like the first consumer-facing radon monitor that measured it in real time. Right. And to me that looked like an incredible breakthrough. Right. I don't recall something prior to that, I mean, it probably existed, but, yeah, the View Plus just looked like knocking it out of the park. Radon kills a lot of people. Radon kills a lot of people. I think there's probably a justification, based on the economics of the thing, for people to buy these. Right. Right now they list for like 300 bucks on Amazon, right? When they're not on sale, like 300 bucks. If you can materially take action to reduce radon levels, that alone looks pretty good. And we've done that, right. Like, we've found radon in quite a few cases. Also, there's a weird thing where, so when you buy a house, usually, I don't know if it's the state law or the lender, but you're generally required to measure radon, right, as part of the terms. And then if you discover something, you go back to the seller. But it's usually while the seller is in the house. So if they're halfway savvy and evil, which, for some reason, selling a house makes a very sizable fraction of people completely sociopathic, right, they'll just open all their windows, right? Which is incredibly unethical. Like, incredibly bad, right? But I'm almost positive it happens all the time, because I always encourage people to get a radon test. For years before the View Plus, someone would say, oh, I bought the house, radon tested fine, and I would just say, get another radon test, right? For years I said that, and now I just say get a View Plus. Right. But there have been many surprises in houses where people have gotten a View Plus or gotten a second radon test. Right.</p><p><strong>Aaron</strong>: So this is actually, like, encouraging, because back in the day of FTX supplying infinite money to a certain set of people who are aligned with effective altruism, I asked for a grant to basically give out these devices. I got turned down, which is, I think, maybe fine, but maybe now the grant makers would have listened to this episode and said, like, wow, Aaron, you were actually really prescient. We should have given you $30,000 to give these to whoever you need.</p><p><strong>Jesse</strong>: That's really funny. Yeah. So, for radon specifically, you could easily come up with a couple of criteria for determining whether it's more likely. Right. Like the presence of a basement. Right. 
Or a substantial below-grade foundation would predict it. The presence of an HVAC system in the basement also predicts it. Right. And there's a lot of interaction, this is totally overlooked, but there's a lot of interaction between the HVAC system and radon that people don't totally understand, and that I had worked on for many years before the View Plus. Right? So yeah, you end up with a lot of depressurized foundations where the depressurization is induced by the equipment that's located in the basement. And so that's pretty bad. Those are two kind of major predictors. Yeah. So you could narrow it down. Radon.</p><p><strong>Aaron</strong>: Yeah. Yeah.</p><p><strong>Jesse</strong>: Radon. So you could potentially limit this further. It is interesting. You know, also publicly, right, in the pandemic, I was very surprised that CO2 didn't take off in restaurants and things like that. I was very surprised. I was confidently predicting, in like late spring of 2020, to my wife, when we go out to restaurants, there are going to be CO2 monitors, like, everywhere. There are just going to be CO2 monitors. I can remember being like, finally, honey, CO2 as a proxy, the time is here. Like, CO2 as a proxy had been widely used, like in office buildings. I worked in a place where the ventilation system cycled on and off based on CO2 levels, at much higher levels back in the day, you know, whatever, four or five years ago, it was like 1,200 ppm was the cycling default. So they just tried to keep it below 1,200. And then by mid Covid it was, I think, people initially were at 800 and then maybe 600 or something like that. Right.</p><p><strong>Aaron</strong>: Yeah. Just to be clear, so, a proxy for just, like, air quality in general, including, in a pandemic, the potential for viral stuff, or.</p><p><strong>Jesse</strong>: Yeah. So I think a proxy for, like, respiratory disease transmission.</p><p><strong>Aaron</strong>: Oh okay. Okay.</p><p><strong>Jesse</strong>: Right. So people have talked about that for years. Right. Yeah. Maybe you could say a proxy for occupancy levels or something. Right. But nah, probably not. Probably more like disease transmission risk. It's not clean, right. But that was how people talked about it, with CO2 as a proxy, right? It's not clean. Like, maybe people are sick, maybe they're not sick, so maybe you're just exhaling there, or sorry.</p><p><strong>Aaron</strong>: There's also the potential direct effects of CO2, which I know is kind of controversial.</p><p><strong>Jesse</strong>: Yes. Yes. Slightly controversial. My guess is it's better to think of it as a proxy for a bunch of other things, and we just happen to be measuring CO2. It could be other things as well as disease transmission risk. It's not clear. Sitting in a room with elevated CO2, you know, when you're by yourself, it probably doesn't, there, maybe there's some cognitive effects, you know, and maybe those aren't directly CO2, they could be some other thing. But, like.</p><p><strong>Aaron</strong>: Yeah yeah yeah.</p><p><strong>Jesse</strong>: Like, it's not. Yeah. That's less clear. But as a risk for, I think it's a pretty good way to think about under-ventilated spaces. There's a trade-off there with filtration as well. 
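</p><p><em>(Again, just an illustrative sketch: the CO2-setpoint logic being described is simple enough to write down. The setpoints mirror the rough figures mentioned here, roughly 1,200 ppm as the old office default and 800 and then 600 ppm as the Covid-era targets; the 100 ppm hysteresis band is an assumed value, not something from the conversation.)</em></p><pre><code># Toy demand-controlled ventilation logic keyed to CO2, per the discussion above.
# Setpoints mirror the rough figures from the conversation; the 100 ppm
# hysteresis band is an assumed value, just to keep the fan from short-cycling.

def ventilation_should_run(co2_ppm, setpoint_ppm, currently_running, band=100):
    """Run ventilation above the setpoint; keep running until CO2 falls a band below it."""
    if co2_ppm >= setpoint_ppm:
        return True
    if currently_running and co2_ppm >= setpoint_ppm - band:
        return True  # avoid short-cycling right at the threshold
    return False

# Pre-Covid office default vs. the tighter mid-Covid targets mentioned above.
for label, setpoint in [("pre-Covid, 1200 ppm", 1200), ("early Covid, 800 ppm", 800), ("later, 600 ppm", 600)]:
    decisions = [ventilation_should_run(c, setpoint, True) for c in (500, 700, 900, 1300)]
    print(label, decisions)
</code></pre><p><strong>Jesse</strong>: 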
So like that got difficult people I think there was initially a lot of pushback on using CO2 because there were potentially school interventions. So like when people are talking about opening school, there were potentially school interventions where you'd go in and just like filter school classrooms like crazy, and they had no window and no access to ventilation. And we were like, yeah, that's probably okay. But if we put a CO2 monitor there, the CO2 would be really high, right? But the net result of not making things transparent was and this wasn't obviously the solitary contributor, but in not doing that right, like in refusing to adopt CO2 monitors widely like we didn't open schools. Right. Yeah.</p><p><strong>Aaron</strong>: Yeah. Wait. So sorry, what what's the trade off between CO2 stuff and filtration? Like they seem to. Yeah. Can you explain that?</p><p><strong>Jesse</strong>: Yeah, sure. So when you filter air. Right, you're just recycling it, right? You can't filter out CO2, right?</p><p><strong>Aaron</strong>: Right right right, right.</p><p><strong>Jesse</strong>: When you're running a filter, you're just you assuming that the filter is as effective as replacing the air with fresh air at reducing disease transmission. And that's probably pretty close to correct. Right. So filtered rooms, assuming the same volume of air of clean air is filtered, they probably have the same transmission risk as a room in which we're just replacing that same air. Right. Ventilation versus filtration.</p><p><strong>Aaron</strong>: Yeah.</p><p><strong>Jesse</strong>: So there was hesitance to make this transparent. Like people didn't want to distribute CO2 monitors across across classrooms, like in the early summer of 2020. Right? Or through the late summer. And they didn't want to do this, right. Because they thought, oh, that's going to be confusing. And then like, you know, they didn't really go back to school anyway, right? Like it just kind of didn't materialize. I was very much in the pro transparency camp for this stuff. Yeah.</p><p><strong>Aaron</strong>: Yeah yeah.</p><p><strong>Jesse</strong>: Yeah. And I continue to be surprised at the lack of public facing CO2 monitors.</p><p><strong>Aaron</strong>: I mean, also like notoriously like us spends a lot of money on public education. Like one one $300 device per classroom would not break the bank of most public school systems, as far as I know.</p><p><strong>Jesse</strong>: Yes, that and also like it was weird, right? Because I think parents easily would have done that. Like the cost of parents alone was probably really high, right? So I think like the whole thing I found bizarre, like I found it truly bizarre. And then and, you know, like I sent my kids in with CO2 monitors, not the view. Plus what was the the right? Like I said, I emailed my son's teacher and he was like this is incredible. Like, I can't believe that there are. Solutions like there are engineering based solutions to this problem had not occurred to him. And this was like months like this was well into the fall. Like I forget when school was partly open, but it was probably like November when they were going back to school like two days a week or something, right? And he he had never he had never considered that there could be, you know, solutions that were engineering based to addressing this.</p><p><strong>Aaron</strong>: Yeah. Yeah. That's wild. And going back to the ventilation thing. So, the trade off. 
So like, the, the best of both worlds is just you pipe in fresh air, like, and then maybe you circulate internally, right? Like, you might want to also, like, have a, have a standalone filter. So like maybe it's. Yeah, maybe I don't fully understand like why there's necessarily a trade off or is it just like.</p><p><strong>Jesse</strong>: You would have, you would have had classrooms in which you couldn't, you didn't have a window. I think this is I think this is probably somewhat rare. Right. So if you can open a window, you're probably fine, even if the window itself passively doesn't let in enough air. You could just put a fan in the window and now you'll just start driving.</p><p><strong>Aaron</strong>: Yeah, that's what I, that's what I, I personally well I used to now I have an AC thing but that's what I used to do. I used to have a fan blowing either in or out like of a window to like. Yeah.</p><p><strong>Jesse</strong>: So, so yeah, you could, you could if you have a class. So the counterargument was there may be some classrooms where there is neither ventilation nor a window. Right. And in those classrooms we'll have to filter. Right. I think this might describe a very small number of classrooms. Yeah. Right. Yeah. Like, have you ever been in a window?</p><p><strong>Aaron</strong>: Right. Yeah. Yeah. Like, maybe there's a few in, like, a basement in college, but, like, it's just not that common.</p><p><strong>Jesse</strong>: Not common. Right. Like, it also felt like you could probably, like, get out of that somehow, right? You could move to a library or something or whatever, right? Like, yeah. Yeah, there was just there was some there were classes eventually held outdoors. Right. Like, and that obviously would have been like massively successful for Covid purposes. Right. A little bit over the top. But like, you know in, in like September it's fine. But like yeah. So that that was one of the counterarguments. And so like people there was a common argument against displaying CO2 to room occupants because it might confuse them. And you'd have to explain the filtration thing.</p><p><strong>Aaron</strong>: Oh, God. I mean, I feel like there's like this this version of argument gets like, made in a lot of different settings, just like don't don't measure and or provide people like true information because of x, Y and z. And like it's not literally impossible that this never checks out, but I'm always so skeptical of it. I mean, like, one thing is like blood testing. Like, I feel like, yeah, if I could have a continuous blood monitor for like all my biomarkers, it's like, yes, like that would turn up some like quote unquote false positives, but like you can just account for that. And like in general knowing like more true information tends to be better.</p><p><strong>Jesse</strong>: Yeah. I mean, I think in this case, like I can't speak to the, the biomarkers thing because I probably don't know enough about it. But like in this case, I think, well, yes, I think the default assumption should be we should just be honest and transparent with people where possible. Right. Like like, you know, there are just many examples of this like just endless, right? Like not wanting to cause a panic is another one, right?</p><p><strong>Aaron</strong>: Like yeah yeah yeah yeah.</p><p><strong>Jesse</strong>: Right. Like we don't want to give people true information about what this could be because we don't want to cause a panic. Right. Like that seems that seems really weak. yeah. Yeah. 
So, but a lot of it seems to center, even the CO2 thing sort of centers around, like, we don't want to make people alarmed, right? I don't know what the next step in this is, like, what people do when they're alarmed. Right. Like, they might retaliate in some way.</p><p><strong>Aaron</strong>: Yeah. I mean, I don't know, I can't imagine people rioting because there's a little air monitor on the side of the room.</p><p><strong>Jesse</strong>: Right, right. And, you know, as it stood, I mean, people talk about this a lot, right? Like, with Covid, I think that was obviously a huge failure. I think to some extent there are some things going on that people maybe overrate. Like, transparency wasn't as good as it could have been. There's some other stuff, though, too, where, to be fair, that was an argument given by people in public health. So I feel confident in saying I think that was a false argument that was advanced by people who should have had the expertise to do that effectively. I think there's another thing in Covid that people do where they go, like, the playgrounds were closed, right? And I don't think the playgrounds being closed in all of those cases represents, like, the consensus view of public health. I think it represents a kind of, like.</p><p><strong>Aaron</strong>: I got you, I would say.</p><p><strong>Jesse</strong>: Like, a narrow-minded, petty bureaucrat somewhere.</p><p><strong>Aaron</strong>: Yeah. It's the kind of thing where, if you're shutting down all of the county's buildings, that's technically on the checklist of things to shut down. Yeah.</p><p><strong>Jesse</strong>: Right. Right. Exactly, like the beaches, right? People go, why did they close the beaches? Like, public health is just wrong, the science is wrong. And you're like, well, in the beach case, that just looks like something where somebody got a hold of something, and it doesn't reflect this other thing. Right? Like, this thing of.</p><p><strong>Aaron</strong>: Like.</p><p><strong>Jesse</strong>: The science being wrong. Right. So I'm cautious with the argument in some areas. Right. Because I think people overmake it. Right. Like, it wasn't a consensus perspective, let's say. Yeah, yeah, yeah. Funny.</p><p><strong>Aaron</strong>: Yeah. And interesting. Okay. So wait, so we got here from, like, okay, why isn't the market solving the AC, or HVAC, which stands for, hold on, because I didn't know this at one point, so, I'm sorry, this is going to get me canceled among Jesse's colleagues, heating, ventilation, and air conditioning, in case I'm not the only one. There we go. Okay.</p><p><strong>Jesse</strong>: Yes.</p><p><strong>Aaron</strong>: So if we subsidized air monitors, would that have the downstream consequence of fixing the competence in HVAC?</p><p><strong>Jesse</strong>: Yes. So I think generally making many things transparent would be extremely useful. And that's one of the things. Now, in general, right, the focal point for people purchasing HVAC services isn't always indoor air quality, right. But yeah, that would go a long way. It's a big ask, right. 
Like, you know it takes a level of technical sophistication that's high. But I think like I pushed for this in many areas. Right. So not just indoor air quality but like installation standards right now. Like we just have the capacity to make installation standards transparent because so much is recorded. Right. Like we just have like temperature and pressure in various forms being recorded. It's the most kind of central thing to HVAC installations. Right. And so we can't we have this information that we're able to provide digitally to people. And then we just don't do it right. Like yeah, they don't know that it exists and we don't tell them. Right.</p><p><strong>Aaron</strong>: So yeah like I'm so I like where would I go to find this information for like I have an apartment. Like what.</p><p><strong>Jesse</strong>: Oh it's really hard to do. So like, you won't ever be given the information. Like so when we install, let's say we install an air conditioner, right? One of the things with an air conditioner is that it? So you have refrigerant moving through piping, right. And that's a big part of of like air conditioning. Right. And so there are a few things that should happen with that piping. And it's actually it's actually a very diligent process that has to be followed. Right. One is that we want to pressurize the system, which is partly to test for leaks and then partly to purge it with, with nitrogen. Right. So we pressurize the system with nitrogen to ensure that there are no leaks. It's a very high pressure, maybe like we do like 500 psi, right? When we do that test at 500 psi. That is something that we can record. So we can record any loss of pressure to the system if we pressurize it. Right. And we can just we'll just have a digital file. That file could be consumer facing. Right. That would be very simple to do. Then we release the nitrogen and we do what's called an evacuation. Right. So we draw the system to a very very low pressure to get any like any impurities out of the system and also ensure that it doesn't leak a second time. Right.</p><p><strong>Aaron</strong>: Yeah.</p><p><strong>Jesse</strong>: Back. That's called a vacuum test. Right. And so we put it under a vacuum and we hold that vacuum for some period of time. So both of those could be made transparent with incredible ease. Right. And this would solve like okay. So one of the things that's happening with global warming is refrigerant has a very high global warming potential. We're phasing out, one refrigerant right now and going with a somewhat lower global warming potential. Right. And so like what appears to be happening is we have endemic leaking of refrigerant, and much of the gains of electric electrification are being clawed back by leaking refrigerant in systems. Right. And that's really bad. Right. So like, we have this policy direction that's like, oh, we have to do this thing, right? But we have, like endemic leaking refrigerant problems that are clawing back the gains of electrification. So everybody converts to heat pumps, but heat pumps require refrigerant, and then we leak the refrigerant and then we kind of lose the game.</p><p><strong>Aaron</strong>: So I'm an idiot. Is refrigerant generally a liquid or a gas?</p><p><strong>Jesse</strong>: It's both.</p><p><strong>Aaron</strong>: Okay.</p><p><strong>Jesse</strong>: Right. So it's actually doing a thing. So it's not an idiot. It's like it's you're making refrigerant is making a phase change. 
Well, two phase changes.</p><p><strong>Aaron</strong>: Well, that makes sense.</p><p><strong>Jesse</strong>: Yes. Yes it does. Right.</p><p><strong>Aaron</strong>: Okay, there we go. So you have this very smart.</p><p><strong>Jesse</strong>: You have this kind of process that's really bad and would likely be fixed by that. I have a couple of other ideas, like the adoption thing. There's a new technology that both technicians and the industry fail to adopt, a press tool that crimps fittings, versus either flares, which are very finicky, or brazing, which is not quite as bad but still sometimes pretty bad. So the HVAC industry refuses to adopt press fittings as well. So now very little is compatible with press fittings in the field, so you have to kind of cut stuff off and make press fittings, which is also really bad. But yeah, so you have these things where you can just measure things, and if you made that transparent to customers, they would just get the digital file and it would prove that the system at the time of installation was free of impurities and not leaking. Right. And it's totally weird to me that this has no traction, like it has no traction whatsoever. Right? Like, nobody. Anybody making the digital platforms knows that technicians, even if they want to use the digital platforms, will not want to show those platforms to their customers, right?</p><p><strong>Aaron</strong>: Yeah. Yeah. It's like it can't only benefit you as a, or, like, not immediately as a provider, I guess potentially it could, right, because you could say, I'm going to provide this service that my competitors aren't, but people don't really know about it. Yeah. No, I mean, I'm a huge, I collect data on everything I possibly can. So I want to know all this stuff about, like, my environment, health, whatever. But I feel like most people aren't like that.</p><p><strong>Jesse</strong>: Yeah. And it's also weird because, okay, so the blower door, which we mentioned earlier, right. So in 2010, when we started doing energy retrofits, I was like, this is the coolest thing ever, because we got a couple blower doors, and those test homes for leakage. Right? You put a big fan in the front door, you test the home before you start the work, right? You depressurize the home to like -50 pascals relative to the outdoors, and you measure the airflow across the fan. It's very slick. Like, I got it down to like five minutes or something, right? I could test a home in, like, five minutes. You set up this fan very fast, right? And then at the end of the job, you would also test the home, right? And you would see what kind of leakage reduction you'd gotten. At the time, too, I had read Atul Gawande, who I really liked. He wrote a lot, I think, for The New Yorker. And he had, it wasn't super empirical, but he had a book on testing. And so I was especially enamored of the idea of benchmarking. And then the people working for me, and I was in the field a lot then, we just went to work on taking these numbers down. Right. And even now, I have a guy who, I post the pics of the blower door sometimes in this long Twitter thread I have. But, you know, they went out to a job last Thursday. They spent a day there. 
They reduced leakage in the home by like I think it was like 26 or 27%. That's really typical, right? Like 1 to 2 guys will reduce leakage in the home. Working in an attic and like by like, you know, somewhere between 20 and 30% is pretty routine. Sometimes they'll get like 50 or 60 even, right? So like that kind of stuff was super motivating. Like we worked really hard to figure out, like what was working best. And the same thing is true of like, if someone gives you this digital file and you're like okay, how quickly can I make this number go down? Right?</p><p><strong>Aaron</strong>: Or yeah, yeah.</p><p><strong>Jesse</strong>: Right. Like it's very like I found that like kind of like kind of intrinsically motivating. Right? Like where you're like I just have this number and I want to hit it as fast as possible or whatever. Right?</p><p><strong>Aaron</strong>: The economy tends to be very good at optimizing for changing numbers, either either higher or lower. Right. If you could get some sort of price on that, that the problem is that there's not like I would guess you're not economically capturing it like all like that value that you're like, you could just do like a worse job on reducing that number and like you would get paid the same amount, right?</p><p><strong>Jesse</strong>: Yes. Correct. Yeah. That's absolutely correct. Yeah. That's true. Yeah. The incentives aren't like they're not great. But nevertheless like there's an intrinsic quality to doing it like like that. I felt that was like, oh man. And also speed like speed was definitely part of it because you make more money if you go fast, but also that you could be like give, like just send the owner two pics at the end of the job. Like you take a picture of the the little computer, the manometer at the start of the job and then you take it after, right? My guys do this. This is what they do systematically every time they hit a job, right? Like yeah. And like yeah, sure. Like it's maybe slightly against the incentives, but at the same time, you know, like we partly came up with systems like invented the systems for retrofitting buildings. We've done like, I think over 2000 retrofits in in like insulation type retrofits in homes. Right. Like it's a lot and like, it's also weird because like, everybody kind of reports like, I don't think there's anyone in our league for like those reductions. Right. And that's also weird because it wasn't like rocket science, you know?</p><p><strong>Aaron</strong>: Yeah. Okay, so why aren't you richer? Sorry. Maybe we'll cut this part out. No, but. Okay. Sorry. I feel like that was a very autistic way of asking the question. Like, you have all this insight, and there's a gigantic market which is, like the United States of America is probably willing to spend quite a bit of money on air quality. How can I convince you to become a billionaire?</p><p><strong>Jesse</strong>: Yeah, I don't know. Like, it's a difficult market, right. For. Well it's not, I don't know. Yeah. No, it's a good question. It's fair. Like, you know, partly like time. Right? And like, how much time am I prepared to put into this stuff? Like.</p><p><strong>Aaron</strong>: That's, like, totally reasonable. Yeah.</p><p><strong>Jesse</strong>: Yeah. Like I spent a lot of time doing other stuff. Right. So. But, like, you know. Yeah, we, I guess we could be a little more financially successful. We are doing more things like, we're we. 
It's taken a long time because I wanted to feel technically really good about it, but we are adding, you know, HVAC stuff at a steady rate. We encounter the same problem anybody encounters, right? Which is that it's really hard to get technicians who are, yeah, kind of committed, and that type of stuff. It's hard to scale the business.</p><p><strong>Aaron</strong>: You want to plug your companies, like their names and your websites?</p><p><strong>Jesse</strong>: Oh yeah, sure. Yeah, yeah. So my company is called Tay River Builders. The wood store I also own, which you haven't mentioned yet, is called Willard Brothers, which you should search on Instagram, because my youngest brother runs a really good Instagram page for our wood store and furniture shop and sawmill and kiln. Tay River is kind of my side of things. One of my other brothers runs the carpentry and building construction, and I manage the insulation and air sealing guys and then do the HVAC.</p><p><strong>Aaron</strong>: So I will link, I will give you as much advertising as possible. I will link this, and all seven people listening to this, well, we'll see.</p><p><strong>Jesse</strong>: Yeah. Exactly. Yeah, it's cool.</p><p><strong>Aaron</strong>: No, no, I wasn't meaning to say, like, wow, you should really be squeezing the business for another 10%. I guess I was really wondering, like, why are you just, like, literally the only person in the world with, like, the combination of knowledge, insight, experience, etc. who could potentially try to create, like, a large industrial, I don't know, like, scale things up three orders of magnitude, if that makes any sense.</p><p><strong>Jesse</strong>: Yeah. That's interesting. So that is not super uncommon. Like, it's not uncommon to have mixed residential and commercial HVAC companies that are doing, say, 50 to 100 million. It's not super common, right, but that's what the private equity wave has been buying, a ton of those sorts of companies. Like, I am in a weird position, I'll acknowledge, because most of the people who think about things the way that I do don't stay in the field for as long as I have.</p><p><strong>Aaron</strong>: Maybe that makes a lot of sense.</p><p><strong>Jesse</strong>: Yeah. They transition. Like, I have friends who, you know, inasmuch as they're similar to me, yeah, which they probably are, right, they teach. One of my friends is pretty high up with ACCA, the Air Conditioning Contractors of America, which sets standards. He does a lot of teaching, a class that is targeted towards other trainers. Right. Other friends work for, like, utility or clean energy programs.</p><p><strong>Aaron</strong>: Yeah. Cool.</p><p><strong>Jesse</strong>: It's not common for people to stay in the field, which I think partly speaks to the, like, norms. Right? Like, you get into the field and, you know, you are really constantly swimming against the current. Even, like, another one of the better techs I knew, who didn't work for me but worked for another big company and was a true standout, right? He just took a job, like maybe a year ago, representing a Chinese mini split heat pump manufacturer, like. 
And he worked for a long time in the field, in all fairness.</p><p><strong>Aaron</strong>: But yeah.</p><p><strong>Jesse</strong>: It's just not that common in the field. And that's also a distinction between other trades. Like, there are many standout people framing houses, right? You can find many examples of people, and they're circulating at trade shows. It's pretty hard, there is a weeding-out thing, people just don't stay in the field. I could probably come up with a couple examples, but it does seem, I mean, somewhat rare.</p><p><strong>Aaron</strong>: I mean, maybe one other thing is just that people really don't like doing entrepreneurship. Like, it requires a lot of risk tolerance and a lot of initiative. I'm imagining, like, the person you just mentioned, the standout tech or whatever, in principle could have tried to start a competitor to you, right? But people don't like doing that.</p><p><strong>Jesse</strong>: That's, yes. That's true. Yeah. I think there is an aversion. It's definitely a different skill set, whatever it is. Of the many high volume companies, many of which have been acquired by private equity, the technical sophistication of those companies is shockingly poor.</p><p><strong>Aaron</strong>: Like, wow.</p><p><strong>Jesse</strong>: It is crazy. And I know a lot of them, right? Like, I've had meetings with them, we do some of their work. It's almost like, I have a friend who does sales for another company, and all we do is laugh all the time. It's just wave after wave of incredibly dumb things being said constantly. Like, hysterically. Right? Just a complete lack of understanding of anything whatsoever. It is really quite funny a lot of the time.</p><p><strong>Aaron</strong>: Wow. So I'm not going to lie, I am kind of running out of steam, so we might have to have a part two. I feel like you could honestly provide dozens and dozens of hours of podcast content. So this might have to be approximately part one.</p><p><strong>Jesse</strong>: Yeah, I actually have to run too, I got to get back, knock out some stuff. That's cool.</p><p><strong>Aaron</strong>: Do you want to give, like, a last message for part one of an hour with Jesse?</p><p><strong>Jesse</strong>: I'm, like, happy to do this. Yeah. This was really fun. It's cool to think that this in some way overlaps with EA.</p><p><strong>Aaron</strong>: So. Yeah. Awesome. Awesome. Well, thank you. Thank you so much.</p><p><strong>Jesse</strong>: Thanks, Aaron. 
I'll talk to you later.</p><p><strong>Aaron</strong>: See ya.</p>]]></content:encoded></item><item><title><![CDATA[Preparing for the Intelligence Explosion (paper readout and commentary)]]></title><description><![CDATA[In which I read and then briefly discuss a paper by Fin Moorhouse & Will MacAskill]]></description><link>https://www.aaronbergman.net/p/preparing-for-the-intelligence-explosion</link><guid isPermaLink="false">https://www.aaronbergman.net/p/preparing-for-the-intelligence-explosion</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Tue, 25 Mar 2025 22:49:09 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/159868306/3a215dc66673ca3596d0ccda4c760bd8.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion.pdf">Preparing for the Intelligence Explosion</a> is a recent paper by Fin Moorhouse and Will MacAskill. </p><ul><li><p><strong>00:00 - 1:58:04</strong> is me reading the paper. </p></li><li><p><strong>1:58:05 - 2:26:06</strong> is a string of random thoughts I have related to it</p></li></ul><p>I am well-aware that I am not the world's most eloquent speaker (lol). This is also a bit of an experiment in getting myself to read something by reading it out loud. Maybe I&#8217;ll do another episode like this (feel free to request papers/other things to read out, ideally a bit shorter than this one lol)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Ct5L!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Ct5L!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 424w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 848w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 1272w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Ct5L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png" width="1114" height="580" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:580,&quot;width&quot;:1114,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:63522,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.aaronbergman.net/i/159868306?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Ct5L!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 424w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 848w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 1272w, https://substackcdn.com/image/fetch/$s_!Ct5L!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0e04bf3-0f0c-4f08-8982-be82de00037f_1114x580.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion.pdf">Please see the paper here</a>!</figcaption></figure></div><div><hr></div><p>Below are my unfiltered, unedited, quarter-baked thoughts.</p><h2>My unfiltered, unedited, quarter-baked thoughts</h2><p><em>(Transcribed ~verbatim)</em></p><p>Okay, this is Aaron.</p><p>I'm in post-prod, as we say in the 
industry, and I will just spitball some random thoughts. I'm not even with my computer right now, so I don't have the text in front of me.</p><p>I feel like my main takeaway is that the vibes debate runs from "AI is as important as the internet, maybe" on the low end to "AI is a big deal." But if you actually do the math, approximately all of the variation is actually just between insane and insane to the power of insane. And I don't fully know what to do with that.</p><p>I guess, to put a bit more of a point on it, I'm not just talking about point estimates. It seems that even if you make quite conservative assumptions, it's quite overdetermined that there will be something like explosive technological progress unless something really changes. And that is just a big deal. It's not one that I think I've fully incorporated into my emotional worldview. I have it in part, I think, but not to the degree that my intellect has.</p><p>So another thing is that one of the headline results, something that Will MacAskill, I think, wants to emphasize and did emphasize in the paper, is the century-in-a-decade meme. But if you actually read the paper, that is kind of a lower bound, unless something crazy happens. And this is me editorializing right now.</p><p>I think something crazy could happen: for example, nuclear war with China that destroys data centers and means AI progress is significantly set back, or some unknown unknown. But the century in a decade is truly a lower bound. You need to be super pessimistic within all the in-model uncertainty. Obviously there's out-of-model uncertainty, but the actual point estimates, whether you take arithmetic means over distributions or geometric means, however you combine the variables, come out much, much faster than that.</p><p>So that is a 10x speedup, and that is, as I said, about as pessimistic as you can get. I don't actually have a good enough memory to remember exactly what the point estimate numbers are. I should go back and look.</p><p>Chatting with Claude, it seems that there are actually a lot of different specific numbers. One question you might have is: over the fastest-growing decade in terms of technological progress or economic growth in the next ten decades, what will the peak average growth rate be? But there are a lot of different ways you can play with that. What's the average going to be over the next decade? What about this coming decade? What about before 2030? Are we talking about economic progress or some less well-defined sense of technological and social progress?</p><p>But basically it seems the conservative scenario is that the intelligence explosion happens and, over some importantly long series of years, you get a 5x year-over-year expansion of AI labor. So not a doubling every year: after two years you get a 25x expansion, and then 125x after three years. And I need to look back. I think one thing they don't talk about specifically is... oh yeah, sorry.</p><p>They do talk about one important thing to emphasize (and as you can tell, I'm not the most eloquent person in the world): they talk about pace significantly, and about limiting factors.</p>
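<p>(To make the compounding concrete, here is a minimal sketch of the arithmetic in that conservative scenario; the 5x-per-year figure is just the illustrative rate mentioned above, not a precise number from the paper.)</p><pre><code class="language-python"># Rough sketch: how a constant yearly multiplier on AI labor compounds over a few years.
# The 5x rate is just the illustrative "conservative scenario" figure mentioned above.
def cumulative_expansion(rate_per_year: float, years: int) -> float:
    return rate_per_year ** years

for years in (1, 2, 3):
    print(years, cumulative_expansion(5.0, years))  # 5.0, then 25.0, then 125.0
</code></pre>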
<p>But the third thing, the thing you might solve for if you know those two variables, is the length of time such an explosion might take place across. And just thinking out loud, that is something that, whether intentionally or otherwise, or me being dumb and missing it, I don't think they give a ton of attention to. I mean, my intuition is that's approximately fine.</p><p>Does it matter, conditional on knowing the distribution of rates over, say, blocks of years? So we're not talking about seconds; I guess we could be talking about months, but we're not talking about weeks, and we're not talking about multiple decades.</p><p>So we're talking about something in the realm of single-digit to double-digit numbers of years, maybe a fraction of a year. So two-ish, three orders of magnitude of range. And so the question is, conditional on having a distribution of peak average growth rate for some block of time, does it matter whether we're talking about two years, or ten years, or what? And, sorry, backtracking: also conditional on having a distribution for the limiting factors.</p><p>So at what point do you stop scaling? Because we know the talking point that infinite growth in a finite world is impossible is true; people are just off by 1000 orders of magnitude, or maybe 100. So there actually are genuine limiting factors, and they discuss this: at what point you might get true limits on power consumption or whatever.</p><p>But yeah, just to recap this little mini ramble: one thing the paper doesn't go over much is the length of time specifically, except insofar as that is implied by the distributions you have for peak growth rates and limiting factors.</p><p>Another thing that wasn't in the paper, but that was, I'm just spitballing, in Will MacAskill's recent interview on the 80,000 Hours podcast with Rob Wiblin (about the world's most pressing problems and how you can use your career to solve them): I think Rob said he wishes that the AI x-risk community hadn't been so tame or timid in terms of hedging, emphasizing uncertainty, saying there's a million ways it can be wrong, which is of course true. But I think the takeaway he was trying to get at was that, even ex ante, they should have been a little bit more straightforward.</p><p>And I actually kind of think there's a reasonable critique of this paper along those lines, which is that the century-in-a-decade meme is not a good approximation of the actual expectation; the expectation is something like a 100x to 1000x speedup, not 10x. Even as a reasonable conservative baseline, you have to be really within-model pessimistic to get down to the 10x point.</p><p>Another big thing to comment on is just the grand challenges. I've been saying for a while that my P(doom), as they say, is something in the 50% range. Maybe now it's 60% or something after reading this paper, up from 35% right after the Biden executive order. And what I mean by that, I think, is some sort of loose sense of: no, we actually don't solve all these challenges.</p><p>Well, one thing MacAskill and Moorhouse emphasize, in both the podcast I listened to and the paper, is that it's not just about AI control. It's not just about the alignment problem. You really have to get a lot of things right. I think this relates to other work MacAskill is on that I'm not super well acquainted with.
But there's the question of how much you have to get right in order for the future to go well. And I actually think there are a lot of strands there. I remember on the podcast with Rob they were talking in terms of the percentage value of the best outcome. I'm just thinking out loud here, but I'm not actually sure that's the right metric to go with.</p><p>It's a little bit like: imagine we have the current set of possibilities, and then exogenously we get one future strand in the Everettian multiverse, and that single thread points to the future going a billion times better than it otherwise could. I feel like this should change approximately nothing, because you know it's not going to happen. But it does revise down your estimate of the expected percentage of the best future; it revises that down a billion-fold.</p><p>And so, no, I'm not actually sure if this ends up cashing out; I'm just not smart enough to intuit well whether this ends up cashing out in terms of what you should do. But I suspect that it might. That's really just an intuition, so I'm not sure.</p><p>You know, something that will never be said about me is that I am an extremely well organized and straightforward thinker. So it might be worth noting these audio messages are just random things that come to mind as I'm walking around a park, basically. That's also why the audio quality might be worse.</p><p>Oh yeah, getting back to what I was originally thinking about with the grand challenges and my P(doom): they just enumerate a bunch of things that in my opinion really do have to go right in order for some notion of the future to be good. And so there's just a concatenation issue, I forget what the term is, even if you're relatively optimistic (and I kind of don't know if you should be) on any one issue.</p><p>Okay, so let me just list some of these off: AI takeover, highly destructive technologies, power-concentrating mechanisms, value lock-in mechanisms, AI agents and digital minds, space governance, new competitive pressures, epistemic disruption, abundance (so, capturing the upside), and unknown unknowns. It's not as clean a model as each of these being fully independent; it's much more complex than that. But it's also not as simple as saying, oh, if you have a 70% chance on each, you can just take that to the power of eight, or however many there are, and get the chance that they all go well, not least because they're overlapping and not independent, et cetera.</p><p>But I feel like this is sort of a more explicit version of the kind of thing I've been getting at with my relatively high P(doom) numbers, even though a significant amount of my approximately 50% or 60% probability mass is basically not that there's a classic Yudkowskian takeover, a single event where everybody drops dead from a pandemic that Claude 3.9 designed. It's basically that, yeah, there's a shit ton of stuff to get right.</p><p>We could actually imagine a thriving civilization on Earth, even one that ends wild animal suffering, and there's still the question of what else is going on elsewhere in the universe, and shit does get very weird. You get into weird counterfactuals, alternative evolution trajectories, aliens with different values.</p><p>So yeah, what else is there?</p>
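<p>(A minimal sketch of that concatenation point, using made-up numbers: the 70% figure and the count of eight are purely illustrative, not estimates from the paper.)</p><pre><code class="language-python"># Toy illustration: even fairly optimistic per-challenge odds multiply down quickly
# if you (wrongly) treat the grand challenges as independent.
# The 70% and the count of eight challenges are just the illustrative numbers above.
p_each = 0.70
n_challenges = 8
p_all_go_well_if_independent = p_each ** n_challenges
print(round(p_all_go_well_if_independent, 3))  # ~0.058, i.e. under 6%

# In reality the challenges overlap and are correlated, so the true joint probability
# plausibly lies somewhere between this naive product and the single worst per-challenge odds.
</code></pre>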
<p>Oh yeah, I'm particularly nervous about getting AI consciousness right. I think the most likely way that we do get it right, or do end up treating AI well, is basically by accident. Just not a great position to be in.</p><p>I haven't really thought specifically about the likelihood that we basically treat AIs well, as a group, or don't. I don't know. I want to say 50-50 or something, but it's also correlated with other things. Maybe that's too optimistic. I'm really just doing the opposite of what I dislike, which is when people refuse to put numbers on things. But I could change that in five minutes.</p><p>Yeah, another thing that is a little bit outside the scope of the paper, but obviously relevant, is: okay, so what do we do about this? You know, there's the Pause AI meme, and actually I'm just in favor of pausing frontier AI scaling. But quite possibly the more plausible and, frankly, more important thing is that you actually take the growth rates down from something like 1000x, or say approximately 10x, year over year, which would give you a 10^10 increase over a decade in terms of technological progress, to approximately doubling every year, which would get you 2^10, which is 1000x or something. Or maybe you want to take us down to 30% growth. I don't know what 1.3 to the 10th is; maybe that's roughly 10x or something. No, that can't be right.</p><p>But anyway, I do think an important question is: okay, what is the fastest rate of change we can get where we still muddle through? And 10x doesn't seem crazy to me. I feel like this is actually just the default alignment plan: that we muddle through, and by some combination of effort, luck, restraint, and policy, which just aren't independent, we merely see 30% annual growth or something. Or things just go really right by accident. Now, that's not exactly a plan, right? But that is kind of where my optimism comes from, or my optimistic probability mass.</p><p>Yeah, and another important thing in the paper to reflect on is punting stuff to future AI. I don't think I have great original thoughts on this. I'll just point listeners to Joe Carlsmith, who, first of all, I'm a big fan of, even though he's wrong about stuff (not everything). Second of all, I think he is specifically talking about, at least in this recent post that he wrote, the more fine-grained dynamics of how we can best use AI to do AI alignment research. And that's only one sort of grand challenge in MacAskill and Moorhouse's schema. But I will just point out, there's a lot of good thinking there.</p><p>One important takeaway is that you want to pause or slow down maximally in the range where AI is not yet maximally dangerous, or ideally dangerous at all, but can significantly help with AI alignment, space governance, epistemics, et cetera. And that is something that I feel like I could have gotten to from first principles, but didn't. So hats on to me instead of hats off.</p><p>Yeah, also, in terms of the degree to which maybe this is going to manifest in my real life: there's a chance I'll back out, but I'm not planning to, I'm just going to sleep on it one more night. I'm planning to basically make a bet that some single calendar year by 2030 will have a growth rate of above 5%, and I'm planning to do this at something like a $1,000 magnitude and two-to-one odds against me. So I lose $1,000 if I lose and gain $500 if I win, and I'll just throw that out there.</p>
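<p>(For what it's worth, here is the quick arithmetic behind those two bits: what different annual growth multipliers compound to over a decade, and what two-to-one odds imply about the break-even probability. The rates and stakes are just the ones mentioned above.)</p><pre><code class="language-python"># Decade-long compounding for the annual growth multipliers mentioned above.
for label, annual_multiplier in [("10x per year", 10.0), ("doubling per year", 2.0), ("30% growth per year", 1.3)]:
    print(label, round(annual_multiplier ** 10, 1), "x over ten years")
# 10x/year -> 10,000,000,000x; doubling -> 1024x; 30% growth -> ~13.8x (so "roughly 10x" was about right)

# Implied break-even probability when risking $1,000 to win $500 (two-to-one odds against me):
stake, payout = 1000, 500
breakeven_probability = stake / (stake + payout)
print(round(breakeven_probability, 3))  # ~0.667: the bet only makes sense if the >5% year is at least ~2/3 likely
</code></pre>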
<p>Now, a 5% threshold doesn't directly address the question of whether we get faster economic growth than we've had since the 1970s, and it doesn't directly address the question of whether we get four percent versus seven percent; that isn't directly in the paper.</p><p>But yeah, I guess I'm reiterating here. At a very broad qualitative level, a lot of things have to go very specifically wrong, quote-unquote wrong, or right depending on your opinion, in order for there not to be a world-historical speedup in technological growth, and presumably also economic growth. I will also note that some of these random pauses are me being self-conscious as I'm getting passed by joggers. And I would like to think that they think I'm a cool business analyst guy talking in these jargony terms. They probably don't.</p><p>Now, totally separate from the substance, there's the question of whether I should do another thing like this. This was actually a quite long paper.</p><p>So the recording is two hours, and I only did one very small section over again, literally 30 seconds. But I basically read it in multiple parts, because I'm quite ADHD: something like 7 or 17 different sections that, when you stitch them together, give you two hours over the course of two days. It was a truly non-trivial amount of time, energy, and focus for those two days, but I actually kind of like the idea of forcing myself to read this way, because God knows that sometimes I won't otherwise; I'm a little bit too Twitter-brained. If you read something out loud, to some extent you have to absorb some of it, at least for me. You can't just skim totally over it. So maybe I'll do this more, you know. Hopefully the next one won't be as long; I think this is 55 or 56 pages, with some diagrams or whatever.</p><p>So call it 50 pages of text, so you're looking at 25 pages an hour, or 30 minutes... yeah, it wouldn't be super difficult for me to do that in, say, three parts, I think. So maybe I should find more important PDFs to read; there are really quite a lot of PDFs, and Google Docs for that matter. I don't know. It's not a very enlightened point, but I don't know, does anybody read this stuff?</p><p>Will MacAskill is kind of famous, and this is a quite important paper, and it's quite well written. But what about all the stuff that's just random PDFs out there, written by some... not a random guy, but a random smart guy, which is maybe one-tenth the importance, which is still extraordinarily important or whatever? Does anybody read those PDFs?</p><p>In the same park earlier today, I saw a wild turkey. It was kind of from afar, but it was kind of cool, and I'm still hoping I see one again; I haven't yet. Another thing to spitball about, which I'm just reminded of, is only tangentially related to the paper.</p><p>It's Leopold Aschenbrenner's (if that's how you pronounce it) big paper, I forget the name, that was making the rounds on Twitter, about how we need to ensure that the democratic bloc of countries has a commanding lead over other, particularly authoritarian, countries, particularly China, when it comes to developing artificial intelligence. And this is weird because there's a tension: okay, if you have a far enough lead, then you can take more safety measures, but also that sort of implies speeding up, or whatever.
I would just go on the record, for people listening to this: I basically agree with Leopold. That's optimistic. And maybe tangential is putting it too weakly; this is certainly in the realm of MacAskill's grand challenges.</p><p>In some sense, and maybe this is misplaced optimism, in terms of governance and stuff I'm not super confident, I'm just spitballing, but I do have a little bit of a sense of MacAskill maybe working with a bit too much of a toy model, where what pops out is "you have an arbitrary country," whereas what you actually have is the US and China and the really idiosyncratic dynamics that can pop out of that. And so some of my optimistic probability mass is basically on the idea that, either by choice or by accident, we get the Leopold thing, something approximating the ideal outcome where the US just continues to be dominant. That doesn't totally obviate the governance grand challenge stuff, especially when you get into the long term, but it at least diminishes or partially obviates some of it. It's true that if there are arbitrary abstract countries, there are these competitive dynamics and this and that.</p><p>I don't know. I do think part of the reason why maybe there's a vibes separation, not quite a disagreement, is that MacAskill's thinking is not bound into this quite intuitive near-term frame of "this is the world in 2025." Yeah, this is truly a peak ADHD ramble, but it's just quite interesting what dynamics pop out of the perhaps neglected intersection between quote-unquote longtermism and actually very near-term timelines. You have the genuine intellectual foundations of longtermism from Toby Ord et al. five years ago or something, but Will MacAskill is still actually, quote-unquote, doing longtermism in this paper, and I think that's good.</p><p>I think just on the merits longtermism is more or less correct, but maybe it actually binds you a little bit too much into a frame of these abstract models, because you're so radically uncertain about what the future is going to be like. Whereas here we're talking about developments that are potentially within the next two years, quite plausibly the next five. So maybe abstract models just aren't actually the best way to go about doing intellectual work here.</p><p>I guess this is beating a dead horse to some extent, but man, it is striking how far what I think are the merits, and what objectively are the contents, of this paper sit from the current Overton window. The Overton window is quite a bit wider than it was in 2022, for example, but I genuinely don't know what percentage of members of Congress would basically nod along to this as opposed to objecting or just thinking it's crazy or something. That's quite a bit of uncertainty that I have.</p><p>Yeah, maybe this has come across, but another thing, just in terms of vibes: I think the paper is just quite high quality. This is from a one-read impression, so I'm not going to be able to justify this in a totally legit way, but I was listening to this audiobook the other day, it was four stars or whatever, and I was like, no, this is slop. It's written by a smart person or whatever, but it's slop. I did not get the sense that this paper is slop. Needless to say, if anybody had to be convinced that Will MacAskill is not writing slop, you heard it here first.</p><p>Here's a really unrelated point.
So I'm not a very good public speaker, as you can tell. I think I'm maybe a B or B plus at best, but generally somewhere between a B minus and a C minus in terms of speaking quality. And I've actually been involved with this other, sort of, screwworm-related project.</p><p>And I think the answer is just: yeah, for a lot of kind of random niche stuff, no one else is actually going to do it, so even if you kind of suck you can make a contribution anyway, especially insofar as it involves punting some other effort off to the actually competent people. So that's a vibe. I'm pretty sure nobody's listening at this point, but if you are and you want to recommend papers for me to read, you can do that. No promises, but just FYI.</p><p>Yeah, another thing, again related in the same sense that Aschenbrenner's piece is related: I really don't think Donald Trump should be president during the intelligence explosion. This is not quite a hot take in my circle, but I think it needs to be said, because there's an ask you might have which is an indefinite AI pause, and then there's an ask you might have which is delaying timelines by two to three years, and these are substantially different. In two to three years, you can do a lot of non-frontier work. The products that Anthropic and OpenAI and Google DeepMind would make if they paused frontier AI scaling and put off an intelligence explosion would be, maybe not transformative, but quite impressive for society. It wouldn't feel like an AI pause.</p><p>And also I think that most of the smart people, yes, even the tech bros, are basically not fans of Donald Trump. So here's my ask, as a rando on Pigeon Hour, the most famous podcast in the world: we should not have recursively self-improving AI, or explosive technological growth driven by AI, before the 2028 presidential election. Maybe I'll leave it there.</p><p>This is my last hot take. I don't know if this is tractable; maybe it's not. But I haven't seen it said this explicitly, except by me on Twitter. Usually it's a general notion of quote-unquote AI pause. And, you know, a lot of things are just dynamics or randomness, there are competitive pressures, but the federal government of the United States is quite important and the president has quite a bit of power. I would be surprised if less than one percentage point worth of probability mass just comes down to the quality of the US federal government. Maybe it's five percentage points or something. If you take EV seriously, that's really fucking important.
So yeah, I don't know Sam, Dario, Ilya, friends if you're listening to this, please wait until 2028 the outro music for pigeon hour doo-doo-doo-doo-doo</p>]]></content:encoded></item><item><title><![CDATA[#12: Arthur Wright and I discuss whether the Givewell suite of charities are really the best way of helping humans alive today, the value of reading old books, rock climbing, and more]]></title><description><![CDATA[Please follow Arthur on Twitter and check out his blog!]]></description><link>https://www.aaronbergman.net/p/12-arthur-wright-and-i-discuss-whether</link><guid isPermaLink="false">https://www.aaronbergman.net/p/12-arthur-wright-and-i-discuss-whether</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Thu, 11 Apr 2024 04:36:05 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/143470423/2918a72e460c747111afe0aaaa727c08.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong> Please follow Arthur on <a href="https://twitter.com/arthur_wright_">Twitter</a> and check out his <a href="https://thestereoscopicimage.substack.com/">blog</a>! </strong></p><div class="pullquote"><p>Thank you for just summarizing my point in like 1% of the words</p><p>-Aaron, to Arthur, circa 34:45</p></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1fGZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1fGZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1fGZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp" width="502" height="502" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:502,&quot;bytes&quot;:356180,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!1fGZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 424w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 848w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 1272w, https://substackcdn.com/image/fetch/$s_!1fGZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96aaa5e5-ab09-4626-9353-864d0a9e3f2e_1024x1024.webp 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Real photo of Arthur and I</figcaption></figure></div><h3>Summary</h3><p>(Written by <a href="https://claude.ai/">Claude</a> Opus aka Clong)</p><ul><li><p>Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse".</p></li><li><p>They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time.</p></li><li><p>Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online.</p></li><li><p>Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy.</p></li><li><p>They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. 
Aaron is skeptical, while Arthur defends GiveWell's recommendations.</p></li><li><p>Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor.</p></li><li><p>Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing.</p></li><li><p>They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists.</p></li><li><p>Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing.</p></li><li><p>Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it.</p></li><li><p>They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over.</p></li></ul><h1>Transcript</h1><p><em>Very imperfect - apologies for the errors.</em></p><p>AARON</p><p>Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want?</p><p>ARTHUR</p><p>Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled.</p><p>AARON</p><p>I'm in the process.</p><p>ARTHUR</p><p>We should solve gender discourse.</p><p>AARON</p><p>Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. 
It's, like, not like there's, like, better, more interesting things.</p><p>ARTHUR</p><p>I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree. Like, one's online Persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general.</p><p>AARON</p><p>Yeah. What do you think?</p><p>ARTHUR</p><p>Yeah, I don't really.</p><p>AARON</p><p>Well, your. Your name is on Twitter, and you're like.</p><p>ARTHUR</p><p>Yeah. You're.</p><p>AARON</p><p>You're not, like, an alt.</p><p>ARTHUR</p><p>Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse.</p><p>AARON</p><p>Wait, why?</p><p>ARTHUR</p><p>Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumni of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places. And one of the things that he said for, like, people who are thinking about careers in, like, policy and I think sort of applies to, like, more ea sort of stuff as well, was like, that. Developing a platform on Twitter was, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, fuck it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. 
And I don't actually do a lot of that, to be honest.</p><p>AARON</p><p>I've, like, thought about gotta ramp it.</p><p>ARTHUR</p><p>Off doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter.</p><p>AARON</p><p>Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online Persona or whatever, I mean, obviously it's gonna be different, but, like, in. Only in the same way that, like, um, you know, like, like, you're, like, in some sense, like, different people to the people. Like, you're, you know, really close friend and, like, a not so close friend, but, like, sort of a different of degree. Like, difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if they, like, they're not at all intent or, like, I didn't even, you know, plan, like, plan on them being.</p><p>ARTHUR</p><p>Yeah, I think it's, like, sort of similar to the, like, decoupling conversation we had the other night, which is, like, I totally am sympathetic to your sense of, like, oh, it's just nice to just, like, be a person and not have to, like, as consciously think about, like, dividing yourself into these different buckets of, like, what sort of, you know, Persona you want to, like, present to different audiences. So, like, I think there's something to that. And I, in some ways, I have a similar intuition when it comes to, like, I try to set a relatively strong principle for myself to not lie. And, like, it's not that I'm, like, a Kantian, but I just, like, I think, like, just as a practical matter, the problem with lying for me at least, is then, like, you have to keep these sorts of two books, sets of books in your head of, like, oh, what did I tell to whom? And, like, how do I now say new things that are, like, consistent with the information that I've already, like, you know, falsely or not, like, divulge to this person. Right. And I think, in a similar way, there's something appealing about just, like, being fully honest and open, like, on the Internet with your real name and that you don't have to, like, I don't know, jump through all of those hoops in your mind before, like, deciding whether or not to say something. 
But at the same time, to the, like, conversation we had the other night about decoupling and stuff, I think. I think there's, like, it is an unfortunate reality that, like, you will be judged and, like, perhaps unfairly on the things that you say on the Internet, like, in a professional sphere. And, like, I don't know, at some level, you can't just, like, wish your.</p><p>AARON</p><p>Way out of it. Yeah, no, no, that's, like, a. Okay, so I. This is actually, like, I, like, totally agree. I think, like, one thing is just. I, like, really, honestly, like, don't know how, like, empirically, like, what is the actual relationship between saying, like, say, you get, like, I don't know, like, ten, like, quote tweets, people who are, like, misunderstanding your point, like, and, like, I don't know, say, like, 30 comments or whatever replies or whatever. And, like, it is, like, not at all clear to me, like, what that corresponds to in the real world. And, like, I think I may have erred too much in the direction of, like, oh, that's, like, no evidence at all because, sorry, we should really talk about non twitter stuff. But, like, this is actually, like, on my mind. And this is something like, I didn't. Like, I thought about tweeting, like, but didn't, which is that, like, oh, yeah, I had, like, the building underground tweet, which, like, I think that's a good example. Like, anybody who's, like, reasonably charitable can, like, tell that. It's, like, it was, like, I don't know, it was, like, a reasonable question. Like, and we've mentioned this before, like, this is, like, I don't want to just, like, yeah, it's, like, sort of been beaten to death or whatever, but, like, I feel like maybe, like, I came away from that thinking that, like, okay, if people are mad at you on the Internet, that is, like, no evidence whatsoever about, like, how it, like, how a reasonable person will judge you and or, like, what will happen, like. Like, in real life and, like, yeah, maybe I, like, went too hard in that direction or something.</p><p>ARTHUR</p><p>Yeah, yeah, yeah. I mean, to, like, agree, maybe move on to non twitter, but, yeah, like, to close this loop. I think that, like, I agree that any. Any one instance of individuals being mad at you online, like, it's very easy to, like, over react or extrapolate from that. That, like, oh, people in the real world are gonna, like, judge me negatively because of this. Right. I think in any isolated instance, that's true, but I just. I also get the sense that in the broad world of sort of, like, think tanks and nonprofits and things where, like, your position would. Especially if you're, like, in a research position, like, to some degree, like, representative the opinions of an employer. Right. That there's a kind of, like, character judgment that goes into someone's overall Persona. So, like, the fact that you have, like, one controversial tweet where people are saying, like, oh, you think, you know, like, poor people don't deserve natural light or something like that. Like, that. Any one instance, like, might not matter very much, but if you, like, strongly cultivate a Persona online of, like, being a bit of a loose cannon and, like, oh, I'm gonna say, like, whatever controversial thing comes to mind, I can see any organization that has, like, communicating to a broader audience is, like, an important part of their mission. 
Like, being hesitant to, like, take a chance on a young person who, like, is prone to, you know, getting into those kinds of controversies on, like, a regular basis.</p><p>AARON</p><p>Yeah, yeah. And actually, like, maybe this is, like, sort of meta, but, like, I think that is totally correct. Like, you should 100% up. Like, up if you're an employer listening to this. And, like, I don't know. Who knows? There's, like, a non zero chance that, like, I don't know, maybe, like, not low. Lower than, like, 0.1% or something like that. That will be the case. And, like, no, it is totally true that, like, my. I have, like, subpart. Wait, you better. I'm gonna, like. No, no quoting out of context here, please. Or, like, not know, like, clipping the quote out of, like, so it becomes out of context. But, like, it is, like, I have definitely poor judgment about how things will be, um, like, taken, uh, by the people of the Internet, people of the world. I, like, legitimately, I think I'm below not. Probably not first percentile, probably below 50th percentile, at least among broadly western educated, liberal ish people. And so, yes, it's hiring me for head of communication. I mean, there's a reason I'm not. I wouldn't say that I'm not applying to be a communications person anywhere, but I don't know, it's not crazy that I would. If you want to. Yeah, you should. Like, it is, like, correct information. Like, I'm not trying to trick anybody here. Well, okay. Is there anything else that's on your mind? Like, I don't know, salient or, like.</p><p>ARTHUR</p><p>That'S what I should have done before I came over here, but nothing, like, on the top of my head, but I feel like there's, I don't know, there's all kinds of, well, like, there's something you've, like, wandered into.</p><p>AARON</p><p>Yeah, like, I think you have bad cause prioritization takes.</p><p>ARTHUR</p><p>Oh, right.</p><p>AARON</p><p>Like, maybe we shouldn't just, like, have the AI versus, like, I don't know, it's like my, like, the AI is a big deal. Tribe is like, yeah, not only winning, but, like, pretty obviously and for obvious reasons. So, like, I don't know, I don't, like, really need to have, like, the, you know, the 70th, like, debate ever about, like, oh, it's like, AI.</p><p>ARTHUR</p><p>Wait, sorry. You mean they're winning for obvious reasons insofar as, like, the victories are apparent or that you think, like, the actual arguments leading to them.</p><p>AARON</p><p>Oh, yeah.</p><p>ARTHUR</p><p>Becoming more prominent are obvious.</p><p>AARON</p><p>Yeah. Setting aside the. In the abstract, what, non, like, empirical or empirical, but, like, only using data, like, pre chat, GPT release, like, setting aside that whole cluster of arguments, there is the fact that, like, I don't know, it seems very, very apparent to, like, the chattering classes of people who care about this stuff that, like, AI is, like, both the overt has expanded tremendously, like, also moved. It seems like the AI is as big of a deal, like, as the Internet is, like, the lower bound and, like, much, much more important than that. Is, like, the upper bound. And so, like, and, like, that's a. That's like, a significant shift, I guess. One thing is just, like, there have a lot been a lot of conversations, like, in EA spaces, and, like, I'm just, like, thinking about the AdK podcast. 
I feel like I've heard it multiple times, but maybe I'm making that up where it's like, one person is, like, makes the case for, like, I don't know, taking AI or, like, thinking that, like, yeah, AI broadly is, like, the most important altruistic area, right? And then the other person says no, and then they do the same, like, five discussion points back and forth.</p><p>ARTHUR</p><p>Yeah.</p><p>AARON</p><p>So, like, I don't think we should do that.</p><p>ARTHUR</p><p>Sure.</p><p>AARON</p><p>That was a really long winded way of saying that.</p><p>ARTHUR</p><p>I see. So, so, but, but you're, you're trying to emphasize that, like, the kind of, like, reality of the pace of, you know, improvement in artificial intelligence and the fact that it is going to be, like, an incredibly important technology. Like you said, the lower bound being, like, as important as the Internet, I think, of the upper bound is like, I don't know, something like electricity provided we're not gonna, you know, all die or something. Or maybe more transformational extra. But. But I guess we're trying to say is that, like, the Overton window has, like, shifted so much that, like, everyone kind of agrees this is a really transformative technology. And, like, you know, therefore.</p><p>AARON</p><p>Well, I guess I. Sorry, wait, I interrupted. I'm an interrupting person. I'm sorry.</p><p>ARTHUR</p><p>That's good. It's a natural part of conversation, so I don't feel bad.</p><p>AARON</p><p>Continue.</p><p>ARTHUR</p><p>Oh, oh, no, no. I just. I like, like, yeah, maybe we don't need to rehash the, like, whether or not AI is important, but I'm curious, like, what you think. Yeah, like, what do you think is sort of wrong about my.</p><p>AARON</p><p>No, I was just about to ask that, like, when I interrupted you. I actually don't fully know what you believe. I know we, like, go into different, like, vibe camps or, like, there's another. There's like, a proper noun, vibe camp. This is like a lowercase letters.</p><p>ARTHUR</p><p>Vibe count, vibe sphere.</p><p>AARON</p><p>Yeah, yeah. And, like. But, like, I don't know, do you have, like, a thesis?</p><p>ARTHUR</p><p>Yeah, see, okay. I don't. I think in many ways, like, maybe just to lay out, like, I think my lack of a thesis is probably the biggest distinction between the two of us when it comes to these kind of cause prioritization things.</p><p>AARON</p><p>Right.</p><p>ARTHUR</p><p>Because, like, I think I, like, over the years have, as I became more interested in the effect of altruism, have sort of changed my views in many different directions and iterations in terms of, like, my basic moral philosophy and, like, what I think the role of EA is. And I think over time, like, I've generally just become, like, more kind of pluralistic. I know it's a bit of a hand wavy word, but, like, I think I have sufficient uncertainty about, like, my basic moral framework towards the world that, like, this is just a guess. Maybe we'll discover this through conversation. But I think, like, perhaps the biggest disagreement between you and I that, like, leads us in different directions is just that I am, like, much more willing to do some kind of, like, worldview diversification sort of move where like, just, you know, going from, like, a set of assumptions, you know, something like hedonistic utilitarianism and, like, viewing ea as, like, how can I as an individual make the greatest, like, marginal contribution to, like, maximizing this global hedonistic welfare function, right. 
I think I hold that entire project with a little bit of distance and a little bit of uncertainty. So even granting the assumptions of that project, which spit out "okay, AI and animals are the only things we should care about," I'm willing to grant that that might follow from those premises. But I hold the premise itself, about what the EA project is, or what I, as an individual interested in these ideas, should do with my career, at sufficient distance that I'm willing to entertain other sets of assumptions about what is valuable. And therefore I'm just far less certain about committing to any particular cause area. Before we get deeper into the weeds, just to put a sharper point on the more meta point I'm trying to make: there was this 80,000 Hours episode from a long time ago about solutions to the Fermi paradox. I know this sounds unrelated, but I'm going to try.</p><p>AARON</p><p>No, no, that's cool.</p><p>ARTHUR</p><p>One of the things it talked about was that the Fermi paradox isn't actually a paradox once you recognize that when you have uncertainty in a bunch of different point estimates, those uncertainties, combined, should yield a probability distribution rather than just the headline point estimate of "we should expect there to be so many aliens." When you put uncertainty on each assumption, on every parameter of the equation, the conclusion changes. To apply that to my moral philosophy: the reason I'm so waffly on cause prioritization, and open to many different things, is that I start from the grounding principle of the EA project, that we should try to do good in the world, and do it in ways that are effective and actually have the consequences we want. I'm very bought into that broad assumption. But I have sufficient uncertainty at every link in the chain of reasoning, what does "good" mean, what is my role as an individual, what is my comparative advantage, what does it mean to be cause neutral, that when you get to the end of that chain and arrive at some answer about what you ought to do, I hold it very lightly, and I have very low credence in any one conclusion from that chain of reasoning.</p>
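<p><em>[An illustrative aside on the point-estimate-versus-distribution idea Arthur references here: the sketch below is not from the episode, and every factor name, range, and constant in it is invented purely for illustration. It only shows how multiplying together one best guess per factor can yield a large headline number of expected civilizations, while propagating the uncertainty in each factor still leaves a substantial probability of none.]</em></p><pre><code>import numpy as np

# A rough Monte Carlo sketch: a single "headline" number obtained by
# multiplying one best-guess value per factor, versus the full distribution
# you get by propagating the uncertainty in every factor.
# All factor names, ranges, and the star count are hypothetical.
rng = np.random.default_rng(0)
n = 200_000

stars = 1e11                               # order-of-magnitude star count
f_planets = 10 ** rng.uniform(-2, 0, n)    # fraction of stars with suitable planets
f_life = 10 ** rng.uniform(-15, 0, n)      # chance life emerges (very wide uncertainty)
f_detect = 10 ** rng.uniform(-4, 0, n)     # chance life becomes a detectable civilization

civs = stars * f_planets * f_life * f_detect                         # full distribution
point = stars * f_planets.mean() * f_life.mean() * f_detect.mean()  # single headline number

print(f"headline point estimate: {point:.2e} civilizations")
print(f"median of the distribution: {np.median(civs):.2e}")
print(f"P(fewer than one other civilization): {(civs &lt; 1).mean():.0%}")
</code></pre>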
<p>AARON</p><p>Yeah, yeah. Sorry I cut you off too much. But I think there's a very paradigmatic conversation, "oh, should we be pluralistic?", and it's happened seven bazillion times, so I want to claim something different. I guess there are two separate claims. One, and you can tell me if you disagree, since I was sort of assuming you would: even if you purely restrict your philosophizing, your ethics, to humans who are alive right now, and basically hold the worldview that implies malaria nets, I think it's very unlikely that the actual best-guess intervention right now is the set of standard EA interventions. And another very related but distinct claim, and I really don't know what this would look like, is that it seems very plausible to me that even under that worldview, not a longtermist worldview at all, doing something related to artificial intelligence checks out, under the most normal-person, restricted version of EA. I don't know.</p><p>ARTHUR</p><p>I think I'm inclined to agree with the first part and disagree with the second part, and that's why I want you to spell this out for me. I'm actually sympathetic to the idea that under near-termist assumptions, restricting the class of individuals we want to help to human beings who are alive today, there's a relatively low likelihood that the standard list of GiveWell interventions is the best. Right.</p><p>AARON</p><p>Well, not.</p><p>ARTHUR</p><p>Or.</p><p>AARON</p><p>Yeah, sorry, my claim was stronger than what one would get from interpreting that super literally. Not only ex post: I don't think they're even our real ex ante best guesses; an actual effort would yield other best guesses. It's not just "this is a minority but still the plurality of the distribution," if that makes sense.</p><p>ARTHUR</p><p>Okay, then I do think we disagree, because where I was going to go from that is, and I'm not as informed on these arguments as I should be, so I'll fully admit a huge degree of epistemic limitation here, my response was going to be that the case for AI would be even weaker than those GiveWell-style interventions. Even though they're unlikely to be the best ex post, in some future where we have more information about other ways of helping people, they're still better than the existing alternatives.</p><p>AARON</p><p>Yeah, I'm gonna.</p><p>ARTHUR</p><p>So what is the near-termist case for AI?</p><p>AARON</p><p>Yeah, sorry, I promise I'll answer that. But just to clarify: I'm more confident that the GiveWell charities are not the ex ante best guess than I am that one of the actual best ways to help only humans alive right now would involve AI. 
So these are related but distinct claims, and the AI one I'm much less confident in, partly just because it's so much more specific.</p><p>ARTHUR</p><p>Actually, let's do both parts, because I realized earlier that what I meant was not ex ante but ex post: with a much larger amount of information about other potential interventions, we might determine that something is better than GiveWell. But in the world we actually live in, with the information we currently have, the evidence for impact is sufficiently strong, under the kinds of assumptions we're saying we're operating under, that competing interventions have a very high bar to clear. Maybe they're worthwhile in a hits-based-giving kind of way, in that it's worth trying a bunch of them to see if one outperforms GiveWell, but for the time being, whatever the GiveWell spreadsheet says at any given time seems pretty compelling in terms of higher-certainty ways to help individuals.</p><p>AARON</p><p>Yeah. So.</p><p>ARTHUR</p><p>So one, I want to hear why you disagree with that, and two, I want to hear your case for AI.</p><p>AARON</p><p>Yeah, okay. I think I'm responding to this; you can cut me off or whatever. Fundamentally, I want to decouple, haha, this is something I like doing, who we care about from how aesthetically normal we're going to be. Even if you're still in the realm of doing analytic philosophy about the issue, and you just say, okay, we're going to restrict who we care about to humans alive right now, there's still a lot of weird shit that can come out of that. My claim, and maybe this is somewhat of a hot take, is that what's actually happening is there's a quote-unquote worldview that vibe-associates with, and to some extent explicitly endorses, only trying to help humans who are alive right now, or maybe who will be alive in the near future. But this is always paired with a default, often non-explicit assumption that we have to do things that look normal. To some extent you can formalize this by saying you care about certainty of impact. I think there are not-even-that-technical, but mildly technical, reasons why, if you're still doing analytical philosophy about the issue, that doesn't check out: for example, you don't actually know which specific person you're going to help. I'm a big fan of the recent Rethink Priorities report here. So, I've spent five minutes rambling and doing a terrible job of explaining what I mean. The idea I'm getting at is that there's a natural tendency to think of risk aversion in an EA, or just generally altruistic, context as basically meaning: we understand a chain of causality. 
And there are professional economists doing RCTs, and they know what works and what doesn't. And there is something there that's valuable: doing good is hard, so careful analysis really is important. But there's a tendency to ignore the fact that these GiveWell-style charities, and the GiveWell-style analysis used to identify the top charities, as far as I know almost exclusively look at just one of the most salient or intended, basically first-order, effects of an intervention. It's just not true that we know what the impact of giving $3,000 to the Against Malaria Foundation is. Maybe there are compelling reasons to think it basically all washes out, and that reducing deaths from malaria and sickness is the single core effect, but as far as I know that's mostly taken as a given, and I don't think it's justified. So I don't think this really checks out as a type of risk aversion that stands up to scrutiny. And I found a tweet whose wording I like: the way to formalize this conception is just to have narrow confidence intervals on the magnitude of one first-order effect of an intervention. That's an awfully specific type of risk aversion; it's not what people generally mean in other walks of life. And then there's the Rethink Priorities report I mentioned, written by Laura Duffy, a previous podcast guest. She lists three different types of risk aversion that she uses in some Rethink Priorities analyses. Number one, avoiding the worst. This is the s-risk style or modality of thinking: the thing we really, really want to avoid is the worst states of the world, which to many people means a lot of suffering. Number two, difference-making risk aversion: we want to avoid not doing anything, or causing harm. The focus here is not on the state of the world that results from some action but on your causal effect. And number three, ambiguity aversion: we don't like uncertain probabilities. For what it's worth, I think the GiveWell-style leaning can be understood as an attempt to address two and three, difference-making risk aversion and ambiguity aversion. But for reasons I can't immediately verbalize, basically the reasons I said before, I don't think it works: there's really no comprehensive analysis there. It might seem like there is, and we do have decent point estimates and uncertainty ranges, but for one effect only. And as far as I can tell that's not the core desire; I don't think anyone thinks we should intrinsically value small confidence intervals, you know what I mean? This stands in contrast, as I said before, to s-risk research organizations, which are also, in a very real sense, doing risk aversion; in fact, they use the term "risk" a lot. 
So it makes sense. The GiveWell vibe and the s-risk research organization vibe are very different, but in a real sense they're both at least attempting to address some kind of risk aversion, though the kinds are very different. And honestly, I think the s-risk one is the most legitimate. Okay, so that was sort of a lemma. Now, the case for AI in near-termism, affecting only humans. Here's one example; this is not the actual full claim I have, but one example of a type of intervention: seeing what institutions need to be in place for a worldwide UBI, and actually trying to get that policy, setting up the infrastructure to get it in place. Even now, even if you think longtermism is false, don't care about animals, don't care about future people at all, it seems like there's work we can do now, in the realm of writing PDFs and affecting political institutions, not building institutions but affecting them, via both domestic and international politics, and sorry, I kind of lost the grammatical structure of that sentence, that still seems plausibly better than the GiveWell interventions if you actually do an earnest best-guess point estimate. And the reason I think this is plausible is that all the people willing to do that kind of analysis aren't restricting themselves to only helping humans in the near future. So there's a weird missing middle of sorts, which, depending on what the counterfactual is, may be bad or good. But I'm claiming it exists, and there's at least a plausible gap that hasn't really been ruled out in any explicit sense.</p><p>ARTHUR</p><p>Okay, great. No, that's all very useful. So, setting x-risky things aside, because I think that's a useful way to get at the crux of our disagreement: it's funny, on the one hand I'm very sympathetic to your claim that the kinds of things GiveWell-style interventions, and the RCTs coming out of development economics, are interested in aren't implied by the basic near-termist EA philosophical presuppositions.</p><p>AARON</p><p>Thank you for summarizing my point in 1% of the words.</p><p>ARTHUR</p><p>Yeah. So I actually strongly agree with that. It's precisely why I'm more open to things that aren't GiveWell-style interventions, and why I'm very sympathetic to the economic growth side of the perennial growth-versus-RCTs debate. That's maybe an interesting side discussion. But to stay on the AI point, putting existential risk aside, I want to make the standard economist argument for AI optimism and against what you were just saying. To me, it's plausible enough to take seriously that increasing AI progress and the dissemination of AI technologies decreases the returns to labor in the global economy. 
I think it's plausible enough that we should care about that and not dismiss it out of hand. But, and I want to be careful here, I think it's potentially more likely that almost exactly the opposite is true. If I look at the big-picture history of global economic growth, the classic hockey-stick graph where world GDP per capita is totally flat until about 200 years ago: this is a super interesting, rich topic that I've been learning a lot more about over the last few years, and the devil is very much in the details, but I think the classic postcard-length summary is basically correct. Why did that happen? It happened because the productivity of individual workers increased dramatically, by orders of magnitude, due to technological progress. Whether, or to what degree, that technological progress was political and institutional technologies versus directly labor-augmenting technologies is way too deep to get into here, and I don't have well-informed takes on it. But nonetheless, I think the basic lump-of-labor fallacy is strongly at play in these worries that AI is going to displace workers. If you look at previous technologies, the Luddites destroying the power looms, or not exactly power looms, some better kind of loom or whatever, the worry people have always had, and again, I'm giving the standard economist soapbox everyone has heard before, but I just don't see why AI is categorically different from these other technological advancements. At a glance, for me as an individual trying to build a research career and get a job, my ability to access GPT-4 and Claude has, I think, dramatically increased my marginal productivity, and would presumably also increase my wage in the long term, because I can just do a lot more in the same amount of time. So it seems to me just as, if not more, likely that the better AI technology gets, the more people will be able to produce in economic value with the same amount of labor, which will increase economic growth and increase their wages, rather than somehow displacing them from the labor market. And there's something EAs should maybe be paying more attention to, but perhaps they're too concerned with existential risk: there's already some interesting experimental economics research looking at this question, having people who work in standard operations and middle-management office jobs use AI in their work. One of the interesting findings seems to be a sort of equalizing effect: for the most productive employees at a given task, productivity is only very modestly improved by access to large language models, but the least productive employees see very large improvements in their productivity from these technologies. 
So, in my opinion, it seems plausible that better access to these sorts of technologies would, if anything, make your standard employee in the global economy not only more productive but also benefit from this leveling-of-the-playing-field effect, where people who don't currently have the capacity to produce a lot of value are brought up to the same level as those who do.</p><p>AARON</p><p>Yeah. I think these are all reasonable points. I think I have three points in response. On the object level, I don't think I have anything to add to this discussion. The one thing I would point out is that there seems to be, as far as I can tell, no disagreement that in principle you can imagine a system that is better than all humans at all tasks and that does not have the effect you're talking about. In principle: better, and cheaper, than all humans at all tasks.</p><p>ARTHUR</p><p>Right. With no human input required.</p><p>AARON</p><p>Yeah, in principle.</p><p>ARTHUR</p><p>Okay.</p><p>AARON</p><p>I don't think that's a radical claim. So then, moving away from the object level: the normal, default thing now would be to have a debate where you make some more points in the direction you just laid out and I make more points in mine. But the thing I want to point out is that this discussion is absent from near-termist EA, because all the people who are taking these ideas seriously have already moved on to other areas. And there was one more point, but.</p><p>ARTHUR</p><p>Just to jump in on that for a second: I totally take your point that maybe a lot more people should be thinking about this. But to me, whether that's possible in principle, and I think you're obviously going to agree with me on this, matters only to the degree that we're living in a world where those systems are on the horizon or are going to exist in the near future. To what degree that in-principle possibility represents the actual path we're on is the real crux of the issue.</p><p>AARON</p><p>Oh, yeah. Okay. I actually wasn't sure. Yes, because we're living in a more.</p><p>ARTHUR</p><p>Standard story, where this just increases the marginal product of labor, because everyone gets more productive when they learn how to use these technologies. That doesn't mean it won't be disruptive, because there's a lot of interesting IO research on how, when computer technologies were introduced in a lot of workplaces, it was very difficult to train older employees on the new systems, so the only real solution for many firms was essentially to fire their old employees and hire people who actually knew how to use the technology. But presuming we get past the disruptive transition, where the older workers get screwed or have to adapt, and the younger people who grew up using AI technologies enter the workforce, it seems very possible to me that those people are just going to be the most productive generation of workers ever.</p><p>AARON</p><p>Yeah. Again, I think, sorry. 
I guess I was about to make the same point I made before, but let me be a little clearer about what I mean by "this debate isn't happening." Maybe I'm wrong, but I'm reasonably confident that GiveWell isn't doing the thing the longtermist team at Open Philanthropy is doing, where they try to answer this question because it's really fucking important and really informs what the best near-term interventions are. And maybe that's fine; I don't want to pick on GiveWell, because maybe it's in their charter, or in some sense everybody just assumes they're going to do the econ-RCT stuff. But there'd be value.</p><p>ARTHUR</p><p>That would be my defense of GiveWell: comparative advantage is real, and there's value in having an organization that says "we're just not going to worry about those questions." They don't even do animal stuff, and I think that's a good decision. I care a lot about animal stuff, but I'm glad there's an organization that has defined its mission narrowly enough to say: we're going to do the best sort of econ-development, RCT-style analysis, and if you're into this project, we're going to tell you the best way to use your money.</p><p>AARON</p><p>Yeah. I don't know; in the abstract I'm pretty 50-50 on whether that's good. If anybody's deciding whether to give a dollar to GiveWell or not, within EA, I'd say don't give a dollar to GiveWell; I don't think they should get any EA funding, and I can defend that. So maybe it's fine for that particular organization, but insofar as we're willing to treat near-termist EA as an institution, my stronger claim is that it's not happening anywhere.</p><p>ARTHUR</p><p>Yeah, well, you're right at one level; I more or less agree that it should be happening within that institution. But at least to me, for your broad sketch of this near-termist case for AI, the place where that discussion and debate is really happening is labor economics, you know what I mean? It's not that there aren't people interested in this. I just think the people who are interested in it, and I don't think this is a coincidence, are the people who don't think the paperclip bots are going to kill us all, right? They're the people with a much more normie set of priors about what this technology is going to look like.</p><p>AARON</p><p>Yeah, I do.</p><p>ARTHUR</p><p>And they're the ones having the debate about what the impact of AI is going to be on the workforce, on inequality, on global economic growth. But in a funny way, it seems like what you're advocating for is actually a much more normie research project, 
where you just have a bunch of economists being funded by Open Philanthropy or something to answer these questions.</p><p>AARON</p><p>I think the answer is, to some extent, yeah, actually. I don't know; I actually just don't know. I don't follow econ as a discipline closely enough, so I believe you. There's clearly some empirical research; I've seen examples of papers thrown around. I just don't know how much research is dedicated to the question. I guess the question is: is anybody trying, with reasonable parameters, to estimate how the labor share, the returns to labor, will change in the next five or ten years, and not just with GPT-3, not assuming GPT-4 is going to be the status quo?</p><p>ARTHUR</p><p>Yeah, to my knowledge, honestly, I have no idea. All the stuff I'm thinking of is from, shout-out to Erik Brynjolfsson, everyone should follow him on Twitter, but there are some economists in the IO and labor econ space doing much more micro-level work on existing LLM technologies: what are their effects on the knowledge-work workforce, for lack of a better word. But I grant that that's a much narrower and more tangible project than trying to build some kind of macroeconomic model that makes certain assumptions about the future of artificial intelligence.</p><p>AARON</p><p>Yeah, and maybe someone is doing that.</p><p>ARTHUR</p><p>And, I mean, I.</p><p>AARON</p><p>No, yeah, I'm interested; people should comment or DM me on Twitter. I think we're just in agreement here. I have some pretty standard concerns about academia and its incentives, which have also been rehashed everywhere. But it's an empirical question that we both agree is an empirical question we don't know the answer to. I would be pretty surprised if labor economics had a lot to say about fundamentally non-empirical questions. I guess the claim I'm making is that the class of research where you look at how ChatGPT affects the productivity of workers in 2023 or 2024 is not zero evidence, but it's really not very strong evidence about what the share of labor income will be in five to ten years. And I think it's relevant that the people actually building this technology think it's going to be, at least as far as I can tell, broadly the consensus opinion among people working on frontier AI systems is that it's going to be more transformative, or substantially more transformative, than the Internet, probably beyond electricity as well. 
And if you premise on that assumption, I would be very surprised if there's much academic labor economics that really has a lot to say about what the world will look like in five to ten years.</p><p>ARTHUR</p><p>Yeah. I was just going to say that I'm sufficiently skeptical that people working on these technologies directly are well positioned to make those kinds of forecasts. I'm not saying the labor econ people are better positioned than they are to make those predictions, but.</p><p>AARON</p><p>No, that's totally fair.</p><p>ARTHUR</p><p>Also, some of this is coming from a prior that I definitely should completely revise given the recent post-GPT-3 explosion in these technologies. But if you just look at the history, and I'm not saying I endorse this, the history of, not AI optimism per se, but enthusiasm about the pace of progress, historically it had a many-decade track record of promising a lot and failing, which was only very recently falsified by GPT-3 and.</p><p>AARON</p><p>I think this is basically just wrong. It's a common misconception, and not on you; it's totally reasonable, it's what I would have thought, it seems like the kind of thing that happened. I'm pretty sure there have been some actual retrospective analyses. It's not that there are zero instances, but it's not true that the same level of AI enthusiasm has persisted forever. And now we're getting some results that maybe justify it. Hmm, what am I, sorry. The actual thing I'm trying to say is that I basically think this is just not true.</p><p>ARTHUR</p><p>Meaning the consensus was that.</p><p>AARON</p><p>People didn't think AGI was ten years away in 1970 or 1990.</p><p>ARTHUR</p><p>Well, some people did. Come on.</p><p>AARON</p><p>Yeah. So I can't.</p><p>ARTHUR</p><p>You mean the consensus of the field as a whole was not that.</p><p>AARON</p><p>So, this is the problem with arguing from a cached opinion. My cached opinion is that I've seen good, convincing evidence that the common-sense take, "oh, there's always been AI hype," is at least misleading and more or less wrong. I don't actually remember the object-level evidence for this, so I can try to dig it up.</p><p>ARTHUR</p><p>That's fine. And to be clear, I also don't have a strongly informed take on the "AI hype is overblown" thing. 
But putting that aside, the other thing I wonder is: even if the individuals who work on these technologies correctly have predictions about the future that are pretty far outside the window, or that people aren't taking sufficiently seriously in terms of the pace of progress, and maybe this is some lingering credentialist intuition of mine, I'm skeptical that those same people are in a good position to make the kinds of economic forecasts about what the impacts of those technologies will be.</p><p>AARON</p><p>Yeah, I basically agree. I guess the weak claim I want to make is that you don't have to put that high a probability on "maybe these people are broadly right"; it doesn't have to be above 50%. The original claim I was making is that standard labor economics, as a subfield, probably isn't doing a ton to answer the core questions that would inform my original question of whether, say, UBI is a better use of money than the Against Malaria Foundation. Maybe I'll be pleasantly surprised. But we could also, I don't know, do you want to move on?</p><p>ARTHUR</p><p>Yeah, sure.</p><p>AARON</p><p>Sorry, I didn't mean to cut you off. You can have the last word.</p><p>ARTHUR</p><p>No, no, I don't think I need the last word. It's funny how this has progressed, in that I don't completely disagree, but I also don't feel like my mind has been changed in a big way, if that makes sense. Maybe we're in one of those weird situations where we broadly agree on the actual object-level questions, but there's some slight difference in personality or disposition, or some background beliefs we haven't fully fleshed out, such that, at least in how we present and emphasize our positions, we end up in different places even though we're not actually that far apart.</p><p>AARON</p><p>No, something I was thinking about bringing up earlier was basically this point. My version of your defense of the GiveWell class is my defense of donating to, say, the Humane League, and maybe it doesn't check out. Sorry, I just did a bunch of episodic jumps in my head, and I always forget that people can't see my thought patterns on the podcast. It seems pretty possible that a formal analysis would say that, even under a suffering-focused worldview, donating to s-risk prevention organizations beats, or at least matches, the Humane League or the Animal Welfare Fund, which we recently raised funds for.</p><p>ARTHUR</p><p>Do you want to talk. So, there are many things we could talk about. 
One potential thing that comes to mind is that I have a not very well worked out, but lingering, skepticism of longtermism in general, which I think doesn't actually come from any philosophical objection to longtermist premises. So I think the.</p><p>AARON</p><p>Yeah, I think.</p><p>ARTHUR</p><p>I don't know what you want to talk about.</p><p>AARON</p><p>I mean, if you really want to, if you're really enthusiastic about it.</p><p>ARTHUR</p><p>I'm not.</p><p>AARON</p><p>Honestly, I feel like this has been beaten to death, on 80K, in Cold Takes, everywhere. Sorry, I feel like we're not going to add anything; I'm not going to add anything, either.</p><p>ARTHUR</p><p>Okay. I don't feel like I would.</p><p>AARON</p><p>We can come back to it. Another thing is, this doesn't have to be super intellectual. We could talk about climbing; we've talked about having a whole episode on climbing, so maybe we should do that. Or anything, really; it doesn't have to be these super.</p><p>ARTHUR</p><p>Totally. No, that was something that came to mind too, and I was going to do the longtermism thing, but it would be fun to just talk about something much less related to any of these topics. In some ways, given both of our limitations in terms of contributing to these object-level EA things, and that's not a criticism of either of us, just a matter of our knowledge and expertise, it could be fun to talk about something more personal.</p><p>AARON</p><p>Yeah. I don't know what is interesting to you.</p><p>ARTHUR</p><p>I'm trying to think whether we should talk about some other area of disagreement, because, and this is random and maybe we'll cut it from the podcast, it's a weird thing to say, but I feel like Laura Duffy is one of the few people I've met where we just have a weird amount of the same opinions on many different topics that wouldn't seem to correlate with one another whatsoever. And it's funny: I remember ages ago listening to y'all's discussion on this podcast and just thinking, God, Laura is so right. What the fuck does Aaron believe about all these things?</p><p>AARON</p><p>And I'm willing to relitigate some of it, if it's something that hasn't been beaten to death elsewhere.</p><p>ARTHUR</p><p>So I think we should either talk about something more personal, like rock climbing, or we should, like, now I.</p><p>AARON</p><p>Have to defend myself. You can't just say that. Was it the "old philosophy is bad" thing?</p><p>ARTHUR</p><p>Old philosophy.</p><p>AARON</p><p>Old philosophy is fucking terrible. And I'm guessing you don't like this take.</p><p>ARTHUR</p><p>I do not. Well, I find this take entertaining, and, this will sound like a huge backhanded compliment, but I actually think it's super useful to hear something you think is so deeply wrong, but which you take for granted because you surround yourself with people who would also think it's deeply wrong. So it's actually very useful and interesting for me to understand why one would hold this opinion.</p><p>AARON</p><p>Also, I guess I should clarify. 
This is the kind of thing that's sort of part of my vibe disposition, and it's also not the most high-stakes thing in the world, talking in the abstract. So when I said it's fucking terrible, I was being hyperbolic.</p><p>ARTHUR</p><p>Oh, I know.</p><p>AARON</p><p>No, but in all, not in all seriousness, but without using any figurative language at all: there are definitely things I'm much more confident about than this. I wouldn't say I'm at, like, 90 on it.</p><p>ARTHUR</p><p>Me too. I'm pretty open to being wrong on this; I don't think I have a deep personal vested stake.</p><p>AARON</p><p>Yeah, no, I don't think.</p><p>ARTHUR</p><p>It's just, okay, so this is something, or actually maybe an interesting topic that we're by no means experts on but could be interesting to get into: I think a lot of the debates about the role of higher education in general are somewhat hard to separate from these questions about old texts. Because I'm of two minds on this. On the one hand, I buy a lot of the criticisms of the higher-ed model, and the general story, which is not novel to me in any way, shape, or form, that we have this weird system. The American university system comes in a lot of ways from the British university system, which, if you look at it historically, was sort of a finishing school for elites. You have this elite class of society, and you go to these institutions because you have a certain social position, and there you learn how to be an educated, erudite member of the elite class of your society. There's no pretense that it's any kind of practical, skills-based education that will prepare you for the labor force; you're just learning how to be a good member of the upper class, essentially. And that model was very successful and, I think, in many ways actually important to the development of lots of institutions and ideas that matter today, so it's worth taking seriously, I suppose. But I think there's some truth to the question: why the hell is this now how we certify and credential people, in a more meritocratic world with more social mobility? Why this liberal arts model where you go to learn how to be an erudite person who knows about the world and the great texts of the Western tradition or whatever? I think there's something to the "this whole thing is weird" reaction, and if what college is now supposed to do is train one to be a skilled worker in the labor force, we ought to seriously rethink it. 
But at the same time, I do have some emotional attachment to the more flowery ideals.</p><p>AARON</p><p>Of the liberal, oh, sorry.</p><p>ARTHUR</p><p>You're good. So I think that could be interesting to talk about, because in some ways it's very related to the old-texts thing, in that some of my attachment to "we should read these great works from the past" is very difficult to cash out in terms of some EA-style or otherwise practical, concrete account of the value one gains from it. More of it is my lingering intuition that there's something inherently enriching about engaging with this tradition of knowledge.</p><p>AARON</p><p>Oh, I think we probably agree on this, at least on everything you just said, maybe more than you suspect. Relative to what you might guess from my other views, I'm sort of anti-anti-flowery-liberal-arts, sympathetic to a bunch of ideas in that space. But just to be concrete about the philosophy thing: my claim is that if you personally, take it as a given, want to read and think about philosophy, and what you're looking for is the object-level content, the ideas, then on the merits it's fucking terrible.</p><p>ARTHUR</p><p>This we totally disagree about.</p><p>AARON</p><p>Okay.</p><p>ARTHUR</p><p>Yeah, okay.</p><p>AARON</p><p>Yeah. But we could double-team convincing the audience that, actually, the flowery liberal arts stuff is probably good.</p><p>ARTHUR</p><p>I think it might be more interesting to talk about what we disagree about, then.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Which is, like.</p><p>AARON</p><p>I mean, yeah. Do you have an example? It seems plausible we just disagree on the merits about philosophy, and it's being hashed out in this weird way about old philosophy in particular. I don't know. Aristotle seems wrong about everything, and his reasoning doesn't seem very good. And, I mean, this is a hobby horse of mine, I don't need to beat a dead horse too much, but Kantian ethics is just, not incoherent, it's just so obviously wrong. Again, this is a hobby horse of mine on Twitter, but I'm truly willing to say that I have not identified a single reasonable person who will bite the bullets on what Kantianism actually implies. People say they're Kantians, 
say they have Kantian intuitions, but Kantian ethics itself is just absolutely batshit insane. Right, sorry.</p><p>ARTHUR</p><p>Okay. Well, I'm not going to defend Kantian ethics, but a few things. One, since Kant is maybe an interesting example: the fact that Kantian ethics is weird and incoherent and no one is actually willing to bite those bullets, it's kind of funny to me that you chose this example, because Kant has this incredibly large and rich body of work that goes far beyond his.</p><p>AARON</p><p>Yeah, totally, his ethical view.</p><p>ARTHUR</p><p>Right. And reading the Critique of Pure Reason, or the lesser-known Critique of the Power of Judgment, his stuff on aesthetics, his metaphysics: as utterly frustrating as it is to read Kant, what I'm trying to say is that Kant is a bad choice for "old philosophy is bad because, at the object level, they're wrong," when a lot of Kant's work is not obviously wrong at all.</p><p>AARON</p><p>Okay, yes. So part of this is that I'm really not familiar with it. One thing is, I'm totally not claiming that; I just don't have an opinion on most things Kant wrote. What I am willing to claim, though, is that even if he's right, or broadly right, well, I guess there are a couple of claims here. One is the more object-level one, which is sort of a restatement: I think broadly there's a correlation between time and takes getting more correct. But also, the whole idea of reading foundational texts just doesn't make a lot of sense if what you care about is the content of the philosophy.</p><p>ARTHUR</p><p>Got it. Okay, good. So yes, I think this is where we really disagree. And on the one hand, and I think this actually still ends up being related to the flowery justifications for a liberal arts education, I totally agree when it comes to philosophical pedagogy for intro-level courses. The vast majority of people taking an intro to philosophy course are not going to be professional philosophers; they're probably not even going to take a second philosophy course. So I'm sympathetic to the idea that secondary sources that more succinctly and accessibly summarize the ideas of old philosophers may be better suited than this sort of weird valorization of the primary text. I think that's probably true. Nonetheless, if you buy into these liberal arts ideals to any degree, I think an inextricable part of that project is engagement with a historical tradition. Well, maybe let me separate two things. Claim one is that insofar as you're interested in ideas at all, you should be interested in the history of ideas. 
And if you're interested in the history of ideas, then reading primary sources actually does matter, because it's very hard to understand how ideas are situated within a long-running historical conversation, and how they are products of the cultural, social, and historical context they came out of, without reading them in the original or in their original translation. So that's claim one, the history point. But even at the object level, if you want to go past a Philosophy 101 introduction to certain historical ideas: I think Laura tried to make this point on the podcast, and this is where I was like, yes, Laura, yes.</p><p>AARON</p><p>Get him.</p><p>ARTHUR</p><p>I think she was trying to make a point about translation, and she mentioned wisdom, or eudaimonia, one of these concepts from Aristotelian ethics. She made the point, which I want to reiterate, that when you abstract these ideas from the context of their original text, it's much harder to understand particular concepts that aren't easily amenable to translation, either into English or into contemporary vernacular ideas. And maybe part of why this seems more obvious, or more true, to me is that a lot of what I studied as an undergraduate was the Buddhist tradition. Especially when you step outside of ideas that directly influenced later developments in Western philosophical thought, and you go to a very different culture and tradition, it's obvious to me that if you want to understand Buddhism and Buddhist ideas in English translation, you have to leave certain terms untranslated, because some things have a very complicated semantic range that does not neatly map onto English concepts. When that's the case, it's much harder to understand those terms and concepts from an accessible secondary source, and much easier to get a sense of a term's true semantic range when you read it in its original context. I can give an example.</p><p>AARON</p><p>If you want. But, a couple of things. One is that there's a self-reinforcing, sorry. I think some of the justification you just laid out is true, but it's sort of begging the question. And, as an aside, I think that's a terrible phrase; actually, this is sort of relevant, I think we should change "begging the question" to just mean what it sounds like.</p><p>ARTHUR</p><p>Yeah, because it's a useful concept, but it's so easy to misunderstand.</p><p>AARON</p><p>Yeah, okay. So what I actually mean is that it's somewhat circular. It should be.</p><p>ARTHUR</p><p>Called just "assuming the conclusion" or something like that.</p><p>AARON</p><p>Yeah. So if the state of the world is such that you'll be expected to know what Aristotle's ideas are in a meaningful, content-sensitive way, then leaving things untranslated is, I guess what I mean is that there's a. 
Once you've accepted the premise that there is virtue in reading old stuff, then yes, sometimes the original words are indeed good at helping you understand the old stuff, or whatever structure is there. Separately, I'm not against philosophers coining terms. I'm pretty happy for a contemporary analytic philosopher to say, not even out of respect but out of tradition, "okay, I'm going to co-opt, or recycle, the term eudaimonia, and give as good a definition as I possibly can." It won't even be a fully explicit definition; it'll also involve pointing at some examples and saying "that's a central example of eudaimonia," et cetera. "But I don't want to repeat this entire chapter where I try to define the word every time I use it, so I'm going to use it as a variable." I'm not against analytic philosophy coining terms and using them that way. And if we want to do a nod to history by using old terms from another language, a little Latin here and there, that's cool. Those are my points. Wait, what was your first point?</p><p>ARTHUR</p><p>Well, I was just saying, okay, maybe to offer a more full-throated defense, since you were saying it's to some degree begging the question: I think that's true. But to me, part of the "ideas for their own sake" inclination is that history ought to matter, right? I take your point that it's question-begging to say that if history matters, then you have to read the historical texts to understand the history.</p><p>AARON</p><p>No, no, but there's a separate point here.</p><p>ARTHUR</p><p>Okay.</p><p>AARON</p><p>Sorry, brief interruption. I think this is substantively different, and I think I just disagree. Sorry, I think we're both giving lip service to the term "flowery liberal arts," but my conception of it is seemingly different from yours, and yours is more history-based. Sorry, keep going.</p><p>ARTHUR</p><p>Yeah. Okay, well, then maybe I'll defend that a little bit. And this is one of the ways I feel not vibe-associated with EA, even though I love EA on the merits of its ideas: I think a lot of people with the kind of EA, SBF-ish, "every book should be a six-paragraph blog post" orientation are making a really, really mistaken move, which is that they cloak themselves in this rhetoric of epistemic humility that's all about confidence intervals and credences and how sure you are about your beliefs. 
But I think, like, people miss the, like, meta-level point, which is that that entire way of thinking is, like, totally alien to how, like, humans have thought about many topics for, like, most of human history. And that it would be really weird if, like, we, you know, like, yeah, weird in, like, the social psychology sense. Like, we Western, educated, industrialized, democratic, like, citizens who, like, went through the, like, you know, like, hoops of, like, getting super educated or whatever, like, in the 2020s, like, happened across the, like, correct, you know, like, like, higher level framework for, like, investigating questions of, like, value and morality and ethics, right? And, like, I think people should just, like, be a lot more humble about, like, whether that, like, project itself is justified. And I think because of that, that leads me to this, like, history matters not just for its own sake, but, like, for doing philosophy in the present, right? Which is, like, if you think that, like, we ought to have a lot of humility about, like, whether our current, like, frames of thinking, like, are correct. And, like, I think doing the kind of, like, almost sort of, like, postmodern Foucault deconstruction thing of, like, just pointing out that those are, like, really historically contingent and weird and, like, have a certain, like, lineage to, like, how we, like, arrived at the set of assumptions. Like, I think that, like, people should respect or, like, should take seriously that, like, move, right? And, like, that move being this, like, exposing that a lot of what we think is, like, natural and correct and just, like, the best way to, like, come about our, like, views of the world, right, is, like, a product of our, like, time and era and circumstance and culture. Right? And I think, like, if you take that really seriously, then you become much more interested in history, and then you become much more interested in what is the actual genesis of these ideas that we take for granted. I know I've been rambling for a little bit, but I think to put a sharper point on this with a particular issue, which I think we don't have to talk about at the object level, though maybe it would be interesting, would be the whole debate over free will. A lot of the discussion about free will takes for granted this idea that we intuitively view ourselves as these solitary, unified selves that can act in the world and that can make these free agential decisions. And that if our scientific image of ourselves is as naturally evolved creatures that are the product of evolution and are, like, governed by the same, like, laws of physics and things as everyone else, like, then there's this, like, fundamental metaphysical problem where it, like, seems like we are these, like, determined beings, but then, like, in our personal experience, it, like, feels like we're these, like, choosing beings that are, like, somehow, like, outside of the causal nexus of, like, scientific processes, right? And I think, like. 
Like, I use this as an example because there's some interesting work in, like, sort of cross-cultural philosophy that just suggests that this is just, like, actually the product of, like, a Christian intellectual heritage that, like, is trying to solve a certain theodicy problem, which is, like, how can we be, like, viewed as, like, moral agents that can make decisions that are worthy of salvation and damnation when, like, also, you know, when, like, when, you know, we, like, live in a world, like, governed by, like, a benevolent God, right? So sort of, like, reconciling, like, God's sort of benevolence and omnipotence with the idea that we as humans are, like, free agents who can make good and bad decisions, right? And that, like, if God was truly, like, just, right. Like, why wouldn't he just, like, make us make all of the right decisions? Like, how do you. Hang on. I'm not presenting the problem very well, but, like, there's this philosopher, Jay Garfield, who does a lot of work in, like, Buddhism, that basically argues that this, like, whole framing of the free will debate comes from a Christian cultural intellectual legacy. If we actually want to understand the problem of free will, I think it's useful to know where this whole framing of the problem came from and how other cultures and traditions have thought about this historically in ways that are very different. The Buddhist tradition being the example in this case, which just doesn't seem to think there is a there there, so to speak. Like, there's not really some kind of problem to be solved because they start from a different set of metaphysical assumptions. Like, I think that is interesting. And, like, that should matter when we're trying to answer questions about, like, what should we value? And, like, how do we be good in all these things?</p><p>AARON</p><p>Okay, you're really taxing my working memory because I. Wait, sorry, sorry. That was supposed to be a light-hearted thing. Yeah, a lot there. Um, I think the most fundamental point is, um, I disagree. So I reject that primary texts are generally a good way of doing the good thing, which you just argued for, which is learning about intellectual history. I think intellectual history is great, and that a terrible way to do it, at least if you're a finite being, like all beings are, or, like, with finite time and mental energy, et cetera, is reading primary texts. Like, I'm not gonna say, like, no, there's, like, never. Yeah, my claim is like, yeah, for intellectual history, you're better off reading a book about the. About the texts, where they are, like, the object of study, or, I guess, the ideas are the object of study. And, like, actually. Like, um. Yeah, it's just, like, not very efficient or whatever to, like, to, like, read primary texts. And now, like, if you're for whatever reason, like, really interested in, like, one specific sub-area of something, like. And, yeah, yeah. So if you're interested in, like, a very specific slice of intellectual history, history of ideas, then I. Then, yes, I agree. That is a good reason to read primary texts. That is very uncommon.</p><p>ARTHUR</p><p>Sure. Yeah. So I want to respond to that, but I realized, hey, maybe you can edit this in or something. I slipped out of conversation-in-a-podcast mode. I was like, fuck. I did a terrible job of explaining the whole theodicy problem, origin of free will thing, which is really that the theodicy problem is like the problem of evil, right? It's like, how do we have a benevolent. 
How do we reconcile, like, God's omnipotence and benevolence with the problem of evil? Yeah, it's like the problem of evil, right? But the specific free will move is then, like, postulating this libertarian, not in the political sense, postulating this libertarian free will, which comes from Christian theologians who are saying, like, oh, this is how we get around the problem: humans have been endowed by this omnipotent God to be able to make choices. And why God would want us to make choices if he's benevolent is obviously still a deep problem, and I don't really understand how you would possibly arrive at that place. But something about we don't fully understand God's plans. It's all very mysterious. Yada yada. But anyways, going back to what you just said, I think again, I'm inclined to agree at some level that, for many people in many circumstances, reading the primary text is, like, a very inefficient way to learn about these ideas. I totally agree, but I think in some ways, the farther back you go in history, in a weird way, I think the less that that is true. And the reason why I think that is the language translation thing that I mentioned earlier, which is, just to bring up Buddhism again, because I think it's useful here. Like, in Buddhism, one of the central ideas is this idea of dukkha, which often gets translated as suffering. So you hear the first noble truth translated as life is suffering, or is pervaded by suffering. But dukkha doesn't really mean suffering because, surprise, surprise, ancient Pali words aren't easily translatable into English. Like, maybe something like unsatisfactoriness would be, like, a better translation, but, like, it's just, like, much more, like, subtle and multifaceted than that, right? And, like, there isn't a single English word that it maps well onto. And I understand that, like, some, like, really good, like, scholar who's, like, a good writer could maybe, like, give a few paragraphs about, like, how, like, what dukkha really means. But to me, like, I felt like over, like, years of study, like, I got a much better sense of that through just, like, reading a lot of, like, original text and, like, seeing where.</p><p>AARON</p><p>The word came up in English or in.</p><p>ARTHUR</p><p>Yeah, in English translation. But I'm saying, like, with, like, technical terms, like, yeah, left untranslated. Right? And, like, it was only through, like, understanding that context that I was able to feel like I could, like, wield the word appropriately, right? So, like, and I think this to me comes back to my more sort of, like, Wittgenstein-like view of, like, what language is, right. Is that I don't think, like, every term, like, has some kind of, like, correct set of, like, necessary and sufficient conditions. Like, I think my account of language would be, like, much more, excuse me, sore throat, would just be much more, like, social practice and, like, usage based, right? Which is that, like, meaning is more, like, defined by its, like, use in actual language. And I think, like, once you get there, right? Like, if you grant that kind of premise, then it's, like, hard to see, like, how, given how much, like, language has changed and evolved over the years, like, a skillful secondary source interpreter is going to be able to, like, clearly just, like, lay out, like, this is what eudaimonia means in three paragraphs. 
And it's going to be, like, much more necessary, if you have this philosophical view about, like, what language is, to, like, actually understand how that term is, like, used in its original context in order to, like, grasp its meaning. Yeah, I just. I think there's just, like, limits to, like, how much these secondary sources can really tell you.</p><p>AARON</p><p>I somewhat agree. The thing that I agree about is that, yes, it actually seems like, if your goal is to get the most genuine, unbiased, and I'm actually thinking unbiased in an econometric sense, where you want to get a true point estimate of the real meaning at the very heart of the Wittgensteinian, multidimensional linguistic use, like, subspace. Like, you know, a hyperspace or whatever. If you want to, like, accurately, like, target that, um, then, like, yes, it's probably best to, like, read the original texts. Um, and, like, unfortunately, like, probably, um. Uh. So, like, one thing is, like, there's the question of, like, does there exist a, um. Like, what would a really good attempt at not defining it in three paragraphs, but maybe defining it in half of the length of, like, the total corpus that you just said, like, in the terms of analytic philosophy, like, how close could you get to, like, an unbiased. Yeah, like, point estimate of, like, what dukkha means or whatever. And I actually don't really know the answer. I like. I like. Yeah, suspect that, like. Yes, that, like, you don't.</p><p>ARTHUR</p><p>I like you putting this in econometric terms, because I actually think it, like, it gives me an easy way to conceptualize how we can converge on agreement, which is, I think we both agree that to get the least biased estimate possible, you have to have the largest sample size, which in this case is just going to be reading the entire text. But there's going to be some optimal trade off between how biased is your estimate of what a particular term means and how much time does it take to invest in actually reading those primary texts? So, yeah, maybe there's some optimum that's a mix of extended excerpts and secondary exegesis or whatever that would find the correct balance between those two for any given person at a certain level of interest and time commitment. So, yeah, it's not a hard and.</p><p>AARON</p><p>Fast rule, yeah. No, mint tea is perfect. It's, like, lukewarm. Do you want it hot or cold or lukewarm?</p><p>ARTHUR</p><p>Lukewarm is fine.</p><p>AARON</p><p>Okay.</p><p>ARTHUR</p><p>I've already, like, way overdone it on, like, caffeine for the day, so I was, like, so tempted by the tea, but, like, mint tea. That's perfect.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>I think a funny thing about, like, um, or I don't know, also on the, like, vibes.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Like, how EA might. I think this is loosely related to, like, the other, like, vibe-based way in which I feel, like, different than a lot of other, like, more, like, rat-type EA people.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Is I feel like there's, like, a certain, like, like, philistine. How the fuck do you say that word? Philistinism. Philistine. Vibe.</p><p>AARON</p><p>Don't actually know what that means.</p><p>ARTHUR</p><p>A philistine is, like, one who, like, eschews, like, arts and aesthetics and, like, doesn't care about sort of, like, you know, like, like, literature, music, cultural stuff like that. And, like, I feel like that's another just sort of, like, vibe. Vibe-aligned way in which I'm like, what the hell are these people doing? 
And I just feel like there's a lot of, like, there's a lot of, like, philistinism in, like, kind of, like, rat culture, which is just, like, sort.</p><p>AARON</p><p>Of, I feel like that's more a stereotype, I think. I think there's, there is like a, so, like. Sorry, sorry. Keep going.</p><p>ARTHUR</p><p>No, it's good. Because this all, like, I don't know, this all, like, sounds, like, so, like, incredibly pretentious to, like, say, like this, but I do think there is this grain of truth, just, like, when people, like, really, really strongly believe that, like, I don't know, the world is going to end in ten years or that, like, the goal of their life ought to be, like, to maximize their personal contribution to, like, global utility. I think that, like, naturally leads you to, like, care a lot less about these, like, other.</p><p>AARON</p><p>Yeah, maybe. I mean, I think that's like a stereotype. And it's like, what? Like, you would think if, like, the only, like, thing you knew is, like, oh, there's a crazy group of people who call themselves the rationalists. I think, like, by and large, it's like, no, it's like, on average people, like, yeah, it's like people are mostly pretty normal. On the other hand, it's true. So this is actually something I've been thinking about about myself, which is that, like, I am not a total philistine. I, like, like, in fact, I think, like, music is, like, a pretty important. It's an important part of my life. Not in that it's, like, like, not in an interesting way, but in the sense that, like, oh, I like music and I, like, listen to it a fair amount. I think Spotify is great, even though I have, like, very basic tastes. But, like, besides that, I think I have, like, really weak aesthetic intuitions. And, like, I was thinking about, like, whether that means that I'm, like, slightly autistic or whatever.</p><p>ARTHUR</p><p>Like, yeah, this just reminded me. I don't, like, we can cut this out or whatever, but this fucking. This Liam Bright tweet was so awesome. He was like, I just thought of this because I think one of the things that I find funny about, like, rationalists, like, calling themselves rationalists, is that it, like, it's kind of.</p><p>AARON</p><p>Kind of pretentious or something.</p><p>ARTHUR</p><p>Yeah, I mean, it's pretentious, but also the, like, philosopher in me is, like, so annoyed that they've, like, co-opted this term that, like, has a very specific meaning in, like, epistemology. You know what I mean? But this tweet is so funny. It's like: the argument for rationalism is that its practitioners have easily the best track record of success as scientists, whereas the argument for empiricism is that, the problem is, when you think about it, rationalism is kind of wacky bullshit.</p><p>AARON</p><p>Like, I need to be, like, way more philosophical to, like, truly appreciate.</p><p>ARTHUR</p><p>Yeah, yeah. Well, like, the joke is just that, like, if you, like, look, historically, a lot of the, like, you know, like, like, early modern, sort of, like, Renaissance people, like, Descartes or whatever, it was, like, the rationalists who, like, did all of the, like, good empirical, like, science.</p><p>AARON</p><p>I didn't know.</p><p>ARTHUR</p><p>And then. And then, like, the empiricists, like, most of their critiques of rationalism are, like, well, from first principles. Like, rationalism doesn't really make a lot of sense, you know? So, like, I don't know. It was just a good joke. 
Unlike the irony of that, honestly, I.</p><p>AARON</p><p>Think you just know, like, way more about intellectual history. For better. I mean, obviously there's, like, you know, like, for better. I mean, the worst part was gonna, which I was, like, sort of half planning to say was, like, there was, like, probably some opportunity costs there, but, like, maybe not. Some people just, like, know way more than me about, like, everything. It's just like, yeah, totally possible. Were you a philosophy. You were a philosophy major, right? Okay, so that's my one out, is I wasn't. I was a philosophy minor. For a big difference. I know.</p><p>ARTHUR</p><p>Nice.</p><p>AARON</p><p>In fact, actually, that's sort of endogenous. Ha, ha. Because one of the reasons that I chose not to do a major was because I didn't want to do, like, the old philosophy, like, you had to.</p><p>ARTHUR</p><p>Do, like, some history requirement or.</p><p>AARON</p><p>Yeah, like, yeah, a couple things that were like that. Yeah, I think, like, maybe like, two or three, like, classes. Like, yeah, basically old bullshit. Um. Sorry. I mean, to reopen that. Yeah. Also, I'm like, you don't have, like, I'm happy to go, actually. I'm not happy to go for literally.</p><p>ARTHUR</p><p>As long as you want.</p><p>AARON</p><p>Like, honk out sooner than I guess. But, like, yeah, just, like, saying, like, you don't have to. You don't have to, like, no, I'm not a slave here.</p><p>ARTHUR</p><p>I'm having a good time. Okay.</p><p>AARON</p><p>Yeah. Okay. I feel like I actually don't really want to talk about climbing, in part because I. I'm just. Yeah. Honestly, like. Like, so I, like, made this clear to you. Maybe I'll cut this part out. But, like, yeah, I'm not, like, doing amazing, and there's, like, a lot of baggage there. Not that I don't think anything would, like, it's really just, like, if there's nothing, like, deep or, like, oh, no. I think I'm gonna, like, handicap myself for life by, like, having a conversation about climbing. I just, like, honestly, like, don't really feel like it right now. That's okay. Yeah.</p><p>ARTHUR</p><p>We can call it. Or if there's some, like, I don't know, some, like, very light hearted topic we could conclude on.</p><p>AARON</p><p>Um. I don't know. I'm trying. Like, I'm sort of drawing a blank. There's, like, too many. Oh, on the opposite of. There's, like, too many options. Like, one. Like, one thing that, like, I thought that ran through my head was, like, uh. Like, I don't know. Like, what's your, like, life story? Slash, like, is there anything in your. We've been talking about ideas. Like, not. Not ourselves.</p><p>ARTHUR</p><p>Like, right.</p><p>AARON</p><p>Yeah. I don't know. Is there an interesting. Is you want to tell your whole life story or maybe a sub part or just, like, a single episode that's like. Yeah, I don't know. We can also just. Yeah, this is just, like, one of, like, I don't know. N thoughts.</p><p>ARTHUR</p><p>Yeah. Yeah, I think, like, how do I even summarize? It's always funny when people. This is so mundane. We can just cut all of this out if it's really boring and mundane, but, like, it one of the things I've noticed about this when like I talk to people and people are you from like I feel like I like moved when I was in middle school. So like I never know exactly how to answer the question of like where I'm from because it's not like I'm some like like you know like diplomats kid who like moved around my whole life. 
It's more just like I, like, grew up in, like, the, like, San Francisco Bay Area, like, until I was twelve and then, like, moved to Denver. So it's like. But yeah, I feel like I don't.</p><p>AARON</p><p>Have an answer for you because I'm really boring. I basically lived in DC forever.</p><p>ARTHUR</p><p>Yeah, but you know, I think Denver was a really nice place to grow up. I feel like this inevitably gets me down the climbing path.</p><p>AARON</p><p>No, that's fine. There's no, like, there's no, like, oh, big, like, there's no, like, trauma or whatever.</p><p>ARTHUR</p><p>But I feel very lucky that, like, my, like, parents are very, like, outdoorsy people. Like, my mom was, like, a professional ski racer for some time. Like, she grew up, like, skiing a lot, and then I had an uncle who, like, both of my parents grew up in the Denver area, so, like, and have a bunch of siblings. So, like, almost all of our extended family, like, lives there, and one of them, my dad's brother, like, their parents weren't, like, really that outdoorsy or, like, rock climbers or anything, but he, like, loved rock climbing, and when he was 18 he, like, went to CU Boulder and, like, started climbing and stuff. So, like, yeah, I feel like for my, like, life story, the big thing for me was just, like, having a sort of, like, mentor who had, like, been, like, rock climbing for a long time, like, well before the age of, like, gyms and stuff, and was just, like, very experienced in the outdoors. And, like, you know, after, like, a few times of going to the climbing gym when I was twelve, like, I went, like, to Eldorado Canyon and he, like, took me up some, like, big, like, multi-pitch, like, routes and stuff, and I feel like just, like, I don't know, having that experience in the outdoors was, like, so formative for me. Because then when I, like, got a certain level of, like, experience and competence and was able to go do that on my own, like, I had a friend who, who his dad was a rock climber, and he kind of had a somewhat similar background of, like, doing it when he was little, and then we were both at the level where we could, like, safely, like, sport climb on our own, and I got, like, quickdraws for, like, my birthday, and then, like, you know, just, like, started going rock climbing with him. And I feel like a lot of my, like, in high school. I don't know. In high school, like, I don't know if this comes off as, like, surprising or unsurprising, but I was, like, actually a fucking terrible student for, like.</p><p>AARON</p><p>No, that's surprising. Yeah, that's surprising because you're. I don't know. Because for the obvious reason that you're, like, obviously smart. And, like, usually being smart correlates with not being a terrible student.</p><p>ARTHUR</p><p>Right. Yeah, but, yeah, I was like. I feel like. I don't know, to be honest. Like, kind of the classic, like, underachieving, like, smart kid for, like, most of my life. Like, I just, like, something about the. Just, like, discipline and, like, having to learn all this stuff that, like, wasn't that interesting to me. I feel like I had the very stereotypical story of just kind of, like, blowing off school a little bit, and, like, I kind of got my shit together, like, halfway through high school because I, like, you know, very much had the expectation, like, I'm gonna go to college, and, like, realized, oh, shit, if I want to get into, like, a good college, I have to, like, do well in high school. So I kind of, like, got my shit together and got, like, good grades for two years. But, like, for most of my life, I did not get very good grades. And, like. 
But I think, like, throughout all of that, like, the much more important, like, thing that I was, like, focused a lot more on was literally just, like, going climbing with my friend Eric.</p><p>AARON</p><p>Yeah, no, fuck it. Let's talk. Let's talk about it. Although, like. Yeah, no, no, I was like, I don't know. Maybe we'll have to supplement or whatever with, like, another, like, half episode or something. Yeah, no, no, because I remember on the hike, we, like. Yeah, I don't know. Have we mentioned this? That, like. Yeah, I think we've met in person. Yeah, that is true. Hike. Which is fun, which we should do. Not the same thing, but, like, another hike again. Yeah, if you are so inclined. Agreed. Oh, yeah. So I was just like, I don't get. That's interesting. I, like. I feel like I'm bad. I don't want to say. Oh, wow. Thanks for sharing your answer. Let me share my. But, like, one. Like, I don't know, maybe. Maybe people will, like, find interesting, like, that. Like, I was the. Also very similar, like, timeline and, like, importance to me was, like, climbing. Like, yeah, I think I literally started when I was twelve, just like you or whatever, but I was a total gym rat, and so, like, I don't know. Did you? I don't know. Sorry, I feel like I was. I've been, like, in the modality of, like, saying, like, oh, convince me that climbing outside is better, but, like, I don't know, like, what are you. Yeah, yeah. Like, um. Oh, I.</p><p>ARTHUR</p><p>Certainly don't think it's better or worse, I think.</p><p>AARON</p><p>No, no.</p><p>ARTHUR</p><p>Very different experiences.</p><p>AARON</p><p>Yeah, yeah, totally. Like, I don't know. I don't even have a question. Like, I don't know if I, like, have an actual question, though. Like, do you think, you know, like, why you were, like, drawn to climbing outside?</p><p>ARTHUR</p><p>Yeah, I think for me I do. Which was just that, like, I. It's always hard to disentangle, like, you know, I'm sure you've thought too about the, like, how much does parenting matter versus, like, genes and environment and things like that. So it's, like, you know, it's hard to know how much of it was, like, literally my parents' influence versus just, like, the fact that I am my parents' child genetically. But, like, I think I was always, like, drawn to kind of, like, outdoor, sort of, like, adventuring just, like, generally, because, like, even prior to climbing, I, like, grew up, like, skiing and, like, hiking and, like. Were you in the Boy Scouts, by any chance? No, I was not. Yeah, I think I was. Part of my, like, bad-student-ness was, like, wrapped up in the same reason why I never, like, did, like, Boy Scouts or, like, or, like, team sports, which was that I was just, like, very much, like, a stubborn kind of individualist. Like, I want to do my own thing and, like, yada, yada, yada, you know, from a young age. So I think, like, organized things like the Boy Scouts were, like, not that appealing to me. And that was part of what was appealing about climbing outside in particular, was that sense of, like, incredible, like, freedom and independence, you know? So, yeah, I think, like, at first I just fell in love with, like, the movement of rock climbing through the gym, like, just like you did, like, the first few times I was in the gym. But then when my uncle took me trad climbing, I was like, oh, no, this is the thing, like, you know what I mean? Like, getting to the top of these, like, giant walls. 
Like, I think it's super, like, cliche whenever, like, people talk about this stuff, but it's like, there's something about that experience of like, when you climb a big route outside and you're like, staying standing on the top of like, a, you're halfway up, like, a 400 foot cliff and you just have this deep sense of, like, humans are, like, one not supposed to be here in some sense. And that, like, without this, like, modern equipment and shit, like, you would never be in this position. Like, this is so dangerous and crazy, even though it's not necessarily dangerous and crazy if you're, like, doing it correctly. And that sense of, like, oh, very few people are able to, like, be in these kinds of positions. Like, there was something, like, very, like, aesthetically appealing to me about that, honestly. And, like, I think so. That was a big aspect of it. Just, like, the actual places that you get to see and go was really inspiring. I think I love just, like, being in nature in that way, in a way that's very interactive. You know? It's not just like, you're, like, looking at the pretty trees, but you're, like, really getting to understand, like, especially in, like, trad climbing. Like, oh, like, this kind of, like, part of the rock is like something that, like, with my gear and abilities and skills, I can, like, safely travel. And it, like, gives you this whole, like, interactive sense of, like, understanding this part of nature, which is like a rock wall, you know? And that was quite beautiful to me.</p><p>AARON</p><p>That's awesome. We're totally the opposite people in every possible way. Not every possible way, but the other.</p><p>ARTHUR</p><p>Thing that I was gonna say, oh, there's so there's, like, the aesthetic side of things. And then there was also a big part of it for me was, like, this, like, risk sort of thing, which is not like, I think a lot of this, especially whenever you tell anyone you're interrupting rock climbing, they're like, oh, have you seen free solo? You know, it's like the meme or whatever. But, like, I think when, like, people think about rock climbing, like, they just think of, like, sort of a reckless, kind of, like, adrenaline junkie sort of pursuit.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>I think what was really beautiful about rock climbing to me and, like, spoke to me on, like, both an intellectual and aesthetic level was that, like, something that's interesting about it is, like, gym climbing, right, is, like, extremely safe, right? Like, way safer than, like, driving to the rock climbing gym, right? But there's this whole spectrum in climbing from slightly more dangerous but still relatively safe and probably safer than the drive to and from is sport climbing outside on modern bolted routes where the falls are safe, that's also very safe all the way to free soloing on the far other end, hard routes or whatever. And then somewhere in the middle would be, like, trad climbing, like, well traveled, established routes that are, like, within your ability are, like, scary. And, like, I think what I loved about it was, like, there's this whole spectrum of, like, risk level, right?</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>And, like, a lot of what, like, becoming a better rock climber in, like, this, like, outdoor context is. Is, like, learning how to, like, rationally analyze and assess risk, like, on the fly, right? Like, because you have to learn, like, on a mental level to. To overcome this inbuilt fear of falling and fear of heights. 
And you have to look at it and be like, okay, my little lizard brain is like, this is absolutely insane. I'm 200ft off the ground looking straight to the river. I am totally freaked out right now. You have to learn how to override that and think more rationally and be like, how far below me is my last piece of gear? Like, is that piece of your solid? Do I, like, trust the cam that I placed in that crack or whatever, you know? And, like, I think, like, I love that, like, mental skill of, like, like, learning how to, like, work with your mind in that way and, like, learning how to, like, overcome these, like, instinctual feelings and, like, put them in this, like, rational context and, like, do that on the fly that, you know? And one of those, like, parameters is, like, your, like, physical ability, right? Like, that's something that happens in trad climbing is, like, sure. Like, maybe some routes are, like, kind of sketchy and not well protected, but they're like, you know, five seven, and you're, like, you know, you can climb five seven with your eyes closed, you know, in your sleep. And, like, like, you have to, like, learn to, like, trust that physical ability and, like, I don't know. I'm just rambling, but, like, I think at an aesthetic level, like, both the beauty of where you are and the pursuit itself being, like, something that's, like, where you're, like, safety is very much in your own hands and, like, you can, like, totally be safe and secure, but you have to, like, no one can draw that line for you. Like, you have to decide, like, what risk you're willing to tolerate and, like, how you're willing to manage it. Like, I, like, was so addicted.</p><p>AARON</p><p>That's so. That's awesome. No, I'm glad I can't say it all, like, resonates, but, like, that's. I don't have anything interesting to say. That's just really cool. Like, I'm glad you. I'm glad you got to expect that. I, like, yeah, I mean, it's kind of like, this is sort of a dumb conversational movie. Just like, oh, just like, oh, just like doing the same, saying all that. But for me. But, like, I, like, interjected before and said that we were, like, very different. I forget what, like, prompted that exactly. Yeah. I mean, for one thing, I, like, guess I. Yeah, so, I mean. I mean, like, I don't want to. I don't want to overplay. Like. Like, the difference is I liked climbing outside. I would always, um, definitely, like, much more bouldering, though. Um, yeah.</p><p>ARTHUR</p><p>Side note, I think is a lot more dangerous than before realize.</p><p>AARON</p><p>Oh, yeah. Outdoors at least. Yeah, yeah.</p><p>ARTHUR</p><p>Just like, a lot of, like, I think, like, the most common, like, climbing injuries are, like, bouldering injuries because it's just like.</p><p>AARON</p><p>And there's a huge range from, like, you know, if you're. It's like a nine foot tall thing and you have seven crash pads, you're fine.</p><p>ARTHUR</p><p>Yeah. Like a very flat.</p><p>AARON</p><p>Yeah, yeah. Straight forward landing. No, you are also just, like, evoking, like, memories of, like. Yeah, both. Like, I. One thing I never engaged in is what you're just talking about, which is dealing with fear in the context of an actual, like, something where you actually do have to evaluate safety. So, like, I was fighting past. I was like, escape cat in the sense of, like, getting used to, like, climbing in the gym where it's, like, really safe. Actually, I was. I was, like, looking at my leg. 
At one point, I did get injured, which is, like, at one point, like, I fell with, like, a rope, like, wrapped around my leg. And that was really fucking painful. But, like, I can't really see it anymore. At one point, it was. It was like a scar for a while, but it was like. It was, like, wrapped around my, like, the bottom of my leg or whatever. Um, I was fine. Um.</p><p>ARTHUR</p><p>Don't put your leg behind the rope.</p><p>AARON</p><p>Yeah, yeah, no, yeah. Backstopping. Don't do that. No, but, like, it was like. I mean, there was, like, some mental challenge there. And also there was, like, the hardest, like, actual, as in outdoor, like, route that I did was called, uh. I don't know. No one's gonna, like, recognize this out of, like, the nine people who are. Yeah, but Buckets of Blood, which is local, actually. It's like a local boulder. I don't think I did. I forget if I did it. Like, it's, like, a V10. Yeah. Like, near the.</p><p>ARTHUR</p><p>Holy shit.</p><p>AARON</p><p>Like, in Carderock. It's like a couple hard moves. It was like, right? Yeah, like, my. So the hard moves are short, but then there's, like, a V5 top, which, like, honestly. Yeah, probably one of the most dangerous. Like, I guess not. Not dangerous, but, like, I guess, like. Like, danger in terms of, like, expected value. Things I did was like, yeah. In fact, topping that, and this is literally only one time. I don't want to, like, overplay this. And I would not. It was not like a life or death thing. It was like, I don't know. Yeah, you could also. I also have friends who were spotting me. I, like, they told me after. Sorry. Let me get back to the point, which is that. Yeah, basically, like, after, like, the hard part, like, doing the V5 top, which was, like, you know, not. Not a super highball, which means, like, I don't know, about 20ft or something. But, like, yeah, I don't know. Probably on the order of, like. I mean, I can just go and measure, like, 15 or 20ft, but, like, not over a super solid landing or. I don't remember the exact circumstances, but anyway, yeah, but, like. But, like, that was. That was, like, a one time thing, I guess. Maybe I'm also, like, this is my, like, signaling brain, like, flexing my credentials or whatever. No, but for the. For the. For the most part, like, safety. Like, that, being scared was, like, just, like, it just went on, like, the cost side of the. Of the equation. I didn't, like, mix it in. I mean, I definitely did like pushing, like, physical boundaries. Yeah. Um, and also, I mean, yeah, maybe this is, like, uh. I don't know, like, my. The cynical side of my brain, which I think is actually, like, correct in this case. It's just like, okay, like, as much as, like. Like, I'm not talking about this for you, but, like. Like, for me, you can always, like, put things in, like, high-minded terms, and in fact, sometimes, like. Like, you really do enjoy stuff, but, I mean, a lot of that, like, is derived from your brain, like, trying to do, like, status-y things and, like, signaling things or whatever. Um, but, like, yeah, with that said, um. Uh, yeah, like. Like, one thing I've found in, like, especially actually indoor rope climbing and, like, pushing my boundaries, like, there was. It, um. I had never, like, previously been really good at anything physical. And, like, I was not. Like, I wasn't, like, it was, like, pretty, like, average or whatever. Like, I played baseball. I was like, okay. And, like, I mean, I was, like, kind of short. 
So, like, I don't think I ever had, like, big basketball. I wasn't like, a rec basketball team. You know, I wasn't like, super, like, overweight. I wasn't, like, super, you know, I actually was tiny at what, some point. That's a whole nother story. But, like, you know, by the time I was, like, 1415 or whatever, yeah, it was, like, pretty average. Um, I mean, yeah, this is, like, sort of just besides the point, but, like, I guess the opposite of you. Like, I was always sort of like a try, you know, tryhard kid in school, like, did pretty well. So, like, kind of, like, used, you know, like, success for, like, lack of a better word in, like, that domain or whatever. But, like, finally, like, one thing I guess I figured out is that I was, yeah, I guess both be a combination of, like, really liking it and, like, working, like, pretty hard. And also I'm, like, pretty sure that you can get into, like, the genetic aspect or whatever, but, like, to some extent just, like, like, not, like, being, like, lucky. Is that, like, I was really good at endurance, especially if I trained it. I think I was, like, naturally good. Yeah, yeah. And so, like, I don't know. I don't have, like, a ton of, like, insight here. It was just cool to be, like, yeah. To be actually, like, quite good at even, like, this one, like, subtype of climbing, which is, like, basically indoor compet not only competitions. I mean, I was. I did like, competitions, but also, also just, like, in general being, like, pretty good at, like, you know, the 40 foot, like, indoor. Indoor roots or whatever and, yeah, I mean, like, this is maybe just, like, take this part out and, like, the last thing I'm trying to do at all is, like, is like, I don't think this actually comes across as, like, a brag because at this point, like, I am, like, not nearly as good shape as I was. Like, look, when I was, like, going to the competitions, like, there were a couple years where, like, I was able to make, like, the national, like, level competition, which is, like, like, a couple, like, late, like, levels, I guess two yet two, like, competitions, but, like, those decompose into, like, three, like, cutoffs or whatever that you, like, have to get to or whatever and no, like, I don't think, honestly, was there anything, like, super deep here? I could, like, make something up? It was just cool. Like, yeah, like, being like, um.</p><p>ARTHUR</p><p>Uh, yeah, I think that that resonates with me, too. And that, like, in some ways, probably that status y sort of stuff explains part of why.</p><p>AARON</p><p>Oh, I mean, who cares?</p><p>ARTHUR</p><p>Like, no, no, no. But I'm saying it maybe explains part of why I, like, wasn't that interested in competition climbing. Was that like, I mean, one, like, it was the sort of contrarian, like whatever individualist streak I talked about earlier, which was the. I like because I think very early on I had this much more kind of like whatever high minded aesthetic climbing is beautiful kind of like, notion. Like, I think that made competitions less interesting to me. But also the other thing was that, like similar to you, I was like, holy shit, this is sport that I'm actually really good at. But like, I wasn't insanely good at like competition climbing. Like I was, I was, you know.</p><p>AARON</p><p>Did you do any better than average competitions like that?</p><p>ARTHUR</p><p>I. Yeah, like some. I did some competitions, yeah, but like very little. 
Like, I was on, like, the competitive team at, like, my gym for, like, a little while, and then I was like, nah. And I noped out of that. But, you know, I mean, I climbed, like, like, you know, solidly, like, 5.12 plus. Yeah, some, like, you know, low 5.13s in the gym or whatever.</p><p>AARON</p><p>Yeah, that's, that's like, that's like seriously talented.</p><p>ARTHUR</p><p>Yeah, but, but, like, I wasn't like. I think because I was. It's certainly, like, in part, like, largely influenced by some of those status things as well. I think part of why I, like, was like, oh, competitions aren't for me, was that I found that, like, everything that I was rambling about earlier, the sort of, like, cognitive skill, like, managing risk sort of stuff, like, I realized that was actually the skill that I was, like, on the much farther right tail of the distribution on. And, like, you know, there's a lot of, so much just, like, luck and privilege of, like, having the, like, you know, like, parents and mentors and friends and connections to, like, be able to, like, get into that kind of, like, outdoor climbing at a young age that a lot of people just, like, don't have access to. So it's, like, a very limited sample. But I felt like, of the, like, gym kids that I knew my age, like, a good chunk of them were, like, stronger than me climbing at the gym, but I was like, but you guys. Yeah, like, you know, climb, like, you know, 5.11 trad.</p><p>AARON</p><p>Climb.</p><p>ARTHUR</p><p>You know what I mean?</p><p>AARON</p><p>Like, an example is me. Like, I definitely. There was never a point in time when I was able to climb 5.11 trad.</p><p>ARTHUR</p><p>Yeah, so I think, like, some of that, like, some of that was certainly part of, like, what motivated me. It was like, oh, I'm actually really good at this, like, sport or whatever, or at least compared to, like, you know, not compared to people who, like, make it their lives or whatever, but in terms of, like, casual hobbyist, like, teenagers, early twenties. So I think, like, that was. That was really appealing to me about it as well. Like, no doubt.</p><p>AARON</p><p>Yeah, there's, like, a. Yeah, I, like, maybe I'll just, like, end up, like, keep, like, keep talking until, like, I just, like, say everything there is to be said, but, like, there's, like, I don't know. I feel like I have, like, a lot to say about, like, climbing and, like, my climbing history or whatever, but, like, part of this. And, like, I really don't mean this to be, like, oh, like, woe is me or whatever, but, like, another interesting. Yeah, so, like, I guess at least I find it interesting. Maybe you all will too, dear listeners. Like, the thing I just said was, like, this is such a. This is. This sounds so, like, melodramatic or something, but, like, I really, I think, like, learned, like, for the first time and in a way that. Sorry. Without hedging, like, sorry. Basically, I, like, discovered, I guess, for the first time that, like, like, you can't just hard-work your way to relative success. And actually, this is, like, a. Something, like, very close to that is, like, in a. In, like, my list of, like, hot takes or, like, my hot takes thread, like, ongoing hot takes thread on Twitter or whatever. And it sounds. I don't know, I don't want to be, like. So I don't want to come across as, like, cynical. I don't actually. I am, like, a skeptical person. I don't think I'm, like, broadly a cynical person. Um, and, like, I don't want. I actually don't think this is cynical. 
I think this is just, like, I don't know, like, maybe it, like, kind of reads that way and it's the kind of thing that, like, a lot of people, like, implicitly believe. But, like, I don't know, it sounds like very pessimist coded to say, but, like, um, I think it's a lot of. It's, like, very important in, like, a lot of domains. Um. Uh, but, like, yeah, I do think at some point, and maybe this had to do with. This is getting way back, but, like, maybe it had to do with, like, baseball at least, like, being largely a skill sport in some sense. Like, yeah, like, like, most healthy adult humans. Like, there is not. If they could, like, figure out how to swing a bat in, like, such a way that they, like, made, like, there is no, like, hardcore obvious physical constraint on, like, any arbitrary human being. Like, being, like, the best baseball player in the world. And that's not literally true because of course, like, you know, hand eye coordination and willpower and like a lot of things like that are in fact like, like largely genetically determined, probably even if you like, take away the willpower part or like just go to like, you know, hand eye coordination, reaction time, whatever, etc. Etc. But it's like not, it's not as salient to you or whatever. And like, I think it's true that like definitely I remember I was like getting frustrated with like just not being as good as I wanted to be. And I do think it's to some extent, like if you push it to the limit and you like, and you like, I think, you know, a random, like mediocre, like twelve year old or a 14 year old or whatever, tries like insanely hard to like be a really good baseball player that, um, either they can or they're like getting wrong information somehow. And I think climbing. Yeah, there was just like a real. It made it a lot more salient or like the physical constraints became a lot more salient, especially in the sense of just watching other people, like, not, again, this sounds like melodramatic. Like not work nearly as hard. Yeah. And just really like naturally being able to do a lot like harder climbs and like there I think, hmm. Yeah, again, this like sounds melodramatic or whatever. I like, I think there was like, maybe there was a time where it kind of was in like an emotion, like some sort of like emotional sense, like hard. Like I didn't get both, like non obvious so I was like learning something new, but also like wasn't like super obviously like true for or something. There's like a, like a fate, like a. Yeah, maybe like a, you know, one to like three year long phase where like, um, I was sort of like watching other people get bigger and stronger and yet me trying to push the limits in terms of doing what I could to make myself stronger. And at the end of the day, they were still just a lot better than me and. Yeah, I don't know. I keep saying like, oh, I don't want to come across a cynical. I'm like. And like, I don't. Yeah, I don't. I don't know. But that's, that's the whole story or not the whole story. That's like the whole. It's not. No, like, you know, bigger lesson in there, right?</p><p>ARTHUR</p><p>I think it. Part of what's fun about it to me as a sport is like, like, I mean, I guess if you're lucky enough to, like, have that natural talent is one thing, and, like, clearly both of us did to some degree, you know? But, like, also, I think being just, like, a sort of, like, naturally very skinny, like, not super, like, athletic build person. 
Like, part of what I loved about climbing was, like, despite all of what you're saying, I feel like there's, like, a reasonable, like, diversity of, like, body types that are, like, able to be very successful, because it's all, like, very relative, like, strength, like, to your body weight and all that. And, like, I think that aspect of it, plus the fact that I think there's a much higher, like, level of achievement that's possible than people realize with, like, very little physical training. Like, just an enormous amount of climbing at the, like, typical range of, like, hobbyist skill level. Like, so, so much of it is, like, technique based.</p><p>AARON</p><p>Oh, yeah, I totally disagree.</p><p>ARTHUR</p><p>Really? Yeah, I mean, like, I think it. I think. I. I don't know. I want to hear what you have to say, but, like, I also think I'm talking about, like, a particular range or distribution. Like, I think, like, up until, like, I don't know, to put it in concrete terms, like, in typical gym grades, I think you can, you know, boulder, like, up to, like, V6, maybe even V7, with, like, very little, like, serious climbing strength. Like, I think it's only when you get into that upper echelon of, like, serious competitors and, like, strong amateurs where you, like, you just need to hangboard, you know what I mean? But, like, I think there you can get much farther than people realize on, like, technique alone. And, like, this is at least obvious to me because over the last, like, two-plus, two and a half years or so, I have, like, basically not rock climbed at all. And just, like, a month ago, I realized that I now live a very, like, physically inactive lifestyle and that I should start being physically active again. And I've, like, gone on, like, probably, like, five gym sessions in the last month. Like, I've been finally, like.</p><p>AARON</p><p>Oh, like, Crystal City?</p><p>ARTHUR</p><p>Yeah, yeah, yeah. Cause I was like, oh, I'll just, like, start going to the climbing gym again. And I was, like, almost a little bit. Like, I had this almost ego thing where I was a bit embarrassed to start climbing again because I was like.</p><p>AARON</p><p>Oh, man, this is.</p><p>ARTHUR</p><p>I'm gonna be so bad. But, like, people aren't gonna know that, like, really, I'm, like, good. You know what I mean? And it was so stupid. But, like, I got over that and, like, to be honest, like, I have done no serious training. I literally just, like, go and boulder until I'm tired, and then I go home. And, like, already, on my, like, fifth session, I'm climbing at, like, 85% of my maximum ever bouldering level. You know what I mean? And, like, I'm not in good climbing shape, so, like, I think. I don't know. It's just surprising, like, how far you can get when, like, you know, you don't forget your technique. Right. And at the end of the day, like, I've been climbing for, like, 13 years now, you know? And it's like, I have, like, just built up a lot of, like, skill and experience that's, like, still there.</p><p>AARON</p><p>Yeah. This is. Man, you're saying, like, the perfect things to, like, get me to, like, keep talking, and I pre. Like, I, like, I'm like, that's, like, a good thing or whatever. Yeah. So maybe. I know. I know I'm not. I'm sort of inviting myself here. Maybe I'll. Maybe I'll join you one of these days. I have been thinking. This has been on my mind on and off, you know, for. Yeah. Like, yeah, I don't know how much I'll cut, but, like, long story short, I. Sorry. Yeah, so. 
Bunch of, like, connected thoughts. Yeah. Like, one thing I want to say is, like, I don't think that will be true for me. It's an empirical question. Right. Like, um. And I think that's largely just because of, like, the, um. Now. Yeah. Connecting this back, like, to, like, the body type thing. Like. Like what? Um. Yeah, one of the reasons other people were, like, more talented and also something that, like, I have. Has been, like, you know, just. Just, like, I have, like, I guess struggled with is such, like, a. I don't know, like, therapy term or whatever. Sorry. I'm, like, dancing around just saying that, like, I was, like, there's a lot of pressure. Not from anyone, like, explicitly, but, like, at least in my case, just, like, for myself to, like, be good and, like, that means being light and there's. But I think there's been, like, a fair amount of, like, hand wringing in, like, especially, like, youth competitive circles about. About, like, eating disorders and stuff. And, like, um. I was. I don't. I don't think I was, like, ever, like, full on anorexic. I was definitely, uh, not doing, like, I was definitely, like, affect. Yeah, pretty consciously trying to, like, limit my weight in, like, a time where, like, I really wasn't supposed to be. And, like, combine that with, like, a couple other, like, situation. Like, things, like, I can get into. But, like, as, yeah, can just, like, mention is, like, other factors or whatever. Like, was probably, like, a pretty bad idea for me. And all that is to say is, like, yeah, it would have been nice to be one of the people who was, like, naturally very, like, you know, you know, very, very skinny, frankly. But, like, and that was something I was like, I guess, like, yes, to some extent, like, jealous of. And also, like, again, like, super matter of factly, like, yeah, I am, like, probably a good almost double my. I mean, I've grown vertically in this time period, but probably almost double the weight when I was climbing at my best right now, which is like, yeah, I mean. I mean, like, one thing that, like.</p><p>ARTHUR</p><p>I. I feel bad now because I realize that, like, I think my general point about, like, technique being really important is totally true. But, like, when I put that point estimate on it.</p><p>AARON</p><p>No, no, no.</p><p>ARTHUR</p><p>That should have come with, like, the enormous asterisks of, like, I am naturally an extremely skinny person. Like, so much so that I, like, literally, like, this is something I struggled with a lot, actually, as a teenager was, like, given our, like, body expectations for men or whatever, it just, like, became obvious to me that, like, I, like, literally could not gain weight by eating unless I, like, severely over ate to the point of discomfort at, like, every meal. Right? And because of that, like, when I go through times where I don't climb or do any other form of fitness, like, all that happens is I lose weight because I lose muscle, you know? So, like, every time I take time off from climbing and come back, like, I am starting in the position of, like, actually weighing less than I did when I was really into climbing and, like, building muscle on top of that. So, like, enormous asterisks to my, like, no, no, no. Three weeks back, 80% of my original endurance is going to take longer to build. 
So I don't think I could sport climb very hard, but, like, bouldering strength or whatever, like, a huge part of that is, like, I stay very skinny, and, like, thankfully not because I have, like, eating problems, but just because I, like, don't gain weight.</p><p>AARON</p><p>Yeah, yeah, yeah, sure, sure. No, no, I mean, like, yes, I think you're actually. I think you're totally, like, right? Like, you didn't need to add the disclaimer, but the disclaimer is, like, definitely true. And actually, like, I don't know, this is, like, whatever. At first I said, I was like, oh, I don't really want to talk about this. Like, whatever. Let's just, like, yeah, like, dive in or whatever. I do think, like, there are. I don't. I can't remember, like, specific examples of this. I'm pretty sure, like, somehow, like, either, like, media sources or, like, specific people basically said something along the lines of, like, no, actually, being skinny, like, doesn't help. And this is, like, basically just, like, gaslighting or whatever. Oh, no, no, it totally does. Like, yeah, yeah.</p><p>ARTHUR</p><p>I mean, the way.</p><p>AARON</p><p>Not just being skinny. I mean, like, I guess, you know, to some extent, that's, like, a little, like, it's like, the Laffer curve or whatever. Like, the ideal amount of, like, muscle weight isn't literally zero.</p><p>ARTHUR</p><p>Right.</p><p>AARON</p><p>You know, but, like, it's pretty. You know, pretty skinny is, like, it's, like, ideal.</p><p>ARTHUR</p><p>And the way I think about, like, how, like. And thankfully, I think there's been, like, I was never super deep in, like, the competition world, but I think a lot of, like, prominent people have, like, spoken out about this in the last few years. I think there's, like, more attention on, like, the risk of eating disorders now. Like, the problem is, I'm sure, by, like, no means solved. I'm sure it's still a huge issue. But I think part of what's so tricky about it and, like, why it'll continue to be an issue is, like, it just is a fact that, like, at a physical level, like, one of the most important things in rock climbing is, like, how strong are you in some very particular modalities versus how much do you weigh? And, like, you're saying there's some curve, and the optimum muscle weight isn't zero, but starting from a baseline of being relatively fit and in good climbing shape and having strong muscles in those very particular muscle groups and modalities that are necessary for climbing, when you're looking at that equation. Right. It's much easier to just, like. Or not, like, again, I'm like.</p><p>AARON</p><p>No, it's easier to lose weight.</p><p>ARTHUR</p><p>Yeah.</p><p>AARON</p><p>Oh, yeah.</p><p>ARTHUR</p><p>Very practical level. Virtually everyone, especially when you get to the point that you're, like, pretty close, you're, like, pretty far into the diminishing returns of, like, training hard, right?</p><p>AARON</p><p>Yeah, yeah.</p><p>ARTHUR</p><p>Once you get well into that diminishing returns part of, like, the curve of, like, training climbing strength, it's just going to be much easier for people to, like, lose weight. And again, easier in a very, like, circumscribed, like.</p><p>AARON</p><p>I mean, for most people, though.</p><p>ARTHUR</p><p>Yeah. I mean, the reason why I'm adding that caveat is just, like, I want to be sensitive to the, like, there are, like, very real, like, psychological and, like, other health costs to that. So it's not, like, easy.</p><p>AARON</p><p>Like, oh, yeah.</p><p>ARTHUR</p><p>But, like. 
But in terms of, like, if you are single mindedly focusing on. On getting better at rock climbing, like, there becomes a point where it's much easier for people to lose weight than it is to get stronger. And I think, like, that, like, fundamental fact, like, people are gonna have to, like, figure out how to, like, grapple with that for this sport to not be incredibly unhealthy for a lot of people that take it really seriously, you know? Like, I think it's just, it's no to me, like, that's the reason why there's no, like, sadly, like, no, like, mystery as to why there's this eating disorder problem.</p><p>AARON</p><p>Yeah, and, like, yes. Something you said before. It's like, oh, like you were, you know, I forget which one is, like, maybe, like, embarrassed. Like, like, when you went to the gym for the first time or whatever, and, like, yeah, and you were, like, pleasantly surprised. But, like, this sort of. There's been a lot of reasons why I haven't gone climbing since. I think I forget the exact. I think it was, like, I want to say, like, October 22 or 21 or something.</p><p>ARTHUR</p><p>Yeah.</p><p>AARON</p><p>Could be 2021, something like that. And there's, like, a proximate cause, which is like, oh, I was also, like, doing. I was, like, climbing college. But, like, actually, yes, it's, like, worth mentioning. Like, one thing, I think I was like, I. It's like, oh, you know, you don't never know the counterfactual or whatever, but, like, I think one of the something, like, very fortunate that happened was that I wound up at a college where I was, by. I was so far, at least coming in as a freshman. I was so far and away the best person on the very casual climbing team that I was able to sort of detach a little bit and, like, without it incurring such a, like, social psychological cost or whatever. And then eventually I, like, in fact, this is relevant, you know, there's a. Went to EA global, came back, got really sick, and, like, was, like, very, very sick for like, a week or two. And, like, and by that time, like, because of, like, the being sick and ye global, I think at some point I had been away, like, not climb for, like, three weeks or something, which is, like, by far the most. Yeah, or it was like a whole month or something. Yeah, yeah, the most. I had, like, been away from climbing, like, you know, in, like, years and years, and then I just, like, never went back. And I might, I don't know? Yeah, I. Honest to God, I'm sounding so melodramatic. But, like, I don't know. Yeah, yeah, I might. I think I do. Like, but, like, the dynamic you were mentioning, which, like, you were pleasantly surprised and, like, maybe it's not impossible that, like, I will be too, but, like, no, it's, like, not gonna be as fun or whatever. Like, you know, being like, oh, yeah, I used to be, like, the best person in the gym, and, like, now I'm really, really, really not. Yeah. And, like, much worse than I used to be. I mean, yeah, people like improvement. You know, it's psychologically, I mean.</p><p>ARTHUR</p><p>But that is the flip side, right? Well, I guess, one. It's interesting how much my personal story in some ways, like, mirrors yours. And that, like, I started taking climbing a lot less seriously when I was in college. And similarly, like, like, you were very similar to you. Like, I wasn't the strongest person on the casual climbing team. There's one guy who's, like, a super strong boulderer, but I was, like, definitely the most experienced rock climber. 
And, like, we would do outdoor excursions and stuff. And, like, you know, it was just like, I had a lot of other things going on. I was focusing on life, and I was casual. Right. It was like, it was something I still did regularly, but I wasn't, like, like, you know, really, really serious about, like, training and, like, fitness and all that. And then similarly, I went and studied abroad the fall of my junior year and was in, like, you know, was in, like, northeastern India, where there are no rock climbing gyms and no rock climbing to be found and, like, didn't rock climb at all for months and then came back and then, like, probably went to the gym a couple times, then COVID happened.</p><p>AARON</p><p>Yeah. Yeah.</p><p>ARTHUR</p><p>And, like, it was just all these things. Like, I just sort of, like, stopped doing it. Kind of, like, I kind of never came back, you know? And then I tried to come back to rock climbing after my kidney surgery, but I was, like, too.</p><p>AARON</p><p>Arthur donated his kidney. How did we not talk about that? Sorry, man, we're gonna have to cut this part from the podcast.</p><p>ARTHUR</p><p>But, like, I came back too soon after that and, like, sort of, like, injured my abdomen and then, like, took a long time off after that because I was like, I want to make sure when I start climbing again that I won't injure myself.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Thankfully hasn't happened. I think I'm fine. Fine. But I will say on this psychological point, like, if you really internalize, like, if you really try to, like, let go of that, like, self comparison.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Like how you used to be.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>I think part of what's been fun about coming back to climbing again is that, like, I am already so much noticeably stronger on like, the fifth trip to the climbing gym than I was on the first trip. Right. Because when you're starting from, like, a baseline of, like, no rock climbing fitness, like, especially when you already know sort of what to do to, like, get that back, like, you progress very quickly at the beginning. So, like, I think, I don't know. If I was to, like, lightly encourage you, I would say that if you're able to, like, let go of that, like, comparison to your previous self or whatever and just like, have fun kind of being a beginner again, like, at least in terms of fitness, like, and just like starting from that zero baseline, like, it's pretty cool how fast you improve, you know? Like, and I see that with my friends who, like, have gotten into it. Like, they get strong really fucking quickly because it's like these very weird, specific muscles that you just don't have if you don't do it.</p><p>AARON</p><p>Yeah.</p><p>ARTHUR</p><p>Anyways, I think TLDR we have solved everything. All of the problems of ethics and AI and reading old philosophy. 
Indeed.</p>]]></content:encoded></item><item><title><![CDATA[Drunk Pigeon Hour!]]></title><description><![CDATA[You earned it]]></description><link>https://www.aaronbergman.net/p/drunk-pigeon-hour</link><guid isPermaLink="false">https://www.aaronbergman.net/p/drunk-pigeon-hour</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sat, 09 Mar 2024 21:15:21 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/142352197/9995ba130ce46d41e8b4a5eff5fababd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Intro</h1><p>Around New Years, <a href="https://twitter.com/absurdlymax">Max Alexander</a>, <a href="https://twitter.com/Laura_k_Duffy">Laura Duffy</a>, <a href="https://twitter.com/SpacedOutMatt">Matt</a> and <a href="https://twitter.com/AaronBergman18">I</a> tried to raise money for animal welfare (more specifically,  the <a href="https://funds.effectivealtruism.org/funds/animal-welfare">EA Animal Welfare Fund</a>) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.</p><p>To absolutely nobody&#8217;s surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024). </p><p>To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, <strong>thank you.</strong></p><p>And although our active promotion on Twitter has come to an end, <strong><a href="https://www.givingwhatwecan.org/fundraisers/ea-twitter-23">it is not too late to give!</a></strong> </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1cf5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1cf5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 424w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 848w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 1272w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1cf5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png" width="1456" height="1125" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1125,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2582676,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1cf5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 424w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 848w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 1272w, https://substackcdn.com/image/fetch/$s_!1cf5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe546f7df-fdbb-4afc-9879-aefd5a12d695_2228x1722.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:</p><h1>Transcript</h1><p><em>(Note: very imperfect - sorry!)</em></p><h4>Monologue</h4><p>Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons.</p><p>The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. 
Honestly, I didn't get super drunk, so I hope people forgive me for that.</p><p>But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.</p><p>And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.</p><p>Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so was unable to participate tragically. </p><p>Yeah so here it is!</p><h4>Conversation</h4><p>AARON</p><p>Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.</p><p>MATT</p><p>Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.</p><p>LAURA</p><p>I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads. I get American Petroleum Institute ads, Ashdale College.</p><p>MATT</p><p>Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product.</p><p>So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.</p><p>AARON</p><p>I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. I will try to minimize how much revenue they can make from me.</p><p>MATT</p><p>So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media.</p><p>Like maybe I'll try like Tumblr as a, as an alternative.</p><p>And I would get a lot of ads for like plus-sized women's flannels.</p><p>So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.</p><p>AARON</p><p>Sorry, keep going. I can see every ad I've ever.</p><p>MATT</p><p>Come across, actually, in your giant CSV of Twitter data.</p><p>AARON</p><p>Just because I'm a nerd. I like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. Actually do have a little web app that I might try to improve at some point, which is like, you drop it in and then it turns them. 
It gives you a csV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there.</p><p>MATT</p><p>I feel like it's going to be hard to get meaningful information out of this giant csv in a short amount of time.</p><p>AARON</p><p>It's a giant JSON, actually.</p><p>MATT</p><p>Are you just going to drop it all into c long and tell it to parse it for you or tell it to give you insights into your ads.</p><p>AARON</p><p>Wait, hold on. This is such a.</p><p>MATT</p><p>Wait. Do people call it &#8220;C-Long&#8221; or &#8220;Clong&#8221;?</p><p>AARON</p><p>Why would it be long?</p><p>MATT</p><p>Well, because it's like Claude Long.</p><p>LAURA</p><p>I've never heard this phrase.</p><p>MATT</p><p>This is like Anthropic&#8217;s chat bot with a long context with so like you can put. Aaron will be like, oh, can I paste the entire group chat history?</p><p>AARON</p><p>Oh yeah, I got clong. Apparently that wasn't acceptable so that it.</p><p>MATT</p><p>Can summarize it for me and tell me what's happened since I was last year. And everyone is like, Aaron, don't give our data to Anthropic, is already suss.</p><p>LAURA</p><p>Enough with the impressions feel about the Internet privacy stuff. Are you instinctively weirded out by them farming out your personal information or just like, it gives me good ads or whatever? I don't care.</p><p>MATT</p><p>I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.</p><p>AARON</p><p>Sketch down by data. I think I'm broadly fine with companies having my information and selling it to ad targeting. Specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything.</p><p>LAURA</p><p>Have you ever gone into the ad personalization and tried see what demographics they peg you?</p><p>AARON</p><p>Oh yeah. We can pull up mine right now.</p><p>LAURA</p><p>It's so much fun doing that. It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird.</p><p>AARON</p><p>That's weird.</p><p>MATT</p><p>Did you test this when you were in and not in relationships to see if they got it right?</p><p>LAURA</p><p>No, I think it's like they accumulate data over time. I don't know. But then it's like we say that you work in a mid sized finance. Fair enough.</p><p>MATT</p><p>That's sort of close.</p><p>LAURA</p><p>Yeah.</p><p>AARON</p><p>Sorry. 
Keep on podcasting.</p><p>LAURA</p><p>Okay.</p><p>MATT</p><p>Do they include political affiliation in the data you can see?</p><p>AARON</p><p>Okay.</p><p>MATT</p><p>I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.</p><p>LAURA</p><p>The algorithm is like, advertising deSantis.</p><p>AARON</p><p>Yeah.</p><p>MATT</p><p>I guess it must think that there's some probability that you're going to vote in a republican primary.</p><p>LAURA</p><p>I live in DC. Why on earth would I even vote, period.</p><p>MATT</p><p>Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.</p><p>AARON</p><p>I feel like I've seen, like, a.</p><p>MATT</p><p>I think it's probably hopeless to live. Find your demographic information from Twitter. But, like.</p><p>AARON</p><p>Age 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General.</p><p>MATT</p><p>What's this list above?</p><p>AARON</p><p>Oh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of tv series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch dune, and I haven't heard of a lot of these.</p><p>MATT</p><p>Wait, you watch cocaine there?</p><p>AARON</p><p>Big bang theory? No, I definitely have watched the big Bang theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random korean script.</p><p>MATT</p><p>Or whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, the big bang theory was like, what I would say.</p><p>AARON</p><p>These are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me.</p><p>MATT</p><p>Oh, wait, it's like, true or false for each of these?</p><p>AARON</p><p>No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.</p><p>MATT</p><p>Wait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics?</p><p>AARON</p><p>No, it's just this.</p><p>MATT</p><p>Oh, wait, so it thinks you have no interest?</p><p>AARON</p><p>No, this is disabled, so I haven't. And for some reason, this isn't the list. Maybe it was, like, keywords instead of topics or something, where it was the.</p><p>MATT</p><p>Got it.</p><p>AARON</p><p>Yes. This is interesting. It thinks I'm interested in apple stock, and, I don't know, a lot of these are just random.</p><p>MATT</p><p>Wait, so argentine politics was something it thought you were interested in? Yeah. Right.</p><p>AARON</p><p>Can.</p><p>MATT</p><p>Do you follow Maya on Twitter?</p><p>AARON</p><p>Who's Maya?</p><p>MATT</p><p>Like, monetarist Maya? Like, neoliberal shell two years ago.</p><p>AARON</p><p>I mean, maybe. Wait, hold on. Maybe I'm just like.</p><p>MATT</p><p>Yeah, hardcore libertarianism.</p><p>LAURA</p><p>Yeah. No, so far so good with him. 
I feel like.</p><p>AARON</p><p>Maia, is it this person? Oh, I am.</p><p>MATT</p><p>Yeah.</p><p>AARON</p><p>Okay.</p><p>MATT</p><p>Yeah, she was, like, neoliberal shell two years ago.</p><p>AARON</p><p>Sorry, this is, like, such an errands. Like snipe. I got my gender right. Maybe. I don't know if I told you that. Yeah. English. Nice.</p><p>MATT</p><p>Wait, is that dogecoin?</p><p>AARON</p><p>I assume there's, like, an explicit thing, which is like, we're going to err way on the side of false positives instead of false negatives, which is like. I mean, I don't know. I'm not that interested in AB club, which.</p><p>MATT</p><p>You'Re well known for throwing staplers at your subordinate.</p><p>AARON</p><p>Yeah.</p><p>LAURA</p><p>Wait, who did you guys support in 2020 primary?</p><p>MATT</p><p>You were a Pete stan.</p><p>LAURA</p><p>I was a Pete stan. Yes, by that point, definitely hardcore. But I totally get. In 2016, I actually was a Bernie fan, which was like, I don't know how much I was really into this, or just, like, everybody around me was into it. So I was trying to convince myself that he was better than Hillary, but I don't know, that fell apart pretty quickly once he started losing. And, yeah, I didn't really know a whole lot about politics. And then, like, six months later, I became, like, a Reddit libertarian.</p><p>AARON</p><p>We think we've talked about your ideological evolution.</p><p>MATT</p><p>Have you ever done the thing of plotting it out on the political? I feel like that's a really interesting.</p><p>LAURA</p><p>Exercise that doesn't capture the online. I was into Ben Shapiro.</p><p>MATT</p><p>Really? Oh, my God. That's such a funny lore fact.</p><p>AARON</p><p>I don't think I've ever listened to Ben Shapiro besides, like, random clips on Twitter that I like scroll?</p><p>MATT</p><p>I mean, he talks very fast. I will give him that.</p><p>LAURA</p><p>And he's funny. And I think it's like the fast talking plus being funny is like, you can get away with a lot of stuff and people just end up like, oh, sure, I'm not really listening to this because it's on in the background.</p><p>AARON</p><p>Yeah.</p><p>MATT</p><p>In defense of the Bernie thing. So I will say I did not support Bernie in 2016, but there was this moment right about when he announced where I was very intrigued. And there's something about his backstory that's very inspiring. This is a guy who has been just extraordinarily consistent in his politics for close to 50 years, was saying lots of really good stuff about gay rights when he was like, Burlington mayor way back in the day, was giving speeches on the floor of the House in the number one sound very similar to the things he's saying today, which reflects, you could say, maybe a very myopic, closed minded thing, but also an ideological consistency. That's admirable. And I think is pointing at problems that are real often. And so I think there is this thing that's, to me, very much understandable about why he was a very inspiring candidate. But when it came down to nitty gritty details and also to his decisions about who to hire subordinates and stuff, very quickly you look at the Bernie campaign alumni and the nuances of his views and stuff, and you're like, okay, wait, this is maybe an inspiring story, but does it actually hold up?</p><p>AARON</p><p>Probably not.</p><p>LAURA</p><p>Yeah, that is interesting. 
It's like Bernie went woke in 2020, kind of fell apart, in my opinion.</p><p>AARON</p><p>I stopped following or not following on social media, just like following him in general, I guess. 2016 also, I was 16. You were not 16. You were.</p><p>MATT</p><p>Yeah, I was in college at that time, so I was about 20.</p><p>AARON</p><p>So that was, you can't blame it. Anything that I do under the age of 18 is like just a race when I turn 18.</p><p>LAURA</p><p>Okay, 2028 draft. Who do we want to be democratic nominee?</p><p>AARON</p><p>Oh, Jesse from pigeonhole. I honestly think he should run. Hello, Jesse. If you're listening to this, we're going to make you listen to this. Sorry. Besides that, I don't know.</p><p>MATT</p><p>I don't have, like, an obvious front runner in mind.</p><p>AARON</p><p>Wait, 2028? We might be dead by 2028. Sorry, we don't talk about AI.</p><p>MATT</p><p>Yeah.</p><p>AARON</p><p>No, but honestly, that is beyond the range of planability, I think. I don't actually think all humans are going to be dead by 2028. But that is a long way away. All I want in life is not all I want. This is actually what I want out of a political leader. Not all I want is somebody who is good on AI and also doesn't tells the Justice Department to not sue California or whatever about their gestation. Or maybe it's like New Jersey or something about the gestation crate.</p><p>MATT</p><p>Oh, yeah. Top twelve.</p><p>AARON</p><p>Yeah. Those are my two criteria.</p><p>MATT</p><p>Corey Booker is going to be right on the latter.</p><p>AARON</p><p>Yeah.</p><p>MATT</p><p>I have no idea about his views on.</p><p>AARON</p><p>If to some extent. Maybe this is actively changing as we speak, basically. But until recently it wasn't a salient political issue and so it was pretty hard to tell. I don't know. I don't think Biden has a strong take on it. He's like, he's like a thousand years old.</p><p>LAURA</p><p>Watch what Mitch should have possibly decided. That's real if we don't do mean.</p><p>AARON</p><p>But like, but his executive order was way better than I would have imagined. And I, like, I tweeted about, know, I don't think I could have predicted that necessarily.</p><p>MATT</p><p>I agree. I mean, I think the Biden administration has been very reasonable on AI safety issues and that generally is reflective. Yeah, I think that's reflective of the.</p><p>AARON</p><p>Tongue we know Joe Biden is listening to.</p><p>MATT</p><p>Okay.</p><p>AARON</p><p>Okay.</p><p>MATT</p><p>Topics that are not is like, this is a reward for the fundraiser. Do we want to talk about fundraiser and retrospective on that?</p><p>AARON</p><p>Sure.</p><p>MATT</p><p>Because I feel like, I don't know. That ended up going at least like one sigma above.</p><p>AARON</p><p>How much? Wait, how much did we actually raise?</p><p>MATT</p><p>We raised like 22,500.</p><p>LAURA</p><p>Okay. Really pissed that you don't have to go to Ava.</p><p>AARON</p><p>I guess this person, I won't name them, but somebody who works at a prestigious organization basically was seriously considering donating a good amount of his donation budget specifically for the shrimp costume. And, and we chatted about it over Twitter, DM, and I think he ended up not doing it, which I think was like the right call because for tax reasons, it would have been like, oh. He thought like, oh, yeah, actually, even though that's pretty funny, it's not worth losing. I don't know, maybe like 1000 out of $5,000 tax reasons or whatever. 
Clearly this guy is actually thinking through his donations pretty well. But I don't know, it brought him to the brink of donating several, I think, I don't know, like single digit thousands of dollars. Exactly.</p><p>LAURA</p><p>Clearly an issue in the tax.</p><p>AARON</p><p>Do you have any tax take? Oh, wait, sorry.</p><p>MATT</p><p>Yeah, I do think we should like, I mean, to the extent you are allowed by your employer too, in public space.</p><p>AARON</p><p>All people at think tanks, they're supposed to go on podcast and tweet. How could you not be allowed to do that kind of thing?</p><p>MATT</p><p>Sorry, keep going. But yeah, no, I mean, I think it's worth dwelling on it a little bit longer because I feel like, yeah, okay, so we didn't raise a billion dollars as you were interested in doing.</p><p>AARON</p><p>Yeah. Wait, can I make the case for like. Oh, wait. Yeah. Why? Being slightly unhinged may have been actually object level. Good. Yeah, basically, I think this didn't end up exposed to. We learned this didn't actually end up happening. I think almost all of the impact money, because it's basically one of the same in this context. Sorry. Most of the expected money would come in the form of basically having some pretty large, probably billionaire account, just like deciding like, oh, yeah, I'll just drop a couple of mil on this funny fundraiser or whatever, or maybe less, honestly, listen, $20,000, a lot of money. It's probably more money than I have personally ever donated. On the other hand, there's definitely some pretty EA adjacent or broadly rationalist AI adjacent accounts whose net worth is in at least tens of millions of dollars, for whom $100,000 just would not actually affect their quality of life or whatever. And I think, yeah, there's not nontrivial chance going in that somebody would just decide to give a bunch of money.</p><p>MATT</p><p>I don't know. My view is that even the kinds of multimillionaires and billionaires that hang out on Twitter are not going to ever have dropped that much on a random fundraiser. They're more rational.</p><p>AARON</p><p>Well, there was proof of concept for rich people being insane. Is Balaji giving like a million dollars to James Medlock.</p><p>MATT</p><p>That's true.</p><p>AARON</p><p>That was pretty idiosyncratic. Sorry. So maybe that's not fair. On the other hand. On the other hand, I don't know, people do things for clout. And so, yeah, I would have, quote, tweeted. If somebody was like, oh yeah, here's $100,000 guys, I would have quote, tweeted the shit out of them. They would have gotten as much possible. I don't know. I would guess if you have a lot of rich people friends, they're also probably on Twitter, especially if it's broadly like tech money or whatever. And so there's that. There's also the fact that, I don't know, it's like object people, at least some subset of rich people have a good think. EA is basically even if they don't identify as an EA themselves, think like, oh yeah, this is broadly legit and correct or whatever. And so it's not just like a random.</p><p>MATT</p><p>That's true. I do think the choice of the animal welfare fund made that harder. Right. I think if it's like bed nets, I think it's more likely that sort of random EA rich person would be like, yes, this is clearly good. 
And I think we chose something that I think we could all get behind.</p><p>AARON</p><p>Because we have, there was a lot of politicking around.</p><p>MATT</p><p>Yeah, we all have different estimates of the relative good of different cause areas and this was the one we could very clearly agree on, which I think is very reasonable and good. And I'm glad we raised money for the animal welfare fund, but I do think that reduces the chance of, yeah.</p><p>LAURA</p><p>I think it pushes the envelope towards the animal welfare fund being more acceptable in mainstream EA orgs, just like GiveWell would be. And so by forcing that issue, maybe we have done more good for the.</p><p>AARON</p><p>That there's like that second order effect. I do just think even though you're like, I think choosing this over AMF or whatever, global health fund or whatever decreased the chance of a random person. Not a random person, but probably decrease the total amount of expected money being given. I think that was just trumped by the fact that I think the animal welfare, the number I pull out of thin air is not necessarily not out of thin air, but very uncertain is like 1000 x or whatever relative to the standards you vote for. Quote, let it be known that there is a rabbit on the premises. Do they interact with other rodents?</p><p>MATT</p><p>Okay, so rabbits aren't rodents. We can put this on the pod. So rabbits are lagomorphs, which is.</p><p>AARON</p><p>Fuck is that?</p><p>MATT</p><p>It's a whole separate category of animals.</p><p>AARON</p><p>I just found out that elk were part of it. Like a type of deer. This is another world shattering insight.</p><p>MATT</p><p>No, but rabbits are evolutionarily not part of the same. I guess it's a family on the classification tree.</p><p>AARON</p><p>Nobody, they taught us that in 7th grade.</p><p>MATT</p><p>Yeah, so they're not part of the same family as rodents. They're their own thing. What freaks me out is that guinea pigs and rabbits seem like pretty similar, they have similar diet.</p><p>AARON</p><p>That's what I was thinking.</p><p>MATT</p><p>They have similar digestive systems, similar kind of like general needs, but they're actually like, guinea pigs are more closely related to rats than they are to rabbits. And it's like a convergent evolution thing that they ended up.</p><p>AARON</p><p>All mammals are the same. Honestly.</p><p>MATT</p><p>Yeah. So it's like, super weird, but they're not rodents, to answer your question. Rabbits do like these kinds of rabbits. So these are all pet rabbits are descended from European. They're not descended from American rabbits because.</p><p>LAURA</p><p>American rabbits like cotton tails. Oh, those are different.</p><p>MATT</p><p>Yeah. So these guys are the kinds of rabbits that will live in warrens. Warrens. So, like, tunnel systems that they like. Like Elizabeth Warren. Yeah. And so they'll live socially with other rabbits, and they'll dig warrens. And so they're used to living in social groups. They're used to having a space they need to keep clean. And so that's why they can be, like, litter box trained, is that they're used to having a warren where you don't just want to leave poop everywhere. Whereas American rabbits are more solitary. They live above ground, or in my understanding is they sometimes will live in holes, but only occupying a hole that another animal has dug. They won't do their hole themselves. And so then they are just not social. They're not easily litter box trained, that kind of stuff.
So all the domestic rabbits are bred from European ones.</p><p>AARON</p><p>I was thinking, if you got a guinea pig, would they become friends? Okay.</p><p>MATT</p><p>So apparently they have generally similar dispositions and it can get along, but people don't recommend it because each of them can carry diseases that can hurt the other one. And so you actually don't want to do it. But it does seem very cute to have rabbit.</p><p>AARON</p><p>No, I mean, yeah. My last pet was a guinea pig, circa 20. Died like, a decade ago. I'm still not over it.</p><p>MATT</p><p>Would you consider another one?</p><p>AARON</p><p>Probably. Like, if I get a pet, it'll be like a dog or a pig. I really do want a pig. Like an actual pig.</p><p>MATT</p><p>Wait, like, not a guinea pig? Like a full size pig?</p><p>AARON</p><p>Yeah. I just tweeted about this. I think that they're really cool and we would be friends. I'm being slightly sarcastic, but I do think if I had a very large amount of money, then the two luxury purchases would be, like, a lot of massages and a caretaker and space and whatever else a pig needs. And so I could have a pet.</p><p>MATT</p><p>Like, Andy organized a not EADC, but EADC adjacent trip to Rosie's farm sanctuary.</p><p>AARON</p><p>Oh, I remember this. Yeah.</p><p>MATT</p><p>And we got to pet pigs. And they were very sweet and seems very cute and stuff. They're just like, they feel dense, not like stupid. But when you pet them, you're like, this animal is very large and heavy for its size. That was my biggest surprising takeaway, like, interacting with the hair is not soft either. No, they're pretty coarse, but they seem like sweeties, but they are just like very robust.</p><p>LAURA</p><p>Have you guys seen Babe?</p><p>AARON</p><p>Yes.</p><p>LAURA</p><p>That's like one of the top ten movies of all time.</p><p>AARON</p><p>You guys watch movies? I don't know. Maybe when I was like four. I don't like.</p><p>LAURA</p><p>Okay, so the actor who played Farmer Hoggett in this movie ended up becoming a vegan activist after he realized, after having to train all of the animals, that they were extremely intelligent. And obviously the movie is about not killing animals, and so that ended up going pretty well.</p><p>AARON</p><p>Yeah, that's interesting. Good brown.</p><p>MATT</p><p>Okay, sorry. Yeah, no, this is all tracked. No, this is great. We are doing a drunk podcast rather than a sober podcast, I think, precisely because we are trying to give the people some sidetracks and stuff. Right. But I jokingly put on my list of topics like, we solved the two envelopes paradox once and for all.</p><p>AARON</p><p>No, but it's two boxing.</p><p>MATT</p><p>No. Two envelopes. No. So this is the fundamental challenge to questions about, I think one of the fundamental challenges to be like, you multiply out the numbers and the number.</p><p>AARON</p><p>Yeah, I feel like I don't have like a cached take. So just like, tell me the thing.</p><p>MATT</p><p>Okay.</p><p>AARON</p><p>I'll tell you the correct answer. Yeah.</p><p>MATT</p><p>Okay, great. We were leading into this. You were saying, like, animal charity is 1000 x game, right?</p><p>AARON</p><p>Conditional. Yeah.</p><p>MATT</p><p>And I think it's hard to easily get to 1000 x, but it is totally possible to get to 50 x if you just sit down and multiply out numbers and you're like, probability of sentience and welfare range.</p><p>AARON</p><p>I totally stand by that as my actual point estimate. Maybe like a log mean or something. I'm actually not sure, but.
Sorry, keep going.</p><p>MATT</p><p>Okay, so one line of argument raised against this is the two envelopes problem, and I'm worried I'm going to do a poor job explaining this. Laura, please feel free to jump in if I say something wrong. So two envelopes is like, it comes from the thing of, like, suppose you're given two envelopes and you're told that one envelope has twice as much money in it as the other.</p><p>AARON</p><p>Oh, you are going to switch back and forth forever.</p><p>MATT</p><p>Exactly. Every time. You're like, if I switch the other envelope and it has half as much money as this envelope, then I lose 0.5. But if it has twice as much money as this envelope, then I gain one. And so I can never decide on which envelope because it always looks like it's positive ev to switch the other. So that's where the name comes from.</p><p>AARON</p><p>I like a part that you're like, you like goggles?</p><p>MATT</p><p>So let me do the brief summary, which is that basically, depending on which underlying units you pick, whether you work in welfare range, units that are using one human as the baseline or one chicken as the baseline, you can end up with different outputs of the expected value calculation. Because it's like, basically, is it like big number of chickens times some fraction of the human welfare range that dominates? Or is it like some small probability that chickens are basically not sentient times? So then a human has like a huge human's welfare range is huge in chicken units, and which of those dominates is determined by which unit you work in.</p><p>AARON</p><p>I also think, yeah, this is not a good conducive to this problem. Is not conducive to alcohol or whatever. Or alcohol is not going to this issue. To this problem or whatever. In the maximally abstract envelope thing. I have an intuition that's something weird kind of probably fake going on. I don't actually see what the issue is here. I don't believe you yet that there's like an actual issue here. It's like, okay, just do the better one. I don't know.</p><p>MATT</p><p>Okay, wait, I'll get a piece of paper. Talk amongst yourselves, and I think I'll be able to show this is like.</p><p>LAURA</p><p>Me as the stats person, just saying I don't care about the math. At some point where it's like, look, I looked at an animal and I'm like, okay, so we have evolutionarily pretty similar paths. It would be insane to think that it's not feeling like, it's not capable of feeling hedonic pain to pretty much the same extent as me. So I'm just going to ballpark it. And I don't actually care for webs.</p><p>AARON</p><p>I feel like I've proven my pro animal bona fide. I think it's bona fide. But here, and I don't share that intuition, I still think that we can go into that megapig discourse. Wait, yeah, sort of. Wait, not exactly megapig discourse. Yeah, I remember. I think I got cyberbullyed by, even though they didn't cyberbully me because I was informed of offline bullying via cyber about somebody's, sorry, this is going to sound absolutely incoherent. So we'll take this part out. Yeah. I was like, oh, I think it's like some metaphysical appeal to neuron counts. You specifically told me like, oh, yeah, Mr. So and so didn't think this checked out. Or whatever. Do you know what I'm talking about?</p><p>LAURA</p><p>Yeah.</p><p>AARON</p><p>Okay. No, but maybe I put it in dawn or Cringey or pretentious terms, but I do think I'm standing by my metaphysical neurons claim here. 
Not that I'm super confident in anything, but just that we're really radically unsure about the nature of sentience and qualia and consciousness. And probably it has something to do with neurons, at least. They're clearly related in a very boring sciency way. Yeah. It's not insane to me that, like, the thing that, like, produces, or is directly, like, one to one associated with, like, a particular, like, amount, for lack of better terms, of conscious experience, is some sort of physical thing. Neurons jump out as the unit that might make sense. And then there's like, oh, yeah, do we really think all the neurons that control the tongue, like the motor function of the tongue, do those really make you, like, a quadrillion times more important than a seal or whatever? And then I go back to, okay, even though I haven't done any research on this, maybe it's just, like, opioids. Sorry, neuron counts for neurons directly involved in pretty low level hedonic sensations. The most obvious one would be literal opioid receptors. Maybe those are the ones that matter. This is like, kind of. I feel like we've sort of lost the plot a little.</p><p>MATT</p><p>Okay, this is like weird drunk math.</p><p>AARON</p><p>But I think your handwriting is pretty good.</p><p>MATT</p><p>I think I have it. So suppose we work in human units. I have a hypothetical intervention that can help ten chickens or one human, and we assume that when I say help, it's like, help them the same amount. So if I work in human units, I say maybe there is a 50% chance that a chicken is 0.01, like one hundredth, of a human and a 50% chance that a chicken and a human are equal. Obviously, this is a thought experiment. I'm not saying that this is my real world probabilities, but suppose that these are my credences. So I do out the EV. The thing that helps ten chickens. I say that, okay, in half of the world, chickens are one hundredth of a human, so helping ten of them is worth, like, 0.05. Sorry, helping ten of them is 0.1. And so 0.5 times 0.01 times ten is 0.05. And then in the other half of the world, I say that a chicken and a human are equal. So then my intervention helps ten chickens, which is like helping ten humans, so my total credence, like the benefit in that set of worlds with my 0.5 probability, is five. And so in the end, the chicken intervention wins because it has, on net, an EV of 5.05 versus one for the human intervention. Because the human intervention always helps one human. I switch it around and I say my base unit of welfare range, or, like moral weight, or whatever you want to say, is chicken units. Like, one chicken's worth of moral weight. So in half of the world, a human is worth 100 chickens, and then in the other half of the world, a human is worth one chicken. So I do out the EV for my intervention that helps the one human. Now, in the chicken units, and in chicken units, like, half of the time, that human is worth 100 chickens. And so I get 0.5 times 100 times one, which is 50. And then in the other half of the world, the chicken and the human are equal. And so then it's 0.5 times one, times one, because I'm helping one human, so that's 0.5. The EV is 50.5. And then I do out my EV for my chicken welfare thing. That's, like, ten chickens, and I always help ten chickens. And so it's ten as my units of good.</p>
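<p>A rough sketch of the unit flip being described here, in Python, using only the hypothetical numbers from the thought experiment above (an intervention helps either ten chickens or one human by the same amount, and a chicken is either one hundredth of a human or equal to one, with 50/50 credence):</p><pre><code># Toy numbers from the thought experiment above, not anyone's real estimates.
# Classic two envelopes: if the other envelope is half or double yours with equal
# odds, switching "looks like" 0.5 * 0.5 + 0.5 * 2 = 1.25x from either side.

credences = {0.01: 0.5, 1.0: 0.5}  # P(one chicken = w humans), w in {1/100, 1}

# EV in HUMAN units: help 10 chickens vs. help 1 human
ev_chickens = sum(p * 10 * w for w, p in credences.items())  # 0.05 + 5 = 5.05
ev_human = 1.0                                               # always 1 human
print(ev_chickens, ev_human)                                 # chickens "win"

# EV in CHICKEN units: under each theory a human is worth 1/w chickens
ev_chickens = 10.0                                           # always 10 chickens
ev_human = sum(p * (1 / w) for w, p in credences.items())    # 50 + 0.5 = 50.5
print(ev_chickens, ev_human)                                 # human "wins"
</code></pre><p>Same credences and same interventions either way; only the bookkeeping unit changes, and the ranking flips.</p>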
<p>So when I worked in human units, I said that the chickens won because it was 5.05 human units versus one human unit for helping the human. When I did it in chicken units, it was 50.5 to help the humans versus ten to help the chickens. And so now I'm like, okay, my EV is changing just based on which units I work in. And I think this is, like, the two envelopes problem that's applied to animals. Brian Tomasik has, like, a long post about this, but I think this is, like, this is a statement or an example of the problem.</p><p>AARON</p><p>Cool.</p><p>LAURA</p><p>Can I just say something about the moral weight project? It's like, really just. We ended up coming up with numbers, which I think may have been a bit of a mistake in the end, because I think the real value of that was going through the literature and finding out the similarities and the traits between animals and humans, and then there are a surprising number of them that we have in common. And so at the end of the day, it's a judgment call. And I don't know what you do with it, because that is, like, a legit statistical problem that arises when you put numbers on stuff.</p><p>MATT</p><p>So I'm pretty sympathetic to what you're saying here of, like, the core insight of the moral weight project is, like, when we look at features that could plausibly determine capacity to experience welfare, we find that a pig and a human have a ton in common. Obviously, pigs cannot write poetry, but they do show evidence of grief behavior when another pig dies. And they show evidence of vocalizing in response to pain and all of these things. I think coming out of the moral weight project being like, wow, under some form of utilitarianism, it's really hard to justify harms to pigs, like, pigs really morally matter, makes complete sense. I think the challenge here is when you get to something like black soldier flies or shrimp, where when you actually look at the welfare range table, you see that the number of proxies that they likely or definitely have is remarkably low. The shrimp number is hinging on. It's not hinging on a ton. They share a few things. And because there aren't that many categories overall, that ends up being in the median case. Like, they have a moral weight, like one thirtieth of a human. And so I worry that sort of your articulation of the benefit starts to break down when you get to those animals. And we start to like, I don't know what you do without numbers there. And I think those numbers are really susceptible to this kind of two envelopes problem.</p><p>AARON</p><p>I have a question.</p><p>MATT</p><p>Yeah, go.</p><p>AARON</p><p>Wait. This is supposed to be, like, 5.05 versus one?</p><p>MATT</p><p>Yeah.</p><p>AARON</p><p>And this is 50.5 versus ten? Yeah. It sounds like the same thing to me.</p><p>MATT</p><p>No, but they've inverted. In this case, the chickens won. So it's like when I'm working in human units, right? Like, half the time, I help.</p><p>AARON</p><p>If you're working in human units, then the chicken intervention looks 5.05 times better. Yes. Wait, can I write this down over here?</p><p>MATT</p><p>Yeah. And maybe I'm not an expert on this problem. This is just like something that tortures me when I try and sleep at night, not like a thing that I've carefully studied. So maybe I'm stating this wrong, but, yeah. When I work in human units, the 50% probability in this sort of toy example that the chickens and the humans are equal means that the fact that my intervention can help more chickens makes the EV higher.
And then when I work in the chicken units, the fact that a human might be 100 times more sentient than the chicken or more capable of realizing welfare, to be technical, that means the human intervention just clearly wins.</p><p>AARON</p><p>Just to check that I have this right, the claim is that in human units, the chicken intervention looks 5.05 times better than the human intervention. But when you use chicken units, the human intervention looks 5.05 times better than the chicken intervention. Is that correct?</p><p>MATT</p><p>Yes, that's right.</p><p>AARON</p><p>Wait, hold on. Give me another minute.</p><p>MATT</p><p>This is why doing this drunk was a bad idea.</p><p>AARON</p><p>In human.</p><p>LAURA</p><p>No, I think that's actually right. And I don't know what to do about the flies and shrimp and stuff like this. This is like where I draw my line of like, okay, so Lemonstone quote.</p><p>MATT</p><p>Tweeted me, oh, my God.</p><p>LAURA</p><p>I think he actually had a point of, there's a type of EA that is like, I'm going to set my budget constraint and then maximize within that versus start with a blank slate and allow the reason to take me wherever it goes. And I'm definitely in the former camp of like, my budget constraint is like, I care about humans and a couple of types of animals, and I'm just like drawing the line there. And I don't know what you do with the other types of things.</p><p>MATT</p><p>I am very skeptical of arguments that are like, we should end Medicare to spend it all on shrimp.</p><p>AARON</p><p>No one's suggesting that. No, there's like a lot of boring, prosaic reasons.</p><p>MATT</p><p>I guess what I'm saying is there's a sense in which, like, totally agreeing with you. But I think the challenge is that object level.</p><p>AARON</p><p>Yeah, you set us up. The political economy, I like totally by double it.</p><p>MATT</p><p>I think that there is. This is great. Aaron, I think you should have to take another shot for.</p><p>AARON</p><p>I'm sorry, this isn't fair. How many guys, I don't even drink, so I feel like one drink is like, is it infinity times more than normalize it? So it's a little bit handle.</p><p>MATT</p><p>I think there has to be room for moral innovation in my view. I think that your line of thinking, we don't want to do radical things based on sort of out there moral principles in the short term. Right. We totally want to be very pragmatic and careful when our moral ideas sort of put us really far outside of what's socially normal. But I don't think you get to where we are. I don't know, What We Owe the Future was, like, a book that maybe was not perfect, but I think it eloquently argues with the fact that the first person to be like, hey, slavery in the Americas is wrong. Or I should say really the first person who is not themselves enslaved. Because of course, the people who are actually victims of this system were like, this is wrong from the start. But the first people to be like, random white people in the north being like, hey, this system is wrong. Looks super weird. And the same is true for almost any moral innovation. And so you have to, I think saying, like, my budget constraint is totally fixed seems wrong to me because it leaves no room for being wrong about some of your fundamental morals.</p><p>LAURA</p><p>Yeah, okay. A couple of things here. I totally get that appeal 100%. At the same time, a lot of people have said this about things that now we look back at as being really bad, like the USSR.
I think communism ends up looking pretty bad in retrospect, even though I think there are a lot of very good moral intuitions underpinning it.</p><p>AARON</p><p>Yeah, I don't know. It's like, mostly an empirical question in that case, about what government policies do to human preference satisfaction, which is like, pretty. Maybe I'm too econ. These seem like very different questions.</p><p>LAURA</p><p>It's like we let our reason go astray, I think.</p><p>MATT</p><p>Right, we, as in some humans.</p><p>AARON</p><p>No, I think. Wait, at first glance. At first glance, I think communism and things in that vicinity seem way more intuitively appealing than they actually, or than they deserve to be, basically. And the notion of who is it? Like Adam Smith? Something Smith? Yeah, like free hand of the market or whatever. Invisible hand. Invisible free hand of the bunny ear of the market. I think maybe it's like, field intuitive to me at this point, because I've heard it a lot. But no, I totally disagree that people's natural intuition was that communism can't work. I think it's like, isn't true.</p><p>MATT</p><p>I'm not sure you guys are disagreeing with one.</p><p>AARON</p><p>Yeah.</p><p>MATT</p><p>Like, I think, Laura, if I can attempt to restate your point, is that to at least a subset of the people in the USSR at the time of the russian revolution, communism plausibly looked like the same kind of moral innovation as lots of stuff we looked back on as being really good, like the abolition of slavery or like, women's rights or any of those other things. And so you need heuristics that will defend against these false moral innovations.</p><p>AARON</p><p>Wait, no, you guys are both wrong. Wait, hold on. No, the issue there isn't that we disregard, I guess, humans, I don't know exactly who's responsible for what, but people disregarded some sort of deserving heuristic that would have gardened against communism. The issue was that, like, it was that, like, we had, like, lots of empirical, or, like, it's not even necessarily. I mean, in this case, it is empirical evidence, but, like, like, after a couple years of, like, communism or whatever, we had, like, lots of good evidence to think, oh, no, books like that doesn't actually help people, and then they didn't take action on that. That's the problem. If we were sitting here in 1910 or whatever, and I think it's totally possible, I will be convinced communism is, in fact, the right thing to do. But the thing that would be wrong is if, okay, five years later, you have kids starving or people starving or whatever, and maybe you can find intellectuals who claim and seem reasonably correct that they can explain how this downstream of your policies. Then doubling down is the issue, not the ex ante hypothesis that communism is good. I don't even know if that made any sense, I think.</p><p>LAURA</p><p>But we're in the ex ante position right now.</p><p>AARON</p><p>Yeah, totally. Maybe we'll find out some sort of, whether it's empirical or philosophical or something like maybe in five years or two years or whatever, there'll be some new insight that sheds light on how morally valuable shrimp are. And we should take that into account.</p><p>LAURA</p><p>I don't know. Because it's really easy to get good feedback when other fellow humans are starving to death versus. How are you supposed to judge? No, we've made an improvement.</p><p>AARON</p><p>Yeah, I do think. Okay. Yes. That's like a substantial difference. Consciousness is, like, extremely hard. 
Nobody knows what the hell is going on. It kind of drives me insane.</p><p>MATT</p><p>Whomst among us has not been driven insane by the hard problem of consciousness.</p><p>AARON</p><p>Yeah. For real. I don't know. I don't have to say. It's like, you kind of got to make your best guess at some point.</p><p>MATT</p><p>Okay, wait, so maybe tacking back to how to solve it, did you successfully do math on this piece of paper?</p><p>AARON</p><p>Mostly? No, mostly I was wordcelling.</p><p>MATT</p><p>I like the verb form there.</p><p>AARON</p><p>Yeah. No, I mean, like, I don't have, like, a fully thought out thing. I think in part this might be because of the alcohol. I'm pretty sure that what's going on here is just that, in fact, there actually is an asymmetry between chicken units and human units, which is that we have a much better idea. The real uncertainty here is how valuable a chicken is. There's probably somebody in the world who doubts this, but I think the common sense thing, and the thing that everybody assumes, is we basically have it because we're all humans and there's a lot of good reasons to think we have a decent idea of how valuable another human life is. And if we don't, it's going to be a lot worse for other species. And so just, like, taking that as a given, the human units are the correct unit because the thing with the unit is that you take it as given or whatever. The real uncertainty here isn't the relationship between chickens and humans. The real question is how valuable is a chicken? And so the human units are just like the correct one to use.</p><p>LAURA</p><p>Yeah, there's something there, which is the right theory is kind of driving a lot of the problem in the two envelope stuff. Because if you just chose one theory, then the units wouldn't really matter which one. The equality theory is like, you've resolved all the intertheoretic uncertainty, and so wouldn't that get rid of.</p><p>AARON</p><p>I don't know if you know, if there's, like. I'm not exactly sure what you mean by theory.</p><p>LAURA</p><p>Like, are they equal, the equality theory, versus the 1:100 theory? And we're assuming that each of them has a 50% probability. So if we resolved that, it's like we decide upon the 1:100 theory, then the problem goes away.</p><p>AARON</p><p>Yeah, I mean, that's true, but you might not be able to.</p><p>MATT</p><p>Yeah, I think it doesn't reflect our current state or, like.</p><p>AARON</p><p>No, just like taking as given the numbers, like, invented, which I think is fine for the illustration of the problem. Maybe a better example is what's, like, another thing, chicken versus a rabbit. I don't know. Or like rabbits. I don't know.</p><p>MATT</p><p>Chicken versus shrimp. I think it's like a real one. Because if you're the Animal Welfare Fund, you are practically making that decision.</p><p>AARON</p><p>Yeah. I think that becomes harder. But it's not, like, fundamentally different. And it's like the question of, like, okay, which actually makes sense, makes more sense to use as a unit. And maybe you actually can come up with two, if you can just come up with two different species for which, on the merits, they're equally valid as a unit and there's no issue anymore. It really is 50/50 in the end.</p><p>MATT</p><p>Yeah. I don't know. I see the point you're making. With humans, we know in some sense we have much more information about how capable of realizing welfare a human is. But I guess I treat this as, like, man, I don't know. 
It's like why all of my confidence intervals are just, like, massive on all these things is I'm just very confused by these problems and how much that.</p><p>AARON</p><p>Seems like I'm confused by this one. Sorry, I'm, like, half joking. It is like maybe. I don't know, maybe I'll be less confident. Alcohol or so.</p><p>MATT</p><p>Yeah, I don't know. I think it's maybe much more concerning to me the idea that working in a different unit changes your conclusion radically.</p><p>AARON</p><p>Than it is to you.</p><p>LAURA</p><p>Sometimes. I don't know if this is, like, too much of a stoner take or something like that.</p><p>AARON</p><p>Bring it on.</p><p>LAURA</p><p>I kind of doubt working with numbers at all.</p><p>MATT</p><p>Okay. Fit me well.</p><p>LAURA</p><p>It's just like when he's.</p><p>AARON</p><p>Stop doing that.</p><p>LAURA</p><p>I don't know what to do, because expected value theory. Okay, so one of the things that, when we hired a professional philosopher to talk about uncertainty.</p><p>MATT</p><p>Pause for a sec. Howie is very sweetly washing his ears, which is very cute in the background. He's like, yeah, I see how he licks his paws and squeezes his ear.</p><p>AARON</p><p>Is it unethical for me to videotape?</p><p>MATT</p><p>No, you're more than welcome to videotape it, but I don't know, he might be done.</p><p>AARON</p><p>Yeah, that was out.</p><p>MATT</p><p>Laura, I'm very sorry. No, yeah, you were saying you hired the professional philosopher.</p><p>LAURA</p><p>Yeah. And one of the first days, she's like, okay, well, is it the same type of uncertainty if we, say, have a one in ten chance of saving the life of a person we know for sure is conscious, versus we have a certain chance of saving the life of an animal that has, like, a one in ten probability of being sentient? These seem like different types.</p><p>AARON</p><p>I mean, maybe in some sense they're like different types. Sure. But what are the implications? It's not obviously the same.</p><p>LAURA</p><p>It kind of calls into question as to whether we can use the same mathematical approach for analyzing each of these.</p><p>AARON</p><p>I think my main take is, like, you got a better idea? That was like, a generic.</p><p>LAURA</p><p>No, I don't.</p><p>AARON</p><p>Yeah. It's like, okay, yeah, these numbers are, like, probably. It seems like the least bad option if you're going by intuition. I don't know. I think all things considered, sometimes using numbers is good because our brains aren't built to handle getting moral questions correct.</p><p>MATT</p><p>Yeah, I mean, I think that there is a very strong piece of evidence for what you're saying, Aaron, which is.</p><p>AARON</p><p>There's a whole paper on this. It's called The Unreasonable Effectiveness of Mathematics in the Natural Sciences.</p><p>MATT</p><p>Or this is. This is interesting. I was going to make sort of an easier or simpler argument, which is just like, I think the global health EA pitch of, like, we tend to get charity radically wrong.</p><p>AARON</p><p>Often.</p><p>MATT</p><p>Charities very plausibly do differ by 100x or 1000x in cost-effectiveness. And most of the time, most people don't take that into account and end up helping people close to them, or help an issue that's salient to them, or help whatever they've heard about most, and leave what I think is very difficult to argue as not being radically more effective opportunities on the table as a result. 
Now, I led into this saying that I have this very profound uncertainty when it comes to human versus animal trade-offs. So I'm not saying that, yes, we just should shut up and multiply. But I do think that is sort of like the intuition for why the stoner take is very hard for me to endorse: we know in other cases that actually bringing numbers to the problem leads to saving many more lives of real people who have all of the same hopes and dreams and fears and feelings and experiences as the people who would have been saved in alternate options.</p><p>LAURA</p><p>Isn't that just like still underlying this is we're sure that all humans are equal. And that's like our theory that we have endorsed.</p><p>AARON</p><p>Wait, what?</p><p>MATT</p><p>Or like on welfare ranges, the differences among different humans are sufficiently small in terms of capacity to realize welfare. That plausibly they are.</p><p>AARON</p><p>Yeah, I don't think anyone believes that. Does anyone believe that? Wait, some people think that everybody's hedonic range is the same.</p><p>LAURA</p><p>Randomly select a person who lives in Kenya. You would think that they have the same welfare range, a priori, as somebody.</p><p>MATT</p><p>Who lives in the US. The fundamental statistics describing their welfare range are the same.</p><p>AARON</p><p>Yeah, I think that's probably correct. It's also at an individual level, I think it's probably quite varied between humans.</p><p>LAURA</p><p>So I don't think we can say that we can have the same assumption about animals. And that's where it kind of breaks down, is we don't know the right theory to apply it.</p><p>AARON</p><p>Well, yeah, it's a hard question. Sorry, I'm being like kind of sarcastic.</p><p>LAURA</p><p>I think you have to have the theory right. And you can't easily average over theories with numbers.</p><p>MATT</p><p>Yeah, no, I mean, I think you're right. I think this is the challenge of the two envelopes problem, is exactly this kind of thing. I'm like four chapters into Moral Uncertainty. The book.</p><p>AARON</p><p>By Will.</p><p>MATT</p><p>Yeah. MacAskill, Ord and Bykvist. I'm probably getting that name wrong. But they have a third co-author who is not as much of like an.</p><p>AARON</p><p>Yeah, I don't know. I don't have any super eloquent take except that to justify the use of math right now. Although I actually think I could. Yeah, I think mostly it's like, insofar as there's any disagreement, it's like we're both pointing at the issue, pointing at a question, and saying, look at that problem. It's, like, really hard. And then I'm saying like, yeah, I know. Shit. You should probably just do your best to answer it. Sorry, maybe I'm just not actually adding any insight here or whatever, but I agree with you that a lot of these problems are very difficult, actually. Sorry, maybe this is, like, a little bit of a nonsense. Whatever. Getting back to the hard problem of consciousness, I really do think it feels like a cruel joke that we have to implicitly, we have to make decisions about potentially gigantic numbers of digital lives or, like, digital sentience or, you know, whatever you want to call it, without having any goddamn idea, like, what the fuck is up with consciousness. And, I don't know, it doesn't seem fair. Okay.</p><p>MATT</p><p>Yeah, wait, okay, so fundraiser. This is great. We've done all of these branching off things. So we talked about how much we raised, which was, like, an amount that I was quite happy with, though. 
Maybe that's, like, selfish because I didn't have to wear a shrink costume. And we talked about. Cause prio. We haven't talked about the whole fake OpenAI thing.</p><p>AARON</p><p>Fake open AI.</p><p>MATT</p><p>Wait. Like the entire.</p><p>AARON</p><p>Oh, well, shout out to I really. God damn it, Qualy. I hope you turn into a human at some point, because let it be known that Qualy made a whole ass Google Doc to plan out the whole thing and was, like, the driving. Yeah, I think it's fair to say Qualy was the driving force.</p><p>MATT</p><p>Yeah, totally. Like, absolutely had the concept, did the Google Doc. I think everybody played their parts really well, and I think that was very fun.</p><p>AARON</p><p>Yeah, you did. Good job, everybody.</p><p>MATT</p><p>But, yeah, that was fun. It was very unexpected. Also, I enjoyed that. I was still seeing tweets and replies that were like, wait, this was a bit. I didn't get this after the end of it, which maybe suggests. But if you look at the graph I think I sent in, maybe we.</p><p>AARON</p><p>Should pull up my. We can analyze my Twitter data and find out which things got how many views have.</p><p>MATT</p><p>Like, you have your text here. I think the graph of donations by date is, like, I sent in the text chat between.</p><p>AARON</p><p>Maybe I can pull it like, media.</p><p>MATT</p><p>Like you and me and Max and Laura. And it's very clear that that correlated with a. I think it's probably pretty close to the end.</p><p>AARON</p><p>Maybe I just missed this. Oh, Laura, thank you for making.</p><p>MATT</p><p>Yeah, the cards were amazing cards.</p><p>AARON</p><p>They're beautiful.</p><p>MATT</p><p>Oh, wait, okay, maybe it's not. I thought I said. Anyway, yeah, we got, like, a couple grand at the start, and then definitely at least five grand, maybe like, ten grand, somewhere in the five to ten range.</p><p>AARON</p><p>Can we get a good csv going? Do you have access to. You don't have to do this right now.</p><p>MATT</p><p>Wait, yeah, let me grab that.</p><p>AARON</p><p>I want to get, like, aerospace engineering grade cpus going to analyze the causal interactions here based on, I don't know, a few kilobytes of data. It's a baby laptop.</p><p>MATT</p><p>Yeah, this is what the charts looked like. So it's basically like there was some increase in the first. We raised, like, a couple of grand in the first couple of days. Then, yeah, we raised close to ten grand over the course of the quality thing, and then there was basically flat for a week, and then we raised another ten grand right at the end.</p><p>AARON</p><p>That's cool. Good job, guys.</p><p>MATT</p><p>And I was very surprised by this.</p><p>AARON</p><p>Maybe I didn't really internalize that or something. Maybe I was sort of checked out at that point. Sorry.</p><p>MATT</p><p>I guess. No, you were on vacation because when you were coming back from vacation, it's when you did, like, the fake Sama.</p><p>AARON</p><p>Yeah, that was on the plane.</p><p>LAURA</p><p>Okay, yeah, I remember this. My mom got there the next day. I'm like, I'm checking out, not doing anything.</p><p>AARON</p><p>Yeah, whatever. I'll get rstudio revving later. Actually, I'm gradually turning it into my worst enemy or something like that.</p><p>MATT</p><p>Wait, how so?</p><p>AARON</p><p>I just use Python because it's actually faster and catchy and I don't have to know anything. Also, wait, this is like a rant. This is sort of a totally off topic take, but something I was thinking about. 
No, actually, I feel like a big question is like, oh, are LLMs going to make it easy for people to do bad things that make it easier for me to do? Maybe not terrible things, but things that are, like, I don't know, I guess of dubious or various things that are mostly in the realm of copyright violation or pirating are not ever enforced, as far as I can tell. But, no, I just couldn't have done a lot of things in the past, but now I can, so that's my anecdote.</p><p>MATT</p><p>Okay, I have a whole python.</p><p>AARON</p><p>You can give me a list of YouTube URLs. I guess Google must do, like, a pretty good job of policing how public websites do for YouTube to md three sites, because nothing really just works very well very fast. But you can just do that in python, like, five minutes. But I couldn't do that before, so.</p><p>MATT</p><p>I feel like, to me, it's obvious that LLMs make it easier for people to do bad stuff. Exactly as you said because they let make in general make it easier for people to do stuff and they have some protections on this, but those protections are going to be imperfect. I think the much more interesting question in some sense is this like a step change relative to the fact that Google makes it way easier for you to do stuff and including bad stuff and the printing press made it way easier for you to do?</p><p>AARON</p><p>I wouldn't even call it a printing press.</p><p>MATT</p><p>I like think including bad stuff. So it's like, right, like every invention that generally increases people's capability to do stuff and share information also has these bad effects. And I think the hard question is, are LLMs, wait, did I just x.</p><p>AARON</p><p>No, I don't think, wait, did I just like, hold on. I'm pretty sure it's like still wait, how do I have four things?</p><p>LAURA</p><p>What is the benefit of LLMs versus.</p><p>AARON</p><p>You can ask it something and it tells you the answer.</p><p>LAURA</p><p>I know, but Google does this too.</p><p>AARON</p><p>I don't mean, I don't know if I have like a super, I don't think I have any insightful take it just in some sense, maybe these are all not the same, but maybe they're all of similar magnitude, but like object level. Now we live in a world with viruses CRISPR. Honestly, I think to the EA movement's credit, indefinite pause, stop. AI is just not, it's not something that I support. It's not something like most people support, it's not like the official EA position and I think for good reason. But yeah, going back to whatever it was like 1416 or whatever, who knows? If somebody said somebody invented the printing press and somebody else was like, yeah, we should, well I think there's some pretty big dis analysis just because of I guess, biotech in particular, but just like how destructive existing technologies are now. But if somebody had said back then, yeah, let's wait six months and see if we can think of any reason not to release the printing press. I don't think that would have been a terrible thing to do. I don't know, people. I feel like I'm saying something that's going to get coded as pretty extreme. But like x ante hard ex ante. People love thinking, exposed nobody. Like I don't know. I don't actually think that was relevant to anything. Maybe I'm just shit faced right now.</p><p>MATT</p><p>On one shot of vodka.</p><p>AARON</p><p>$15 just to have one shot.</p><p>MATT</p><p>I'll have a little.</p><p>AARON</p><p>Yeah. I think is honestly, wait. Yeah, this is actually interesting. 
Every time I drink I hope that it'll be the time that I discover that I like drinking and it doesn't happen, and I think that this is just because my brain is weird. I don't hate it. I don't feel, like, bad. I don't know. I've used other drugs, which I like. Alcohol just doesn't do it for me. Yeah, screw you, alcohol.</p><p>MATT</p><p>Yes. And you're now 15.99 cheaper or 50. 99 poorer.</p><p>AARON</p><p>Yeah, I mean, this will last me a lifetime.</p><p>MATT</p><p>You can use it for, like, cleaning your sink.</p><p>AARON</p><p>Wait, this has got to be the randomest take of all time. But, yeah, actually, like, isopropyl alcohol, top tier, disinfected. Because you don't have to do anything with it. You leave it there, it evaporates on its own.</p><p>MATT</p><p>Honestly. Yeah.</p><p>AARON</p><p>I mean, you don't want to be in an enclosed place or whatever. Sorry. To keep. Forget. This is like.</p><p>MATT</p><p>No, I mean, it seems like a good take to me.</p><p>AARON</p><p>That's all.</p><p>MATT</p><p>Yeah, this is like a very non sequitur.</p><p>AARON</p><p>But what are your guys' favorite cleaning suppliers?</p><p>MATT</p><p>Okay, this is kind of bad. Okay, this is not that bad. But I'm, like, a big fan of Clorox wipes.</p><p>AARON</p><p>Scandalous.</p><p>MATT</p><p>I feel like this gets looked down on a little bit because it's like, in theory, I should be using a spray cleaner and sponge more.</p><p>AARON</p><p>If you're like, art porn, what theories do you guys.</p><p>MATT</p><p>If you're very sustainable, very like, you shouldn't just be buying your plastic bucket of Clorox infused wet wipes and you're killing the planet.</p><p>AARON</p><p>What I thought you were going to say is like, oh, this is like germaphobe coating.</p><p>MATT</p><p>No, I think this is fine. I don't wipe down my groceries with Clorox wipes. This is like, oh, if I need to do my deep clean of the kitchen, what am I going to reach for? I feel like my roommate in college was very much like, oh, I used to be this person. No, I'm saying he was like an anti wet wipe on sustainability reasons person. He was like, oh, you should use a rag and a spray cleaner and wash the rag after, and then you will have not used vast quantities of resources to clean your kitchen.</p><p>AARON</p><p>At one point, I tweeted that I bought regular. Actually, don't do this anymore because it's no longer practical. But I buy regularly about 36 packs of bottled water for like $5 or whatever. And people actually, I think it was like, this is like close to a scissor statement, honestly. Because object level, you know what I am, right. It's not bad. For anything. I'm sorry. It just checks out. But people who are normally pretty technocratic or whatever were kind of like, I don't know, they were like getting heated on.</p><p>MATT</p><p>I think this is an amazing scissor statement.</p><p>AARON</p><p>Yeah.</p><p>MATT</p><p>Because I do.</p><p>AARON</p><p>I used to be like, if I were to take my twelve year old self, I would have been incredibly offended, enraged.</p><p>MATT</p><p>And to be fair, I think in my ideal policy world, there would be a carbon tax that slightly increases the price of that bottled water. Because actually it is kind of wasteful to. There is something, something bad has happened there and you should internalize those.</p><p>AARON</p><p>Yeah, I think in this particular, I think like thin plastic is just like not. Yeah, I don't think it would raise it like very large amount. 
I guess.</p><p>MATT</p><p>I think this is probably right that even a relatively high carbon tax would not radically change the price.</p><p>LAURA</p><p>It's not just carbon, though. I think because there is land use implicated in this.</p><p>AARON</p><p>No, there's not.</p><p>LAURA</p><p>Yeah, you're filling up more landfills.</p><p>AARON</p><p>Yeah, I'm just doing like hearsay right now. Heresy.</p><p>MATT</p><p>Hearsay. Hearsay is going to be whatever. Well, wait, no, heresy is, if you're arguing against standardly accepted doctrine. Hearsay is like, well, it's both. Then you're just saying shit.</p><p>AARON</p><p>I'm doing both right now. Which is that actually landfills are usually like on the outskirts of town. It's like, fine.</p><p>LAURA</p><p>They're on the outskirts of town until the town sprawls, and then the elementary school is right on top of one.</p><p>AARON</p><p>Yeah, no, I agree in principle. I don't have a conceptual reason why you're wrong. I just think basically, honestly, the actual heuristic operating here is that I basically outsource what I should pay attention to, to other people. And since I've never seen a LessWrong post or gave Warren post about how actually landfills are filling up, it's like, fine, probably.</p><p>LAURA</p><p>No, this is me being devil's advocate. I really don't care that much about personal waste.</p><p>MATT</p><p>Yeah, I mean, I think plausibly here, there is, right? So I think object level, the things that matter, when we think about plastic, there is a carbon impact. There is a production impact of like, you need to think about what pollution happened when the oil was drilled and stuff. And then there is like a disposal impact. If you successfully get that bottle into a trash can, for what it's worth.</p><p>AARON</p><p>My bottles are going into their goddamn trash can.</p><p>MATT</p><p>Ideally a recycling. No, apparently recycling, I mean, recycling is.</p><p>AARON</p><p>Well, I mean, my sense is like apparently recycling. Yeah, I recycle metal. I think I do paper out of convenience.</p><p>MATT</p><p>If you successfully get that bottle handled by a waste disposal system that is properly disposing of it, rather than, like, throwing it on a slap, then I think my guess is that the willingness to pay, or if you really crunch the numbers really hard, it would not be, once again, a huge cost for the landfill costs. On the flip side, if you throw it in a river, that's very bad. My guess is that it would be right for everyone on Twitter to flame you for buying bottles and throwing them in a river if you did that.</p><p>AARON</p><p>What is the impact on wild animal welfare in equilibrium? No, just kidding. This is something. Yeah, don't worry, guys. No, I was actually the Leave No Trace coordinator for my Boy Scout troop. It's actually kind of ironic because I think probably like a dumb ideology or.</p><p>LAURA</p><p>Whatever, it's a public good for the other people around you to not have a bunch of garbage around on that trail.</p><p>AARON</p><p>Yeah, I do think I went to an overnight training for this. They're very hardcore, but basically conceptually incoherent people. I guess people aren't conceptually incoherent. Their concepts are incoherent who think it's really important that you don't impact, that you walk through mud instead of expanding the trail or whatever. This is not even worth the time right now. 
Let's figure out how many digital shrimp needs by the heat up of the universe.</p><p>MATT</p><p>Yeah, I mean, I will say it's probably worth mentioning here, right, that in practice, your carbon and land use footprint is actually really small relative to the average. Yes, you buy a bunch of bottled water, but you live in a dense, walkable community and you rarely drive and all of these things. So in practice, all the people who are roasting you on Twitter for buying.</p><p>AARON</p><p>All the water, what they should roast me for is buying grass. Diet is. This is actually plausibly like the worst thing that I do from a climate standpoint. Yeah, I think this is probably mean. I've given my take on listen, if you're one of the seven people listening to this, you probably know what it is.</p><p>MATT</p><p>Yeah, it is true that you have. Two of your regular listeners are here on the podcast with you, which reduces the audience.</p><p>AARON</p><p>Yeah, I think my sister, the episode of my sister is going to get some. It's going to get some people.</p><p>MATT</p><p>Oh, yeah, me too.</p><p>LAURA</p><p>I want to hear a normie also.</p><p>AARON</p><p>I don't know. I get along great, honestly. Should I call her up? But her friend is like, she's the perfect intermediary my sister and I. Yeah, I guess we don't talk about. I don't know, she's like much more, like happy go lucky. Like less, I don't know, like nerdy person or whatever. But yeah, like our friend. You know what? I'm friends with Annie, too. Annie is like a good intermediary.</p><p>MATT</p><p>Can I just say, I think my favorite pigeonhower moment. Okay, I have a couple favorite pigeonhole. One is when you literally said, like, hedonic utilitarianism and stuff. That's like in the transcript of the Un Max episode. And it's just like the most perfect distillation of pigeon hour is that line. But then also when you just left in an episode with you and Sarah, where you just left in the part where you're discussing whether she could sleep on your couch if she visits DC.</p><p>AARON</p><p>How weird.</p><p>MATT</p><p>And I think at some point you're like, we're going to edit this out.</p><p>AARON</p><p>I always do that and then don't do it. Whatever.</p><p>MATT</p><p>Talking about travel logistics.</p><p>AARON</p><p>It'S like not a big deal.</p><p>MATT</p><p>You can tell that last bit of vodka really got me.</p><p>AARON</p><p>But, yeah, I feel like this isn't especially, like, insane. I don't know. Well, no, mine was like, I took it out eventually, but I left in Nathan's 15 minutes bath or, sorry, five minute bathroom break. I actually felt kind of bad because it was honestly as normal and as minimally embarrassing as it could have been. Like, oh, yeah, I'm going to use the bathroom now. And there was like five minutes of silence. Yeah.</p><p>MATT</p><p>And I have been amused by the fact that this is like, Sarah has started her own podcast now. You have inspired others with your hedonic utilitarianism, honestly, and your travel.</p><p>AARON</p><p>I really shouldn't be. As people can tell, I'm not the most eloquent person. Alcohol doesn't help, but I'm never the most eloquent person. But you know what it's like I'm creating awareness of people with.</p><p>MATT</p><p>Well, I mean, what I was going to say is, in some sense, it is this very unique take on the genre to leave in the bathroom break and the discussion of travel arrangements. Right. I'm laughing now, but I sort of genuinely got a kick out of those things. 
And in some sense, you're subverting the norms. This is actually art.</p><p>AARON</p><p>Yes, it is. Yeah. Honestly, I think I feel like I mentioned this before, but all this, my ethos here, I feel like very self important. Discussing my ethos as an artist, as a creator, as they say, is basically doing the opposite of what I tried to do the first time when I started the podcast, which is when I interviewed Rob Wiblin and spent hundreds, honestly, it was a good episode. It was a two hour episode, probably spent not something in the hundreds of hours. I think I guesstimated maybe like 250 hours or something in total for two hour episode. It was cool, right? I think the output was awesome, but that was a lot of effort.</p><p>MATT</p><p>Wait, okay, so how do you get to 250 hours?</p><p>AARON</p><p>So I did a bunch of research.</p><p>MATT</p><p>That's like six weeks of work.</p><p>AARON</p><p>So the stages, I pulled 250 out of my, like. But I do remember, like, I do remember like the, like the hundreds thing, like it's probably at least 100 hours. But good question. No, I think most of it was like going, reading all of Rob's old blog posts and every time he was interviewed on another podcast and taking notes and coming with questions based on that stuff. And then.</p><p>MATT</p><p>What even, like then you presumably recorded and then you edit. Well, did you edit or did.</p><p>AARON</p><p>No, they edited. And then there was a whole back and forth before we were putting together what questions, basically having sketching an outline, like a real talk about.</p><p>MATT</p><p>Sure, I've wondered about this for 80k, like how much the guests prep, the specific questions they are asked.</p><p>AARON</p><p>I mean, I don't think it's a secret. Maybe, maybe, I don't know if somebody like anonymous ADK Gmail account or email account, like send says, better take that shit out, this is top secret than I will. But no, I don't think it's secret to say that. At least the questions are decided ahead of time. Well, not decided unilaterally by one party. I think the main thing is, just like they're not trying to, the ebook is not conducive to what you would do if you wanted to see if a politician was lying or whatever. It's supposed to be like eliciting people's views. They do this election, I'm being a stand because I don't like a lie, but I really do think it's like a good show or whatever. And they do this election at the level of taking good people to come on. They want to hear the expected stuff. Honestly, I do think maybe it's a little bit on the margin, it should be a little bit more pushback or whatever during the interview to stuff. But no, I think in general, and maybe the more important thing is making it so that people can take out stuff after the fact. I don't know, this makes total sense. I don't see how people don't do that. You know what I mean? Why would you not want to make everybody chill and try to catch them saying something they don't want to say on a live feed or whatever?</p><p>MATT</p><p>I mean, I think it all depends on what context, right? Because clearly there are venues where you're trying to. Okay, I'm going to make an analogy with a job interview, right? Where it's like you wouldn't want a candidate to be able to edit the transcript, the job interview after the fact, to make themselves look good. Because the goal of the job interview is to understand the candidate's strengths and weaknesses. 
And that requires sort of pushing them and putting them on the spot a little bit in ways that they may, after the fact, be like, I wish I hadn't said this thing. Because your goal is to elicit an understanding of how they will do in a workplace where they're going to be asked to do various challenging things that they can't go back and edit afterwards. So, too, with a politician where you're trying to decide, do I want this person to be president? You don't want them to get to present their best face in all contexts.</p><p>AARON</p><p>No. Yeah, I totally agree. It seems like most podcast episodes don't have those features or whatever. There's good reason to get an unbiased view, or it's important to convey full information about what actually went down during the actual live recording. But I guess compare a recording to writing. Nobody thinks that authors have a duty to publish. If they wrote a sentence and then took it out, nobody thinks that that should go in a footnote or something. You know what I mean? So it seems like very intuitive to me that. And I think honestly, substantively, the most important thing is that people are just more relaxed, more willing to say things that are closer to the line, because then they can think about it later, say, like, oh, yeah, maybe we can take this out instead of just ignoring whole sections of topics entirely. I. It. I might have to cut this out. Yeah.</p><p>MATT</p><p>No, it's like getting late. We have recorded.</p><p>AARON</p><p>This is actually kind of sad. That's only 11:00 p.m. I feel like we should be, like, raging right now.</p><p>LAURA</p><p>But except for on behalf of the machine, likewise.</p><p>AARON</p><p>Actually, not even on behalf of the.</p><p>MATT</p><p>Machine, honestly, you're not a Mayor Pete stan.</p><p>AARON</p><p>Wait, what does that have to do with.</p><p>MATT</p><p>Oh, the classic meme is Mayor Pete standing in front of a whiteboard. And the whiteboard has been edited to say, what if we rage on behalf of the machine? It's like a commentary on Mayor Pete's campaign.</p><p>AARON</p><p>My mom definitely I had do not deserve one. She definitely clicked send. Anyway. It's not even that important. It's not like say, tell me if you're alive. It's like a general.</p><p>MATT</p><p>Are we going to leave that bit in the podcast?</p><p>AARON</p><p>Yeah, I'm actually hoping to interview her about tennising me. Just kidding about her. Like I probably shouldn't say that in case it never actually happens.</p><p>MATT</p><p>Do we want to do a wrap here and do we have any final things we want to say that we.</p><p>AARON</p><p>Can, like we can always do?</p><p>LAURA</p><p>We want to put confidence intervals on the number of people who are going to listen.</p><p>MATT</p><p>Yes.</p><p>AARON</p><p>Should I pull up the data? Episode data or whatever about how many people. Oh, no, because. Sorry. Wait, I do have access to that. Moving to Substack from Spotify, in general.</p><p>MATT</p><p>People always underestimate how big the variance is and everything. So I think I need to put some probability on this going, like, weirdly viral. It's like some subreddit dedicated to sexy men's voices discovers Aaron. Yes, correct.</p><p>AARON</p><p>In fact, they're already doing it right, as we see.</p><p>MATT</p><p>And so then this gets like reposted a thousand times.</p><p>AARON</p><p>Wait, it doesn't give me. Oh, wait, no, sorry. Podcast. 
So Best of Pigeon Hour, the most recent episode, has 65 downloads.</p><p>LAURA</p><p>Nice.</p><p>AARON</p><p>Wait, what's that? Wait, so some of the old ones are like, one of the ones that I was on like a while ago, like before, pre-Pigeon Hour, had two and 161. I guess it's the peak. I think that was the, even though it's like mostly hosted elsewhere, I guess. Oh, Daniel Salons has one, I think. Laura, you were. Wait, no, this isn't, I think it must be wrong because it says that you only have 29, but that must be since we moved. I moved it to Substack. So I don't think this is actually correct.</p><p>MATT</p><p>I can only say there was at least one because I listened to it. That's the only data I have daily.</p><p>AARON</p><p>What happened? Yeah, so the most downloads in a single day was 78 on January 24.</p><p>MATT</p><p>That's surprisingly many.</p><p>LAURA</p><p>Yeah.</p><p>AARON</p><p>What happened? Okay, wait, why was that on January 20? Oh, that was Best of Pigeon Hour. It's not insane. Whatever.</p><p>LAURA</p><p>All right, my CI is like 15 to 595.</p><p>MATT</p><p>Okay. My 95% is going to be wider for weird tail things. So I'm going to say like ten to 5000. Wait, I don't think there's a 5% chance, or I don't think there's a 2.5% chance, that there are more than 5000 listens. I'll go ten to 1000.</p><p>AARON</p><p>I will go eleven to 990 so I can sound better. Wait, in fact we're both right. Mine is more right.</p><p>MATT</p><p>Okay, I just want to note, Aaron, that it is a crime that you don't use the fuck your life bing bong throwing the pigeon video as your intro.</p><p>AARON</p><p>What is anybody talking about right now?</p><p>MATT</p><p>Okay, give me a sec, give me a sec. I sent this to you at some point. Right here we are on the perfect and throws a pigeon at them.</p><p>AARON</p><p>I wonder, hopefully the pigeon is doing well. Wait, really hope. Yeah, this is all working, et cetera.</p><p>MATT</p><p>Honestly, if this wasn't recording, I will be so happy.</p><p>AARON</p><p>No, that's what happened in my first episode with Nathan. Paul Nathan or UK Nathan? Yes, forecaster Nathan. Yes, prediction market Nathan. Yeah, no, I mean, I felt bad, like it was totally my fault. I was like, yeah, that's why I pay $10.59 a month for fancy Google Meet and two terabytes. Cool. 
Can I subscribe?</p><p>MATT</p><p>Yes, you can press stop all.</p>]]></content:encoded></item><item><title><![CDATA[#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all]]></title><description><![CDATA[Episode #8 of Pigeon Hour]]></description><link>https://www.aaronbergman.net/p/max-alexander-and-i-solve-ethics</link><guid isPermaLink="false">https://www.aaronbergman.net/p/max-alexander-and-i-solve-ethics</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Mon, 06 Nov 2023 03:07:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138626668/02a045495bd8fabd240f995e9f5d70cd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<ul><li><p>Follow <a href="https://twitter.com/absurdlymax">&#8288;Max on Twitter&#8288;</a></p></li><li><p>And read his <a href="https://scouting-ahead.com/">&#8288;blog&#8288;</a></p></li><li><p>Listen here or on <a href="https://open.spotify.com/episode/6Soapp5xkiEkvoGI0W5EOk?si=fc23f9e5780444e2">Spotify</a> or <a href="https://podcasts.apple.com/us/podcast/pigeon-hour/id1693154768?i=1000633769531">Apple Podcasts</a> </p></li><li><p>RIP Google Podcasts &#129702;&#129702;&#129702;</p></li></ul><p><strong>Summary</strong></p><p>In this philosophical and reflective episode, hosts Aaron and Max engage in a profound debate over the nature of consciousness, moral realism, and subjective experience. Max, a skeptic of moral realism, challenges Aaron on the objective moral distinction between worlds with varying levels of suffering. They ponder the hard problem of consciousness, discussing the possibility of philosophical zombies and whether computations could account for consciousness. As they delve into the implications of AI on moral frameworks, their conversation extends to the origins of normativity and the nonexistence of free will.</p><p>The tone shifts as they discuss practical advice for running an Effective Altruism group, emphasizing the importance of co-organizers and the balance between being hospitable and maintaining normalcy. They exchange views on the potential risks and benefits of being open in community building and the value of transparency and honest feedback.</p><p>Transitioning to lighter topics, Max and Aaron share their experiences with social media, the impact of Twitter on communication, and the humorous side of office gossip. They also touch on the role of anonymity in online discussions, pondering its significance against the backdrop of the Effective Altruism community.</p><p>As the episode draws to a close, they explore the consequences of public online behavior for employment and personal life, sharing anecdotes and contemplating the broader implications of engaging in sensitive discourses. Despite their digressions into various topics, the duo manages to weave a coherent narrative of their musings, leaving listeners with much to reflect upon.</p><h2>Transcript</h2><p><strong>AARON:</strong>&nbsp;Without any ado whatsoever. Max Alexander and I discuss a bunch of philosophy things and more.</p><p><strong>MAX:</strong>&nbsp;I don't think moral realism is true or something.</p><p><strong>AARON:</strong>&nbsp;Okay, yeah, we can debate this.</p><p><strong>MAX:</strong>&nbsp;That's actually an issue then, because if it's just the case that utilitarianism and this an axiology, which is true or something, whether or not I'm bothered by or would make certain traits personally doesn't actually matter. 
But if you had the godlike AI or like, I need to give it my axiological system or something, and there's not an objective one, then this becomes more of a problem that you keep running into these issues or something.</p><p><strong>AARON:</strong>&nbsp;Okay, yeah, let's debate. Because you think I'm really wrong about this, and I think you're wrong, but I think your position is more plausible than you think. My position is probably. I'm at like 70%. Some version of moral realism is true. And I think you're at, like, what? Tell me. Like, I don't know, 90 or something.</p><p><strong>MAX:</strong>&nbsp;I was going to probably 99% or something. I've yet to hear a thing that's plausible or something here.</p><p><strong>AARON:</strong>&nbsp;Okay, well, here, let's figure it out once and for all. So you can press a button that doesn't do Nick. The only thing that happens is that it creates somebody in the world who's experiencing bad pain. There's no other effect in the world. And then you have to order these two worlds. There's no normativity involved. You only have to order them according to how good they are. This is my intuition pump. This isn't like a formal argument. This is my intuition pump that says, okay, the one without that suffering person and no other changes. Subjectively, not subjectively. There's a fact of the matter as to which one is better is, like, not. I mean, I feel like, morally better and better here just are synonyms. All things considered. Better, morally better, whatever. Do you have a response, or do you just want to say, like, no, you're a formal argument.</p><p><strong>MAX:</strong>&nbsp;What makes this fact of the matter the case or something like that?</p><p><strong>AARON:</strong>&nbsp;Okay, I need to get into my headspace where I've done this or had this debate before. I do know. I'll defer to Sharon Roulette not too long ago, like ADK podcast guest who basically made the case for hedonic moral realism and hedonic value being the one thing that intrinsically matters and a moral realist view based on that. And I basically disagree with her. Okay. It's like settling in now. Yeah. So it is just the fact of the matter that pleasure is moral is good. And if you say that's not true, then you're wrong and pain is bad. And if you say that that's not true, you're just wrong. That's kind of the argument. That's it. And then I can build on top of it. Where do you get ordering of the world from? But that's the core of the argument here.</p><p><strong>MAX:</strong>&nbsp;Yeah. I think you need an explanation for why this is the fact of the matter or something.</p><p><strong>AARON:</strong>&nbsp;Okay. I mean, do I need an explanation for why one equals one or something like that? Do you need an explanation?</p><p><strong>MAX:</strong>&nbsp;Yes, I think yes. Really? Because we take this to be the case or something, but the symbols one plus one equals two or something is like by itself not true or something. It's like just a bunch of lines, really, or something. Like there's all these axioms and things we build on with the mathematical system and you could do other ones. There are like a bunch of other systems.</p><p><strong>AARON:</strong>&nbsp;I guess if you're a true epistemological nihilist and you think there are no statements that are true, then I'm probably not going to convince you. 
Is that the case for you?</p><p><strong>MAX:</strong>&nbsp;I don't think it's the case that there aren't things that are true or something.</p><p><strong>AARON:</strong>&nbsp;Do you think there's anything that is true? Can you give me an example? Is there a bed behind you?</p><p><strong>MAX:</strong>&nbsp;I'll say yes, but that's probably couched. You could probably, I think, represent the universe or something as a bunch of a matrix of atom positions or subatomic particle positions and these things, and maybe the rules that govern the equations that govern how they interact or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, I agree.</p><p><strong>MAX:</strong>&nbsp;Make claims that are truth valuable based on that matrix or something. And then you could be like, we can then draw fuzzy concepts around certain things in this matrix and then say more true and true things or whatever.</p><p><strong>AARON:</strong>&nbsp;So I think our disagreement here is that maybe, I don't know if it's a disagreement, but the hard problem of consciousness introduces the fact that that description of the world is just not complete. You have subjective experience also. Are you a phenomenal realist?</p><p><strong>MAX:</strong>&nbsp;What's the definition of that again?</p><p><strong>AARON:</strong>&nbsp;So you think qualia is like a real legit thing?</p><p><strong>MAX:</strong>&nbsp;Is qualia just experience or something?</p><p><strong>AARON:</strong>&nbsp;I'm sorry, I feel like I'm just depending, assuming that, you know, every single term that has ever been used in philosophy. I feel like the thing that I want to say is qualia is like. Yeah, I'll say it's like subjective experience, basically. But then people will say, like, oh, qualia exists, but not in the sense that people normally think. And. But I want to use the strong version of qualia that people argue about. It's real. It's, like, genuine. There's no illusions going on. There's no such thing as functional qualia. If there's functional pain, it is like a different thing than what people mean when they say pain. Most people mean when they say pace and pain. Most of the time, there's, like, a real, genuine, legit subjective experience going on. Do you think this thing is real?</p><p><strong>MAX:</strong>&nbsp;I would say yes, but it does seem like what I would say subjective experiences or something is like a type of computation or something.</p><p><strong>AARON:</strong>&nbsp;I actually lean towards functionalism. Are you familiar with functionalism and theory of philosophy of mind or whatever?</p><p><strong>MAX:</strong>&nbsp;Yeah. Wouldn't that just be the. I think that's what I said. Right. It's just computations or whatever.</p><p><strong>AARON:</strong>&nbsp;So I'm actually not super sure about this. So I apologize to the philosophical community if I'm getting this wrong, but my sense is that when people say functionalism, sometimes they mean that, as in not exactly empirical, but sort of empirical fact of the world is that if you have some computations, you get Qualia, and other people mean that they are just identical. They are just the same thing. There's no, like, oh, you get one and you get the other. They're just the same thing. And I think to say that computations are identical just mean the same thing as qualia is just not true, because it's at least conceivable that. Tell me if you disagree with this. 
I claim that it's at least conceivable that you have computers that do some sort of computation that you hypothesize might be conscious, but they are not, in fact, conscious. And I think this is conceivable.</p><p><strong>MAX:</strong>&nbsp;Yeah, I agree. Though I would say when it is the case, something, it doesn't seem that means there's something besides just computations going on, or, like, the specific type of computations, like, really, what you said there is. It's conceivable you could have computations that.</p><p><strong>AARON:</strong>&nbsp;Look like we want. Okay, yeah, sorry. You're right. Actually, what I mean is that. Sorry, not what I mean, but what I should have said is that it is conceivable that functionalism is false. Meaning that you can get. Meaning that you can have two sets of systems doing computations, and one of them has qualia and the other one does not. One of them is conscious, the other one is not. Do you think this is conceivable?</p><p><strong>MAX:</strong>&nbsp;Well, do you mean like identical computations or something? I think that'd be the necessary thing or something, because my computer is doing computations right now, but I don't think it's conscious. And I'm doing computations right now, and I do think I'm conscious.</p><p><strong>AARON:</strong>&nbsp;So if you think that you could run your brain's program on an arbitrarily powerful large computer, do you think it's conceivable that that hypothetical computer would not have conscious experience?</p><p><strong>MAX:</strong>&nbsp;I do, but I think this is like the real question. Or the reason I would say it's conceivable that's not having conscious experience is because I would think your simulation just isn't doing the right sort of thing. You think it is, but it's not. For whatever reason, carbon and atoms have interactions that we didn't realize, but they do.</p><p><strong>AARON:</strong>&nbsp;Yeah, actually, this is a good point. I'm actually very sympathetic to this. When people say, I feel like functionalism and I forget what the term is, but substance based, at some point you're just going to get down to like, oh, no, it needs to be like, yeah, if you really want the quarks to be doing the exact same things, then you're just getting two identical physical systems. Physical. I don't like that word so much, but whatever, I'll use it. I think we just reinvented the zombies thing. Like, are zombies conceivable? And I claim yes.</p><p><strong>MAX:</strong>&nbsp;I would think no, or something. Okay, I guess I don't know the P zombie example super well, but I would guess at a certain level of your ability to, like, if you knew all the physical things about the world, like all the physical things that were knowable, p zombies would not be possible for something like you'd be able to tell.</p><p><strong>AARON:</strong>&nbsp;But are they conceivable? I think that's the crux. I think.</p><p><strong>MAX:</strong>&nbsp;I think P zombies are like an epistemic problem or something. Right? Is probably what I would say. It's a lack of knowledge or something. Like if you knew all the relevant things and you would be able to.</p><p><strong>AARON:</strong>&nbsp;Tell, maybe, yeah, I think the fact that it's an epistemic problem is like a demonstration of its conceivability. I don't think it's conceivable that there's like, wait, what's a good example of something that's inconceivable? I feel like they're all very abstract, like a circular square or something. But I don't have a better example at hand. 
But no, one thing is, you don't know that I'm. I guess you know that you're conscious, right. But, like, it's really, like, if you just stipulate that there's another version of you that is, like, yeah, in another Everettian branch, maybe that's not a good example. I don't know. Because. I don't know. I feel like it's just weird with physics, but as close as you could possibly make it to you in just, like, whatever way, just, like, throw in all the stipulations you want. Do you think there's any chance it's not conscious, but you are?</p><p><strong>MAX:</strong>&nbsp;I guess I'm pretty confident it would.</p><p><strong>AARON:</strong>&nbsp;Be conscious or something.</p><p><strong>MAX:</strong>&nbsp;Like, Spock teleporter thing. Yeah, I think we're probably both conscious. Or I was wrong about me being conscious in the first place, I guess. Or there is a very small chance that's incredibly. Probably not worth mentioning. But I'm already mentioning it, that I've lost my mind and the whole universe is just my brain making shit up or whatever. And so it's all just like, I'm.</p><p><strong>AARON:</strong>&nbsp;The only real thing connecting this back to moral realism. Yes. My original claim was that the matrix of quarks and positions and physical laws is just not a complete description of the universe. And I think that you want to say that it is and that valence. So, like, valence is just encoded in that description somewhere.</p><p><strong>MAX:</strong>&nbsp;Yeah. Or, like, the thing you're interested in is, like, because you have to zoom in or something. I don't know if you've heard of Spinoza's God or something, but he got excommunicated from Judaism for this.</p><p><strong>AARON:</strong>&nbsp;I didn't even know you could get excommunicated.</p><p><strong>MAX:</strong>&nbsp;I didn't know either, but he did, and they still aren't over it. Somebody wrote a letter to someone high up in whatever Jewish thing, and they were like, documentary, and they're like, we'll never talk about Spinoza. So his claim is, like, God is the universe. So if the totality of all physical interactions and matter is, like, the universe, he probably wouldn't put it that way, but that's his idea. And so if that's true, the rock over there is not conscious, but I am. And so it's like an emergent property of a smaller part of a system, is consciousness or something. It's not just like the universe being conscious. Yeah, I would say that it is. The smaller. It's just a bunch of interactions, is what qualia is or whatever.</p><p><strong>AARON:</strong>&nbsp;Yeah. I don't think, unfortunately, that we're going to solve this here. I actually do know what we disagree on now, more fundamentally. I don't know what the appropriate term is, but I think that you're wrong about valence just being, I guess, logically implied by the whole mathematical thing, like all the quarks and physical laws. You think it's like a logical necessity that falls out of that, that valence is real and happens, in fact.</p><p><strong>MAX:</strong>&nbsp;I don't know if logical. I don't know enough logic, I guess, to say this is the case, but it is like given the physical rules governing our universe and all the.</p><p><strong>AARON:</strong>&nbsp;Oh, no. You know, but they're not given. That was a part of the description.</p><p><strong>MAX:</strong>&nbsp;What do you mean?</p><p><strong>AARON:</strong>&nbsp;The description of how quarks interact, or like physical laws or whatever, is like part of the model that I'm referring to. 
And I think you want to say that given this whole picture of the physical world, and I'll just use physical to mean what most people think as physical. And yet the thing that science is concerned with, just things that are subject to observation and causality and measurable causality, I guess you think that it's like, okay, we have a whole picture of the universe under this world, under this view, and then it just is definitely the case, given all this, that I know that you're sentient. I'm sentient. Both sentience and dalence is like, wherever it is happening is just like directly implied by this whole description of the universe or whatever.</p><p><strong>MAX:</strong>&nbsp;Yeah. Or like, insofar as it does exist or something.</p><p><strong>AARON:</strong>&nbsp;Yeah. Okay. Unfortunately, I don't think we're going to resolve this debate, but I do think this is like the crux of disagreement.</p><p><strong>MAX:</strong>&nbsp;I think probably that sort of approach that I take you to be taken is probably the most convincing way one could go about doing it.</p><p><strong>AARON:</strong>&nbsp;Yeah, I'm not convinced of it, but.</p><p><strong>MAX:</strong>&nbsp;I do think this is the case.</p><p><strong>AARON:</strong>&nbsp;Can I get you to 2%?</p><p><strong>MAX:</strong>&nbsp;Because I could give you this example. Right. And what you're saying maybe gets you out of it or something, or can get you more out of it than someone else would be able to. And so, like a typical way you might go about saying moral realism is true or something, there's like this book, or it's like a series of lectures, they turn into a book by Christine Korsgaard, I think is her name. She's like a know.</p><p><strong>AARON:</strong>&nbsp;I don't believe she's a.</p><p><strong>MAX:</strong>&nbsp;She is called kantian. I don't know if she's literally a.</p><p><strong>AARON:</strong>&nbsp;Okay.</p><p><strong>MAX:</strong>&nbsp;But it uses whatever and like the idea is sort of the way you get normativity, which is the thing that tells you. You can imagine the space of possible ethical systems or something like, there's a bunch. One tells you to stab everyone. We probably shouldn't do that. And the way you get to pick which one is normativity that tells you what you ought to do. And the way you get normativity is like, it follows from human reason or something like this, right? We kind of have this natural reason, whatever Kant called it. He used a different term, I think. And you can make these arguments, and it follows from this, right? And then the question is like, well, what if there are aliens? Right? And then their reason leads them to pick a different ethical theory out of the available ones they could have picked. And these two ethical theories clash with each other. They say they do different things. Who gets to win that clash or something? Like, who ought to. You need now a meta normative theory or something. And I think what your response would be is like, oh, the answer is in qualia or whatever, or something.</p><p><strong>AARON:</strong>&nbsp;No, I don't think normativity is. I'm actually not super sure about this, but I think, yeah, when I say moral realism, I mean, as far as I know, I'm, like, the only person who espouses this. I'm sure that's not true. There's definitely, like, 10,000 phds, but I don't know who they are. But my claim is that objective ordering of worlds exists, normativity per se. Not necessarily.</p><p><strong>MAX:</strong>&nbsp;I would say that. 
Okay, it's not clear what the difference is here or something, because earlier I kind of claimed that ethical theories are just isomorphic to orderings of possible worlds or something.</p><p><strong>AARON:</strong>&nbsp;More formally, the claim, it would be good if x happened, or even it would be good if you did x sometimes has an objective truth value, true or false. But the claim, like, you should do x. I don't know. I'm actually pretty divided. Or not, I don't know. But it seems like there's, like, an extra thing being added there when you go from, it would be good if you did x objectively to, you should objectively do x. And I kind of not willing to make that jump.</p><p><strong>MAX:</strong>&nbsp;I guess the problem is you have to or something at a certain point.</p><p><strong>AARON:</strong>&nbsp;Or it's like.</p><p><strong>MAX:</strong>&nbsp;That'S fair, what you get out of being like, there's this objective ordering or something, but it's not the case you have to do anything with. It's just like, I have this thing here.</p><p><strong>AARON:</strong>&nbsp;Oh, no, actually, yeah, this actually makes a lot of sense, I guess. I haven't thought about that much. Yeah, you might think, okay, if you're a moral realist, that includes normativity, but you don't think there's any form of divine punishment, then maybe just we're describing the same thing and we're just using different words or something like that. Because. Yeah. It's like, okay, I don't know. At the end of the day, no one's going to make you do it. There's no punishment. You really should or something. And I want to claim you really should. Well, it would be good if you did x. You should. Maybe you should, but I don't know if you objectively should or something. Yeah, maybe this is just the same thing.</p><p><strong>MAX:</strong>&nbsp;Yeah. It's, like less clear where we disagree then. Because I might be willing to say you have a preference function. Right. Maybe you could call it objective. It emerges objectively from the fact that there's an errand, that there's this betterness ordering of possible worlds based on the.</p><p><strong>AARON:</strong>&nbsp;Things for me or in my judgment of the world. Okay. Yeah, sure.</p><p><strong>MAX:</strong>&nbsp;And then I could say that. Right. And if that's what we're calling moral realism, sure. But probably what people really mean by moral realism is like, you have one of those two orderings, whichever one.</p><p><strong>AARON:</strong>&nbsp;Oh, no, that's not what I mean by moral realism. That's a psychological fact that is very contingent. What I mean is that one of those is true. My claim is that, in fact, my ordering of worlds. Well, there's two claims. One of them is, like, the normative ethical position, which is like, oh, I think it's true. But the other one is that, okay, conditional on it being true, it is objectively true. I feel like. Like kind of lost in word salad now.</p><p><strong>MAX:</strong>&nbsp;Yeah, I guess what is objectively true here? Mean or something at this man.</p><p><strong>AARON:</strong>&nbsp;Yeah. I don't know.</p><p><strong>MAX:</strong>&nbsp;Just like, is I'm trying to remember my McDowell or whatever and my Mackie. These are like people who wrote a bunch of articles against each other about this sort of objective, subjective thing.</p><p><strong>AARON:</strong>&nbsp;Maybe we should find another thing to discuss, argue about. I don't know. Do you have any ideas? 
It can be something like, really not deep.</p><p><strong>MAX:</strong>&nbsp;I mean, the other thing I'm almost certain is true is like, there's no free will or something.</p><p><strong>AARON:</strong>&nbsp;Yeah. Nobody thinks that that's not true.</p><p><strong>MAX:</strong>&nbsp;People really like free will. Even people who don't, who think free will is false, like free will.</p><p><strong>AARON:</strong>&nbsp;Oh, yeah. I mean, I'm kind of sympathetic to what's the word for it. Yeah. I think that compatibilism has like some. There's like, no, hold on, hold on. Before you block me on Twitter. I don't think it's like saying anything fundamental about the universe or like the nature of reality in the way that moral realism is. I do think it's like a useful semantic distinction to make between. I don't know if useful is like maybe something like slightly stronger than useful, but the sense in which, okay, I can choose to pick up my phone here, even though there's no libertarian free will, that is meaningfully different than somebody like my dad running over here and making me do that physically or something like that.</p><p><strong>MAX:</strong>&nbsp;I think you would just call that like coerced or uncoerced or something. I mean, the reason I say that is.</p><p><strong>AARON:</strong>&nbsp;Yeah, sure.</p><p><strong>MAX:</strong>&nbsp;When you say free will, most people think libertarian free will. And that carries things with it that compatibilism doesn't or something. And people basically just.</p><p><strong>AARON:</strong>&nbsp;Bailey. Yes, people with this. I agree with this. I think nobody. Yeah, basically nobody. Compatibilism is like a fake philosopher thing. Not entirely fake, but like basically fake philosopher thing.</p><p><strong>MAX:</strong>&nbsp;Or it's used to justify things that shouldn't be justifiable or something like that.</p><p><strong>AARON:</strong>&nbsp;Yeah. Although honestly, I kind of like it as a psychological crutch. So I'm just going to keep on doing that. Well, you can't stop me. Okay. Sorry.</p><p><strong>MAX:</strong>&nbsp;That's true. I could if I put like a button in your brain or a tumor or whatever.</p><p><strong>AARON:</strong>&nbsp;Yeah. But you can't do that, at least for now. I hope you'll choose not to if you do get the ability to do that. Yeah. Okay. Are there any other deep philosophy takes before, I don't know, say something else?</p><p><strong>MAX:</strong>&nbsp;No, I think you can move it.</p><p><strong>AARON:</strong>&nbsp;No. Okay. There's actually nothing particular on my mind. As usual, I did not prepare for this conversation. So sorry about that. Yeah. So do you want to tell me your life story or like something interesting about your life story that you want to discuss on a podcast? Not necessarily the whole thing, an arbitrary part, but is there. And one of these could be your experience in college, like running an EA group, for example.</p><p><strong>MAX:</strong>&nbsp;I guess I could give advice to EA group organizers or something.</p><p><strong>AARON:</strong>&nbsp;Yes, do that.</p><p><strong>MAX:</strong>&nbsp;One is get co-organizers, because it's really difficult to do alone, especially if you're a certain type of person, which I am, which is like, I don't like sending emails. I can do it kind of, but I'll put them off sometimes too much, and sometimes you're just like, I have other things to do. I can't make this presentation.
And if you have to do literally all of it yourself and all the tabling and these things, it can get demoralizing and just difficult to do. So if you're like, oh, I should do this high impact thing. Start an EA group or something. Make sure you have other people with you and that they're like, the value aligned is probably the wrong way to say it, but committed or something. Or committed enough to put in at least the amount of time you're putting in or something.</p><p><strong>AARON:</strong>&nbsp;No, I totally agree with this. Even though I'm like a much less, I feel like, committed community builder. I did have my stint in college and luckily I had a very good type A co-organizer who's also extroverted, unlike me. So this was very, extremely good.</p><p><strong>MAX:</strong>&nbsp;Yeah, I think probably so there's kind of this community sentiment now. Maybe, or like maybe where we are, I should say in the community, people should be more normal. Why can't you just be normal or something like this?</p><p><strong>AARON:</strong>&nbsp;I'll represent the other position here.</p><p><strong>MAX:</strong>&nbsp;Well, I'm going to say something like, I think that's kind of bad to some degree. I think what you should do is be hospitable or something, which maybe means it isn't that much different. But I think, I think lots of weird things. I may even say lots of weird things a lot of the time and these sorts of things, but one can be nice and understanding and welcoming while doing this. And that probably does mean there are certain types of weird you can't do. Like if you're a group organizer, probably don't start a group house with your members or something. I guess though if you're in college, maybe it's slightly different because dorms are kind of like a weird space. And is it bad if a bunch of EAs live in the same dorm or friends live in the same room? I don't know. That's fine. But if you're a group organizer, probably be careful about flirting with your members or something. Don't do power imbalances, I guess is the thing, but I think it's okay.</p><p><strong>AARON:</strong>&nbsp;To be like, don't do power. Kids. Do not do power imbalances. Okay. No, I agree with all this. Sorry, keep going.</p><p><strong>MAX:</strong>&nbsp;And maybe sometimes this does mean you shouldn't always say what you think is the case. Probably at the first EA meeting you don't go like, I think utilitarianism is true and we should donate the US budget to shrimp or something.</p><p><strong>AARON:</strong>&nbsp;I don't think that's true, actually. Only 90%.</p><p><strong>MAX:</strong>&nbsp;Okay. Yeah, see, that's reasonable. But most people will need room. And I think it is the case that part of letting people explore things and think for themselves is not weighing them down with what you think or something. And so you can push back when they think things, to help. Basically, you might think there are certain objectively true philosophy things. I don't necessarily think there's philosophy stuff, but you might take this to be the case. A good philosophy professor will not indoctrinate their students or something, or very strongly convince them of this thing. They'll give them the space to sort of think through things rather than just tell them the right answer or something. And good group organizing looks like this too, I think. Especially when you're onboarding new EAs or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, I largely agree with this. I do think this can actually, and I'm not sure if we.
Tell me what your thoughts are on this. I feel like people are going to say, oh, Aaron's pro indoctrination. I am not pro indoctrination. I love me a good critic. Everybody should write to me and tell me what I'm really wrong about. But I actually think that just being kind of upfront about what you think is actually sort of. Sorry, let me back up. Sometimes people will go really hard on the don't indoctrinate thing, and what that looks like is kind of refusing to let on what they think is the case on a given subject. And I think this is actually not necessarily, I don't want to say literally in all cases that I can possibly imagine, but basically, yeah, as a rule of thumb, no, you should be okay telling people what you think, and then if it's in fact the case that people disagree, especially in the EA context, in the EA community, say that too. But yeah, I actually think community builders, this is a generalization that I'm not super confident in, but I think that community builders are a little too hesitant to just say like, oh yeah, no, I think this is true. Here's why. These are some of the other plausible views. Tell me why I'm wrong.</p><p><strong>MAX:</strong>&nbsp;Yeah, I think my guess would be that's maybe true in some sense, but there's like a sort of line you want to walk or something, and being on one side of the line could be very bad or lead to certain issues. I don't read the EA Forum that much. I'm a fake EA, but there was a doing community building better or something post where, I think, kind of the takeaway. I might have the title wrong, but it seems like the takeaway of the post I'm thinking of is like, epistemics are sometimes bad in community things or something.</p><p><strong>AARON:</strong>&nbsp;Was this kind of recent ish?</p><p><strong>MAX:</strong>&nbsp;Yeah, I think it's like people are way too into AI or something.</p><p><strong>AARON:</strong>&nbsp;Okay. I think this is like a good faith criticism that's absolutely wrong. And I actually have a Twitter thread that I'll like. Maybe I'll actually, I'll see if I can find it right now, but if I can, like 2 seconds. But yeah, wait, keep going.</p><p><strong>MAX:</strong>&nbsp;Yeah, I mean, like, what I would say is there are failure modes that look like that or something, and you might want to avoid it for various reasons or. Yeah.</p><p><strong>AARON:</strong>&nbsp;Yes. Also, empirically, I did get like one round of feedback that was like, oh, no, thanks, Aaron, for not being so coy. So maybe I'm going hard. I'm like, the fact that I got ten people or whatever, I think probably.</p><p><strong>MAX:</strong>&nbsp;It's like you'd expect it to be a dynamic system or something. Like the amount of coyness you should have goes down as the number of sessions you've interacted with this person goes up or something like that. The way to do it, there's not a catch all for how to run a discussion group or something. It's kind of based on the people you have in your group or something. And so I think you do actually want more coyness towards the start. You want to be more normal, more, not just saying what you think is the case, or something, like the first session, to get people warmed up, kind of understand where their boundaries are or something like this. Boundaries probably isn't the right word, but how they are, because it can be intimidating. I think if the person, the group organizer, is like, this is what I think is the case, tell me what is wrong. You just might not want to say why it's wrong.
Because you're scared or something. You don't want them to dislike you for normal human reasons.</p><p><strong>AARON:</strong>&nbsp;Yeah. No, to be clear, I very much buy into the thing. Do not say what. Just because you think an arbitrary proposition to be true doesn't mean that it is a good idea to say that. There are trivial examples here. If you go to something I can make up, I don't know. You don't like your aunt's dress at a family gathering. You think that's like a true statement, that it's ugly? No one's going to say, oh yeah, well, maybe you shouldn't lie. If she really asks you, we can argue about that. And I think I'll probably defend that. You probably shouldn't lie, but you should be as nice as possible or whatever, but you don't like it if she in fact asks you point blank. But no, you shouldn't just say, oh, let me raise my hand like, hi, Aunt Mary, your dress is ugly, or whatever. On the other hand, then we could get into, okay, if somebody asks you, okay, what do you think about wild animals or whatever? I don't know. Yeah, you should be as polite and normal sounding as possible. But even if you're not directly lying, you shouldn't just say things that people would naturally take to be like a conclusion that you don't believe in or whatever.</p><p><strong>MAX:</strong>&nbsp;Yeah, I agree with this, though, probably there is another thing here. This is not really what you're saying or something, but maybe advice I would say or something, which I think people often get, is also don't derail conversations or something. I guess even in the pursuit of true things, like if you mention, if you're doing a discussion on animal welfare or something, and you're like, yes, in passing, you frame it for. You give like a 90-second spiel. Like framing. You're like, yes. And people also kind of care about animal welfare here. You could read Brian Tomasik or something. If anyone's interested, I can send you links. And someone asks later on, what are your thoughts here? Maybe don't talk about animal welfare, wild animal welfare, in that context. Because maybe you just kind of derail it, because you have to be like, here's all this stuff or something. It's just more productive to be like, we'll talk about it later or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, okay. No, I agree. We're sort of debating vibes, right?</p><p><strong>MAX:</strong>&nbsp;We're like debating.</p><p><strong>AARON:</strong>&nbsp;I think we're like, no, sorry, I was using that sort of like maybe facetiously is the right word, like sarcastically or whatever. Unlike in analytic philosophy where we get to have propositions and then you say p and I say, not p, and then we yell at each other. Unfortunately, this isn't as conducive to that. Although I do think also for the case of EA fellowships per se, and maybe like the general case of group discussions, you also shouldn't let. There's like a thing where at least I've noticed, and maybe this is just like n equals three or whatever, but there will be people who are well meaning and everything. They just don't know what they're talking about and they'll derail the conversation. And then you're afraid to rerail the conversation because. I don't know, because that would be like indoctrination or something. And I think sometimes rerailing is just, like the right thing to do.</p><p><strong>MAX:</strong>&nbsp;Yeah. I mean, I haven't experienced it that much, I think. But see, no issues.
That or something.</p><p><strong>AARON:</strong>&nbsp;I think rerailing is also. Maybe this is like a general point. I feel like I'm talking from the position as the leader or whatever, but no, just even in the other side of things, I feel like in general, in college, I feel like that's the biggest thing I could think of, like that category, taking college classes or whatever. I would want to know whether the professor thinks that the thing I said is a good point or a bad point. Right. I feel like a lot of times they would just say good point no matter what, and then it's no evidence as to whether it's actually a good point or not. Well, I'm being slightly overstating this a little bit.</p><p><strong>MAX:</strong>&nbsp;I think you're conflating good and true or something. Maybe.</p><p><strong>AARON:</strong>&nbsp;Yeah, sure. But, yeah, I don't remember any specific examples that I can give, but all I'm trying to say is that it's not just from the perspective of somebody who's trying to get people to lead them to the right answer or whatever. No. I don't know. I feel like it's also, well, in one sense, if you really value your epistemics, it might be like, help. You might want people to give you more people who, in fact, more knowledgeable and have thought a lot about some subject to be more upfront about what they think, but also as sort of a matter of just like, respect is not exactly the right word. I don't think the professors were being disrespectful to me, but it's like if we're just friends talking, or it puts you more on an equal playing field in some way. If the one person who is ostensibly and in fact more knowledgeable and ostensibly in a position of de facto power or whatever, leading this discussion or whatever, it puts them on a pedestal. If they try to be super coy about what they think and then not give you legible feedback as to whether a point you said is plausible or not. You can imagine, I don't know, just making an argument that is just kind of terrible. And then, I don't know, I would kind of want the pressure to say no. That's being polite about it, probably, but say, like, no, bad or something.</p><p><strong>MAX:</strong>&nbsp;Yeah. I mean, I guess there's something where maybe I do think professors should not just say everything's good or something. They should only say it when it's good or something. Where good doesn't mean true, in my mind. Yeah. But I think I probably lean towards this, which is like risk aversion in sort of community building and classroom building is valuable or something, because you might take the approach that if you're like, I think it is a more risky or something, risky probably isn't the right thing to say there. I mean like risking, like a risk averse versus risk, sort of like, yeah, it's more risky to sort of do the things you're describing and maybe it's like risk neutral or something. The EV is positive because you'll get lots of really engaged people as a result, and you'll also push people out and discounts out or something. I think I tend to favor being risk neutral here. So you maybe aren't making as many, really. You're losing out some value at one end, but you are including more people. And I sort of anticipate this on net being better or something like producing better results, at least like in community health sorts of senses.</p><p><strong>AARON:</strong>&nbsp;Yeah, I actually think. I agree with that. 
I do think that people in social situations are just like, in fact, I don't know if it's exactly a bias formally, but people are just like risk averse. Because of the whole evo psych thing, it's like, oh, it used to be, if you angered somebody in your tribe, they might kill you or whatever, but now it's like, okay, somebody leaves your club, it's fine. No, maybe it's not fine. Right. That is in fact a loss potentially. Maybe it's not. But I think we sort of have to adjust, make ourselves adjust a little bit more in the pro risk direction.</p><p><strong>MAX:</strong>&nbsp;Yeah. Though I think probably if you take the project of EA seriously or something, there are probably good reasons to want various types of diversity, and that sort of approach of, I guess, being more risk averse is more likely to get you diversity or something.</p><p><strong>AARON:</strong>&nbsp;Yes, good point. Excellent. Good job. Everything you said is true, in fact, and good. Any other hot or cold takes, in fact, or medium or lukewarm takes?</p><p><strong>MAX:</strong>&nbsp;Um, I mean, certainly. Right is the question.</p><p><strong>AARON:</strong>&nbsp;I don't know. It doesn't have to be related to. The space is like wider than you probably initially think the space here is. As long as it's not an info hazard. Not illegal to say. Yeah, you can kind of bring up whatever and not like singling out like random people and not mean, not mean.</p><p><strong>MAX:</strong>&nbsp;Like Global Priorities gossip. I, like, know what Toby Ord did last.</p><p><strong>AARON:</strong>&nbsp;Done. Okay. Unfortunately, I'm not hip to that.</p><p><strong>MAX:</strong>&nbsp;And I'm sure there is gossip, like somebody didn't change the coffee pot or something. But I don't work there, so I don't know.</p><p><strong>AARON:</strong>&nbsp;Maybe. Hopefully you will soon. CEA, if you're listening to this.</p><p><strong>MAX:</strong>&nbsp;Well, they're different organizations.</p><p><strong>AARON:</strong>&nbsp;I don't know. Oxford, every EA org, if you're listening to this.</p><p><strong>MAX:</strong>&nbsp;Yeah. They're all.</p><p><strong>AARON:</strong>&nbsp;Wait, maybe. Like, what else? I don't know. What do you think about Twitter, like, in general? I don't know. Because this is how we met.</p><p><strong>MAX:</strong>&nbsp;Yeah.</p><p><strong>AARON:</strong>&nbsp;We have not met in real life.</p><p><strong>MAX:</strong>&nbsp;Worse as a platform than it was two years ago or something.</p><p><strong>AARON:</strong>&nbsp;Okay.</p><p><strong>MAX:</strong>&nbsp;Stability wise, and there are small changes that make it worse or something, but largely my experience is unchanged, I think.</p><p><strong>AARON:</strong>&nbsp;Do you think it's good, bad? I don't know. Do you think people should join Twitter, on the margin?</p><p><strong>MAX:</strong>&nbsp;I think EAs should join EA Twitter. I'm not sure if you should join Twitter rather than other social medias or something. I think sort of the area of social media we're on is uniquely quite good or something.</p><p><strong>AARON:</strong>&nbsp;I agree.</p><p><strong>MAX:</strong>&nbsp;And some of this is like, you get interactions with people, which is good, and people are very nice or something, and very civil where we are. And it's less clear that the sorts of personability or something and niceness that you get where we are, like, are elsewhere in Twitter, because I don't go elsewhere.
But basically you should join Twitter, I guess, if you're going to enter a small community or something. If you're just going to use it to browse memes or something, it's not clear this is better than literally any other social media that has no.</p><p><strong>AARON:</strong>&nbsp;Yeah, I agree. Well, I guess our audience, of all of maybe four people, is largely from Twitter. But you never know. There's like a non zero chance that somebody from the wider world will be listening. I think it's at least worth an experiment. Right. Maybe you could tell me something that I should experiment with. Is there anything else like Twitter that we don't have in common that you think that maybe I don't do? It's like, oh, he's an idiot for not doing.</p><p><strong>MAX:</strong>&nbsp;Oh, probably not. I mean, I'm sure you do better things than I do. Probably.</p><p><strong>AARON:</strong>&nbsp;Well, I mean, probably this is a large. Right? Like, I don't know.</p><p><strong>MAX:</strong>&nbsp;I think a benefit of using Twitter is like, it kind of opens you up or something. Probably is the case. It probably does literally build your social skills or something. I mean, maybe not in an obviously useful way, because it's like you're probably not necessarily that much better at doing in person stuff or something as a result of using Twitter. Maybe it improves you very slightly or something, but it's a different skill, texting versus talking.</p><p><strong>AARON:</strong>&nbsp;Actually, here's something I want your thoughts on recently. Maybe this is outing me as a true Twitter addict, but no, I, by and large, have had a really good experience and I stand by that. I think it's net on net. Not just on net, but just in general, added value to my life and stuff. And it's great, especially given the community that I'm in. The communities that I'm in. But yeah, this is going to be kind of embarrassing. I've started thinking in tweets. It's not 100% of the time, not like my brain is only stuck on Twitter mode, but I think on the margin there's been a shift toward a thought verbalizing in Aaron's brain as something that could be a tweet. And I'm not sure this is a positive.</p><p><strong>MAX:</strong>&nbsp;Like it is the case. I've had my friends open Twitter in front of me, like my Twitter, and go through and read my tweets. Actually, many people in my life do this. I don't know why. I don't really want them to do that. And it does change the way you talk. Certainly part of that is probably the character limit, and part of it is probably like culture or something. So that's the case. I don't know if I experience that, or I do sometimes if I thought of a really stupid pun. Normally you don't do anything with that, but now I can or something. Right. It's worth holding on for the 6 seconds it takes to open my phone. But I think I actually kind of maybe think in tweets already or something. Like, if you read my writing, I've gotten feedback that it's both very poetic or something. And poems are short or something. It's like very stanza or something, which is kind of how Twitter works also. Right. I think if you looked at the formatting of some of my writing, you would see that it's very Twitter-like or something. In some sense, there's no character limit, and so maybe this is just the sort of thing you're experiencing or something. Or maybe it's more intense.</p><p><strong>AARON:</strong>&nbsp;Yeah, probably not exactly. Honestly, I don't think this is that big of a deal. One thing is, I think this is a causal effect.
I've blogged less. And I think it's like, not a direct replacement. Like, I think Twitter has been like an outlet for my ideas that actually feels less effortful and takes less. So it's not like a one for one thing. So other more worky things have filled in the gap for blogging. But I think it has been a causal reason that I haven't blogged as much as I would like to. Really would like to or something. Yeah, I can see that being a thing. That is, like, with ideas, there's no strong signal that a particular tweet is an important idea that's worth considering. Whereas if you've written a whole blog post on it and you have 200 subscribers or whatever, you put in a lot of effort. People are at least going to say, like, oh, this is at least plausibly, like, an important idea, when they're coming into it or something like that.</p><p><strong>MAX:</strong>&nbsp;Yeah. And if you think something is valuable or something, maybe this is different for you or something. But I get like three likes on all my tweets. It's very rare I get ten likes or something. The number of followers' growth, it's just stuck there forever.</p><p><strong>AARON:</strong>&nbsp;I feel like that's not true. Should I read all your bangers? I have a split screen going on. Should I search for Max Alexander?</p><p><strong>MAX:</strong>&nbsp;Search for my bangers. That's 20,000 tweets or posts.</p><p><strong>AARON:</strong>&nbsp;How many?</p><p><strong>MAX:</strong>&nbsp;The ten-like ones you'll find. Over ten likes are, you know, a very small percentage of the total.</p><p><strong>AARON:</strong>&nbsp;So why is your handle absurdly Max? Is there a story? You don't have to answer, I mean.</p><p><strong>MAX:</strong>&nbsp;It's very simple. So I think existentialism is right or something. And absurdism specifically. And my name is.</p><p><strong>AARON:</strong>&nbsp;Wait, really? Wait, what even is absurdism?</p><p><strong>MAX:</strong>&nbsp;Humans have this inherent search for meaning, and there's no inherent meaning in the universe.</p><p><strong>AARON:</strong>&nbsp;This is just moral realism in continental flavor.</p><p><strong>MAX:</strong>&nbsp;Well, and then you have to. So there is no moral truth. Right. And you have to make your own meaning or you could kill yourself.</p><p><strong>AARON:</strong>&nbsp;I guess this is not true. Okay, whatever, but whatever. This is just continental bullshit.</p><p><strong>MAX:</strong>&nbsp;If you're a moral antirealist or something, you probably end up being an existentialist or don't care about philosophy, I suppose.</p><p><strong>AARON:</strong>&nbsp;Oh, this is a good one. If anyone at OpenAI follows me, I just want to say that I'd probably pay $20 a month, maybe even more, for a safely aligned superintelligence. I will actually second that. So, OpenAI. We can promise you $40 if you do that.</p><p><strong>MAX:</strong>&nbsp;A month, in fact.</p><p><strong>AARON:</strong>&nbsp;Yes. Yeah. There's lots of bangers here. You guys should all follow Max and look up his bangers.</p><p><strong>MAX:</strong>&nbsp;I would be surprised if somebody's listening to this and isn't already.</p><p><strong>AARON:</strong>&nbsp;You never know. I'm going to have to check my listening data after this and we'll see how big our audience is. Kind of forget. Yeah. So once again, the space of. Also, I'm happy to take a break. There's no formalized thing here. Structure.</p><p><strong>MAX:</strong>&nbsp;I mean, I'll go for hours.</p><p><strong>AARON:</strong>&nbsp;Oh, really? Okay, cool. No.
Are there any topics that are, like. I feel like. Yeah, the space is very large. Here's something. Wait, no, I was going to say, is there anything I do that you disagree with?</p><p><strong>MAX:</strong>&nbsp;That's like a classic.</p><p><strong>AARON:</strong>&nbsp;Yeah, I find you very unobjectionable. That's a boring compliment. I'm just kidding. Thank you. It's actually not.</p><p><strong>MAX:</strong>&nbsp;I have, like, I suppose writing takes or something, but I don't know if I can find the book.</p><p><strong>AARON:</strong>&nbsp;Oh, wait, no. We have those two similar blog posts, but you know what I'm talking about. Okay, can I just give the introduction to this?</p><p><strong>MAX:</strong>&nbsp;Okay.</p><p><strong>AARON:</strong>&nbsp;No, I think we just have similar blog posts that I will hopefully remember to link that I think are substantively very similar, except they have totally different vibes. And yours is very positive and mine is very negative. In fact, mine is called on suffering. I don't remember what yours is called.</p><p><strong>MAX:</strong>&nbsp;It's a wonderful life.</p><p><strong>AARON:</strong>&nbsp;There you go. Those are the vibes, but they're both like, oh, no. Hedonic value is actually super meaningful and important, but then we take it in opposite directions. That's it.</p><p><strong>MAX:</strong>&nbsp;Yeah.</p><p><strong>AARON:</strong>&nbsp;I don't know if I feel like your title actually appears to something. I feel like your title is bad, but besides that, the piece is good.</p><p><strong>MAX:</strong>&nbsp;Well, there's a movie called it's a wonderful life and the post is, like Christmas themed.</p><p><strong>AARON:</strong>&nbsp;Oh, I feel like I keep not getting things like that.</p><p><strong>MAX:</strong>&nbsp;It's okay.</p><p><strong>AARON:</strong>&nbsp;I feel like I vaguely knew that was like a phrase people said, but wasn't sure where it came from. I remember there's like an EA thing called non trivial that I helped on a little bit and I didn't realize it was a reference to non trivial pursuits, which is like a board game or something. No, I think it was actually originally called non trivial pursuits, I think. I'm sorry, Peter McIntyre, if you're listening to this, I apologize for any false, like, I don't know, like a reasonable amount of time, and I had no idea. And then the guy who I was working under brought this up and I was like, wait, that's a board game. Or something. Anyway, this is not an important aside, I'm kind of a bad podcaster because Dwarkesh Patel, who's about 100,000 times better at podcasting than me, goes for like six or 8 hours. That stresses me out so much. Even from an out. Like, not even doing it, doing it, I would just die. I would simply fizzle out of existence. But even thinking about it is very stressful.</p><p><strong>MAX:</strong>&nbsp;It depends who guessed it, but I could probably talk to somebody for six continuous hours or something.</p><p><strong>AARON:</strong>&nbsp;Tell me we don't have to discuss this all but one thing could be the virtues of being anonymous. Not anonymous.</p><p><strong>MAX:</strong>&nbsp;Sure. Okay. I think the virtue is probably like comfortableness or something is the primary one, and maybe some risk aversion or something.</p><p><strong>AARON:</strong>&nbsp;Yeah.</p><p><strong>MAX:</strong>&nbsp;Probably. It's not that common or something, but it's probably more common than people think. 
But being socially ostracized or being very publicly canceled or something, and maybe for bad reasons, one might say, as well does occur. And you might be the sort of person who wants to avoid this. I mean, in some sense you've kind of been piled on at various points and it seems like you just were fine with this.</p><p><strong>AARON:</strong>&nbsp;TBD. So I have not been formally hired by anyone since I've discussed this. No, there was like the tweet where I was like, we should bury poor people underground. Just kidding. That's not what I said. That is what a subset of people who piled on me said I said, which is not in fact what I said. I asked a question, which was like, why are we not building underground more? No, but yeah, I feel like this is definitely just like a personal taste thing. I don't know. I'm sure there are, but on very broadly, don't even want to say like EA Twitter, but like extended, I don't know, like broadly, like somewhat intellectual ish, English speaking Twitter or something like that. Are there examples of people with under, say, 5000 followers who have said something that is not legitimately indicative? I mean, this is doing a lot of work here, I want to say not legitimately indicative of them being a terrible person. Right. That's doing a lot of work. Right. But I think that neither of us are. So then this is the question.</p><p><strong>MAX:</strong>&nbsp;My guess would be, like, teachers or something, maybe that sometimes happens to, or like, I think the more, I think probably the sorts of jobs we end up wanting or something like this are more okay with online presence. Because I don't know, everyone at, you know, Rethink Priorities is terminally online, right? And so they're not going to mind if you are too.</p><p><strong>AARON:</strong>&nbsp;I'm not counting on getting an EA org job.</p><p><strong>MAX:</strong>&nbsp;Like corporate and public sector jobs I think are generally like, you should make your social media private or something.</p><p><strong>AARON:</strong>&nbsp;Yeah.</p><p><strong>MAX:</strong>&nbsp;They tell you that not just the.</p><p><strong>AARON:</strong>&nbsp;Corporate world is like a big category, right? If you're a software engineer, first of all, I think if you actually have reprehensible opinions, I think it is in your self interest to be an alt. If you want to say them out loud. Not being a bad person is doing a lot of work here. But for what it's worth, I really do think, and I'm extending this, there are communists that I would say are genuinely not bad people. I really think they're wrong about the whole communism thing, but that does not fall into my category of automatically makes it such that you should be an alt to avoid being exposed.</p><p><strong>MAX:</strong>&nbsp;I think probably there are good reasons in EA specifically to maybe make an alt that are not the case elsewhere. And this is like, the community is homogeneous and lots of people with lots of power are also on Twitter, basically. Maybe if somebody at Rethink gets a bad vibe of you or something, that's indicative they shouldn't hire you. I guess maybe. But maybe it doesn't matter how I use Twitter. I don't think this is true, but maybe I use Twitter in a way that's abrasive or maybe they just don't like my puns or something. This is information that will make them change their opinion about me in an interview. And maybe it's not that impactful and maybe it is a little impactful and maybe sometimes it's positive, it will make them like me more or something.
But the community is so insular, such that this is more of a problem or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, I feel like my sense is just that it's more symmetrical than this is giving it credit for. Yeah, I guess if you have a lot of heterodox views that are on hot topics especially, I don't know. Yeah. If you disagree with a lot of people in EA who are powerful, not just not in a weird way, just have hiring power or whatever, on topics that are really emotionally salient or whatever. Yeah. I would probably say if you want to very frankly discuss your opinions and get hired at the orgs where these people have hiring power, it's like probably make an alt. I just don't think that's true for that many people.</p><p><strong>MAX:</strong>&nbsp;My guess is this is more important the smaller the org is or something. My guess would be Open Philanthropy, or Rethink, I don't know, Global Priorities Institute. It doesn't matter if you make an alt, really, you're okay. But there are some EA orgs that are like three people or something. Right. And probably your online presence matters, but.</p><p><strong>AARON:</strong>&nbsp;I mean, that's also. They also have fewer open spots. Yeah. I just feel like the upside is or maybe the downside. I think we probably disagree on the downside somewhat. I mean, just like stepping away from the abstract level. I just personally think that you, in fact, I can take this out, but can we just say that your last name is not in fact Alexander? Yeah, that's okay. I feel like you could just use your real name and your life would.</p><p><strong>MAX:</strong>&nbsp;Well, it is my real name. It's just my middle name.</p><p><strong>AARON:</strong>&nbsp;Okay. Wow. Okay. Very, very Slate Star Codex-esque.</p><p><strong>MAX:</strong>&nbsp;That's why I did it, because.</p><p><strong>AARON:</strong>&nbsp;Oh, nice.</p><p><strong>MAX:</strong>&nbsp;But it was like really nice.</p><p><strong>AARON:</strong>&nbsp;I actually don't know why I didn't connect the dots there. Yeah, I feel like just like you personally, in expectation, it would probably not be a dramatic change to your life. If I were you and I was suddenly cast into your shoes, but with my beliefs, I would just use my real name or something like that.</p><p><strong>MAX:</strong>&nbsp;Yeah, I think it's like at the point now it doesn't matter or something. There's no real upside to changing it and there's no real downside.</p><p><strong>AARON:</strong>&nbsp;Yeah, it's not a huge deal.</p><p><strong>MAX:</strong>&nbsp;Yeah, I think probably, well, some people like to be horny on Twitter or something and probably if you want to do that, you should be anonymous or.</p><p><strong>AARON:</strong>&nbsp;I mean, but once again, even I feel like it depends what you mean by be horny. Right. If you're going to post nude photos on Twitter. Yeah. Actually had a surprisingly good DM conversation with a porn account. I did not follow them, for what it's worth.</p><p><strong>MAX:</strong>&nbsp;They're presumably run by real.</p><p><strong>AARON:</strong>&nbsp;No, no, it was actually. No, she was very polite. Yes, I think I'm correctly assuming, the name clearly, I won't say the name, but, like, the account name, it was clearly indicative it was a woman. She basically objected to me suggesting that minimum wage jobs in the US are uncommon. And I think this is actually, in fact, I deleted my tweet because it was like, I think, giving a false impression that, yeah, I live in a wealthy area.
It's true that service sector jobs generally pay better than minimum wage, but elsewhere in the US it's not true. It was a very polite interaction. Anyway, sorry, a total side story. Oh yeah, horny on Twitter. Yes, probably. Yeah, I agree. I don't know, but that can mean multiple things. I guess I have participated somewhat in gender discourse. I guess. I don't think I've been extremely horny. Yeah, fair enough. I don't think I've been very horny on main.</p><p><strong>MAX:</strong>&nbsp;Yeah. I think probably there's maybe something to be the case that, and you see this as prestige increases or something. This isn't totally true because I think Eliezer Yudkowsky posted about whatever or something, but Will MacAskill and Peter Wildeford are not engaging in gender discourse or whatever. Right. Publicly anyway.</p><p><strong>AARON:</strong>&nbsp;Wait, do I want to put my money, or not my money, my social reputation, on this? He doesn't use Twitter very much, which I think, I don't know. I would in fact take this question much more seriously if I was him. Like the question of whether to just be more frank and open and just, I don't know, maybe he doesn't want to do. I'm like totally hypothesizing here. Wait, Peter totally does sometimes, at least in one case I know of. No, I have disagreed with Peter on gender discourse and that's just like somewhat. I don't know. Right.</p><p><strong>MAX:</strong>&nbsp;I just don't see. He's not starting gender discourse, I guess I should say.</p><p><strong>AARON:</strong>&nbsp;Well, I mean, Adam and Eve started gender. No, but I don't know. I don't want to single anybody out here. I feel like, yeah, if you're like the one face of a social movement, yeah, you should probably take it pretty seriously. At least the question of like, yeah, you should be more risk averse. I will buy into that. I don't think my position here is like absolute whatsoever. I think, yeah. For people with fewer than 5000 followers, it's like a gradient, right? I just think on the margin, in general, people maybe are more risk averse than they have to be or something like that.</p><p><strong>MAX:</strong>&nbsp;Yeah. Though I'm not sure there are that many major examples. Because one reason you might be anonymous is not because you are scared about people who follow you finding you elsewhere. It's because you don't want the inverse to be the case or something.</p><p><strong>AARON:</strong>&nbsp;Wait, what's the, I'm sorry, you don't.</p><p><strong>MAX:</strong>&nbsp;Want your parents to google your Twitter account and read your tweets? Yeah, everyone I know knows my Twitter. I don't even know how they all found it, but.</p><p><strong>AARON:</strong>&nbsp;That's interesting. Okay, well, I mean, yeah.</p><p><strong>MAX:</strong>&nbsp;Read stupid tweets of mine in front of me.</p><p><strong>AARON:</strong>&nbsp;Well, there you go. That's like a good real life example of like, okay. That's like a real downside, I guess. Or maybe it's not because it's kind of a funny thing, but.</p><p><strong>MAX:</strong>&nbsp;I think you do get this. People will joke about this sometimes, I think on Twitter or something, or I've seen it somewhere, where it's like, I hope my boss is not using Twitter today because they'll be like, why weren't you working or something?</p><p><strong>AARON:</strong>&nbsp;Oh, yeah.</p><p><strong>MAX:</strong>&nbsp;Literally just tweeting or something.</p><p><strong>AARON:</strong>&nbsp;Yeah. I know.
If you've called in sick to work and you're, like, lying about that, you're.</p><p><strong>MAX:</strong>&nbsp;Just tweeting a bunch, they might be kind of suspicious or something. If your boss could see you all the time, you open Twitter for like a minute to tweet or something, they'd probably be like a little judgy or something, right? Maybe they wouldn't.</p><p><strong>AARON:</strong>&nbsp;Yeah, yeah, sure. I mean, it's just like a matter of degree.</p><p><strong>MAX:</strong>&nbsp;These are probably the most real world scenarios, is like somebody, you know, gets information that isn't really that damaging but slightly inconveniences you or something.</p><p><strong>AARON:</strong>&nbsp;Yeah. Oh, actually, I talked about this more with Nathan Young, but then I didn't record his audio. I'm sorry. I've apologized for this before, but I'm so bad at this. This was before. I really hope that I've solved this for this particular, I think I have for this episode. But, yeah, on my one viral tweet or whatever, one thing is just like, oh, yeah. A lot of people basically, I think they were earnest. Not earnest, exactly. That's like, maybe too generous, but they genuinely thought that I was saying something bad, like something immoral. I think they were wrong about that, but I don't think they were lying. But then, in fact, okay, several of my real life friends, either it's like, come up somehow, and none of them actually thought I was saying anything bad. And in fact, I met somebody basically at a party who was talking about this. And then it was like, oh, that's me. I was the one who posted that. But basically the point here is that nothing bad in my. As far as I can tell, maybe I'm wrong, but as far as I can tell, nothing bad in my life has come from a viral tweet that mainly was viral from people quote-tweeting it, saying that I was saying something immoral. N equals one.</p><p><strong>MAX:</strong>&nbsp;Yeah. I don't know how common it would be or something. I think, like, ContraPoints or something, I believe it's her, has videos about canceling and stuff like this. I think she was canceled at various points. Lindsay Ellis, who you may have heard of, I think, kind of got run off the Internet.</p><p><strong>AARON:</strong>&nbsp;Actually don't know this person.</p><p><strong>MAX:</strong>&nbsp;She does, like, video, or she did video essays about media and stuff. She's a fiction author now. They're orders of magnitude more well known than we are or something.</p><p><strong>AARON:</strong>&nbsp;There's, like, canceling that really genuinely ruins people's lives. And then there's, like, Bari Weiss. I don't know, she has like a Substack or like, I don't even know, like, IDW person or I think. I don't even know if she got. There's people like this. I'm sure there are other academics and professors at universities who don't like them, but they have successful, profitable Substacks. And it seems to me like their lives aren't made a lot worse by being, quote unquote, canceled. No, but then there really are. Right. I'm sure that there's definitely cases of normalish people. Yeah.</p><p><strong>MAX:</strong>&nbsp;I don't know, but maybe it's just more psychologically damaging, regardless of consequences, to be piled on when it's your real face or something.</p><p><strong>AARON:</strong>&nbsp;Yeah. Also, I think 90. I don't know about at least 90%. Probably not like 99 and then like ten, nine. But something in between these two numbers of pile-ons just don't have any real world consequences whatsoever.
Canceling is like a much less common phenomenon.</p><p><strong>MAX:</strong>&nbsp;Yeah, that seems right to me.</p><p><strong>AARON:</strong>&nbsp;Yeah. Let it be known that we are two white. We can also take this part out if you don't want this identifying information. Twenty-something white males discussing being canceled on a podcast, which is possibly the most basic thing that has ever happened in the universe.</p><p><strong>MAX:</strong>&nbsp;I mean, I want to be known for this. When my children come up to me and say, we found the one podcast interview you did.</p><p><strong>AARON:</strong>&nbsp;Yeah. I feel like I want to reiterate that there's a core thing here, which is, like, if you really hate Jewish people and you think they should die, there's like, a fundamental thing there where if you're open and honest about your opinions, people are going to correctly think that you're a bad person. Whereas I think neither of us hold any views that are like that.</p><p><strong>MAX:</strong>&nbsp;Agree. I think.</p><p><strong>AARON:</strong>&nbsp;Yeah, no, maybe I wouldn't know, which is fine. But I'm sort of using an extreme example that I think very few people, at least who I would interact with, hold. But if you're in fact, like, a person who has views that I think really indicate immoral behavior or something. Sorry. Maybe I'll even take that part out because it could be, like, clipped or whatever, but if you just say like, oh yeah, I steal things from street vendors or whatever. I don't know, that's a bad thing. And you're like being honest, don't do that. And then people are like, yeah, you have to, I guess, have some confidence that you're like a person who's like, I don't know, doesn't do very, doesn't do pretty bad things or have views that truly legitimately indicate that you either do or would do very immoral things that are regarded to be immoral by a large number of people or whatever. This is doing a lot of the.</p><p><strong>MAX:</strong>&nbsp;Mean, like I think this goes back to earlier things or something. I just thought of it or something. I mean, it's not clear the sign of this or something, but I wrote a whole blog post about joining EA or something which I think has some genuinely very good parts in it that are really good explanations of why someone should be motivated to do EA or something. There's lots of really embarrassing stuff about life in high school or something in it.</p><p><strong>AARON:</strong>&nbsp;Yeah, no, I'm pretty sure everyone in.</p><p><strong>MAX:</strong>&nbsp;80k? Who works at 80k? Not everyone, but I have good reason to think many people there read this, which is like, I don't know what to make of that. It's just kind of weird. Right, people?</p><p><strong>AARON:</strong>&nbsp;Yeah, no, I agree. It's kind of weird. I think like object level. I don't know. I don't even know if embarrassed. Embarrassing is like the right word. Yeah, it's like genuinely, I guess, more vulnerable than a lot of people are in their public blogs or whatever. But I don't really think it reflects poorly on you when you take it in aggregate. Right. I mean maybe you disagree with that, but especially I don't think so.</p><p><strong>MAX:</strong>&nbsp;Or if you think it does, you probably are judgy to a degree I think is unreasonable or something, from when I was like 13 or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, sure.</p><p><strong>MAX:</strong>&nbsp;That bad? Really?</p><p><strong>AARON:</strong>&nbsp;What bad things? Should I.
No, there's something that I actually don't even think is immoral, but it's like somewhat embarrassing. Maybe I'll even say it on the next podcast episode after I've thought about it for ten minutes instead of 1 minute or something like that. But I don't know, I think if I said it, it would be okay. Nothing that bad would happen. I guess. You're going to continue being Max Alexander instead of Max.</p><p><strong>MAX:</strong>&nbsp;I mean, like it's the brand. Like I can't.</p><p><strong>AARON:</strong>&nbsp;Wait. I feel like this is not a good reason. I feel like pretty quickly. Yeah. I feel like path dependence is like a real thing, but in this particular case, it's just like, not as big as maybe scenes or something like that. I don't know. Yeah.</p><p><strong>MAX:</strong>&nbsp;I just don't care what the upside is or something.</p><p><strong>AARON:</strong>&nbsp;Yeah, true. Yeah. I'm thinking maybe we wrap up.</p>]]></content:encoded></item><item><title><![CDATA[#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can]]></title><description><![CDATA[Listen on Spotify or Apple Podcasts]]></description><link>https://www.aaronbergman.net/p/podcast-holly-elmore-on-ai-pause</link><guid isPermaLink="false">https://www.aaronbergman.net/p/podcast-holly-elmore-on-ai-pause</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Tue, 17 Oct 2023 02:04:44 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/138029106/b00180ab91d3019b21d87a0ee0b693a5.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<ul><li><p><strong>Listen on <a href="https://podcasters.spotify.com/pod/show/aaron-bergman9/episodes/Holly-Elmore-on-AI-pause--wild-animal-welfare--and-some-cool-biology-things-I-couldnt-fully-follow-but-maybe-you-can-e2als3i">Spotify</a> or <a href="https://podcasts.apple.com/us/podcast/holly-elmore-on-ai-pause-wild-animal-welfare-and-some/id1693154768?i=1000631547245">Apple Podcasts</a></strong></p></li><li><p><strong>Be sure to check out and follow <a href="https://hollyelmore.substack.com/">Holly&#8217;s Substack</a> and org <a href="https://pauseai.info/">Pause AI</a>.</strong> </p></li></ul><h1>Blurb and summary from <a href="https://podcasters.spotify.com/pod/dashboard/episode/claude.ai">Clong</a></h1><h2>Blurb</h2><p>Holly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. 
Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.</p><h2>Detailed summary</h2><ul><li><p>Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard</p></li><li><p>Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet</p></li><li><p>Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion</p></li><li><p>Discussed genetic conflict and challenges of alignment even with "closest" agents</p></li><li><p>Talked about the value of public opinion polls and influencing politicians</p></li><li><p>Discussed the psychology and motives of AI researchers</p></li><li><p>Disagreed a bit on whether certain labs like OpenAI might be net positive actors</p></li><li><p>Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window</p></li><li><p>Talked about the dynamics between different AI researchers and competition for status</p></li><li><p>Discussed how rationalists often dismiss advocacy and politics</p></li><li><p>Holly thinks advocacy is neglected and can push the Overton window even within EA</p></li><li><p>Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selection</p></li></ul><div class="image-gallery-embed" data-attrs="{&quot;gallery&quot;:{&quot;images&quot;:[{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/077a334b-295c-4776-8c18-0ecde972c231_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b9eb7c8f-d57b-47ff-80ef-ef4ff1371e4e_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/056b7c6c-c12c-4aba-98b0-29bcf67330ff_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/webp&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/10086d1f-d937-4d02-9e08-7633de545be1_1024x1024.webp&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/807fbd19-e537-4cb3-a72e-558e1c2edbc0_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fdd8d59b-19ab-4d22-b78f-b4b6dc863822_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c05b6ea4-8991-4ce6-adb0-e342d59ebb14_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/227d7840-7d7a-45ca-8588-bd3283a0db9b_1024x1024.png&quot;},{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5923f646-df42-416d-9dad-98681cb49777_1024x1024.png&quot;}],&quot;caption&quot;:&quot;Some DALLE3 created art, inspired by the episode&quot;,&quot;alt&quot;:&quot;&quot;,&quot;staticGalleryImage&quot;:{&quot;type&quot;:&quot;image/png&quot;,&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4a66c46e-bf37-4e49-b70b-9f269eb3caab_1456x1454.png&quot;}},&quot;isEditorNode&quot;:true}"></div><h1>Full 
transcript (very imperfect)</h1><p>AARON</p><p>You're an AI pause, Advocate. Can you remind me of your shtick before that? Did you have an EA career or something?</p><p>HOLLY</p><p>Yeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God was very important to me, but I didn't like the lack of positive values. Half the people there were sort of people like me who are looking for community after leaving their religion that they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, harvard Effective Altruism was advertising for its fellowship, which became the Elite Fellowship eventually. And I was like, wow, this is like, everything I want. And it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrad. I did that fellowship, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years. And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing work. And that was pretty clear. I thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took giving what we can in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink Priorities, and they suggested that I work on wild animal welfare. And I have to say, from the beginning, it was a little bit like I don't know, I'd always had very mixed feelings about wild animal welfare as a cause area. How much do they assume the audience knows about EA?</p><p>AARON</p><p>A lot, I guess. I think as of right now, it's a pretty hardcore dozen people. Also. Wait, what year is any of this approximately?</p><p>HOLLY</p><p>So I graduated in 2020.</p><p>AARON</p><p>Okay.</p><p>HOLLY</p><p>Yeah. And then I was like, really?</p><p>AARON</p><p>Okay, this is not extremely distant history. Sometimes people are like, oh, yeah, like the OG days, like four or something. I'm like, oh, my God.</p><p>HOLLY</p><p>Oh, yeah, no, I wish I had been in these circles then, but no, it wasn't until like, 2014 that I really got inducted. Yeah, which now feels old because everybody's so young. But yeah, in 2020, I finished my PhD, and I got this awesome remote job at Rethink Priorities during the Pandemic, which was great, but I was working on wild animal welfare, which I'd always had some. So wild animal welfare, just for anyone who's not familiar, is like looking at the state of the natural world and seeing if there's a way that usually the hedonic so, like, feeling pleasure, not pain sort of welfare of animals can be maximized. 
So that's in contrast to a lot of other ways of looking at the natural world, like conservation, which are more about preserving a state of the world the way it is, preserving maybe ecosystem balance, something like that, preserving species diversity. The priority with wild animal welfare is the affective welfare, like how it feels to be the animals. So it is very understudied, but I had a lot of reservations about it because I'm nervous about maximizing our values too hard onto animals or imposing them on other species.</p><p>AARON</p><p>Okay, that's interesting, just because we're so far away from the margin of I'm like a very pro wild animal welfare pilled person.</p><p>HOLLY</p><p>I'm definitely pro in theory.</p><p>AARON</p><p>How many other people it's like you and formerly you and six other people or whatever seems like we're quite far away from the margin at which we're over optimizing in terms of giving heroin to all the sheep or I don't know, the bugs and stuff.</p><p>HOLLY</p><p>But it's true the field is moving in more my direction and I think it's just because they're hiring more biologists and we tend to think this way or have more of this perspective. But I'm a big fan of Brian Tomasik's work. But stuff like finding out which species have the most capacity for welfare I think is already sort of the wrong scale. I think a lot will just depend on how much. What are the conditions for that species?</p><p>AARON</p><p>Yeah, no, there's like seven from the.</p><p>HOLLY</p><p>Coarseness and the abstraction, but also there's a lot of you don't want anybody to actually do stuff like that and it would be more possible to do the more simple sounding stuff. My work there just consisted of being a huge downer. I respect that. I did do some work that I'm proud of. I have a whole sequence on the EA Forum about how we could reduce the use of rodenticide, which I think was the single most promising intervention that we came up with in the time that I was there. I mean, I didn't come up with it, but that we narrowed down. And even that just doesn't affect that many animals directly. It's really more about the impact you think you'll get from moral circle expansion or setting precedents for the treatment of non-human animals or wild animals, or semi-wild animals, maybe like being able to be expanded into wild animals. And so it all felt not quite up to EA standards of impact. And I felt kind of uncomfortable trying to make this thing happen in EA when I wasn't sure about it. My tentative conclusion on wild animal welfare, after working on it and thinking about it a lot for three years, was that we're sort of waiting for transformative technology that's not here yet in order to be able to do the kinds of interventions that we want. And there are going to be other issues with the transformative technology that we have to deal with first.</p><p>AARON</p><p>Yeah, no, I've been thinking not that seriously or in any formal way, just like once in a while I just have a thought like oh, I wonder how the field of, like, I guess wild animal sorry, not wild animal. Just like animal welfare in general and including wild animal welfare might make use of AI above and beyond. I feel like there's like a simple take which is probably mostly true, which is like, oh, I mean the phrase that everybody loves to say is make AI go well or whatever, and that's basically true. Probably you make aligned AI.
I know that's like a very oversimplification and then you can have a bunch of wealth or whatever to do whatever you want. I feel like that's kind of like the standard line, but do you have any takes on, I don't know, maybe in the next couple of years or anything more specifically beyond just general purpose AI alignment, for lack of a better term, how animal welfare might put to use transformative AI.</p><p>HOLLY</p><p>My last work at Rethink Priorities was like looking a sort of zoomed out look at the field and where it should go. And so we're apparently going to do a public version, but I don't know if that's going to happen. It's been a while now since I was expecting to get a call about it. But yeah, I'm trying to think of what can I scrape from that?</p><p>AARON</p><p>As much as you can, don't reveal any classified information. But what was the general thing that this was about?</p><p>HOLLY</p><p>There are things that I think so I sort of broke it down into a couple of categories. There's like things that we could do in a world where we don't get AGI for a long time, but we get just transformative AI. Short of that, it's just able to do a lot of parallel tasks. And I think we could do a lot we could get a lot of what we want for wild animals by doing a ton of surveillance and having the ability to make incredibly precise changes to the ecosystem. Having surveillance so we know when something is like, and the capacity to do really intense simulation of the ecosystem and know what's going to happen as a result of little things. We could do that all without AGI. You could just do that with just a lot of computational power. I think our ability to simulate the environment right now is not the best, but it's not because it's impossible. It's just like we just need a lot more observations and a lot more ability to simulate a comparison is meteorology. Meteorology used to be much more of an art, but it became more of a science once they started just literally taking for every block of air and they're getting smaller and smaller, the blocks. They just do Bernoulli's Law on it and figure out what's going to happen in that block. And then you just sort of add it all together and you get actually pretty good.</p><p>AARON</p><p>Do you know how big the blocks are?</p><p>HOLLY</p><p>They get smaller all the time. That's the resolution increase, but I don't know how big the blocks are okay right now. And shockingly, that just works. That gives you a lot of the picture of what's going to happen with weather. And I think that modeling ecosystem dynamics is very similar to weather. You could say more players than ecosystems, and I think we could, with enough surveillance, get a lot better at monitoring the ecosystem and then actually have more of a chance of implementing the kinds of sweeping interventions we want. But the price would be just like never ending surveillance and having to be the stewards of the environment if we weren't automating. Depending on how much you want to automate and depending on how much you can automate without AGI or without handing it over to another intelligence.</p><p>AARON</p><p>Yeah, I've heard this. Maybe I haven't thought enough. And for some reason, I'm just, like, intuitively. I feel like I'm more skeptical of this kind of thing relative to the actual. There's a lot of things that I feel like a person might be skeptical about superhuman AI. And I'm less skeptical of that or less skeptical of things that sound as weird as this. Maybe because it's not. 
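<p>The "blocks of air" picture described above is essentially grid-based numerical simulation: carve space into cells, apply a local update rule to every cell, and step the whole field forward in time. A minimal toy sketch in Python, assuming a simple diffusion rule rather than real atmospheric or ecological dynamics, might look like this:</p><pre><code># Toy block-by-block simulation: each cell relaxes toward the average of its
# neighbours each time step (a diffusion rule), standing in for the local
# physics a real weather or ecosystem model would compute per block.
import numpy as np

def step(field, rate=0.1):
    neighbours = (
        np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
        np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)
    )
    return field + rate * (neighbours / 4.0 - field)

rng = np.random.default_rng(0)
field = rng.random((50, 50))   # a 50x50 grid of some quantity (temperature, prey density, ...)

for _ in range(100):           # simulate 100 time steps
    field = step(field)

print(field.mean(), field.std())   # the field smooths out over time
</code></pre><p>Finer grids (smaller blocks) and better per-block physics are what make real forecasts improve; the structure of the loop stays the same.</p>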
One thing I'm just concerned about is I feel like there's a larger scale I can imagine, just like the choice of how much, like, ecosystem is like yeah, how much ecosystem is available for wild animals is like a pretty macro level choice that might be not at all deterministic. So you could imagine spreading or terraforming other planets and things like that, or basically continuing to remove the amount of available ecosystem and also at a much more practical level, clean meat development. I have no idea what the technical bottlenecks on that are right now, but seems kind of possible that I don't know, AI can help it in some capacity.</p><p>HOLLY</p><p>Oh, I thought you're going to say that it would increase the amount of space available for wild animals. Is this like a big controversy within, I don't know, this part of the EA animal movement? If you advocate diet change and if you get people to be vegetarians, does that just free up more land for wild animals to suffer on? I thought this was like, guys, we just will never do anything if we don't choose sort of like a zone of influence and accomplish something there. It seemed like this could go on forever. It was like, literally, I rethink actually. A lot of discussions would end in like, okay, so this seems like really good for all of our target populations, but what about wild animals? I could just reverse everything. I don't know. The thoughts I came to on that were that it is worthwhile to try to figure out what are all of the actual direct effects, but I don't think we should let that guide our decision making. Only you have to have some kind of theory of change, of what is the direct effect going to lead to? And I just think that it's so illegible what you're trying to do. If you're, like, you should eat this kind of fish to save animals. It doesn't lead society to adopt, to understand and adopt your values. It's so predicated on a moment in time that might be convenient. Maybe I'm not looking hard enough at that problem, but the conclusion I ended up coming to was just like, look, I just think we have to have some idea of not just the direct impacts, but something about the indirect impacts and what's likely to facilitate other direct impacts that we want in the future.</p><p>AARON</p><p>Yeah. I also share your I don't know. I'm not sure if we share the same or I also feel conflicted about this kind of thing. Yeah. And I don't know, at the very least, I have a very high bar for saying, actually the worst of factory farming is like, we should just like, yeah, we should be okay with that, because some particular model says that at this moment in time, it has some net positive effect on animal welfare.</p><p>HOLLY</p><p>What morality is that really compatible with? I mean, I understand our morality, but maybe but pretty much anyone else who hears that conclusion is going to think that that means that the suffering doesn't matter or something.</p><p>AARON</p><p>Yeah, I don't know. I think maybe more than you, I'm willing to bite the bullet if somebody really could convince me that, yeah, chicken farming is actually just, in fact, good, even though it's counterintuitive, I'll be like, all right, fine.</p><p>HOLLY</p><p>Surely there are other ways of occupying.</p><p>AARON</p><p>Yeah.</p><p>HOLLY</p><p>Same with sometimes I would get from very classical wild animal suffering people, like, comments on my rodenticide work saying, like, well, what if it's good to have more rats? I don't know. 
There are surely other vehicles for utility other than ones that humans are bent on destroying.</p><p>AARON</p><p>Yeah, it's kind of neither here nor there, but I don't actually know if this is causally important, but at least psychologically. I remember seeing a mouse in a glue trap had a real impact on me, maybe turned me, like, animal welfare pilled or something. That's like, neither here nor there. It's like a random anecdote, but yeah, seems bad. All right, what came after Rethink for you?</p><p>HOLLY</p><p>Yeah. Well, after the publication of the FLI Letter and Eliezer's article in Time, I was super inspired by pause. A number of emotional changes happened to me about AI safety. Nothing intellectual changed, but just I'd always been confused by it, and kind of taken it as a sign that people weren't really serious about AI risk, when they would say things like, I don't know, the only option is alignment. The only option is for us to do cool nerd stuff that we love doing, nothing else would work. I bought the arguments, but I just wasn't there emotionally. And seeing Eliezer advocate political change because he wants to save everyone's lives and he thinks that's something that we can do. Just kind of I'm sure I didn't want to face it before because it was upsetting. Not that I haven't faced a lot of upsetting and depressing things, like I worked in wild animal welfare, for God's sake, but there was something that didn't quite add up for me, or I hadn't quite grokked about AI safety until seeing Eliezer really show that his concern is about everyone dying. And he's consistent with that. He's not caught up on only one way of doing it, and it just kind of got in my head and I kept wanting to talk about it at work and it sort of became clear like they weren't going to pursue that sort of intervention. But I kept thinking of all these parallels between animal advocacy stuff that I knew and what could be done in AI safety. And these polls kept coming out showing that there was really high support for pause, and I just thought, this is such a huge opportunity, I really would love to help out. Originally I was looking around for who was going to be leading campaigns that I could volunteer in, and then eventually I thought, it just doesn't seem like somebody else is going to do this in the Bay Area. So I just ended up quitting Rethink and being an independent organizer. And that has been really I mean, honestly, it's like a tough subject. It's like a lot to deal with, but honestly, compared to wild animal welfare, it's not that bad. And I think I'm pretty used to dealing with tough and depressing low tractability causes, but I actually think this is really tractable. I've been shocked how quickly things have moved and I sort of had this sense that, okay, people are reluctant in EA and AI safety in particular, they're not used to advocacy. They kind of vaguely think that that's bad, politics is the mind-killer, and it's a little bit of a threat to the stuff they really love doing. Maybe that's not going to be so ascendant anymore and it's just stuff they're not familiar with. But I have the feeling that if somebody just keeps making this case, people will take to it, that I could push the Overton window within EA, and that's gone really well.</p><p>AARON</p><p>Yeah.</p><p>HOLLY</p><p>And then of course, the public is just like pretty down. It's great.</p><p>AARON</p><p>Yeah.
I feel like it's kind of weird because being in DC and I've always been, I feel like I actually used to be more into politics, to be clear. I understand or correct me if I'm wrong, but advocacy doesn't just mean in the political system or two politicians or whatever, but I assume that's like a part of what you're thinking about or not really.</p><p>HOLLY</p><p>Yeah. Early on was considering working on more political process type advocacy and I think that's really important. I totally would have done it. I just thought that it was more neglected in our community to do advocacy to the public and a lot of people had entanglements that prevented them from doing so. They work sort of with AI labs or it's important to their work that they not declare against AI labs or something like that or be perceived that way. And so they didn't want to do public advocacy that could threaten what else they're doing. But I didn't have anything like that. I've been around for a long time in EA and I've been keeping up on AI safety, but I've never really worked. That's not true. I did a PiBBs fellowship, but.</p><p>AARON</p><p>I've.</p><p>HOLLY</p><p>Never worked for anybody in like I was just more free than a lot of other people to do the public messaging and so I kind of felt that I should. Yeah, I'm also more willing to get into conflict than other EA's and so that seems valuable, no?</p><p>AARON</p><p>Yeah, I respect that. Respect that a lot. Yeah. So like one thing I feel like I've seen a lot of people on Twitter, for example. Well, not for example. That's really just it, I guess, talking about polls that come out saying like, oh yeah, the public is super enthusiastic about X, Y or Z, I feel like these are almost meaningless and maybe you can convince me otherwise. It's not exactly to be clear, I'm not saying that. I guess it could always be worse, right? All things considered, like a poll showing X thing is being supported is better than the opposite result, but you can really get people to say anything. Maybe I'm just wondering about the degree to which the public how do you imagine the public and I'm doing air quotes to playing into policies either of, I guess, industry actors or government actors?</p><p>HOLLY</p><p>Well, this is something actually that I also felt that a lot of EA's were unfamiliar with. But it does matter to our representatives, like what the constituents think it matters a mean if you talk to somebody who's ever interned in a congressperson's office, one person calling and writing letters for something can have actually depending on how contested a policy is, can have a largeish impact. My ex husband was an intern for Jim Cooper and they had this whole system for scoring when calls came in versus letters. Was it a handwritten letter, a typed letter? All of those things went into how many points it got and that was something they really cared about. Politicians do pay attention to opinion polls and they pay attention to what their vocal constituents want and they pay attention to not going against what is the norm opinion. Even if nobody in particular is pushing them on it or seems to feel strongly about it. They really are trying to calibrate themselves to what is the norm. So those are always also sometimes politicians just get directly convinced by arguments of what a policy should be. So yeah, public opinion is, I think, underappreciated by ya's because it doesn't feel like mechanistic. They're looking more for what's this weird policy hack that's going to solve what's? 
This super clever policy that's going to solve things rather than just like what's acceptable discourse, like how far out of his comfort zone does this politician have to go to advocate for this thing? How unpopular is it going to be to say stuff that's against this thing that now has a lot of public support?</p><p>AARON</p><p>Yeah, I guess mainly I'm like I guess I'm also I definitely could be wrong with this, but I would expect that a lot of the yeah, like for like when politicians like, get or congresspeople like, get letters and emails or whatever on a particular especially when it's relevant to a particular bill. And it's like, okay, this bill has already been filtered for the fact that it's going to get some yes votes and some no votes and it's close to or something like that. Hearing from an interested constituency is really, I don't know, I guess interesting evidence. On the other hand, I don't know, you can kind of just get Americans to say a lot of different things that I think are basically not extremely unlikely to be enacted into laws. You know what I mean? I don't know. You can just look at opinion. Sorry. No great example comes to mind right now. But I don't know, if you ask the public, should we do more safety research into, I don't know, anything. If it sounds good, then people will say yes, or am I mistaken about this?</p><p>HOLLY</p><p>I mean, on these polls, usually they ask the other way around as well. Do you think AI is really promising for its benefits and should be accelerated? They answer consistently. It's not just like, well now that sounds positive. Okay. I mean, a well done poll will correct for these things. Yeah. I've encountered a lot of skepticism about the polls. Most of the polls on this have been done by YouGov, which is pretty reputable. And then the ones that were replicated by rethink priorities, they found very consistent results and I very much trust Rethink priorities on polls. Yeah. I've had people say, well, these framings are I don't know, they object and wonder if it's like getting at the person's true beliefs. And I kind of think like, I don't know, basically this is like the kind of advocacy message that I would give and people are really receptive to it. So to me that's really promising. Whether or not if you educated them a lot more about the topic, they would think the same is I don't think the question but that's sometimes an objection that I get. Yeah, I think they're indicative. And then I also think politicians just care directly about these things. If they're able to cite that most of the public agrees with this policy, that sort of gives them a lot of what they want, regardless of whether there's some qualification to does the public really think this or are they thinking hard enough about it? And then polls are always newsworthy. Weirdly. Just any poll can be a news story and journalists love them and so it's a great chance to get exposure for the whatever thing. And politicians do care what's in the news. Actually, I think we just have more influence over the political process than EA's and less wrongers tend to believe it's true. I think a lot of people got burned in AI safety, like in the previous 20 years because it would be dismissed. It just wasn't in the overton window. But I think we have a lot of power now. Weirdly. People care what effective altruists think. People see us as having real expertise. The AI safety community does know the most about this. 
It's pretty wild that that's now being recognized publicly, and journalists and the people who influence politicians, not directly the people, but the Fourth Estate type people, pay attention to this and they influence policy. And there's many levels of, I wrote, if people want a more detailed explanation of this, but still high level and accessible, I hope, I wrote a thing on the EA Forum called The Case for AI Safety Advocacy. And that kind of goes over this concept of outside versus inside game. So inside game is like working within a system to change it. Outside game is like working outside the system to put pressure on that system to change it. And I think there's many small versions of this. I think that it's helpful within EA and AI safety to be pushing the Overton window. I think that people have a wrong understanding of how hard it is to communicate this topic and how hard it is to influence governments. I want it to be more acceptable. I want it to feel more possible in EA and AI safety to go this route. And then there's the public level of trying to make them more familiar with the issue, frame it in the way that I want, which is, you know, with Sam Altman's tour, the issue kind of got framed as like, well, AI is going to get built, but how are we going to do it safely? And then I would like to take that a step back and be like, should AI be built, or should AGI be built? If we tried, we could just not do that, or we could at least reduce the speed. And so, yeah, I want people to be exposed to that frame. I want people to not be taken in by other frames that don't include the full gamut of options. I think that's very possible. And then there's a lot of, this is more of the classic thing that's been going on in AI safety for the last ten years, is trying to influence AI development to be more safety conscious. And that's like another kind of dynamic there, like trying to change sort of the general flavor, like, what's acceptable? Do we have to care about safety? What is safety? That's also kind of a window pushing exercise.</p><p>AARON</p><p>Yeah. Cool. Luckily, okay, this is not actually directly responding to anything you just said, which is luck. So I pulled up this post. So I should have read that. Luckily, I did read the case for slowing down. It was like some other popular post as part of the, like, governance fundamentals series. I think this is by somebody, Zach wait, what was it called? Wait.</p><p>HOLLY</p><p>Is it by Zach or.</p><p>AARON</p><p>Katja, I think. Yeah, let's think about slowing down AI. That one. So that is fresh in my mind, but yours is not yet. So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.</p><p>HOLLY</p><p>Well, right now I'm hopeful about the UK AI summit. Pause AI and I have planned a multi-city protest on 21 October to encourage the UK AI Safety Summit to focus on safety first and to have as a topic arranging a pause, or negotiation toward one. There's a lot of slightly upsetting advertising for that thing that's like, we need to keep up capabilities too. And I just think that's really a secondary objective. And that's how I want it to be, focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this and there have been meetings about this. It's happened so much faster at the beginning of this year.
Nobody thought we could talk about nobody was thinking we'd be talking about this as a mainstream topic. And then actually governments have been very receptive anyway. So right now I'm focused on other than just influencing opinion, the targets I'm focused on, or things like encouraging these international like, I have a protest on Friday, my first protest that I'm leading and kind of nervous that's against Meta. It's at the Meta building in San Francisco about their sharing of model weights. They call it open source. It's like not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth and advocacy. You write a lot of materials for a protest, but mostly what people see is the title.</p><p>AARON</p><p>That's interesting because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, pause AI is simpler, but in some sense it's not nearly as obvious. At least maybe I'm more of a tech brain person or whatever. But why that is good, as opposed to don't give extremely powerful thing to the worst people in the world. That's like a longer everyone.</p><p>HOLLY</p><p>Maybe I'm just weird. I've gotten the feedback from open source ML people is the number one thing is like, it's too late, there's already super powerful models. There's nothing you can do to stop us, which sounds so villainous, I don't know if that's what they mean. Well, actually the number one message is you're stupid, you're not an ML engineer. Which like, okay, number two is like, it's too late, there's nothing you can do. There's all of these other and Meta is not even the most powerful generator of models that it share of open source models. I was like, okay, fine. And I don't know, I don't think that protesting too much is really the best in these situations. I just mostly kind of let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where yeah, Meta is the biggest company that is doing this and I think there should be a norm against model weight sharing. I was hoping it would be something that other employees of other labs would be comfortable attending and that is a policy that is not shared across the labs. Obviously the biggest labs don't do it. So OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than pause AI. Does that anybody's welcome something? I thought a one off issue like this that a lot of people could agree and form a coalition around would be good. A lot of people think that this is like a lot of the open source ML people think know this is like a secret. What I'm saying is secretly an argument for tyranny. I just want centralization of power. I just think that there are elites that are better qualified to run everything. It was even suggested I didn't mention China. It even suggested that I was racist because I didn't think that foreign people could make better AIS than Meta.</p><p>AARON</p><p>I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. 
Good on you for standing up to some pretty bad arguments.</p><p>HOLLY</p><p>Yeah, it's not like that worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt. Right. For instance, I'm kind of like sure I'm on list somewhere because of the forums I was on just because I was interested and it is something that serves me well with my adversaries. I've enjoyed some conversations with people where I kind of like because my position on all this is that look, I need to be convinced and the public needs to be convinced that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five. I still don't get it. How does this work?</p><p>AARON</p><p>Yeah, no, I was thinking actually not long ago about open source. Like the phrase has such a positive connotation and in a lot of contexts it really is good. I don't know. I'm glad that random tech I don't know, things from 2004 or whatever, like the reddit source code is like all right, seems cool that it's open source. I don't actually know if that was how that right. But yeah, I feel like maybe even just breaking down what the positive connotation comes from and why it's in people's self. This is really what I was thinking about, is like, why is it in people's self interest to open source things that they made and that might break apart the allure or sort of ethical halo that it has around it? And I was thinking it probably has something to do with, oh, this is like how if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy. Definitely can be hired in the future. And if you're not wealthy yet I don't mean to put things in just materialist terms, but basically it could easily be just like in a yeah, I think I'll probably take that bit out because I didn't mean to put it in strictly like monetary terms, but basically it just seems like pretty plausibly in an arbitrary tech person's self interest, broadly construed to, in fact, open source their thing, which is totally fine and normal.</p><p>HOLLY</p><p>I think that's like 99 it's like a way of showing magnanimity showing, but.</p><p>AARON</p><p>I don't make this sound so like, I think 99.9% of human behavior is like this. I'm not saying it's like, oh, it's some secret, terrible self interested thing, but just making it more mechanistic. Okay, it's like it's like a status thing. It's like an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there's reasons why it makes sense, why it's in people's self interest to open source.</p><p>HOLLY</p><p>Literally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself. You have to release those. So it's just, like, set the norms in a lot of ways, I'm not the bully. Sounds bad, but I mean, it's just like there is a lot of pressure. It looks bad if something is closed source.</p><p>AARON</p><p>Yeah, it's kind of weird that Meta I don't know, does Meta really think it's in their I don't know. 
The most economic take on this would be like, oh, they somehow think it's in their shareholders' interest to open source.</p><p>HOLLY</p><p>There are a lot of speculations on why they're doing this. One is that, yeah, their models aren't as good as the top labs', but if it's open source, then, open source quote unquote, people will integrate it, Llama Two, into their apps. Or people will use it and become, I don't know, it's a little weird because I don't know why using Llama Two commits you to using Llama Three or something, but it's just ways for their models to get in in places where, if you just had to pay for their models, people would go for better ones. That's one thing. Another is, yeah, I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this protest. But there's speculation that it's in their best interests in various ways to do this. I think it's possible also that, just like, so what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on 4chan and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. And it's not like 100% clear that they intended to do full open source or what they call open source. And I keep saying it's not open source because this is like a little bit of a tricky point to make. So I'm not emphasizing it too much. So they say that they're open source, but they're not. The algorithms are not open source. There are open source ML models that have everything open sourced and I don't think that that's good. I think that's worse. So I don't want to criticize them for that. But they're saying it's open source because there's all this goodwill associated with open source. But actually what they're doing is releasing the product for free, or like trade secrets even, you could say, like things that should be trade secrets. And yeah, they're telling people how to make it themselves. So it's like a little bit of a, they're intentionally using this label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which makes the open source license, it should be called something else, or there should just be like a new category for LLMs. But I don't want things to be more open. It could easily sound like a rebuke that it should be more open to make that point. But I also don't want to call it open source because I think open source software probably does deserve a lot of its positive connotation, but they're not releasing that part, the software part, because that would cut into their business. I think it would be much worse. I think they shouldn't do it. But I also am not clear on this because the open source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know. Llama Two had 7 billion tokens and that's more than GPT Four. And I don't understand all of the details here. It's possible that the tokenization process was different or something and that's why there were more. But Meta didn't say what was in the Llama Two data set, and usually there's some description given of what's in the data set, and that led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be. It's not just like the Common Crawl backup of the Internet.
Everybody's basing their training on that and then maybe some works of literature they're not supposed to. There's like a data set there that is in question, but metas is bigger than bigger than I think well, sorry, I don't have a list in front of me. I'm not going to get stuff wrong, but it's bigger than kind of similar models and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set. But yeah, the ML people would have or the open source ML people that I've been talking to would have believed that anybody who's decent can just access all of the training sets that they've all used.</p><p>AARON</p><p>Aside, I tried to download in case I'm guessing, I don't know, it depends how many people listen to this. But in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I don't know. I knew a little bit of R, I think. I feel like I caught on the very last boat where I could know just barely enough programming to try to learn more, I guess. Coming out of college, I don't know, a couple of months ago, I tried to do the thing where you download Llama too, but I tried it all and now I just have like it didn't work. I have like a bunch of empty folders and I forget got some error message or whatever. Then I tried to train my own tried to train my own model on my MacBook. It just printed. That's like the only thing that a language model would do because that was like the most common token in the training set. So anyway, I'm just like, sorry, this is not important whatsoever.</p><p>HOLLY</p><p>Yeah, I feel like torn about this because I used to be a genomicist and I used to do computational biology and it was not machine learning, but I used a highly parallel GPU cluster. And so I know some stuff about it and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. It's always been people who are interested in from the beginning, it was people who are interested in singularity and then realized there was this problem. And so it's always been like people really interested in tech and wanting to be close to it. And I think we've been really influenced by our direction, has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.</p><p>AARON</p><p>Not personal, but I guess individual level I'm not super worried about people like you and me losing the plot by learning more about ML on their personal.</p><p>HOLLY</p><p>You know what I mean? But it does just feel sort of like I guess, yeah, this is maybe more of like a confession than, like a point. But it does feel a little bit like it's hard for me to enjoy in good conscience, like, the cool stuff.</p><p>AARON</p><p>Okay. Yeah.</p><p>HOLLY</p><p>I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech because this is kind of their whole thing. And what would they do if we weren't working toward AI? This is a big fear that people express to me with they don't say it in so many words usually, but they say things like, well, I don't want AI to never get built about a pause. Which, by the way, just to clear up, my assumption is that a pause would be unless society ends for some other reason, that a pause would eventually be lifted. It couldn't be forever. 
But some people are worried that if you stop the momentum now, people are just so Luddite in their insides that we would just never pick it up again. Or something like that. And, yeah, there's some identity stuff that's been expressed. Again, not in so many words to me, about who will we be if we're just sort of like activists instead of working on.</p><p>AARON</p><p>Maybe one thing that we might actually disagree on, it's kind of important, is whether, so I think we both agree that AI pause is better than the status quo, at least broadly, whatever. I know that can mean different things, but yeah, maybe I'm not super convinced, actually, that if I could just, like, what am I trying to say? Maybe at least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. And so I don't know. I don't think that actually that's not an actual possibility. But, like, maybe, like, we have a different idea about, like, the degree to which, like, the problem is just, like, a million different, not even a million, but, say, like, a thousand different actors, like, having increasingly powerful models versus, like, the actual, like, state of the art right now being plausibly near a dangerous threshold or something. Does this make any sense to you?</p><p>HOLLY</p><p>Both those things are, yeah, and this is one thing I really like about the pause position is that unlike a lot of proposals that try to allow for alignment, it's not really close to a bad choice. It's just more safe. I mean, it might be foregoing some value if there is a way to get an aligned AI faster. But, yeah, I like the pause position because it's kind of robust to this. I can't claim to know more about alignment than OpenAI or Anthropic staff. I think they know much more about it. But I have fundamental doubts about the concept of alignment that make me think I'm concerned about, even if things go nominally right, what perverse consequences could follow from that. I have, I don't know, like a theory of psychology that's, like, not super compatible with alignment. Like, I think, like, yeah, like humans living in society together are aligned with each other, but the society is a big part of that. The people you're closest to are also, my background in evolutionary biology has a lot to do with genetic conflict.</p><p>AARON</p><p>What is that?</p><p>HOLLY</p><p>Genetic conflict is so interesting. Okay, this is like the most fascinating topic in biology, but it's like, essentially that in a sexual species, you're related to your close family, you're related to your kin, but you're not the same as them. You have different interests. And mothers and fathers of the same children have largely overlapping interests, but they have slightly different interests in what happens with those children. The payoff to mom is different than the payoff to dad per child. One of the classic genetic conflict arenas, and one that my advisor, David Haig, worked on, was pregnancy. So mom and dad both want an offspring that's healthy. But mom is thinking about all of her offspring into the future. When she thinks about how much.</p><p>AARON</p><p>When.</p><p>HOLLY</p><p>Mom is giving resources to one baby, that is in some sense depleting her ability to have future children.
But for dad, unless the species is.</p><p>AARON</p><p>Perfectly monogamous, there might be another father in the future.</p><p>HOLLY</p><p>Yeah, it's in his interest to take a little more. And it's really interesting. Like the tissues of the placenta. The placenta is an androgenetic tissue. This is all kind of complicated. I'm trying to gloss over some details, but it's like guided more by genes that are active when they come from the father, which, there's this thing called genomic imprinting, and then there's this back and forth. There's like this evolution where it's going to serve alleles that came from dad, imprinted from dad, to ask for more nutrients, even if that's not good for the mother and not what the mother wants. So the mother's going to respond. And you can see sometimes alleles are pretty mismatched and you get like, mom's alleles want a pretty big baby and a small placenta. So sometimes you'll see that, and then dad's alleles want a big placenta and, like, a smaller baby. These are so cool, but they're so hellishly complicated to talk about because it involves a bunch of genetic concepts that nobody talks about for any other reason.</p><p>AARON</p><p>I'm happy to talk about that. Maybe part of that dips below or into the weeds threshold, which I've kind of lost track of, but I'm super interested in this stuff.</p><p>HOLLY</p><p>Yeah, anyway, so the basic idea is just that even the people that you're closest with and cooperate with the most, they tend to be, clearly this is predicated on our genetic system. There's other, and even though ML sort of evolves similarly to natural selection through gradient descent, it doesn't have the same, there's no recombination, there's not genes, so there's a lot of disanalogies there. But the idea that being aligned to our psychology would just be like one thing. Our psychology is pretty conditional. I would agree that it could be one thing if we had a VNM utility function and you could give it to AGI, I would think, yes, that captures it. But even then, that utility function, it covers when you're in conflict with someone, it covers different scenarios. And so I just am like, not, when people say alignment, I think what they're imagining is like an omniscient God who knows what would be best. And that is different than what I think could be meant by just aligning values.</p><p>AARON</p><p>No, I broadly very much agree, although I do think, at least this is my perception, is that based on the, like, '95 to 2010 MIRI corpus or whatever, alignment was like, alignment meant something that was kind of not actually possible in the way that you're saying. But now that we have, it seems like actually humans have been able to get ML models to understand basically human language pretty shockingly well. And so actually, just the concern about, maybe I'm sort of losing my train of thought a little bit, but I guess maybe alignment and misalignment aren't as binary as they were initially foreseen to be or something. You can still get a language model, for example, that tries to, well, I guess there's different types of misleading, but be deceptive or tamper with its reward function or whatever. Or you can get one that's sort of like earnestly trying to do the thing that its user wants. And that's not an incoherent concept anymore.</p><p>HOLLY</p><p>No, it's not.
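<p>A toy numeric sketch of the pregnancy example above, under standard simplifying kin-selection assumptions (a diminishing-returns benefit to the current offspring, a linear cost to the mother's future reproduction, and a hypothetical probability p that future siblings share the same father), shows the flavor of the parent-of-origin mismatch:</p><pre><code># Toy sketch of parent-of-origin conflict over maternal investment (Haig-style).
# An allele in the offspring favours demanding resources x up to the point where
# its marginal benefit equals relatedness-to-future-siblings times the marginal
# cost to the mother's future reproduction (a Hamilton's-rule style argument).
def optimal_demand(relatedness):
    # benefit to current offspring: b(x) = sqrt(x)  ->  b'(x) = 1 / (2*sqrt(x))
    # cost to future siblings:      c(x) = x        ->  c'(x) = 1
    # solve b'(x) = relatedness * c'(x)  =>  x* = 1 / (4 * relatedness**2)
    return 1.0 / (4.0 * relatedness ** 2)

p_same_father = 0.5   # assumed chance that future siblings share this father

maternal_copy = optimal_demand(0.5)                   # future sibs related by 1/2 via mom
paternal_copy = optimal_demand(0.5 * p_same_father)   # discounted through dad

print(f"maternal-origin optimum demand: {maternal_copy:.2f}")
print(f"paternal-origin optimum demand: {paternal_copy:.2f}")   # larger
</code></pre><p>Under these toy assumptions the paternally inherited copy favours several times the maternal optimum, which is the kind of tug-of-war between imprinted alleles described in the placenta example.</p>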
Yeah, so yes, there is, like, I guess the point of bringing up the VNM utility function was that there was sort of, in the past, a way that you could mathematically, I don't know, of course utility functions are still real, but that's not what we're thinking anymore. We're thinking more like training and getting the gist of what we want, and then getting corrections when you're not doing the right thing according to our values. But yeah, sorry. So the last piece I should have said originally was that I think with humans we're already substantially unaligned, but a lot of how we work together is that we have roughly similar capabilities. And if the idea of making AGI is to have much greater capabilities than we have, that's the whole point. I just think when you scale up like that, the divisions in your psyche are just going to be magnified as well. And this is like an informal view that I've been developing for a long time, but just that it's actually the low capabilities that allow alignment, or similar capabilities, that make alignment possible. And then there are, of course, mathematical structures that could be aligned at different capabilities. So I guess I have more hope if you could find the utility function that would describe this. But if it's just a matter of acting in distribution, when you increase your capabilities, you're going to go out of distribution or you're going to go into different contexts, and then the magnitude of mismatch is going to be huge. I wish I had a more formal way of describing this, but that's like my fundamental skepticism right now that makes me just not want anyone to build it. I think that you could have very sophisticated ideas about alignment, but then still, when you increase capabilities enough, any little chink is going to be magnified and it could be, yeah.</p><p>AARON</p><p>Seems largely right, I guess. You clearly have a better mechanistic understanding of ML.</p><p>HOLLY</p><p>I don't know. My PiBBs project was to compare natural selection and gradient descent, and then compare gradient hacking to meiotic drive, which is the most analogous biological thing. This is a very cool thing, too, meiotic drive. So meiosis, I'll start with that for everyone.</p><p>AARON</p><p>That's one of the cell things.</p><p>HOLLY</p><p>Yes. Right. So mitosis is the one where cells just divide in your body to make more skin. But meiosis is the special one where you go through two divisions to make gametes. So you go from, like, we normally have two sets of chromosomes in each cell, but the gametes, they recombine between the chromosomes. You get different combinations with new chromosomes, and then they divide again to bring them down to one copy each. And then like that, those are your gametes. And the gametes, eggs, come together with sperm to make a zygote and the cycle goes on. But during meiosis, the point of it is to, I mean, I'm going to just assert some things that are not universally accepted, but I think this is by far the best explanation. But the point of it is to take this, like, you have this huge collection of genes that might have individually different interests, and you recombine them so that they don't know which genes they're going to be with in the next generation. They know which genes they're going to be with, but not which allele of those genes.
So I'm going to maybe simplify some terminology because otherwise, what's to stop a bunch of genes from getting together and saying, like, hey, if we just hack the meiosis system, or like the division system, to get into the gametes, we can get into the gametes at a higher rate than 50%. And it doesn't matter. We don't have to contribute to making this body. We can just work on that.</p><p>AARON</p><p>What is to stop that?</p><p>HOLLY</p><p>Yeah, well, meiosis is to stop that. Meiosis is like a government system for the genes. It makes it so that they can't plan to be with a little cabal in the next generation, because they have some chance of getting separated. And so their best chance is to just focus on making a good organism. But you do see lots of examples in nature of where that cooperation is breaking down. So some group of genes has found an exploit and it is fucking up the species. Species do go extinct because of this. It's hard to witness this happening. But there are several species. There's this species of cedar that has a form of this which is, I think, maternal genome elimination. So when the zygote comes together, the maternal chromosomes are just thrown away, and it's like terrible because that affects the way that the thing works and grows, and it's put them in a death spiral and they're probably going to be extinct. And they're trees, so they live a long time, but they're probably going to be extinct in the next century. There's lots of ways to hack meiosis to get temporary benefit for genes. This, by the way, I just think is like the nail in the coffin. Obviously, the gene-centered view is the best way to think about evolution.</p><p>AARON</p><p>As opposed to, sort of, the standard, I guess, high school, college thing, which would just be like organisms.</p><p>HOLLY</p><p>Yeah, would be individuals. Not that there's not an accurate way to talk in terms of individuals or even in terms of groups, but to me, conceptually.</p><p>AARON</p><p>They're all legit in some sense. Yeah, you could talk about any of them. Did anybody take like a quark level? Probably not. Like whatever comes below the level of a gene, like an individual.</p><p>HOLLY</p><p>Well, there is argument about what is a gene, because there's multiple concepts of genes. You could look at what's the part that makes a protein, or you can look at what is the unit that tends to stay together in recombination or something, like over time.</p><p>AARON</p><p>I'm sorry, I feel like I cut you off. It's something interesting. There was meiosis.</p><p>HOLLY</p><p>Meiotic drive is like the process of hacking meiosis so that a handful of genes can be more represented in the next generation. So otherwise the only way to get more represented in the next generation is to just make a better organism, like to be naturally selected. But you can just cheat and be like, well, if I'm in 90% of the sperm, I will be in the next generation. And essentially meiosis has to work for natural selection to work in large organisms with a large genome. And then, yeah, in gradient descent, we thought the analogy was going to be with gradient hacking, that there would possibly be some analogy. But I think that the recombination thing is really the key in meiotic drive. And then there's really nothing like that in.</p><p>AARON</p><p>There's no selection per se. I don't know, maybe that doesn't.
make a whole lot of sense.</p><p>HOLLY</p><p>Well, I mean, in gradient descent, there's no.</p><p>AARON</p><p>Gene analog, right?</p><p>HOLLY</p><p>There's no gene analog. Yeah, but there is, like, I mean, it's a hill climbing algorithm, like natural selection. So this is especially, I think, easy to see if you're familiar with adaptive landscapes, which look very similar to, I mean, if you look at a schematic or like a model or an illustration of gradient descent, it looks very similar to adaptive landscapes. They're both, like, n-dimensional spaces, and you're looking at vectors at any given point. So the adaptive landscape concept that's usually taught for evolution is, like, on one axis you have fitness, and on the other axis you have, well, you can have a lot of things, but you have the fitness of a population, and then you have fitness on the other axis. And what it tells you is, the shape of the curve there tells you which direction evolution is going to push, or natural selection is going to push, each generation. And so with gradient descent, there's, like, finding the gradient to get to the lowest value of the cost function, to get to a local minimum at every step, and you follow that. And so that part is very similar to natural selection, but the meiosis hacking just has a different mechanism than gradient hacking would. Gradient hacking probably has to be more about, I kind of thought that there was a way for this to work, if fine-tuning creates a different compartment that doesn't, there's not full backpropagation, so there's like kind of two different compartments in the layers or something. But I don't know if that's right. My collaborator doesn't seem to think that that's very interesting. I don't know if they don't even.</p><p>AARON</p><p>Know what backprop is. That's like a term I've heard like a billion times.</p><p>HOLLY</p><p>It's updating all the weights in all the layers based on that iteration.</p><p>AARON</p><p>All right. I mean, I can hear those words. I'll have to look it up later.</p><p>HOLLY</p><p>You don't have to. Full, I think there are probably things I'm not understanding about the ML process very well, but I had thought that it was something like, yeah, like, in, yeah, sorry, it's probably too tenuous. But anyway, yeah, I've been working on this a little bit for the last year, but I'm not super sharp on my arguments about that.</p><p>AARON</p><p>Well, I wouldn't notice. You can kind of say whatever, and I'll nod along.</p><p>HOLLY</p><p>I've got to guard my reputation, I can't just go off the cuff anymore.</p><p>AARON</p><p>We'll edit it so you're correct no matter what.</p><p>HOLLY</p><p>Have you ever edited the oohs and ums out of a podcast and just been like, wow, I sound so smart? Like, even after you heard yourself the first time, you do the editing yourself, but then you listen to it and you're like, who is this person? Sounds so smart.</p><p>AARON</p><p>I haven't, but actually, the 80,000 Hours After Hours podcast, the first episode of theirs, I interviewed Rob and his producer Keiran Harris, and they have actual professional sound editing. And so, yeah, I went from totally incoherent, not totally incoherent, but sarcastically totally incoherent, to sounding like a normal person because of that.</p><p>HOLLY</p><p>I used to use it to take my laughter out of, I did a podcast when I was an organizer at Harvard. Like, I did the Harvard Effective Altruism podcast, and I laughed a lot more then than I do now, which is kind of like, and we even got comments about it.
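<p>To make the gradient descent picture from the exchange above concrete: the algorithm repeatedly computes the slope of a cost function and takes a small step downhill, and backpropagation is the bookkeeping that supplies that slope for every weight in every layer of a network. A minimal one-parameter sketch, with made-up numbers:</p><pre><code># Minimal gradient-descent loop on a toy one-parameter cost surface.
def cost(w):
    return (w - 3.0) ** 2        # lowest value at w = 3.0

def grad(w):
    return 2.0 * (w - 3.0)       # derivative of the cost with respect to w

w = 0.0                          # starting parameter value
learning_rate = 0.1
for _ in range(50):
    w = w - learning_rate * grad(w)   # step downhill; in a real network,
                                      # backprop computes this gradient for
                                      # every weight in every layer

print(round(w, 4))               # approaches 3.0, the minimum of the cost
</code></pre>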
<p>HOLLY</p><p>I've got to guard my reputation; I can't just go off the cuff anymore.</p><p>AARON</p><p>We'll edit it so you're correct no matter what.</p><p>HOLLY</p><p>Have you ever edited the oohs and ums out of a podcast and just been like, wow, I sound so smart? Like, even after you heard yourself the first time, you do the editing yourself, but then you listen to it and you're like, who is this person? Looks so smart.</p><p>AARON</p><p>I haven't, but actually, the 80,000 Hours After Hours podcast, the first episode of theirs, I interviewed Rob and his producer Keiran Harris, and they have actual professional sound editing. And so, yeah, I went from totally incoherent, not totally incoherent, but sarcastically totally incoherent, to sounding like a normal person because of that.</p><p>HOLLY</p><p>I used to use it to take my laughter out of... I did a podcast when I was an organizer at Harvard. Like, I did the Harvard Effective Altruism podcast, and I laughed a lot more then than I do now, which is kind of... and we even got comments about it. We got very few comments, but they were like, girl host laughs too much. But when I would take my laughter out, I would do it myself, and I was like, wow, this does suddenly sound, like, so much more serious.</p><p>AARON</p><p>Yeah, I don't know. Yeah, I definitely say "like" too much. So maybe I will try to actually...</p><p>HOLLY</p><p>Realistically, that sounds like so much effort, it's not really worth it. And nobody else really notices. But I go through periods where I say "like" a lot, and when I hear myself back in interviews, that really bugs me.</p><p>AARON</p><p>Yeah.</p><p>HOLLY</p><p>God, it sounds so stupid.</p><p>AARON</p><p>No. Well, I'm definitely worse. Yeah. I'm sure there'll be a way to automate this. Well, not sure, but probably in the not too distant future.</p><p>HOLLY</p><p>People were sending around, like, transcripts of Trump to underscore how incoherent he is. I'm like, I sound like that sometimes.</p><p>AARON</p><p>Oh, yeah, same. I didn't actually realize that this is especially bad. When I get this transcribed, I don't know how people... this is a good example. Like the last 10 seconds, if I get it transcribed, it'll make no sense whatsoever. But there's like a free service called AssemblyAI Playground where it does free diarization-based transcription, and that makes sense. But if we just get this transcribed without identifying who's speaking, it'll be even worse than that. Yeah, actually this is like a totally random thought, but I actually spent a not-zero amount of effort trying to figure out how to combine the highest quality transcription, like Whisper, with the slightly less good diarization-based transcriptions. You could infer who's speaking based on the lower quality one, but then replace incorrect words with correct words. And I never... I don't know.</p>
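<p><em>[Editor's note: a rough sketch of the transcript-merging idea Aaron describes above, added for illustration. The segment dictionaries are a made-up format, not the actual output schema of Whisper, AssemblyAI, or any other tool. The idea is to keep the higher-quality words and borrow speaker labels from a separate diarized transcript by matching timestamps.]</em></p><pre><code># Hypothetical sketch: label accurate transcript segments with speakers taken
# from a lower-quality but diarized transcript, by maximum time overlap.

def overlap(a_start, a_end, b_start, b_end):
    # length of the time interval shared by [a_start, a_end] and [b_start, b_end]
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def label_segments(accurate_segments, diarized_turns):
    # accurate_segments: [{"start": float, "end": float, "text": str}, ...]
    # diarized_turns:    [{"start": float, "end": float, "speaker": str}, ...]
    labeled = []
    for seg in accurate_segments:
        best = max(
            diarized_turns,
            key=lambda t: overlap(seg["start"], seg["end"], t["start"], t["end"]),
            default=None,
        )
        # if nothing overlaps (or there are no diarized turns), fall back to UNKNOWN
        speaker = best["speaker"] if best and overlap(
            seg["start"], seg["end"], best["start"], best["end"]) else "UNKNOWN"
        labeled.append((speaker, seg["text"]))
    return labeled</code></pre>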
<p>HOLLY</p><p>I'm sure somebody will; that'd be nice. I would do transcripts if it were that easy, but I just never have. But it is annoying, because I do like to give people the chance to veto certain segments, and that can get tough, because even if I talk...</p><p>AARON</p><p>You have podcasts that I don't know about?</p><p>HOLLY</p><p>Well, I used to have the Harvard one, which is called the Turing Test. And then, yeah, I do have...</p><p>AARON</p><p>I probably listened to that and didn't know it was you.</p><p>HOLLY</p><p>Okay, maybe Alish was the other host.</p><p>AARON</p><p>I mean, it's been a little while, so... yeah.</p><p>HOLLY</p><p>And then on my own I, like, publish audio stuff sometimes, but it's called low effort, to underscore that.</p><p>AARON</p><p>Oh, yeah, I didn't actually... okay. Great minds think alike. Low effort podcasts are the future. In fact, this is super intelligent.</p><p>HOLLY</p><p>I just have them as a way to catch up with friends and stuff and talk about their lives, in a way that... recorded conversations are just better. You're more on, and you get to talk about stuff that's interesting but feels too, like, well, you already know this, if you're not recording it.</p><p>AARON</p><p>Okay, well, I feel like there's a lot of people that I interact with casually where, I don't actually... they have these rich online profiles and somehow I don't know about it or something. I mean, I could know about it, but I just never clicked their Substack link for some reason. So I will be listening to your casual stuff.</p><p>HOLLY</p><p>Actually, in the 15 minutes you gave us when we pushed back the podcast, I found something, like a practice talk I had given, and put it on there. So that's audio that I just posted. But that's for paid subscribers. I like to give them a little something.</p><p>AARON</p><p>No, I saw that. I did two minutes of research or whatever. Cool.</p><p>HOLLY</p><p>Yeah. It's a little weird. I've always had that blog as very low effort, just whenever I feel like it. And that's why it's lasted so long. But I did start doing paid, and I do feel like more responsibility to the paid subscribers now.</p><p>AARON</p><p>Yeah. Kind of the reason that I started this is because, whenever I... I don't know, it's very hard for me to write a low effort blog post. Even the lowest effort one still takes, at the end of the day, like several hours. Oh, I'm going to bang it out in half an hour, and no matter what, my brain doesn't let me do that.</p><p>HOLLY</p><p>That usually takes four hours. Yeah, I have like a four-hour and an eight-hour.</p><p>AARON</p><p>Wow. I feel like some people... apparently Scott Alexander said that. Oh, yeah. He just writes as fast as he talks and he just clicks send or whatever. It's like, oh, if I could do that.</p><p>HOLLY</p><p>If I could do that, I would have written... all those paragraphs. It's crazy. Yeah, you see that when you see him in person. I've never met him, I've never talked to him, but I've been to meetups where he was, and I'm at this conference, or not at it, right now this week, that he's supposed to be at.</p><p>AARON</p><p>Oh, Manifest.</p><p>HOLLY</p><p>Yeah.</p><p>AARON</p><p>Nice. Okay.</p><p>HOLLY</p><p>Cool. Lighthaven, they're now calling it. It looks amazing. It was the Rose Garden Inn.</p><p>AARON</p><p>I, like, vaguely noticed. I think I've been to Berkeley, I think, twice? Definitely... this is weird... definitely once.</p><p>HOLLY</p><p>Berkeley is awesome. Yeah.</p><p>AARON</p><p>I feel like I sort of decided consciously not to try to, or maybe not decided forever, but had a period of time where I was like, oh, I should move there, or I'll move there. But then I was like, I think being around other EAs and rationalists in high concentration activates my status brain or something. It is, like, personally bad for me. And it's kind of sus that I was born in DC and also went to college here and maybe it's also a good place to live, but I feel like maybe it's actually just true.</p><p>HOLLY</p><p>I think it's true. I mean, I always like the DC EAs. I think they're very sane.</p><p>AARON</p><p>I think both clusters should be more like the other one a little bit.</p><p>HOLLY</p><p>I think so. I love Berkeley, and I think I'm really enjoying it because I'm older than you. I think if you have your own personality before coming to Berkeley, that's great, but you can easily get swept up. It's like Disneyland: all the people I knew on the internet, there's a physical version of them here, and it's all in walking distance. That's all pretty cool. Especially during the pandemic, I was not around almost any friends, and now I see friends every day and I get to do cool stuff. And the culture is sometimes, like, a really annoying near miss for me, but a lot of the time it's just like, oh, wow, how do I know so many people who are so similar to me? This is great.</p><p>AARON</p><p>Yeah, that's definitely cool. Yeah, I've definitely had that at EAGs and stuff. Cool. I feel like you have a party, right? You don't have to answer that.</p><p>HOLLY</p><p>Robin Hanson's talk. I mean, I probably know what he's going to say.
That's the thing, when you know someone's rich online profile so well, it can be weird to see them in person and just hear them say only a subset of those things. I'm not saying Robin's like that... I don't know, I haven't seen him enough in person. But Steven Pinker was this way. Like, I was in the evolutionary biology department, but it was kind of close to the psychology department, and I went to a lab meeting there and I talked to Steve a few times, and then he actually was, yeah, like, why don't we have a meeting and talk about your career? And I was so... I had read every word he'd ever written at that point.</p><p>AARON</p><p>Um, that's cool.</p><p>HOLLY</p><p>But I just had nothing to say to him. And then I realized pretty much everything I did say, I knew what he was going to answer, because he's not someone who speaks very spontaneously. He pretty much has cached chunks and loads them. The only spontaneous conversation we ever had was about AI, and it was because we...</p><p>AARON</p><p>I've listened to a lot of 80K. But I think... I mean, I did talk to him for this other podcast episode, and I don't know, I didn't have that, totally. I feel like I didn't know everything he was going to say, but who else would be like that?</p><p>HOLLY</p><p>Rob has a lot of off-the-cuff content. He doesn't say everything he thinks.</p><p>AARON</p><p>True. Yeah. Oh, we didn't talk about... we can cut this part. We didn't talk about whether there's a conspiracy to not fund pause research, or not research, pause stuff. Do you want to have a comment that we can edit out?</p><p>HOLLY</p><p>I wouldn't call it a conspiracy, but I just think there's, like, a reluctance to do it.</p><p>AARON</p><p>Yeah.</p><p>HOLLY</p><p>And some of it is, like... I think people are just being honest about it. They're like, yeah, it would get in the way of what I'm already doing. I'm trying to have a good relationship with AI companies, and I feel like this would piss them off. I don't feel like they're giving their reasoning, and it could make sense. I just think that they are wrong that their whole organization shouldn't be able to fund other causes.</p><p>AARON</p><p>If this is OpenPhil, I feel like that's not a good... yeah. If you're, like, a multibillion dollar grant organization, it's very hard to have a single... yeah, it's not like a person with views; it's not like a single agent, necessarily. I mean, it kind of acts that way.</p><p>HOLLY</p><p>Yeah. I don't even know... not sure how much I can say. Yeah. I'm not sure that AI companies expect that. I'm not sure if that's actually been communicated to people like OpenPhil and they are acting accordingly, or if they're just afraid of that and acting accordingly. I just feel like there should be some way for OpenPhil or Dustin to fund advocacy interventions. I think part of it is that the people making those decisions aren't convinced of them, aren't convinced that advocacy is good. And I think there are some things like that. I don't know. It's hard for me to ignore that Holden is married to Daniela Amodei and they all used to live with his brother-in-law, Dario Amodei of Anthropic. And Daniela is also at Anthropic. I'm not trying to say that there's something sinister going on, but it's just like, who wants to believe that their wife is doing something really bad? Like, who wants to really go there and consider that possibility? I just think that's concerning.
Of course, he's probably not thinking as clearly about that as somebody else would. That bothers me. I really was bothered by holden went on that six month sabbatical and came back with his playbook for AI safety. And it was just like, more of the same. He didn't even mention public advocacy. It was like the reason he went on that sabbatical it was because of well, never mind. I'm not sure of the reason he went on that sabbatical, but it was like the news that happened during that sabbatical was all about public is kind of into this now. It just seemed like he should at least engage on that, and he didn't. And he even suggested starting a new AI company. I just thought it just seems so dated. It just wasn't, considering the strategic position we're in now. And I kind of wondered if that was because, I don't know, he's really bought into what Daniela and Dario think.</p><p>AARON</p><p>About I'm kind of more bought into the perspective of much better than replacement cutting edge AI lab is like, maybe not good or something than you seem to be. I don't have a super strong view on this. I haven't thought about it nearly as much as either you or any of the people you just mentioned, but I don't know, it doesn't seem crazy.</p><p>HOLLY</p><p>Yeah, I guess I look at it as like that would be. I don't think it's impossible that somebody could just come up with the answer to alignment and if they're able to use that AI to make sure that nobody else makes unaligned AI or something like that, and it doesn't become a totalitarian dictatorship or something, all of those things, I don't think it's impossible. I don't even know how unlikely it is. If you told me in ten years that that's how it turned out, I would be like, oh, wow. But I wouldn't be like no. But as far as the best action to take and to advocate for, I think pause is the best. I think we don't have to help another AI lab get started, but our opportunity now is before we've gone far enough with AGI pursuits, is to implement a pause and have some barrier to if someone breaks the pause they're not like one step away from. I do just think that that's overall the best action to take, but if I'm just dispassionately mapping what could happen, I could see a lot of things happening. I could see alignment by default being true. I could see that we just like I don't know, there's just like something we don't get. Maybe we are just projecting our own instincts onto AI. That would surprise me less than everything going perfect, or like one singleton forming. That was good.</p><p>AARON</p><p>Yeah, maybe. Also, let me know whatever you want to wrap up much. I don't think I've made this a public take. Not that it's been a secret, but I think maybe even more, at least relative to the other AI safety pilled. Not the other, but relative to the AI safety pilled, like Twitter sphere or something like it. It seems pretty possible that OpenAI is I was going to say net good. I don't have problems with that phrase. epistemically.</p><p>HOLLY</p><p>It seems like they've done a really good job with the product so far. I'll definitely say that.</p><p>AARON</p><p>Yeah, I'm just a lot I don't know, I feel like it's easy to and I don't think they've acted perfectly or anthropic, but it's really easy to, I guess, miss it. 
It seems like in the world where, I don't know, Meta and some random, I don't know, whatever, pick your other... the next five labs or whoever would come along in the next five years or whatever, the world where those labs, those companies, are at the cutting edge, it seems like a lot worse, for maybe not super explicit reasons, or reasons that are meta.</p><p>HOLLY</p><p>It just seems like less... that's, like, all... frankly, take that out, because I don't want to be making... I want to be very on the up and up with what I'm saying about Meta. But, yeah, I mean, just Yann LeCun's way of talking about it, and there was that article recently that alleged that Zuck just wants to... that he says things about just wanting to win, and they think that open source is a way to do it, and that Yann LeCun is not just saying his opinion, it's calculated to undermine all the safety stuff.</p><p>AARON</p><p>It's so weird. Yeah. Also, another just weird thing is that even though all of this is, in some sense, the extreme cutting edge of capitalism, in another sense, okay, the key movers here have so much money that marginal money probably isn't actually directly good for them per se, or whatever. Once you have $100 million or whatever, the next million dollars isn't all that great. And it seems like a lot of them are, if not ethically motivated, motivated by things beyond pure status... actually, sorry, not pure status, but maybe at least beyond pure monetary incentives. Sorry, I sort of lost my train of thought.</p><p>HOLLY</p><p>I frequently think that people underrate the importance of the motive that, just, like, people like doing what they're doing. They like their science, they like their work, and they don't want to think that it's bad. I just think, as simple as that, they really enjoy doing their work. They enjoy the kind of status that it brings, even if it's not financial, even if the rewards aren't necessarily financial. The dynamic between LeCun and Bengio and Hinton is really interesting, because, I'm just paraphrasing interactions I've remembered, but they seem to be saying, just give it up, Yann. We made a mistake. We need to course correct. And they both express... Hinton and Bengio both expressed a lot of remorse, even though they didn't think that they did it on purpose, but, like, they feel very sad that their life's work might have this legacy. And they seem to think that Yann LeCun is not dealing with that. And this could be a way of insisting that nothing's wrong and everything's good, and just pushing harder in the other direction might be, like, a way of getting away from that possibility. I don't know.</p><p>AARON</p><p>Yeah, it sort of sucks that the psychology of a couple of dudes is quite important. Yeah. I don't know.</p><p>HOLLY</p><p>This is another area where my history of animal advocacy is interesting, because I was a kid vegetarian, and so I observed over many years how people would react to that, and especially how they would react when they didn't think they had to make good arguments. It was one of the ways I first got interested in rationality, actually, because people would just give... adults would just give the worst arguments for it. And I'm seeing that a lot with this. People who are, unquestionably, the smartest people I knew are now saying the dumbest shit, now that pause is on the table. And they're getting better about it.
I mean, I think they were just taken aback at first, but they would say just like the dumbest reasons that it wasn't going to work, it just revealed. They obviously didn't want it to be a thing, or they didn't want to think about a new paradigm, or they kind of wanted things to be the way they were, where the focus was on technical stuff. I was having a conversation with somebody about the first instance of the Campaign for AI safety website. That's the Australian AI Safety Advocacy Group. And the first version of that website was a bit amateurish, I will definitely say, but I was in this thread and the people in it were making fun of it and picking on little things about it that didn't even make any sense. There was one line that was like ML engineers could be made to work on AI safety, or instead they could work on AI safety. Retrained was the word they used. And this is very similar. Like in vegan advocacy, you hear this all the time. Like slaughterhouse workers can be retrained in organic farming. It's not a great it's a little sillier in that case, very silly.</p><p>AARON</p><p>In the first case. I don't think it's that silly.</p><p>HOLLY</p><p>Yeah, but the point of that kind of thing is we care about the jobs of the people who be affected by this. And there are jobs in our thing.</p><p>AARON</p><p>Silicon Valley ML experts really struggling to make ends meet.</p><p>HOLLY</p><p>But that line was picked on and made fun of. And actually one person who was like a very smart person, knows a lot about the topic, was like, this would be like forced labor camps. And they might not have said camp, they might have just said forced labor program or something like that. And I was just like, what the dude? That's the most uncharitable explanation I've ever reaction I've ever heard. The reason that we can't pause or advocate for AI safety in public is that just everybody who wants to do it is too stupid. And so the only thing we can do is what you're doing, I guess, which I guess I won't say what it is because I want to maintain their anonymity. But it really struck me that happened in April and I just thought it was just very recognizable to me as the kind of terrible argument that only makes sense if you just think you have everybody's on your side and you can do a status move to keep people out or to keep something else out. That particular incident influenced me strongly to push for this harder because I don't know, if you're just present, like, making the argument more even if your argument is stupid, people just don't react that dumb.</p><p>AARON</p><p>No, I'm glad you updated in that. Like, I do think it's very good that AI safety seems NEA. It seems, like, pretty high. I don't know, it depends what status hierarchy you're talking about. But in all relevant domains, it seems pretty high status. And actually, it's kind of crazy how smart everybody is. This is my personal I don't know. Yeah, I feel like technical AI safety people really fucking smart. And so yeah, I've seen some people on Twitter say only once or twice because it's so far from true, but once or twice? Yeah, I guess they're just not smart enough to work in ML. It's like, okay, I don't know. It's like the farthest possible thing from the truth.</p><p>HOLLY</p><p>Yeah. The ML people, the open source ML people who are trying to hurt my feelings definitely want to go in on, like, I'm not smart enough, or my degree isn't a dumb subject or something. 
Yeah, it's great to be smart, but there just are more important things, and I just don't think you have to be a genius to see the logic of what I'm saying. Anyway, what I was saying was there's, like, a status quo, or a relative status quo, that a lot of people were comfortable with. Even, I think, Yann LeCun was comfortable with being a cool ML genius and doesn't want there to be some moral or ethical question with it. At least that's the picture I get from his interaction with the other Turing Award winners. And then within AI safety, people don't really want to think about switching gears. Or maybe the landscape has shifted and now the next move is something that's not the skills they've spent all their time developing, and not the skills that kind of got them into this whole thing. Which... I don't want anybody working on technical stuff to quit or something.</p><p>AARON</p><p>Yeah, the Soylent just adds to the ethos.</p><p>HOLLY</p><p>Yeah, guys, I've been drinking a Soylent the whole time. It's not that I love them, but I do go through these periods where I feel kind of nauseous and don't want to eat, and, like, Soylent is whatever works.</p><p>AARON</p><p>Yeah, cool. I think I'm, like, slightly running out...</p><p>HOLLY</p><p>Of steam, which is, like, there by four.</p><p>AARON</p><p>Okay. Yeah. But you are invited back on Pigeon Hour anytime. Not literally anytime, but virtually anytime.</p><p>HOLLY</p><p>We can record one for my thing.</p><p>AARON</p><p>Oh, yeah, totally. Any closing takes? Thoughts? I don't have any. You don't have to either.</p><p>HOLLY</p><p>Yeah, it was a fun time. Thank you.</p><p>AARON</p><p>Oh, cool. Yeah, no, maybe at some other point we can just discuss all your evo biology takes or whatever, because that was quite interesting.</p><p>HOLLY</p><p>Oh, yeah. There's going to be maybe this chat cone thing, which is like... LessWrong did, like, the MIRI conversations last year, and they're trying to replicate that for more topics. And there might be one on evolution soon that I might be part of.</p><p>AARON</p><p>I'll keep an eye on that.</p><p>HOLLY</p><p>So I don't know if accompanying readings are fun for the podcast. Anyway. Yeah, I should probably go, because I also need to pee. I've had three different liquids over here this whole time.</p><p>AARON</p><p>Okay. That's a great reason. Thank you so much.</p><p>HOLLY</p><p>Okay, bye. Thank you.</p>]]></content:encoded></item><item><title><![CDATA[#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense)]]></title><description><![CDATA[This wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics and legal theory. 
The two debate major ethical concepts like utilitarianism and moral realism.]]></description><link>https://www.aaronbergman.net/p/6-daniel-filan-on-why-im-wrong-about</link><guid isPermaLink="false">https://www.aaronbergman.net/p/6-daniel-filan-on-why-im-wrong-about</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Mon, 07 Aug 2023 04:24:25 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135785095/0fa9969a68d779b0b89e123a328de7f7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>Listen on: </p><ul><li><p><a href="https://podcasters.spotify.com/pod/show/aaron-bergman9/episodes/6-Daniel-Filan-on-why-Im-wrong-about-ethics--Oppenheimer-and-what-names-mean-in-like-a-hardcore-phil-of-language-sense-e27qo0f">Spotify</a></p></li><li><p><a href="https://podcasts.apple.com/us/podcast/6-daniel-filan-on-why-im-wrong-about-ethics-oppenheimer/id1693154768?i=1000623637923">Apple Podcasts</a></p></li><li><p><a href="https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9lNDJlYjllYy9wb2RjYXN0L3Jzcw/episode/YjU1NGY0NTgtNzY1My00YmQzLWE4NzEtNjZiZTk5ODUyNGM1?sa=X&amp;ved=0CAUQkfYCahcKEwi4g92C_OKAAxUAAAAAHQAAAAAQLQ">Google Podcasts</a></p></li></ul><p><em>Note: the core discussion on ethics begins at 7:58 and moves into philosophy of language at ~1:12:19</em></p><h4>Daniel&#8217;s stuff:</h4><ul><li><p><a href="https://axrp.net/">AI X-risk podcast</a></p></li><li><p><a href="https://thefilancabinet.com/">The Filan Cabinet podcast</a></p></li><li><p><a href="https://danielfilan.com/">Personal website and blog</a></p></li></ul><div><hr></div><h3>Blurb and bulleted summary from <a href="http://claude.ai">Clong</a></h3><p>This wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics and legal theory. The two debate major ethical concepts like utilitarianism and moral realism. Thought experiments around rational beings choosing to undergo suffering feature prominently. Meandering tangents explore the semantics of names and references.</p><ul><li><p>Aaron asserts that total utilitarianism does not imply that any amount of suffering can be morally justified by creating more happiness. His argument is that the affirmative case for this offsetting ability has not been clearly made.</p></li><li><p>He proposes a thought experiment: if offered the chance to experience the suffering of all factory farmed animals in exchange for unlimited happiness, even a perfectly rational being would refuse. This indicates there are some levels of suffering that are not offsettable.</p></li><li><p>Aaron links this to experiences like hunger, where you realize suffering can be worse than you normally appreciate. This underlies his intuition that some suffering can't be outweighed.</p></li><li><p>Daniel disagrees, believing that with the right probabilities and magnitudes of suffering versus happiness, rational beings would take that gamble.</p></li><li><p>For example, Daniel thinks the atomic bombing of Japan could be offset by reducing more suffering. 
Aaron is less sure given the pain inflicted.</p></li><li><p>Daniel also proposes offsets for animal farming, but Aaron doesn't think factory farming harm is offsettable by any amount of enjoyment of meat.</p></li><li><p>They discuss definitions of rationality and whether evolution pressures against suicide impact the rationality of not killing oneself.</p></li><li><p>Aaron ties his argument to siding with what a perfectly rational being would choose to experience, not necessarily what they would prefer.</p></li><li><p>They debate whether hypothetical aliens pursuing "schmorality" could point to a concept truly analogous to human morality. Aaron believes not.</p></li></ul><h1>Transcript</h1><p><em>(Very imperfect)</em></p><p>AARON</p><p>O'how's, it going it's going all right.</p><p>DANIEL</p><p>Yeah, I just so yesterday I saw Barbie and today I saw Oppenheimer, so it's good to oh, cool. That cultural.</p><p>AARON</p><p>Nice, nice.</p><p>DANIEL</p><p>Do you have takes? Yeah, I thought it was all right. It was a decent view of Oppenheimer as a person. It was like a how? I don't know. I feel like the public can tend to be taken in by this physicist figures you get this with quotes, right? Like, the guy was just very good at having fun with journalists, and now we get these amazing nuggets of wisdom from Einstein. I don't know. I think that guy was just having good I don't know. The thing that I'm coming away from is I thought I only watched Barbie because it was coming out on the same day as Oppenheimer, right? Like, otherwise it wouldn't have occurred to me to watch it. I was like, yeah, whatever. Barbie is, like, along for the ride, and Oppenheimer is going to be amazing, but in like, maybe Oppenheimer was a bit better than Barbie, but I'm not even sure of that, actually.</p><p>AARON</p><p>Yeah, I've been seeing people say that on Twitter. I haven't seen either, but I've been seeing several people say that I'm following, say, like, Barbie was exceptional. And also that kind of makes sense because I'm following all these EA people who are probably care more about the subject matter for the latter one. So it's like, I kind of believe that Barbie is, like, aesthetically better or something. That's my take. Right.</p><p>DANIEL</p><p>Guess. Well, if you haven't seen them, I guess I don't want to spoil them for you. They're trying to do different things aesthetically. Right. Like, I'm not quite sure I'd want to say one is aesthetically better. Probably in some ways, I think Barbie probably has more aesthetic blunders than Oppenheimer does. Okay. But yeah, I don't know if you haven't seen it, I feel like I don't want to spoil it for you.</p><p>AARON</p><p>Okay. No, that's fine. This isn't supposed to be like probably isn't the most important the most interesting thing we could be talking about is that the bar?</p><p>DANIEL</p><p>Oh, jeez.</p><p>AARON</p><p>Oh, no, that's a terrible bar. That was like an overstatement. That would be a very high bar. It would also be, like, kind of paralyzing. I don't know. Actually know what that would be, honestly. Probably some social juicy gossip thing. Not that we necessarily have any.</p><p>DANIEL</p><p>Yeah, I think your interestingness. Yeah, I think I don't have the know, the closest to gossip thing I saw was like, do you see this bit of Carolyn Elson's diaries and letters to SBF that was leaked to the.</p><p>AARON</p><p>No, I don't. Was this like today or recently? 
How recently?</p><p>DANIEL</p><p>This was like a few days ago.</p><p>AARON</p><p>I've been seeing her face on Twitter, but I don't actually think I know anything about this. And no, I would not have.</p><p>DANIEL</p><p>Background of who she is and stuff.</p><p>AARON</p><p>Yeah, hold on. Let the audience know that I am on a beach family vacation against my will. Just kidding. Not against my will. And I have to text my sister back. Okay, there we go. I mean, I broadly know the FTX story. I know that she was wait, I'm like literally blanking on the Alameda.</p><p>DANIEL</p><p>That's the name of research.</p><p>AARON</p><p>Okay. Yeah. So she was CEO, right? Yeah. Or like some sort of like I think I know the basics.</p><p>DANIEL</p><p>The like, she was one of the OG Stanford EA people and was around.</p><p>AARON</p><p>Yeah, that's like a generation. Not an actual generation, like an EA generation. Which is what, like six years or.</p><p>DANIEL</p><p>Like the I don't know, I've noticed like, in the there's like I feel like there's this gap between pre COVID people and post COVID people. No one left their house. Partly people moved away, but also you were inside for a while and never saw anyone in person. So it felt like, oh, there's like this crop of new people or something. Whereas in previous years, there'd be some number of new people per year and they'd get gradually integrated in. Anyway, all that is to say that, I don't know, I think SBF's side of the legal battle leaked some documents to The New York Times, which were honestly just like her saying, like, oh, I feel very stressed and I don't like my job, and I'm sort of glad that the thing is blown up now. I don't know. It honestly wasn't that salacious. But I think that's, like, the way I get in the loop on gossip like some of the New York Times.</p><p>AARON</p><p>And I eventually I love how it's funny that this particular piece of gossip is, like, running through the most famous and prestigious news organization in the world. Or, like, one of them or something. Yeah. Instead of just being like, oh, yeah, these two people are dating, or whatever. Anyway, okay, I will maybe check that out.</p><p>DANIEL</p><p>Yeah, I mean, honestly, it's not even that interesting.</p><p>AARON</p><p>The whole thing is pretty I am pretty. This is maybe bad, but I can't wait to watch the Michael Lewis documentary, pseudo documentary or whatever.</p><p>DANIEL</p><p>Yeah, it'll be good to read the book. Yeah, it's very surreal. I don't know. I was watching Oppenheimer. Right. And I have to admit, part of what I'm thinking is be if humanity survives, there's going to be this style movie about open AI, presumably, right? And I'm like, oh, man, it'll be amazing to see my friend group depicted on film. But that is going to happen. It's just going to be about FTX and about how they're all criminals. So that's not great.</p><p>AARON</p><p>Yeah, actually, everybody dunks on crypto now, and it's like low status now or whatever. I still think it's really cool. I never had more than maybe $2,000 or whatever, which is not a trivial I mean, it's not a large amount of my money either, but it's not like, nothing. But I don't know, if it wasn't for all the cultural baggage, I feel like I would be a crypto bro or I would be predisposed to being a crypto bro or something.</p><p>DANIEL</p><p>Yeah. I should say I was like joking about the greedy crypto people who want their money to not be stolen. 
I currently have a Monero sticker on the back of my a big I don't know, I'm a fan of the crypto space. It seems cool. Yeah. I guess especially the bit that is less about running weird scams. The bit that's running weird scams I'm less of a fan of.</p><p>AARON</p><p>Yeah. Yes. I'm also anti scam. Right, thank you. Okay, so I think that thing that we were talking about last time we talked, which is like the thing I think we actually both know stuff about instead of just like, repeating New York Times articles is my nuanced ethics takes and why you think about talk about that and then we can just also branch off from there.</p><p>DANIEL</p><p>Yeah, we can talk about that.</p><p>AARON</p><p>Maybe see where that did. I luckily I have a split screen up, so I can pull up things. Maybe this is kind of like egotistical or something to center my particular view, but you've definitely given me some of the better pushback or whatever that I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position or the thing that I think is true is that which I think is not the default view. I think most people think this is wrong is that total utilitarianism does not imply that for some amount of suffering that could be created there exists some other extremely large arbitrarily, large amount of happiness that could also be created which would morally justify the former. Basically.</p><p>DANIEL</p><p>So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.</p><p>AARON</p><p>Yeah, and I'd highlight the word inflicted if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I thought about it quite a bit. Yeah.</p><p>DANIEL</p><p>Maybe I should say my contrary view, I guess, when you say that, I don't know, does total utilitarianism imply something or not? I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that yeah. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.</p><p>AARON</p><p>Okay. I do think that position is like the much more common and even, I'd say default view. Do you agree with that? It's sort of like the implicit position of people who are of self described total utilitarians who haven't thought a ton about this particular question.</p><p>DANIEL</p><p>Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian, I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know. I do think there's some instinct to be like, is there any way we can get around this?</p><p>AARON</p><p>Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. 
I think what it looks like in practice and also I will throw in caveats as I see necessary, but I think what it looks like in practice is like, spreading either wild animals or humans or even sentient digital life through the universe. That's in a non as risky way, but that's still just maybe like, say, making the earth, making multiple copies of humanity or something like that. That would be an example that's probably not like an example of what an example of creating suffering would be. For example, just creating another duplicate of earth. Okay.</p><p>DANIEL</p><p>Anything that would be like so much suffering that we shouldn't even the pleasures of earth outweighs.</p><p>AARON</p><p>Not necessarily, which is kind of a cop out. But my inclination is that if you include wild animals, the answer is yes, that creating another earth especially. Yeah, but I'm much more committed to some amount. It's like some amount than this particular time and place in human industry is like that or whatever.</p><p>DANIEL</p><p>Okay, can I get a feel of some other concrete cases to see?</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>So one example that's on my mind is, like, the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is, like, yeah, what? A hundred OD thousand people died? Like, quite terrible, quite awful. And a lot of them died, I guess a lot of them were sort of some people were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the countercase is like, well, the alternative to that would have been like, an incredibly grueling land invasion of Japan, where many more people would have died or know regardless of what the actual alternatives were. If you think about the atomic bombings, do you think that's like the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make that okay?</p><p>AARON</p><p>My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the alternative, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to make or something. And my claim doesn't really pertain to that, at least not directly.</p><p>DANIEL</p><p>Right. Sorry. But when you said you thought your answer was no, you think you can't offset that with pleasure?</p><p>AARON</p><p>My intuition is that you can, but I know very little about how painful those deaths were and how long they lasted.</p><p>DANIEL</p><p>Yeah, so the non offset so it's like, further out than atomic bombing.</p><p>AARON</p><p>That's my guess, but I'm like.</p><p>DANIEL</p><p>Okay, sure, that's your guess. You're not super confident. That's fine. I guess another thing would be, like, the animal farming system. So, as you're aware, tons of animals get kept in farms for humans to eat, by many count. Many of them live extremely horrible lives. Is there some amount that humans could enjoy meat such that that would be okay?</p><p>AARON</p><p>No. So the only reason I'm hesitating is because, like, the question is, like, what the actual alternative is here, but, like, if it's like, if it's, like, people enjoy, like, a meat a normal amount and there's no basically the answer is no. 
Although, like, what I would actually endorse doing depends on what the alternative is.</p><p>DANIEL</p><p>Okay, but you think that factory farming is so bad that it's not offsettable by pleasure.</p><p>AARON</p><p>Yeah, that's right. I'm somewhat maybe more confident than the atomic bombing case, but again, I don't know what it's like to be a factory farm pig. I wouldn't say I'm, like, 99% sure. Probably more than 70% or something. Or 70%, like, conditional on me being right about this thesis, I guess something like that, which I'm like. Yeah, okay. I don't know. Some percent, maybe, not probably not 99% sure, but also more than 60. Probably more than 70% sure or something.</p><p>DANIEL</p><p>All right. Yeah. So I guess maybe can you tell us a little bit about why you would believe that there's some threshold that you like where you can no longer compensate by permitting pleasure?</p><p>AARON</p><p>Yes. Let me run through my argument and sort of a motivation, and the motivation actually is sort of more a direct answer to what you just said. So the actual argument that I have and I have a blog post about this that I'll link, it was part of an EA forum post also that you'll also link in the show description is that the affirmative default case doesn't seem to actually be made anywhere. That's not the complete argument, but it's a core piece of it, which is that it seems to be, like, the default received view, which doesn't mean it's wrong, but does mean that we should be skeptical. If you accept that I'm right, that the affirmative case hasn't been made, we can talk about that. Then you should default to some other heuristic. And the heuristic that I assert and sort of argue, but kind of just assert is a good heuristic is. Okay. Is you do the following thought experiment. If I was a maximally or perfectly rational being, would I personally choose to undergo this amount of suffering in compensation or not compensation, exchange for later undergoing or earlier undergoing some arbitrarily large amount of happiness. And I personally have the intuition that there are events or things that certainly conceivable states and almost certainly possible states that I could be in such that even as a rational being, like as a maximum rational being, I would choose to just disappear and not exist rather than undergo both of these things.</p><p>DANIEL</p><p>Okay.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>Why do you think that?</p><p>AARON</p><p>Yeah, so good question. I think the answer comes at a couple of different levels. So there's a question of why I'm saying it and why I'm saying it is because I'm pretty sure this is the answer I would actually give if actually given if Credibly offered this option. But that just pushes the question back. Okay, why do I feel that.</p><p>DANIEL</p><p>Even what option are we talking about here? There exists a thing such that for.</p><p>AARON</p><p>All pleasures, basically, for example, let's just run with the fact, the assumption that a genie God descends. And I think it's credible, and he offers that I can live the life of every factory, farmed animal in exchange for whatever I want for any amount of time or something like that. Literally, I don't have to give the answer now. It can just be like an arbitrarily good state for an arbitrarily long period of time.</p><p>DANIEL</p><p>Oh, yeah.</p><p>AARON</p><p>And not only would I say the words no, I don't want to do that, I think that the words no, I don't want to do that, are selfishly in a non pejorative sense. 
Correct. And then there's a question of why do I have that intuition? And now I'm introspecting, which is maybe not super reliable. I think part of my intuition that I can kind of maybe sort of access via introspection just comes from basically, I'm very fortunate to not have had a mostly relatively comfortable life, like as a Westerner with access to painkillers, living in the 21st century. Even still, there have definitely been times when I've been suffered, at least not in a relative sense, but just like, in an absolute sense to me, in a pretty bad way. And one example I can give was just like, I was on a backpacking trip, and this is the example I give in another blog post I can link. I was on a backpacking trip, and we didn't have enough food, and I was basically very hungry for like five days. And I actually think that this is a good and I'm rambling on, but I'll finish up. I think it's illustrative. I think there's some level of suffering where you're still able to do at least for me, I'm still able to do something like reasoning and intentionally storing memories. One of the memories I tried to intentionally codify via language or something was like, yeah, this is really bad, this really sucks, or something like, that what.</p><p>DANIEL</p><p>Sucked about it, you were just like, really hungry yeah.</p><p>AARON</p><p>For five days.</p><p>DANIEL</p><p>Okay. And you codified the thought, like, feeling of this hunger I'm feeling, this really sucks.</p><p>AARON</p><p>Something like that. Right. I could probably explicate it more, but that's basically okay. Actually, hold on. All right. Let me add so not just it really sucks, but it sucks in a way that I can't normally appreciate, so I don't normally have access to how bad it sucks. I don't want to forget about this later or something.</p><p>DANIEL</p><p>Yeah. The fact that there are pains that are really bad where you don't normally appreciate how bad they are, it's not clear how that implies non offset ability.</p><p>AARON</p><p>Right, I agree. It doesn't.</p><p>DANIEL</p><p>Okay.</p><p>AARON</p><p>I do think that's causally responsible for my intuition that I lend link to a heuristic that I then argue does constitute an argument in the absence of other arguments for offset ability.</p><p>DANIEL</p><p>Yeah. Okay. So that causes this intuition, and then you give some arguments, and the argument is like, you think that if a genie offered you to live liable factory farmed animals in exchange for whatever you wanted, you wouldn't go for that.</p><p>AARON</p><p>Yes. And furthermore, I also wouldn't go for it if I was much more rational.</p><p>DANIEL</p><p>If you were rational, yeah. Okay. Yeah. What do I think about this? One thing I think is that the I think the case of live experience this suffering and then experience this pleasure, to me, I think that this is kind of the wrong way to go about this. Because the thing about experiencing suffering is that it's not just we don't live in this totally dualistic world where suffering just affects only your immaterial mind or something in a way where afterwards you could just be the same. In the real world, suffering actually affects you. Right. Perhaps indelibly. 
I think instead, maybe the thing I'd want to say is suppose you're offered a gamble, right, where there's like a 1% chance that you're going to have to undergo excruciating suffering and a 99% chance that you get extremely awesome pleasures or something.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>And this is meant to model a situation in which you do some action in which one person is going to undergo really bad suffering and 99 other people are going to undergo really great pleasure. And to me, I guess my intuition is that for any bad thing, you could make the probability small enough and you can make the rest of the probability mass good enough that I want to do that. I feel like that's worth it for me. And now it feels a little bit unsatisfying that we're just going that we're both drilling down to, like, well, this is the choice I would make, and then maybe you can disagree that it's the choice you would make. But yeah, I guess about the gambling case, what do you think about that? Let's say it's literally a one in a million chance that you would have to undergo, let's say, the life of one factory farmed animal.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>Or is that not enough? Do you want it to be like, more?</p><p>AARON</p><p>Well, I guess it would have to be like one of the worst factory farmed animals. Life, I think would make that like.</p><p>DANIEL</p><p>Yeah, okay, let's say it's like, maybe literally one in a billion chance.</p><p>AARON</p><p>First of all, I do agree that these are basically isomorphic or morally equivalent, or if anything, time ordering in my example does mess things up a little bit, I'll be happy to reverse them or say that instead compare one person to 1000 people. So, yeah, you can make the probability small enough that my intuition changes. Yeah. So in fact, 1%, I'm very like, no, definitely not doing that. One in a million. I'm like, I don't know, kind of 50 50. I don't have a strong intuition either way. 100 trillion. I have the intuition. You know what? That's just not going to happen. That's my first order intuition. I do think that considering the case where you live, one being lives both lives, or you have, say, one being undergoing the suffering and then like 100 trillion undergoing the pleasure makes small probabilities more if you agree that they're sort of isomorphic makes them more complete or something like that, or complete more real in some. Not tangible is not the right word, but more right.</p><p>DANIEL</p><p>You're less tempted to round it to zero.</p><p>AARON</p><p>Yeah. And so I tend to think that I trust my intuitions more about reasoning. Okay, there's one person undergoing suffering and like 100 trillion undergoing happiness as it pertains to the question of offset ability more than I trust my intuitions about small probabilities.</p><p>DANIEL</p><p>I guess that's strange because that strikes me as strange because I feel like you're regularly in situations where you make choices that have some probability of causing you quite bad suffering, but a large probability of being fun. Like going to the beach. There could be a shark there. 
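<p><em>[Editor's aside: a toy sketch of the arithmetic behind the gamble discussed above. The utility numbers are invented purely for illustration and are not either speaker's estimates. Under ordinary expected-value reasoning, any finite harm is outweighed once its probability is small enough relative to the upside; a non-offsettability view refuses the same gamble at every probability, because the same harm stays on the table.]</em></p><pre><code># Illustrative only: arbitrary utility units, chosen to show how the sign flips.

def expected_value(p_bad, harm, benefit):
    # harm is a negative number, benefit a positive one, both finite
    return p_bad * harm + (1.0 - p_bad) * benefit

harm = -1e9      # an enormous, but finite, harm
benefit = 100.0  # a large pleasure

for p in (1e-2, 1e-6, 1e-12):
    print(p, expected_value(p, harm, benefit))
# At p = 1e-2 the expected value is hugely negative; at p = 1e-6 it is still
# negative; by p = 1e-12 it turns positive. A lexical, non-offsettability view
# would refuse at all three probabilities, since the harm itself is unchanged.</code></pre>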
I guess this is maybe against your will, but you can go to a restaurant, maybe get food poisoning, but how often are you like, oh man, if I flip this switch, one person will be poisoned, but 99 people will?</p><p>AARON</p><p>Well, then you'd have to think that, okay, staying home would actually be safer for some reason, which I don't affirmatively think is true, but this actually does work out for the question of whether you should kill yourself. And there hopefully this doesn't get censored by Apple or whatever, so nobody do that. But there I just think that my lizard brain or there's enough evolutionary pressure to not trust that I would be rational when it comes to the question of whether to avoid a small chance of suffering by unaliving myself, as they say on TikTok.</p><p>DANIEL</p><p>Hang on, evolution is pressured. So there's some evolutionary pressure to make sure you really don't want to kill yourself, but you think that's like, irrational.</p><p>AARON</p><p>I haven't actually given this a ton of thought. It gets hard when you loop in altruism and yeah, the question also there's like some chance that of sentient's after death, there's not literally zero or something like that. Yeah, I guess those are kind of cop outs. So I don't know, I feel like it certainly could be. And I agree this is sort of like a strike against my argument or something. I can set up a situation you have no potential to improve the lives of others, and you can be absolutely sure that you're not going to experience any sentience after death. And then I feel like my argument does kind of imply that, yeah, that's like the rational thing to do. I wouldn't do it. Right. So I agree. This is like a strike against me.</p><p>DANIEL</p><p>Yeah. I guess I just want to make two points. So the first point I want to make is just methodologically. If we're talking about which are you likely to be more rational about gambles of small risks, small probabilities of risk versus large rewards as opposed to situations where you can do a thing that affects a large number of people one way and a small number of people another way? I think the gambles are more like decisions that you make a bunch and you should be rational about and then just the second thing in terms of like, I don't know, I took you to be making some sort of argument along the lines of there's evolutionary pressure to want to not kill yourself. Therefore, that's like a debunking explanation. The fact that there was evolutionary pressure to not kill ourselves means that our instinct that we shouldn't kill ourselves is irrational. Whereas I would tend to look at it and say the fact that there was very strong evolutionary pressure to not kill ourselves is an explanation of why I don't want to kill myself. And I see that as affirming the choice to not kill myself, actually.</p><p>AARON</p><p>Well, I just want to say I don't think it's an affirmative argument that it is irrational. I think it opens up the question. I think it means it's more plausible that for other I guess not even necessarily for other reasons, but it just makes it more plausible that it is irrational. Well.</p><p>DANIEL</p><p>Yeah, I take exactly the opposite view. Okay. I think that if I'm thinking about, like, oh, what do I really want? If I consider my true preferences, do I really want to kill myself or something? 
And then I learn that, oh, evolution has shaped me to not kill myself, I think the inference I should make is like, oh, I guess probably the way evolution did that is that it made it such that my true desires are to not kill myself.</p><p>AARON</p><p>Yeah. So one thing is I just don't think preferences have any intrinsic value. So I don't know, we might just like I guess I should ask, do you agree with that or disagree with.</p><p>DANIEL</p><p>That do I think preferences have intrinsic value? No, but so no, but I think like, the whole game here is like, what do I prefer? Or like, what would I prefer if I understood things really clearly?</p><p>AARON</p><p>Yes. And this is something I didn't really highlight or maybe I didn't say it at all, is that I forget if I really argue it or kind of just assert it, but I at least assert that the answer to hedonic utilitarian. What you should do under hedonic utilitarianism is maybe not identical to, but exactly the same as what a rational agent would do or what a rational agent would prefer if they were to experience everything that this agent would cause. Or something like that. And so these should give you the exact same answers is something I believe sure. Because I do think preferences are like we're built to understand or sort of intuit and reason about our own preferences.</p><p>DANIEL</p><p>Kind of, yeah. But broadly, I guess the point I'm making at a high level is just like if we're talking about what's ethical or what's good or whatever, I take this to ultimately be a question about what should I understand myself as preferring? Or to the extent that it's not a question of that, then it's like, I don't know, then I'm a bit less interested in the exercise.</p><p>AARON</p><p>Yeah. It's not ideal that I appeal to this fake and that fake ideally rational being or something. But here's a reason you might think it's more worth thinking about this. Maybe you've heard about I think Tomasic makes an argument about yeah. At least in principle, you can have a pig that's in extreme pain but really doesn't want to be killed still or doesn't want to be taken out of its suffering or whatever, true ultimate preference or whatever. And so at least I think this is pretty convincing evidence that you can have where that's just like, wrong about what would be good for it, you know what I mean?</p><p>DANIEL</p><p>Yeah, sorry, I'm not talking about preference versus hedonic utilitarianism or anything. I'm talking about what do I want or what do I want for living things or something. That's what I'm talking about.</p><p>AARON</p><p>Yeah. That language elicits preferences to me and I guess the analogous but the idea.</p><p>DANIEL</p><p>Is that the answer to what I want for living things could be like hedonic utilitarianism, if you see what I mean.</p><p>AARON</p><p>Or it could be by that do you mean what hedonic utilitarianism prescribes?</p><p>DANIEL</p><p>Yeah, it could be that what I want is that just whatever maximizes beings pleasure no matter what they want.</p><p>AARON</p><p>Yeah. Okay. Yeah, so I agree with that.</p><p>DANIEL</p><p>Yeah. So anyway, heading back just to the suicide case right. If I learn that evolution has shaped me to not want to kill myself, then that makes me think that I'm being rational in my choice to not kill myself.</p><p>AARON</p><p>Why?</p><p>DANIEL</p><p>Because being rational is something like optimally achieving your goals. And I'm a little bit like I sort of roughly know the results of killing myself, right? 
There might be some question about, like, but what are my goals? And if I learned that evolution has shaped my goals such that I would hate killing myself, right, then I'm like, oh, I guess killing myself probably ranks really low on the list of states ordered by how much I like them.</p><p>AARON</p><p>Yeah, I guess then it seems like you have two mutually incompatible goals. Like, one is staying alive and one is hedonic utilitarianism, and then you have to choose which of these predominates or whatever.</p><p>DANIEL</p><p>Yeah, well, I think that to the extent that evolution is shaping me to not want to commit suicide, it looks like the not killing myself one is winning. I think it's evidence. I don't think it's conclusive. Right. Because there could be multiple things going on. But I take evolutionary explanations for why somebody would want X, I think that's evidence that they are rational in pursuing X, rather than evidence that they are irrational in pursuing X.</p><p>AARON</p><p>Sometimes that's true, but not always. Yeah, there's a lot, in general it is. Yeah. But I feel like moral anti-realists, we can also get into that, are going to think this is like woo, or, Joe Carlsmith says, when he's like making fun of moral realists, I don't know, in a tongue-in-cheek way, in one of his posts arguing for, explicating his stance on, anti-realism, basically says moral realists want to say that evolution is not sensitive to moral reasons and therefore evolutionary arguments, actually, I don't want to quote him from memory. I'll just assert that evolution is sensitive to a lot of things, but one of them is not moral reasons, and therefore evolutionary arguments are not good evidence when it comes to purely, maybe not even purely, but philosophical claims or object-level moral claims. I guess, yeah, they can be evidence for something, but not that.</p><p>DANIEL</p><p>Yeah, I think that's wrong because I think that evolution, why do I think it's wrong? I think it's wrong because, what are we talking about when we talk about morality? We're talking about some logical object that's like the completion of a bunch of intuitions we have. Right. And those intuitions, I haven't thought about this a ton, but those intuitions are the product of evolution. The reason we care about morality at all is because of evolution, under the standard theory that evolution is the reason our brains are the way they are.</p><p>AARON</p><p>Yeah, I think this is a very strange coincidence and I am kind of weirded out by this, but yes, I.</p><p>DANIEL</p><p>You don't think it's a coincidence, or, like, not a coincidence?</p><p>AARON</p><p>So it's not a coincidence, like, conditional on our evolutionary history. It is, like, not extremely lucky or something that we, like, of course we'd find that earthlings wound up with morality and stuff. Well, of course you would.</p><p>DANIEL</p><p>Wait. Have you read the metaethics sequence by Eliezer Yudkowsky?</p><p>AARON</p><p>I don't think so. And I respect Eliezer a ton, except I think he's really wrong about ethics and metaethics in a lot of, like, I don't even know if I, but I have not, so I'm not really giving it full time.</p><p>DANIEL</p><p>Okay. I don't know. I basically take this from my understanding of the metaethics sequence, which I recommend people read, but I don't think it's a coincidence. I don't think we got lucky. I think it's like this.
There are some species that get evolved, right, and they end up caring about schmorality, right?</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>And there are some species that get evolved, right? And they end up caring about the prime numbers or whatever, and we evolved and we ended up caring about morality. And it's not like a total, so, okay, partly I'm just like, yeah, each one of them is really glad they didn't turn out to be the other things. The ones that care about two of.</p><p>AARON</p><p>Them are wrong, but, two of them are wrong.</p><p>DANIEL</p><p>Well, they're morally wrong. Two of them do morally wrong things all the time. Right?</p><p>AARON</p><p>I want to say that I hate when people say that. Sorry. So what I am saying is that you can call those by different names, but if I'm understanding this argument right, they all think that they're getting at the same core concept, which is like, no, what should we do, in some, okay, so does schmorality have any sort of normativity?</p><p>DANIEL</p><p>No, it has schmormativity.</p><p>AARON</p><p>Okay, well, I don't know what schmormativity is.</p><p>DANIEL</p><p>You know how normativity is about promoting the good? Schmormativity is about promoting the schmud.</p><p>AARON</p><p>Okay, so it sounds like that's just normativity, except it's normativity about different propositions. That's what it sounds like.</p><p>DANIEL</p><p>Well, basically, I don't know, instead of these schmaliens, wait, no, they're aliens. They're not schmaliens. They're aliens. They just do a bunch of schmud things, right? They engage in projects, they try and figure out what the schmud is. They pursue the schmud, and then they look at humans and they're like, oh, these humans are doing morally good things. That's horrible. I'm so glad that we pursue the schmud instead.</p><p>AARON</p><p>Yeah, I don't know if it's incoherent. I don't think they're being incoherent. Your description of a hypothetical, let's just take for granted that whatever is in the thought experiment is in fact happening, I think your description is not correct. And the reason it's not correct is because there is, like, what's a good analogy? So when it comes to abstract concepts in general, it is very possible for, okay, I feel like it's hard to explain directly, but here's an analogy: you can have two different people who have very different conceptions of justice, but fundamentally are earnestly trying to get at the same thing. Maybe justice isn't well defined or isn't, like, actually, I should probably have come up with a good example here. But you know what? I'm happy to change the word for what I use as morality or whatever, but it has the same core meaning, which is like, okay, really, what should you do at the end of the day?</p><p>DANIEL</p><p>Yeah.</p><p>AARON</p><p>What should you do?</p><p>DANIEL</p><p>Whereas they care about schmorality, which is what they schmould do, which is a different thing. They have strong desires to do what they schmould do.</p><p>AARON</p><p>I don't think it is coherent to say that there are multiple meanings of the word should or multiple kinds. Yeah.</p><p>DANIEL</p><p>No, there aren't.</p><p>AARON</p><p>Sorry. There aren't multiple meanings of the word should. Fine.</p><p>DANIEL</p><p>There's just a different word, which is schmould, which means something different, and that's what their desires are pegged to.</p><p>AARON</p><p>I don't think it's coherent, given what you've already said. The entire picture, I think, is incoherent.
Given everything else besides the word schmud, it is incoherent to assert that there is something broadly, not analogous, like maybe isomorphic, to normativity or, like, the word should. Yeah. There is only, what's, yeah. I feel like I'm not gonna be able to verbalize it super well. I do. Yeah. Can you take something, can you pick.</p><p>DANIEL</p><p>A sentence that I said that was wrong or that was incoherent?</p><p>AARON</p><p>Well, it's all wrong because these aliens don't exist.</p><p>DANIEL</p><p>The aliens existed.</p><p>AARON</p><p>Okay, well, then we're debating, like, I actually don't know. It depends. You're asserting something about their culture and psychology, and then the question is, like, are you right or wrong about that? If we just take for granted that you're right, then you're right. All right. I'm saying no, you can't be sure. So conditional on being right, you're right. Then there's a question of, like, okay, what is the probability? So, like, conditional on aliens with something broad, are you willing to accept this phrase, like, something broadly analogous to morality? Is that okay?</p><p>DANIEL</p><p>Yeah, sure.</p><p>AARON</p><p>Okay. So if we accept that there's aliens with something broadly analogous to morality, then you want to say that they can have not only a different word, but truly a pointer to a different concept. And I think that's false.</p><p>DANIEL</p><p>So you think that in conceptual space, there's morality and that there's, like, nothing near it for miles.</p><p>AARON</p><p>Like, yeah, basically. At least when we're talking about, like, the, at the pre-conclusion stage. So, like, before you get to the point where you're like, oh, yeah, I'm certain that, like, the answer is just that we need, like, we need to make as many tennis balls as possible or whatever, the general thing of, like, okay, broadly, what is the right thing to do? What should I do? Would it be good for me to do this? That cluster of things, yeah, is, like, miles from everything else.</p><p>DANIEL</p><p>Okay. I think there's something true to that. I think I agree with that in some ways, and on others, my other response is, I think it's not a total coincidence that humans ended up caring about morality. I think if you look at these evolutionary arguments for why humans would be motivated to pursue morality, they rely on very high-level facts. Like, there are a bunch of humans around. There's not one human who's, like, a billion times more powerful than everyone else. We have language. We talk through things. We reason. We need to make decisions. We need to cooperate in certain ways to produce stuff. And it's not about the fact that we're bipedal or something. So in that sense, I think it's not a total coincidence that we ended up caring about morality. And so in some sense, because that's true, you could maybe say you couldn't slightly tweak our species such that it cared about something other than morality, which is kind of like saying that there's nothing that close to morality in concept space.</p><p>AARON</p><p>But I think I misspoke earlier. What I should have said is that it's very weird that we care about, that most people at least partially care about, suffering and happiness. I think that's just a true statement. Sorry, that is the weird thing. Why is it weird?
The weird thing is that it happens to be correct, even though I only have.</p><p>DANIEL</p><p>What do you mean it's correct?</p><p>AARON</p><p>Now we have to get, okay, so this is going into moral realism. I think moral realism is true, at least.</p><p>DANIEL</p><p>Sorry, what do you mean by moral realism? People mean different things by moral realism.</p><p>AARON</p><p>Yes. So I actually have sort of a weak version of moral realism, which is, like, not that normative statements are true, but that there is, like, an objective ranking. So you can rank hypothetical states of the world in an ordinal way such that one is objectively better than another.</p><p>DANIEL</p><p>Yes. Okay. I agree with that, by the way. I think that's true. Okay.</p><p>AARON</p><p>It sounds like you're a moral realist.</p><p>DANIEL</p><p>Yeah, I am.</p><p>AARON</p><p>Okay. Oh, really? Okay. I don't know. I thought you weren't. Okay, cool.</p><p>DANIEL</p><p>Lots of people in my reference class aren't. I think most Bay Area rationalists are not moral realists, but I am.</p><p>AARON</p><p>Okay. Maybe I was confused. Okay, that's weird. Okay. Sorry about that. Wait, so what do I mean by it happens to be true? It's like it happens to coincide with, yeah, sorry, go ahead.</p><p>DANIEL</p><p>You said it happens to be correct that we care about morality, or that we care about suffering and pleasure and stuff.</p><p>AARON</p><p>Maybe that wasn't the ideal terminology, it happens to. So, like, it's not morally correct, the caring about it isn't the morally correct thing. It seems sort of like the caring is instrumentally useful in promoting what happens to be legitimately good, or something like that.</p><p>DANIEL</p><p>But I think, like, so the aliens could say a similar thing, right? They could say, like, oh, hey, we've noticed that we all care about schmorality. We all really care about promoting schmeasure and avoiding schmuffering. Right? And they'd say, like, yeah, what's wrong?</p><p>AARON</p><p>I feel like it's not, maybe I'm just missing something, but at least to me, it's like, only adding to the confusion to talk about two different concepts of morality, rather than just, like, okay, this alien thinks that you should tile the universe with paperclips, or something like that, or even, more reasonably, more plausibly, justice is like that. Yeah. I guess this gets back to, there's only one concept anywhere near that vicinity in concept space or something. Maybe we disagree about that. Yeah.</p><p>DANIEL</p><p>Okay. If I said paperclips instead of schmorality, would you be happy?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>I mean, cool, okay, for doing the.</p><p>AARON</p><p>Morally correct thing and making me happy.</p><p>DANIEL</p><p>I strive to. But take the paperclipper species, right? What they do is they notice, like, hey, we really care about making paperclips, right? And, hey, the fact that we care about making paperclips, that's instrumentally useful in making sure that we end up making a bunch of paperclips, right? Isn't that an amazing coincidence, that we ended up caring, our desires were structured in this correct way that ends up with us making a bunch of paperclips? Is that like, oh, no, total coincidence. That's just what you cared about.</p><p>AARON</p><p>You left out the part where they assert that they're correct about this.
That's the weird thing.</p><p>DANIEL</p><p>What proposition are they correct about?</p><p>AARON</p><p>Or sorry, I don't think they're correct, implicitly.</p><p>DANIEL</p><p>What proposition do they claim they're correct about?</p><p>AARON</p><p>They claim that the world in which there are many paperclips is better than the world in which there are fewer paperclips.</p><p>DANIEL</p><p>Oh, no, they just think it's more paperclippy. They don't think it's better. They don't care about goodness. They care about paperclips.</p><p>AARON</p><p>So it sounds like we're not talking about anything remotely like morality, then, because I could say, yeah, morality, schmorality, it's pretty airy, there's a lot of air in here. I don't know, maybe I'm just confused.</p><p>DANIEL</p><p>No, what I'm saying is, you're like, oh, it's like this total coincidence, we got so lucky, it's so weird that humans ended up caring about morality. And it's like, well, we had to care about something, right? Like, anything we don't care about.</p><p>AARON</p><p>Oh, wow, sorry, I misspoke earlier. And I think that's generating some confusion. I think it's a weird coincidence that we care about happiness and suffering.</p><p>DANIEL</p><p>Happiness and suffering, sorry. Yeah, but mutatis mutandis, I think you want to say that's like a weird coincidence. And I'm like, well, we had to care about something.</p><p>AARON</p><p>Yeah, but it could have been, like, I don't know, could it have been otherwise, right? At least conceivably it could have been otherwise.</p><p>DANIEL</p><p>Yeah, the paperclip guys, they're like, conceivably, we could have ended up caring about pleasure and suffering. I'm so glad we avoided that.</p><p>AARON</p><p>Yeah, but they're wrong and we're right.</p><p>DANIEL</p><p>Right about what?</p><p>AARON</p><p>And then, maybe I don't agree, maybe this isn't the point you're making. I'm sort of saying that in a blunt way to emphasize it. I feel like people should be skeptical when I say, like, okay, I have good reason to think that, even though we're in a very similar epistemic position, I have reason to believe that we're right and not the aliens. Right. That's like a hard case to make, but I do think it's true.</p><p>DANIEL</p><p>There's no proposition that the aliens and us disagree on.</p><p>AARON</p><p>Yes: the intrinsic value of pleasure and happiness.</p><p>DANIEL</p><p>Yeah, no, they don't care about value. They care about schmalue, which is just.</p><p>AARON</p><p>Like, how many paperclips there are. I don't think that's coherent. I don't think they can care about value.</p><p>DANIEL</p><p>Okay.</p><p>AARON</p><p>They can, but only insofar as it's a pointer to the exact same, not exact, but, like, basically the same concept as our value.</p><p>DANIEL</p><p>So do you reject the orthogonality thesis?</p><p>AARON</p><p>No.</p><p>DANIEL</p><p>Okay. I think that is super intelligent.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>So I take the orthogonality thesis to mean that really smart agents can be motivated by approximately any desires. Does that sound right to you?</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>So what if the desire is, like, produce a ton of paperclips?</p><p>AARON</p><p>Yeah, it can do that, descriptively. It's not morally good.</p><p>DANIEL</p><p>Oh, no, it's not morally good at all. They're not trying to be morally good. They're just trying to produce a bunch of paperclips.</p><p>AARON</p><p>Okay, in that case, we don't disagree. Yeah, I agree.
This is like a conceivable state of the world.</p><p>DANIEL</p><p>Yeah. But what I'm trying to say is, when you say it's weird that we got lucky, the reason you think it's weird is that you're one of the humans who cares about pleasure and suffering. Whereas if you were one of the aliens who cared about paperclips, the analogous Schmaaron instead of Aaron would be saying, like, oh, it's crazy that we care about paperclips, because that actually causes us to make a ton of paperclips.</p><p>AARON</p><p>Do they intrinsically care about paperclips, or is it a means to an end?</p><p>DANIEL</p><p>Intrinsically, like, same as in the orthogonality thesis.</p><p>AARON</p><p>Do they experience happiness because of the paperclips, or is it more of a functional intrinsic value?</p><p>DANIEL</p><p>I think they probably experience happiness when they create paperclips, but they're not motivated by the happiness. They're motivated by, like, they're happy because they succeeded at their goal of making tons of paperclips. If they can make tons of paperclips but not be happy about it, they'd be like, yeah, we should do that. Sorry. No, they wouldn't. They'd say, like, we should do that, and then they would do it.</p><p>AARON</p><p>Would your case still work if we just pretended that they're not sentient?</p><p>DANIEL</p><p>Yeah, sure.</p><p>AARON</p><p>Okay. I think this makes it cleaner for both sides. Yeah, in that case, yes. So I think the thing that I reject is that there's an analogous term that's anything like morality in their universe. They can use a different word, but it's pointing to the same concept.</p><p>DANIEL</p><p>When you say anything like morality. So the shared concept, sorry, the shared properties, between morality and paperclip promotion is just that you have a species that is dedicated to promoting it.</p><p>AARON</p><p>I disagree. I think morality is about goodness and badness.</p><p>DANIEL</p><p>Yes, that's right.</p><p>AARON</p><p>Okay. And I think it is totally conceivable. Not even conceivable. So humans, wait, what's a good example? In some sense I intrinsically seem to value, I don't know if this is a good example, let's run with it, intrinsically value, like, regulating my heartbeat. It happens to be true that this is conducive to my happiness and at least local non-suffering. But even if it weren't, my brain stem would still try really hard to keep my heart beating, or something like that. I reject that there's any way in which promoting heart-beatingness is an intrinsic moral or schmoral value, or even that it could be, it could be hypothesized as one, but it is not in fact one, or something like that.</p><p>DANIEL</p><p>Okay.</p><p>AARON</p><p>Likewise, these aliens could claim that making paperclips is intrinsically good. They could also just make them and not make that claim. And those are two very different things.</p><p>DANIEL</p><p>They don't claim it's good. They don't think it's good.</p><p>AARON</p><p>They claim it's schmud.</p><p>DANIEL</p><p>Which they prefer. Yeah, they prefer.</p><p>AARON</p><p>I don't. I think that is also incoherent. I think there is, like, one concept in that space, because, wait, I feel like also, this is just like, at some point it has to cash out in the real world. Right? Unless we're talking about really speculative, not even physics.</p><p>DANIEL</p><p>What I mean is, they just spend all of their time promoting paperclips, and then you send them a copy of Jeremy Bentham's collected writings, they read it and they're like, all right, cool.
And then they just keep on making paperclips because that's what they want to do.</p><p>AARON</p><p>Yeah. So descriptively.</p><p>DANIEL</p><p>Sure.</p><p>AARON</p><p>But they never claim that. It's like, we haven't even introduced objectivity to this example. So do they ever claim that it's objectively the right thing to do?</p><p>DANIEL</p><p>No, they claim that it's objectively the paperclippy thing to do.</p><p>AARON</p><p>I agree with that. It is the paperclippy thing to do.</p><p>DANIEL</p><p>Yeah, they're right about stuff. Yeah.</p><p>AARON</p><p>So they're right about that. They're just not right. So I do think this all comes back down to the question of whether there are analogous concepts near-ish to morality that an alien species might point at. Because if there's not, then the paperclippiness is just, like, a totally radically different type of thing.</p><p>DANIEL</p><p>But why does it, like, when did I say that they were closely analogous? This is what I don't understand.</p><p>AARON</p><p>So it seems to be insinuated by the closeness of the word, semantically.</p><p>DANIEL</p><p>Oh yeah, whatever. When I was making it a similar-sounding word, all I meant to say is that, they talk about it, it plays a similar role in their culture as morality plays in our culture. Sorry, in terms of their motivations, I should say. Oh, yeah.</p><p>AARON</p><p>I think there's plenty of human cultures that are getting at morality. Yeah. So I think, especially historically, plenty of human cultures that are getting at the same core concept of morality but just are wrong about it.</p><p>DANIEL</p><p>Yeah, I think that's right.</p><p>AARON</p><p>Fundamentalist religious communities or whatever, you can't just appeal to, like, oh, they have some sort of weird, kind of similar but very different thing called morality.</p><p>DANIEL</p><p>Although, I don't know, I actually think that, okay, backing up. All I'm saying is that beings have to care about something, and we ended up caring about morality. And I don't think, like, I don't know, I don't think that's super surprising or coincidental or whatever. A side point I want to make is that I think if you get super into being religious, you might actually start referring to a different concept by morality. How familiar are you with classical theism?</p><p>AARON</p><p>That's not a term that I recognize, although I took a couple of theology classes, so maybe more of them if I hadn't done that.</p><p>DANIEL</p><p>Yeah, so classical theism, it's a view about the nature of God, which is, I'm going to do a bad job of describing it. Yeah, I'm not a classical theist, so you shouldn't take classical theist doctrine from me. But it's basically that God is, like, sort of, God is the being whose attributes just are his existence, or something like that. It's weird. But anyway, there's some school of philosophy where they're like, yeah, there's this transcendent thing called God, we can know God exists from first principles. And in particular, their account of goodness: so, how do you get around the Euthyphro dilemma, right? Instead of something like divine command theory, what they say is that when we talk about things being good, good just refers to the nature of God. And if you really internalize that, then I think you might end up referring to something different than actual goodness. Although I think it's probably, there's no such being as God in the classical theist sense.</p><p>AARON</p><p>Yeah.
So they argue what we mean by good is this other.</p><p>DANIEL</p><p>Concept. They would say that when everyone talks about good, what they actually mean is pertaining to the divine nature, but we just didn't really know that we meant that, the same way that when we talked about water, we always meant H2O, but we didn't use to know that.</p><p>AARON</p><p>I'm actually not sure if this is, I'm very unconfident, but I kind of want to bite the bullet and say, like, okay, fine, in that case, yeah, I'm talking about the divine nature, but we just have radically different understandings of what the divine nature is.</p><p>DANIEL</p><p>You think you're talking about the divine nature.</p><p>AARON</p><p>Right?</p><p>DANIEL</p><p>Why do you think that?</p><p>AARON</p><p>Sorry, I think I very slightly was not quite pedantic enough. Sorry, bad cell phone or whatever. Once again, not very confident at all.</p><p>DANIEL</p><p>But.</p><p>AARON</p><p>I think, I think that I'm willing to, so I think that I'm referring to the divine nature, but what I mean by the divine nature is that which these fundamentalist people are referring to. So I want to get around the term and say, like, okay, whatever these fundamentalists are referring to, I am also referring to that.</p><p>DANIEL</p><p>Yeah, I should say classical theism is slightly different, when people say fundamentalists, they often mean, like, a different corner of Christian space than classical theists. Classical theists are, like, Ed Feser, esoteric Catholics or something. Yeah, they're super into it.</p><p>AARON</p><p>Okay, anyway, yes, just to put it all together, I think that when I say morality, I am referring to the same thing that these people are referring to by the divine nature. That's what it took me like five minutes to actually say.</p><p>DANIEL</p><p>Oh yeah, so I don't think you are. So when they refer to the divine nature, what they at least think they mean is, they think that the divine is sort of defined by the fact that its existence is logically necessary, its existence is in some sense its attributes, it couldn't conceivably not have its various attributes, the fact that it is, like, the primary cause of the world and sustainer of all things. And I just really doubt that the nature of that thing is what you mean by morality.</p><p>AARON</p><p>No, those are properties that they assert, but I feel like, tell me if I'm wrong, but my guess is that if one such person were to just suddenly come to believe that actually all of that's right, except it's not actually logically necessary that the divine nature exists, it happens to be true, but it's not logically necessary, they would still be sort of pointing to the same concept. And I just think, yeah, it's like that, except all those lists of properties are wrong.</p><p>DANIEL</p><p>I think if that were true, then classical theism would be false.</p><p>AARON</p><p>Okay.</p><p>DANIEL</p><p>So maybe in fact you're referring to the same thing that they actually mean by the divine nature, but what they think they mean is this classical theistic thing. Right. And it seems plausible to me that some people get into it enough that what they actually are trying to get at when they say good is different than what normal people are trying to get at when they say good.</p><p>AARON</p><p>Yeah, I don't think that's true.
Okay, let's set aside the word morality, because, especially, I feel like in circles that we're in, it has a strong connotation with a sort of, like, modern-ish analytic philosophy, maybe like some other things that are in that category.</p><p>DANIEL</p><p>Your video has gotten worse, but your sound is back.</p><p>AARON</p><p>Okay, well, okay, I'll just keep talking. All right, so you have the divine nature and morality, and maybe other things that are like those two things but still apart from them. So in that class of things, and then there's the question of, like, okay, maybe everybody, necessarily anybody, who thinks that there are any true statements about something broadly in the vicinity of goodness in idea space is pointing to the meta level of that, or whichever one of those is truly correct, or something. This is pretty speculative. I have not thought about this. I'm not super confident.</p><p>DANIEL</p><p>Yeah, I think I broadly believe this, but I think this is right about most people when they talk. But you could imagine, even with utilitarianism, right? Imagine somebody getting super into the weeds of utilitarianism. They lived utilitarianism twenty-four seven. And then maybe at some point they just substitute in utilitarianism for morality. Now when they say morality, they actually just mean utilitarianism, and they're just discarding the ladder of the broad concepts and intuitions behind them. Such a person might just, I don't know, I think that's the kind of thing that can happen. And then you might just mean a.</p><p>AARON</p><p>Different thing by the word. I don't know if it's a bad thing, but I feel like I do this when I say, oh, X is moral to do, or morally good to do. It's like, what's the real semantic relationship between that and, it's correct on utilitarianism to do? I feel like they're not defined as the same, but they happen to be the same, or something. Now we're just talking about how people use words.</p><p>DANIEL</p><p>Yeah, they're definitely going to happen to be the same in the case that utilitarianism is, like, the right theory of morality. But you could imagine, you could imagine even in the case where utilitarianism was the wrong theory, you might still just mean utilitarianism by the word good, because you just forgot the intuitions from which you were building your theory of morality, and you're just like, okay, look, I'm just going to talk about utilitarianism now.</p><p>AARON</p><p>Yeah, I think this is, like, yeah, this could happen. I feel like this is a cop-out and, like, a non-answer, but I feel like getting into the weeds of the philosophy of language and what people mean by concepts and words and the true nature of concepts, it's just not actually that useful. Or maybe it's just not as interesting to me. I'm glad that somebody thought about that, ever.</p><p>DANIEL</p><p>I think this can happen, though. I think this is actually a practical concern. Right. Okay. Utilitarianism might be wrong, right? Does that strike you as right? Yeah, I think it's possible for you to use language in such a way that, if utilitarianism were wrong, what that would mean is that in ordinary language, goodness, the good thing to do, is not always the utilitarian thing to do, right? Yes, but I think it's possible to go down an ideological rabbit hole. This is not specific to utilitarianism. Right.
I think this can happen with tons of things, where when you say goodness, you just mean utilitarianism, and you don't have a word for what everyone else meant by goodness, and then I think that's really hard to recover from. And I think that's the kind of thing that can conceivably happen and maybe sometimes actually happens.</p><p>AARON</p><p>Yeah, I guess as an empirical matter, and like an empirical psychological matter, yes. Do people's brains ever operate this way? Yes. I don't really know where that leaves us. Maybe we should move on to a different topic or whatever.</p><p>DANIEL</p><p>Can I just say one more thing?</p><p>AARON</p><p>Yeah, totally.</p><p>DANIEL</p><p>First, I should just give this broad disclaimer that I'm not a philosopher and I don't really know what I'm talking about. But the second thing is, that particular final point, I was sort of inspired by a paper I read. I think it's called, like, "Do Christians and Muslims Worship the Same God?", which is actually a paper about the philosophy of naming and what it means for proper names to refer to the same thing. And it's pretty interesting, and it has a footnote about why you would want to discourage blasphemy, which is sort of about this. Anyway.</p><p>AARON</p><p>No, I personally don't find this super interesting. I can sort of see how somebody would, and I also think it's potentially important, but I think it's maybe, yeah.</p><p>DANIEL</p><p>Actually, it's actually kind of funny. Can I tell you a thing that I'm a little bit confused about?</p><p>AARON</p><p>Yeah, sure.</p><p>DANIEL</p><p>So philosophers, just, there's this branch of philosophy that's the philosophy of language, and in particular the philosophy of reference, right? Like, what does it mean when we say a word refers to something in the real world? And some subsection of this is the philosophy of proper names. Right. So when I say Aaron is going to the, like, what do I mean by Aaron? Like, if it turned out that these interactions that I'd been having online, all of them were faked, but there was a real human named Bergman, would that count as making that sentence true or whatever? Anyway, there's some philosophy on this topic, and apparently we didn't need it to build a really smart AI. No AI person has studied this. Essentially, these theories are not really baked into the way we do AI these days.</p><p>AARON</p><p>What do you think that implies or suggests?</p><p>DANIEL</p><p>I think it's a bit confusing. I think naively, you might have thought that AIs would have to refer to things, and naively, you might have thought that in order for us to make that happen, we would have had to understand the philosophy of reference or of naming, at least on some sort of basic level. But apparently we just didn't have to. Apparently we could just, like, not have that.</p><p>AARON</p><p>In fact, just hearing your description, my initial intuition is like, man, this does not matter for anything.</p><p>DANIEL</p><p>Okay. Can I try and convince you that it should matter? Yeah, tell me how I fail to convince you.</p><p>AARON</p><p>Yeah, all right.</p><p>DANIEL</p><p>Humans are pretty smart, right? We're like the prototypical smart thing. How are humans smart? I think one of the main ingredients of that is that we have language. Right?</p><p>AARON</p><p>Yes. Oh, and by the way, this gets to the unpublished episode with Nathan Barnard.</p><p>DANIEL</p><p>Coming out, an unpublished one? I think I've seen an episode with him.</p><p>AARON</p><p>Oh, yeah.
This is the second one because he's.</p><p>DANIEL</p><p>Been very, oh, exciting. All right, well, maybe all this will be superseded by this unpublished episode.</p><p>AARON</p><p>I don't think so. We'll see.</p><p>DANIEL</p><p>But okay, we have language, right. Why is language useful? Well, I think it's probably useful in part because it refers to stuff. When I say stuff, I'm talking about the real world, right?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Now, you might think that in order to build a machine that was smart and wielded language usefully, it would also have to have language. We would have to build it such that its language referred to the real world. Right. And you might further think that in order to build something that uses language, that actually succeeds at doing reference, we would have to understand what reference was.</p><p>AARON</p><p>Yes. I don't think that's right. Because insofar as we can get what we call useful, as in language in, language out, without any direct interaction, without the AIs directly manipulating the world, or maybe not directly, but without using language understanders, or beings that do have this reference property, that's what their language means to them, then this would be right. But because we have ChatGPT, what the use comes from is, like, giving language to humans, and the humans have reference to the real world. But if the humans, you need some connection to reference, but it doesn't have to be at every level, or something like that.</p><p>DANIEL</p><p>Okay, so do you think that, suppose we had something that was like ChatGPT, but we gave it access to some robot limbs and it could pick up mice, maybe it could pick up apples and throw the apples into the furnace powering its data center. We give it these limbs and these actuators, sort of analogous to how humans interact with the world. Do you think in order to make a thing like that that worked, we would need to understand the philosophy of reference?</p><p>AARON</p><p>No. I'm not sure why.</p><p>DANIEL</p><p>I also don't know why.</p><p>AARON</p><p>Okay, well, evolution didn't understand the philosophy of reference. I don't know what that tells us.</p><p>DANIEL</p><p>I actually think this is, like, my lead answer: we're just making AIs by just randomly tweaking them until they work. That's my rough summary of stochastic gradient descent. In some sense, this does not require you to have a strong sense of how to implement your AIs. Maybe that's why we don't need to.</p><p>AARON</p><p>Understand philosophy, or the SGD process is doing the philosophy. In some sense, that's kind of how I think about it, or how I think about it now. I guess during the SGD process, you're, like, tweaking basically the algorithm, and at the end of the day, probably in order to, say, pick up marbles or something, reference to a particular marble or the concept of marble, not only the concept, but both the concept and probably a particular marble, is going to be encoded. Well, I guess the concept of marble, if that's how it was trained, that will be encoded in the weights themselves, you know what I mean? But then maybe a particular marble, using vision to see that marble, will be encoded in a particular layer's activations.</p><p>DANIEL</p><p>Or something, something like that, maybe.
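</p><p>[A minimal, illustrative sketch of the "randomly tweak the weights until they work" picture of stochastic gradient descent that the conversation is gesturing at; the tiny dataset, target function, and learning rate here are hypothetical, chosen purely for illustration, not anything from the episode.]</p><pre><code># Toy stochastic gradient descent: learn a single weight w so that w * x
# approximates targets generated by y = 3 * x, by repeatedly nudging w
# against the gradient of the squared error on one random example.
import random

data = [(x, 3.0 * x) for x in range(1, 11)]  # hypothetical (input, target) pairs
w = 0.0     # the "weight" being tweaked
lr = 0.001  # learning rate: how big each tweak is

for step in range(5000):
    x, y = random.choice(data)   # "stochastic": look at one example at a time
    error = w * x - y
    grad = 2 * error * x         # derivative of (w*x - y)**2 with respect to w
    w -= lr * grad               # tweak w in the direction that shrinks the error

print(w)  # ends up near 3.0; nobody hand-coded what the mapping "means"
</code></pre><p>DANIEL</p><p>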
Yeah, I think this is, like, yeah, I guess what we're getting at is something like, look, meaning is like a thing you need in order to make something work, but if you can just directly have a thing that gradually gets itself to work, that will automatically produce meaning, and therefore we don't have to think about it.</p><p>AARON</p><p>It will have needed to figure out meaning along the way.</p><p>DANIEL</p><p>Yeah, but we won't have needed to figure it out. That'll just happen in the training process.</p><p>AARON</p><p>Yeah. I mean, in the same way that everything happens in the training process. Yeah, that's where all the magic happens.</p><p>DANIEL</p><p>All right, so do you want to hear my new philosophy of language proposal?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Yeah. So here's the new proposal. I think the theory of reference is not totally solved to everyone's satisfaction. So what we're going to do is we're going to train ChatGPT to manipulate objects in the physical world, right? And then we're going to give the weights to the philosophers. We're also going to give them, like, a bunch of the training checkpoints, right?</p><p>AARON</p><p>And then they're going to look at.</p><p>DANIEL</p><p>This, and then they're going to figure out the philosophy of meaning.</p><p>AARON</p><p>What are training checkpoints?</p><p>DANIEL</p><p>Oh, just like the weights at various points during training.</p><p>AARON</p><p>Okay, and your proposal is that the philosophers are going to, well, we haven't solved mech interpretability anyway, right? Yeah. I feel like this is empirically not possible, but conceptually, maybe the outcome won't be, like, solving meaning, but either solving meaning or deciding that it was a confused question or something, there was no answer, but something resolution-like.</p><p>DANIEL</p><p>Yeah. I don't know. I brought this up as, like, a reductio ad absurdum or something, or sort of to troll. But actually, if we get good enough at mechanistic interpretability, maybe this does just shine light on the correct theory of reference.</p><p>AARON</p><p>I mean, I'm just skeptical that we need a theory of reference. I don't know, it seems kind of like philosopher word games to me or something like that. I mean, I can be convinced otherwise. It's just, like, I haven't seen that.</p><p>DANIEL</p><p>I'm not sure that we need it. Right. I think we do fine without an explicit one, but I don't think you can tell.</p><p>AARON</p><p>Yes. Okay.</p><p>DANIEL</p><p>Can I tell you my favorite? It's sort of like a joke. It's a sentence that, yeah. All right, so here's the sentence. You know Homer, right? Like the Greek poet who wrote the Iliad and the.</p><p>AARON</p><p>Oh, is that the.</p><p>DANIEL</p><p>No, this is the setup. By the way, do you know anything else about Homer?</p><p>AARON</p><p>Male? I don't know that, I think that's it.</p><p>DANIEL</p><p>Yeah, okay, all right. This is not going to be funny as a joke, but it's meant to be a brain tickler, right? So the Iliad and the Odyssey, they weren't actually written by Homer. They were written by a different Greek man, by a different Greek man who.</p><p>AARON</p><p>Was also named Homer. I thought I saw somebody tweet this.</p><p>DANIEL</p><p>I think she got it from me.</p><p>AARON</p><p>That's my, okay, cool.</p><p>DANIEL</p><p>She might have got it from the lecture that I watched.</p><p>AARON</p><p>Maybe you can explain to me.
Other people are saying, oh yeah, I don't think they were rolling on the ground laughing or whatever, but they were like, oh, ha, this is actually very funny after you explain it. And I did not have that intuition at all. I'm like, okay, so there's two guys named Homer, where's the brain tickly part?</p><p>DANIEL</p><p>Oh, the brain tickly part is this. How could that sentence possibly be true when all you knew about Homer was that he was a Greek guy who wrote The Iliad and The Odyssey and that he was named Homer?</p><p>AARON</p><p>How could that sentence, okay, so I feel like the sentence on its own doesn't have a truth value, but what it implies. If I just heard that in normal conversation, in fact, when I heard it just now, and if I were to hear it in normal conversation, what I would take it to mean is, the famous guy who all the academics talk about, turns out, yes, that is Person A. And there was also this other person who is not, somebody else has a better, more solid understanding of Homer beyond defining him as the author of The Iliad and The Odyssey, even though that's really all I know about him. I trust there's other people for whom this is not the case. And implicitly, I'm thinking, okay, so there's some philosophy or history dudes or whatever, who, they know where he was born, they know his middle name or whatever, and so we're just going to call him Person A. And in fact, there was another guy named Homer, and there's no contradiction there or whatever.</p><p>DANIEL</p><p>What if nobody alive? What if everything that, so I think this is actually plausible, I think, in terms of what living people know about Homer, I think it's just that he was a guy named Homer, he was Greek, he wrote The Iliad and The Odyssey, or at least is reputed to have. And maybe we know something about the period in which he lived, and maybe you can figure out the part of Greece in which he lived from the language, but I think that's probably all humanity currently knows about him.</p><p>AARON</p><p>So maybe, maybe the statement can be, it feels like it can be false. And the way it could be false is if we took a census of, just suppose we had a census of everybody who ever lived in that period and there was only one Homer, well, then we would know that statement is false.</p><p>DANIEL</p><p>What do you mean, only one Homer?</p><p>AARON</p><p>I mean, there were not two individuals in the census, this hypothetical census, named.</p><p>DANIEL</p><p>Homer, who were given the name Homer. Gotcha. Yeah, that would make it false.</p><p>AARON</p><p>And so it seems to be carrying substantive information that, in fact, we have historical evidence of two different individuals, and we have reason to believe there were two different individuals who went by the name Homer, and one of them wrote The Iliad and The Odyssey. And given those two facts, then the statement is true.</p><p>DANIEL</p><p>Okay, if the statement were, so, in the past, there were two different people named Homer, and only one of them wrote The Iliad and The Odyssey. But then why would we not say that The Iliad and The Odyssey were written by Homer? Why would we say they weren't written by Homer if they were written by a different guy who was also named Homer?</p><p>AARON</p><p>Yeah, so this gets back to the difference between the statement per se and my interpretation. So the statement per se, it sounds like there's no difference there. Or the phrase, like, some other guy named Homer, where it's, like, redundant, maybe not wrong, but, like, redundant or something, or maybe even wrong.
I don't know. The information carried in the statement would be equivalent if you just said, we have good reason to believe there was not merely one Homer, but two, and indeed, one of these people wrote The Iliad and The Odyssey. It's the same statement, basically.</p><p>DANIEL</p><p>All right, so here's the thing I'm going to hit you up with. I think usually people have, like, most people have names that other people also have, right?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Like, there's more than one person named Daniel. There's more than one person named Aaron.</p><p>AARON</p><p>Right.</p><p>DANIEL</p><p>There was probably more than one person named Homer around the time when Homer was supposed to have lived. All right, so, yeah, Homer didn't write The Iliad and The Odyssey. They were written by some other guy who was also named Homer.</p><p>AARON</p><p>Yeah, I think that's a true statement.</p><p>DANIEL</p><p>Oh, I think it's false. Can I try and convince you that you're wrong to say that's a true statement?</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>All right, here's one statement. Homer wrote The Iliad and The Odyssey. Right?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Do you think that's true?</p><p>AARON</p><p>Okay, so I think it is both true and false, depending on the reference of Homer.</p><p>DANIEL</p><p>Oh, yeah. So what is the reference?</p><p>AARON</p><p>Something like, yeah, maybe I'm willing to take back the thing that I previously said, because this feels like more normal language or something. When I say I'm talking to Daniel, right, that feels like a true statement. But maybe my sister has a friend named Daniel, and if I told that to her, right, like, she would be right to say that it's false, because, you know what, I keep getting back to the fact that, who gives a shit? You know what I mean. I still struggle to see. You can dig down into the truth of whether a particular proposition is true or false or indeterminate or something. But in normal language, we have a million psychological, and maybe not psychological, but we have a million ways to figure out what is meant by a particular proposition beyond the information contained in its words. Okay. I don't know. This is not an answer or whatever, but it still seems like it's all fine, even if we never figure out.</p><p>DANIEL</p><p>I guess, sorry, I'm going to do a little bit of sweeping. Your audience doesn't want to hear that. I'm going to sweep, then.</p><p>AARON</p><p>No, that's totally cool. We're pro sweeping.</p><p>DANIEL</p><p>All right. Finished. All right.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>I'm inclined to agree that it's fine. So when you say there's a million opportunities to understand the content of a sentence other than just the information contained in the words, or understand what somebody means beyond just the information contained in the words, you might still want to know what the info contained in the words actually is. I should say, broadly, the way I relate to this is as an interesting puzzle.</p><p>AARON</p><p>Yeah, no, I kind of agree. Maybe I'm just, like, more, yeah, I think it's like, I can see why somebody would find it interesting.</p><p>DANIEL</p><p>Yeah. It gets to a thing where, when you try to think of what we mean by something like Homer, or what we mean by something like Daniel Filan, at least when other people say it, often it will be, you'll come up with a candidate definition, and then there'll be some example which you hadn't anticipated, which I think is part of what makes this interesting.
So, for instance, you might think that Daniel Filan is the person named Daniel Filan, but here's a sentence: Daniel Filan could have been named Sam. Or actually, here's a better one. Daniel Filan could have been named Patrick. Like, my dad actually sort of wanted to for a while. My dad was thinking of calling me Patrick. Right.</p><p>AARON</p><p>I was almost, yeah.</p><p>DANIEL</p><p>Yeah. So if you think about the sentence, Daniel Filan could have been named Patrick. If Daniel Filan just means, like, a person named Daniel Filan, then that's.</p><p>AARON</p><p>I mean, yeah, but that shouldn't.</p><p>DANIEL</p><p>So then you might say, like, oh, what Daniel Filan means is, it's actually just an abbreviation of a bunch of things you might know about me. Right. Like, Daniel Filan is this guy who is Australian, but now lives in Berkeley and hosts this podcast and a few other things. And then the trouble is, you could imagine a parallel world, right, where I didn't do any of those things.</p><p>AARON</p><p>Well, I feel like that's a bad definition. It would be, Daniel Filan is a human being who is both psychologically and genetically continuous with the being who existed before he was named, or something like that.</p><p>DANIEL</p><p>Okay. But you still have to nail down which being Daniel Filan is supposed to be psychologically and genetically continuous with, wait, what? Sorry. When you say, like, Daniel Filan means just beings that are, like, human beings that are psychologically and genetically continuous with the being before they were named. I think that's what you said.</p><p>AARON</p><p>Which is an ugly definition, yeah. Well, I'm talking about you. Yeah, beyond that, I don't think there's any other verbal mishmash I can say that will point to that. There's, like, a human being, there's, like, a human being where, like, the atoms aren't the same, plus not all the memories are the same, there's personal identity issues, but there's a human being with basically your genetics, like, whatever your age is, plus a couple months. And that is also Daniel Filan.</p><p>DANIEL</p><p>Yeah. Can you try and say that without using the word you? Imagine it's somebody who you're not talking to, and so you don't get to, wait, what?</p><p>AARON</p><p>I don't even know, wait, what am I supposed to be trying to gesture towards, what I'm trying to say?</p><p>DANIEL</p><p>Yeah, give a definition of what you mean by Daniel Filan in a way that's valid. Like, I would still be Daniel Filan in a way where, imagine a counterfactual world where, like, I'd grown up to hate EA or something, you would want to still call that guy Daniel Filan. But you're not allowed to use the word you, okay?</p><p>AARON</p><p>Yeah. Daniel Filan is the human being who is currently, no. I feel like I kind of mean two different things. Honestly, I don't think there's one definition. One is, like, the current and actual instantiation of a particular human being. And the other definition or meaning I have is, like, all human beings who either were or will be. I don't know about could be, honestly, or I think could be. I don't know about could have been. Yeah, maybe could have been. Yes. Let's go with what could have been.
So throughout the multiverse, if that's a thing, all those beings who either were, will be, could have been, or could be psychologically and genetically continuous with a human being who was conceived, or, like, I guess, I guess this being started existing when he was a genetic entity or, like, had his full genome or something, which is hard.</p><p>DANIEL</p><p>Which beings are.</p><p>AARON</p><p>The counterfactual alternatives of the current being named Daniel Filan? And this being, in turn, is defined as the current instantiation of an original past self. And that original past self can be delineated in time by the moment that a particular human being had all the genes or whatever.</p><p>DANIEL</p><p>So it's things that branch off the current being that is named Daniel Filan, right.</p><p>AARON</p><p>Or things that branch off the, yeah, branch off, but, like, retrospectively, I guess, but yeah.</p><p>DANIEL</p><p>Okay. And the current being, suppose, like, so I haven't actually told you this, but my legal name is actually Steve Schmuckson, not Daniel Filan. Is there anything that the name Daniel Filan refers to?</p><p>AARON</p><p>Like, there's no fact of the matter.</p><p>DANIEL</p><p>You think there's no fact of the.</p><p>AARON</p><p>Here's my concern. Where is the fact of the matter located? Or something like that. Is it in my neurons? Yeah. Is it, like, moral truth? What is it, like, referential truth? Is there any such thing as referential truth?</p><p>DANIEL</p><p>Oh, I don't know. I guess probably not.</p><p>AARON</p><p>Okay.</p><p>DANIEL</p><p>But I guess when you say the person named Daniel Filan, I think there's still a question of, like, wait, who is the, like, how do you figure out who the person named Daniel Filan is? Like, I think that gets back to.</p><p>AARON</p><p>The, probably, it's probably multiple people. Wait, hold on. Pause. Okay, I'll cut this part out. Lindsay, I'm in the middle of a, sorry. Sorry. Bye. Okay, I'm back.</p><p>DANIEL</p><p>Yeah, but when you say, like, the person named Daniel Filan, and you're using that in your definition of what do I mean by Daniel Filan, that strikes me as kind of circular, because how do we know which person is the one who's named Daniel Filan?</p><p>AARON</p><p>Yeah, I agree. That's a poor definition. I feel like I very weakly think that I could come up with a more rigorous definition that would be, like, really annoying and non-intuitive.</p><p>DANIEL</p><p>Okay.</p><p>AARON</p><p>Not super sure about that.</p><p>DANIEL</p><p>You should try, and then read some phil articles, because it's all totally.</p><p>AARON</p><p>Doesn't matter, and it's like a fake question. Oh, yeah, it doesn't matter.</p><p>DANIEL</p><p>I just think it's a fun puzzle.</p><p>AARON</p><p>Yeah, but it feels like it's not even, yeah, so there's, like, a lot of things, I feel like there's mathematical questions that don't matter but are more meaningful in some sense than, even this feels kind of like, maybe not, how many angels dance on the head of a pin? Yeah, actually kind of like that. Yeah. How many angels can dance on the head of a pin?</p><p>DANIEL</p><p>I think that question is meaningful.</p><p>AARON</p><p>What's the answer?</p><p>DANIEL</p><p>What's the answer? I guess it depends what you mean by angel. Normally in the Christian tradition, I think angels are supposed to not be material.</p><p>AARON</p><p>I think maybe, like, tradition.
I'm asking about the actual answer.</p><p>DANIEL</p><p>Yeah, I mean, the actual answer to how many angels can dance on the, yeah. I think when you use the word angel, okay, the tricky thing here is, when you use the word angel, you might be primarily referring to angels in the Jewish tradition, about which, no.</p><p>AARON</p><p>I'm referring to real angels.</p><p>DANIEL</p><p>There aren't any real angels.</p><p>AARON</p><p>Okay, well, then how many angels can dance on the head of a pin?</p><p>DANIEL</p><p>Zero. Because there aren't any.</p><p>AARON</p><p>I'm kind of joking, sort of adopting your stance when it came to, whatever, the aliens with the weird word.</p><p>DANIEL</p><p>I gave you an answer. What do you want?</p><p>AARON</p><p>Yeah, I'm also going to give you, like, a series of answers. I mean, I'm not actually going through, I think it'll be annoying, but I could give you a series of answers like that or whatever, like, I'm referring.</p><p>DANIEL</p><p>To, I'm not sure. You could give me another question. That's my answer.</p><p>AARON</p><p>Oh, okay.</p><p>DANIEL</p><p>As for how many actual angels, could.</p><p>AARON</p><p>I feel like I might be trapped here, because I thought that was going to trip you up, and it's just like, yeah, it sounds like the right answer, honestly.</p><p>DANIEL</p><p>Well, I guess you might think that, suppose all dogs suddenly died, right, and then later I asked you how many dogs could fit in this room, there would still be an answer to that question that was, like, greater than zero. Yeah. I think the word angels just, like, it just depends on what the word angels refers to. And I'm like, well, if it has to refer to actual angels, then there aren't any actual angels. If we're referring to angels as conceived of in the Christian tradition, then I think infinitely many. If we're referring to angels as conceived of in other traditions, then I think that I don't know the answer.</p><p>AARON</p><p>Yes, that sounds right. I'm glad you find this, sorry, that was, like, an hour, so that was an annoying way of putting it.</p><p>DANIEL</p><p>I liked it. That was a fine thing to say.</p><p>AARON</p><p>At the meta level. At the meta level, I find it interesting that some people find this interesting.</p><p>DANIEL</p><p>Yeah. Okay, before you go away and try and figure out a theory of naming, can I add some side constraints? Some constraints that you might not have thought of?</p><p>AARON</p><p>Sure.</p><p>DANIEL</p><p>Okay, so here's a sentence. Like, Harry Potter is a wizard. Right.</p><p>AARON</p><p>There are no wizards.</p><p>DANIEL</p><p>You think it's false that Harry Potter is a wizard?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>All right, but let's just take the, okay, like, you kind of know what that means, right?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Let's take another sentence. Like, Thor is the god of lightning, right?</p><p>AARON</p><p>Yes.</p><p>DANIEL</p><p>Now, I take it you don't believe in the literal existence of Thor or of Harry Potter. Right?</p><p>AARON</p><p>Yeah. Right.</p><p>DANIEL</p><p>But when I talk about, I'm wielding the name Harry Potter, and I'm doing a sort of similar thing as when I wield the name Aaron Bergman. Right.</p><p>AARON</p><p>Similar. Not the same, but similar.</p><p>DANIEL</p><p>Yeah. Okay, cool. So Harry Potter, the thing about Harry Potter is, it's like an empty name, right? It's a name that doesn't refer to anything that actually exists.
Right.</p><p>AARON</p><p>Doesn't refer to any configuration of actually existing molecules. It refers to some abstractions, and it refers to a common set, a grouping of properties, in various people's minds.</p><p>DANIEL</p><p>Oh, you think it refers to the grouping of properties rather than... so if I said, like, Thor actually exists, that would be true, according to you?</p><p>AARON</p><p>No, I'm trying to figure out why. I think I figured out why.</p><p>DANIEL</p><p>I totally think this is a solvable problem, by the way.</p><p>AARON</p><p>Okay.</p><p>DANIEL</p><p>I'm not trying to say this is some sort of deepity, like, you will never know. I think this is conceivable. Anyway, the point is, Harry Potter and Thor are examples of names that don't refer to actual humans or gods or whatever, but they're different, right?</p><p>AARON</p><p>Yes. So that's interesting.</p><p>DANIEL</p><p>You might have thought that names were nailed down by the sets of things they referred to.</p><p>AARON</p><p>Hold on. I think something can refer to something without... or sorry, there are things besides... maybe I don't have a good word, but there are thingy-like things, for lack of a better term, that exist in some meaningful sense of exist that are not configurations of quarks, or identifiable configurations, or, like, yeah, let's go with configurations.</p><p>DANIEL</p><p>Quarks and leptons. Sure. And you don't just mean, like, the EM field. You mean, like, things can refer to non-physical stuff.</p><p>AARON</p><p>I think physical isn't a useful category. This is also a hot take in some...</p><p>DANIEL</p><p>Like, wait, do you think that Harry Potter is, like, this non-physical being that flies around on a broomstick, or do you think that Harry Potter is, like, the concept?</p><p>AARON</p><p>So I think there's multiple things that that term means, and the way it's actually used depends on...</p><p>DANIEL</p><p>Do you think Aaron Bergman means multiple things?</p><p>AARON</p><p>No.</p><p>DANIEL</p><p>What's the difference?</p><p>AARON</p><p>Well, I can... in fact, Harry Potter might only refer to exactly two things.</p><p>DANIEL</p><p>What are the two things that Harry Potter refers to?</p><p>AARON</p><p>Sorry, wait, maybe I'm wrong about that. Okay, hold on. So, like, if I use the... no, I don't know, because what I want to say is Harry Potter refers to what you think it refers to in two different contexts. And one context is where we pretend that he exists, and the other context is when we recognize or pretend that he doesn't. And now you're going to say, oh, who's "you" referring to? Am I right?</p><p>DANIEL</p><p>Yeah.</p><p>AARON</p><p>Okay, that sounds like what I'm going to say. Okay. No, I feel like there's, like, an ur-Harry Potter, which is like a cluster of traits, like a cluster of things. There's no hard, well-defined thing, in the same way there's no well-defined notion of what is a bottle of wine. You can keep adding weird tidbits to...</p><p>DANIEL</p><p>The bottle of wine, but the ur-Harry Potter is like a bundle of traits.</p><p>AARON</p><p>Characteristics. Traits. Okay.</p><p>DANIEL</p><p>Is Rishi Sunak a bundle of traits?</p><p>AARON</p><p>I think there's, like, two levels. There's, like, the meta Rishi Sunak and the thing that people normally refer to when they refer to Rishi Sunak, which is not merely a bundle of traits.
It is distinguished from other from like it is a physical or like a biological mind like thing that is individuated or pointed out in person space by the bundle of traits or something like that.</p><p>DANIEL</p><p>Yeah, he is that. But I think that when people say Rishi Sunak, I don't think they ever mean the bundle of traits. I think they mean, like, the guy. I think the guy has the bundle of traits, but they don't mean the. Traits, they mean the guy.</p><p>AARON</p><p>Yeah, I think that's right. I think the way that they, with their mind brain lands on that actual meaning is, like, in some sense, recognizing those letters as pointing to characteristics, as pointing to things, to maybe things or characteristics such as the Prime Minister of Britain or UK or whatever, like things.</p><p>DANIEL</p><p>That embody the they don't they don't refer to the characteristics themselves. They refer to the things that embody the characteristics. Right.</p><p>AARON</p><p>I think as an empirical matter, this is true. I can imagine a world in which it's sometimes the first of the bundle of characteristics.</p><p>DANIEL</p><p>Yeah, I guess I think that would be people speaking a different language. Right. Like, there are all sorts of different languages. Some of them might have the word Rishi sunak. That happens to mean, like, the property of being the Prime Minister of Great Britain and Northern Ireland.</p><p>AARON</p><p>Well, like, okay, so let's say in a thousand years or whatever, and there's still humans or whatever, there's like a mythology about some being. And in the same way that there's mythology about Thor, there's mythology about this being who's in various myths plays the role of the not plays the role, but is the role in the myths of the Prime Minister of the UK. Which is like some ancient society and has these various traits, then it would behave kind of thought. But yeah, this is like a conceivable thing, in which case there is a reference I wouldn't say that means that the language people speak is in English anymore because they use rishi sunak in that way.</p><p>DANIEL</p><p>But when they said rishi sunak, they were actually referring to the traits not like some sort of being.</p><p>AARON</p><p>Well, maybe there were historians in that society who were referring to the being, but most normal people weren't or something.</p><p>DANIEL</p><p>I guess I think they would be referring to, like I guess to them I would call rishi sunak. Like, sorry, what kinds of things do these people believe about rishi sunak? But how are they using sentences involving rishi sunak?</p><p>AARON</p><p>So somebody might say, oh, you know, rishi sunak isn't actually a lie. That would be a true statement. It would also be a true sorry. Sorry, or like, wait, yeah.</p><p>DANIEL</p><p>Sorry is the idea that these people have myths about. Right, all right, sorry. That's the question I was asking. Okay, all right, cool. I guess this would be sort of similar to the case of Santa Claus. The phrase Santa Claus comes from St. 
Nicholas, who was probably a real guy from Turkey named okay, I like, vaguely.</p><p>AARON</p><p>Knew that, I think.</p><p>DANIEL</p><p>Yeah, but I guess this gets us back to where we started with when we say Santa Claus, do we mean like, the bundle of ideas around Santa Claus or do we mean like a guy who dispenses a bunch of presents.</p><p>AARON</p><p>On I mean, I want to step back.</p><p>DANIEL</p><p>Anyway.</p><p>AARON</p><p>Yeah, I feel like maybe insofar as I feel like maybe it does matter, or like, yeah, the question of meaning or sorry, it can matter, but it just has a different answer in particular different cases. And so the right way to go about it is to just discuss reference in the case of morality, for example, the case of Santa Claus and another. And there's no general answer. Or maybe there is a general answer, but it's so abstract that it's not.</p><p>DANIEL</p><p>Useful in any way that might be. Well, I think even abstract answers can be pretty yeah, I think you might have some hope that there's a general answer for the case of proper names to be even concrete. I think you might think that there's some theory that's sort of specific that unifies the names aaron Bergman, Santa Claus and Zeus.</p><p>AARON</p><p>Yeah. And I guess I think, oh, it'll be a lot easier and quicker just to actually disambiguate case by case. Maybe I'm wrong. Maybe I'm wrong. So if some tenured philosophers at whatever university want to work on this, people.</p><p>DANIEL</p><p>Can do that, I should say. I've read theories that purport to explain all of these three naming practices that I found somewhat convincing. When I say papers, I mean one paper. It's actually the paper I cited earlier.</p><p>AARON</p><p>Okay, you can send it to me or, like, send me a link or whatever, if you want.</p><p>DANIEL</p><p>Yeah, really, what's happening in this conversation is I read one paper and now I'm trolling you about it. I hope it's a good kind of trolling.</p><p>AARON</p><p>Yeah, it feels like benevolent trolling. But I actually do think this is kind of meaningful in the context of morality, or at least it's actually kind of non obvious in that case, whereas it generally is obvious, like, what a particular person in real life is referring to. In the case of Santa Claus, just depending on and morality happens to be important. Right. So maybe there's other cases like that. Or I could see legal battles over, like, what does a law refer to? There's, like, two different people. It's like the guy, the state, there's the name itself. Yes, sure. I don't know.</p><p>DANIEL</p><p>Yeah. This reminds me of various formulations of originalism, which is you've heard of originalism, I guess constitutional.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>So original it's this theory that when you're interpreting laws, you should interpret the original thing going on there, rather than what we currently want it to be, or whatever. And there's this question of, like, wait, what thing that was originally going on? Should we interpret? And sometimes you occasionally hear people say that it's about the original intent. I think this is definitely false, but more often people will say, oh, they mean the original public meaning. But sometimes people say, oh, no, it's the original meaning. In a legal context, people try to get at what exactly they mean by originalism, and it has some of its flavor.</p><p>AARON</p><p>Yeah, I could talk about object level or at the level we've been talking. 
I don't think it's, like a fact of the matter, but object level. If you convince me that originalism was true, maybe you couldn't what I want to say is because those people weren't playing by the rules or whatever, we just got to norm it out or something sorry. People writing the Constitution weren't doing it under the pretext of originalism. I don't know. It I could be wrong about this.</p><p>DANIEL</p><p>Okay. Why do you think it.</p><p>AARON</p><p>Maybe looks pretty plausible that I'm wrong? I vaguely feel like this is a thing that was, like, developed in, like, the 20th century by, like, legal scholars.</p><p>DANIEL</p><p>I think that's sort of right. So they had this notion of strict constructionism in the 19th century that I think is kind of analogous to originalism. I think when people talk about originalism, they mean simple enough concepts that it seems plausible to me that people could have been to me. I don't know, maybe this is my bias, but it seems very intuitive to me that when people were writing the Constitution, maybe they were thinking, hey, I want this law to mean what it means right now.</p><p>AARON</p><p>Yeah. There's a question. Okay. What is me and me. Yeah.</p><p>DANIEL</p><p>I guess everybody thinks yeah, all right. There's one game which is, like, what did the framers think they were doing when they wrote the Constitution? There's a potentially different question, which is, like, what were they actually doing? They could have been wrong about legal theory. Right. That's conceivable. And then there's a third game, which I think is maybe the best game, which is, like, what's the best way to sort of found a system of laws? Should we hope that all the courts do originalism, or should we hope that all the courts do like, I'm not exactly sure what the alternative is supposed to be, but like, yeah, but what.</p><p>AARON</p><p>Should we ask from an alternative?</p><p>DANIEL</p><p>Is like, sorry.</p><p>AARON</p><p>Yeah, I agree. I assume you mean, like, what actually in 2023 should be the answer, or how should judges interpret the Constitution?</p><p>DANIEL</p><p>That's the game whereby should I hear means something like, what would cause the most clarity about the laws? And something like that.</p><p>AARON</p><p>I don't mean that exact same thing. I think I mean something like, more in some sense, ultimately moral, not like clarity is not I don't know. There's other values besides clarity.</p><p>DANIEL</p><p>Yeah, sure. We might want to limit scope a little bit to make it easier to think about. Right.</p><p>AARON</p><p>Yeah.</p><p>DANIEL</p><p>When I'm building a house, if I'm building a house, I probably want to think, like, how will this house not fall down?</p><p>AARON</p><p>I don't know.</p><p>DANIEL</p><p>I'm going to have a bunch of concrete requirements, and it's probably going to be better to think about that rather than, like, what should I build? Because I don't want to solve philosophy before building my house.</p><p>AARON</p><p>Yeah, it's not as obvious what those requirements are for. Possible that just because you can have just, like, two statements issued by the federal court, or you can imagine that the last two judgments by the Supreme Court include unambiguous propositions that are just opposites of one another. And I don't think this would mean that the United States of America has fallen. You know what? Okay, like, nobody knows. What should we do? I don't.</p><p>DANIEL</p><p>Mean yeah. 
I would tend to take that as saying that legal judgments don't follow the inference rules of classical logic. Seems fine to me.</p><p>AARON</p><p>Sure. Also, I think I'm going to have to wrap this up pretty soon. Sorry.</p><p>DANIEL</p><p>Yeah, we can go for ages.</p><p>AARON</p><p>We should do this again. Yeah, this will be the longest one yet.</p><p>DANIEL</p><p>I feel a bit guilty for just trolling about stuff I don't even properly understand.</p><p>AARON</p><p>Especially... I do think the morality thing is interesting, because I think there's definitely, like, a strain of rationalist thought that's directionally, like, where you were coming from, at least in terms of vibes. That's pretty influential, at least in some circles.</p><p>DANIEL</p><p>Yeah, I guess I'm not sure if I did a good job of articulating it. And also, I've sort of changed my mind a little bit about... I don't know, I feel like when I talk about morality, I tend to get caught in the weird weeds of the semantics rather than, like... I think an important fact about morality is it's not a weird contingent fact that humans evolved to care about it. I don't know. To me, it's really interesting that evolutionary accounts of why we care about morality don't rely on really fine grained features. They rely on very broad ones: people talk to each other, and we have common projects, and there's not one guy who's stronger than every other human. I don't know. Yeah, I feel like that's somehow more real and more important than just the weird semantics of it. Anyway, before we close up, can I plug some of my stuff?</p><p>AARON</p><p>Yes, plug everything that you want.</p><p>DANIEL</p><p>All right. I have two podcasts. One of my podcasts is called AXRP, the AI X-risk Research Podcast, and you can listen to me interview AI x-risk researchers about their work and why they do it. I have another podcast called The Filan Cabinet, where I just talk to whoever about whatever I want. I think if you want to hear some people who... I guess the audience of this podcast is mostly EAs, like young atheist kind of EA types, so if you want to hear people who are kind of not like that, I have a few episodes on religion, and one three-and-a-half-hour conversation with my local Presbyterian pastor about what he thinks about God. And I have another episode with an objectivist about just, I don't know, I guess everything Ayn Rand thinks, the culmination...</p><p>AARON</p><p>Oh, no, you cut out at the word objectivist. Sorry, wait, you cut out at the word objectivist.</p><p>DANIEL</p><p>Oh, yeah, I'll try to say it again. I have one episode where I talk to this objectivist just about a bunch of objectivist thought. So I think we cover objectivist, like, ethics, metaphysics, and a bit of objectivist aesthetics as well. And I don't know, the thing objectivists are most famous for is they're really against altruism. And I ended up thinking that the body of thought was more persuasive than I expected it to be. So maybe I recommend those two episodes to...</p><p>AARON</p><p>I have been sort of... actually, I haven't listened to it in, like, a week, but I was listening to your one with Oliver Habryka. But after I finish that, I will look at the objectivist one. Yeah. Everybody should follow those podcasts. Like me.</p><p>DANIEL</p><p>Everyone. Even if you don't speak English.</p><p>AARON</p><p>Everyone. In fact, even if you're not a human, like Santa Claus, including... yeah. Okay.
So anything else to plug?</p><p>DANIEL</p><p>If you're considering building AGI don't.</p><p>AARON</p><p>Hear that. I know. Sam, you're listening okay. I know you're listening to Pigeonhoue.</p><p>DANIEL</p><p>Okay, yeah, I guess that's not very persuasive of me to just say, but I think AI could kill everyone, and that would be really bad.</p><p>AARON</p><p>Yeah, I actually agree with this. All right, well, yeah, there's more people we can cover this in more nuance next time you come on pigeonholer. Okay, cool.</p><p>DANIEL</p><p>I'm glad we have a harmonious ending.</p><p>AARON</p><p>Yeah. Of conflict. Disagreement is good. I'm pro discourse. Cool. All right, take care. See ya. Bye.</p>]]></content:encoded></item><item><title><![CDATA[I regret to report that I've started a podcast (again)]]></title><description><![CDATA[Tl;dr and links]]></description><link>https://www.aaronbergman.net/p/i-regret-to-report-that-ive-started</link><guid isPermaLink="false">https://www.aaronbergman.net/p/i-regret-to-report-that-ive-started</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Mon, 31 Jul 2023 02:06:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/af1ccb46-af28-4fb9-a94c-492c68a60095_3000x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Tl;dr and links</h3><ul><li><p>I started a low-effort &#8220;recorded conversation&#8221; podcast</p><ul><li><p><a href="https://open.spotify.com/show/4oPGwPO5mcjk7aL7EUOhSb?si=ac66ffc5309d46ec">Spotify</a></p></li><li><p><a href="https://podcasts.google.com/feed/aHR0cHM6Ly9hbmNob3IuZm0vcy9lNDJlYjllYy9wb2RjYXN0L3Jzcw?sa=X&amp;ved=2ahUKEwiBkq2fw7eAAxUGE2IAHQMyAI4Q9sEGegQIARAC">Google Podcasts</a></p></li><li><p><a href="https://podcasts.apple.com/us/podcast/pigeon-hour/id1693154768">Apple Podcasts</a></p></li><li><p><a href="https://anchor.fm/s/e42eb9ec/podcast/rss">RSS feed</a> to paste wherever: https://anchor.fm/s/e42eb9ec/podcast/rss </p></li><li><p>They&#8217;re also <a href="https://www.aaronbergman.net/podcast">on this Substack</a>, complete with decent summaries and mediocre transcriptions</p></li></ul></li></ul><div><hr></div><h1>Background, sorta</h1><p>A little over a year ago, I wrote about my experience <a href="https://www.aaronbergman.net/p/rob-and-keiran-on-the-philosophy-3f7#details">creating a podcast episode with the 80,000 Hours team</a>. 
That was <em>supposed </em>to be the first episode of (half defunct, half stillborn) <a href="https://open.spotify.com/show/1mj1BNP331yypVu5jpIl9l?si=0c1ff354055245b7">All Good</a>, but frankly the amount of time and effort it required scared me away until like two months ago.</p><p>That was when, luckily for me, I saw Pradyumna&#8217;s (author of <a href="https://brettongoods.substack.com/">Bretton Goods</a> which you should subscribe to) tweet while in a pro-pod mood, set up Pigeon Hour (the name doesn&#8217;t mean anything interesting or clever I promise) and got some friends to chat in front of a mic.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!c4ur!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!c4ur!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 424w, https://substackcdn.com/image/fetch/$s_!c4ur!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 848w, https://substackcdn.com/image/fetch/$s_!c4ur!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 1272w, https://substackcdn.com/image/fetch/$s_!c4ur!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!c4ur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png" width="481" height="267.75666666666666" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:668,&quot;width&quot;:1200,&quot;resizeWidth&quot;:481,&quot;bytes&quot;:103757,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!c4ur!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 424w, https://substackcdn.com/image/fetch/$s_!c4ur!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 848w, 
https://substackcdn.com/image/fetch/$s_!c4ur!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 1272w, https://substackcdn.com/image/fetch/$s_!c4ur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75aefd90-507b-4938-aaa9-f480ba0a3419_1200x668.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://twitter.com/PradyuPrasad/status/1662806418905522176?s=20">Link</a></figcaption></figure></div><p>Anyway, this is explicitly a <em>low effort podcast</em>. That could change in the future, but for now it&#8217;s gonna be mediocre sound quality, minimal editing, and <a href="http://claude.ai">Clong</a>-written descriptions.</p><p>I have five real episodes up, so check out whichever piques your interest, if you&#8217;re so inclined! Below are links to the episode pages attached to this blog, each of which now has an (again, mediocre) transcript. </p><h2>Episodes</h2><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;e183c2ba-0834-4d59-944d-f140aa2e94e9&quot;,&quot;caption&quot;:&quot;Listen now (70 min) | Follow Nathan&#8217;s blog Summary from Clong: The discussion centers around the concept of a unitary general intelligence or cognitive ability. Whether this exists as a real and distinct thing. Nathan argues against it, citing evidence from cognitive science about highly specialized and localized brain functions that can be damaged independently. Losing linguis&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#5: Nathan Barnard (again!) 
on why general intelligence is basically fake&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:4425666,&quot;name&quot;:&quot;Aaron Bergman&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dd2e409-d396-469c-abf3-103024394d0d_826x828.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-07-28T21:42:22.000Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9207ccb3-a6d9-4204-baa0-ed76c9f9ef45_3000x3000&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.aaronbergman.net/p/5-nathan-barnard-again-on-why-general-940&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135570842,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Aaron's Blog&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><ul><li><p>This one, episode 5, is my personal favorite</p><p></p></li></ul><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;cf267659-e695-49fc-85f8-c698c7c50e05&quot;,&quot;caption&quot;:&quot;Listen now (72 min) | Note: skip to minute 4 if you&#8217;re already familiar with The EA Archive or would just rather not listen to my spiel Summary (by Claude.ai) This informal podcast covers a wide-ranging conversation between two speakers aligned in the effective altruism (EA) community. 
They have a similar background coming to EA from interests in philosophy, rationality, and r&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:4425666,&quot;name&quot;:&quot;Aaron Bergman&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dd2e409-d396-469c-abf3-103024394d0d_826x828.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-07-17T18:15:47.000Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9e0e258b-b6de-4de3-9e1b-d88b4dd32838_3000x3000&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.aaronbergman.net/p/4-winston-oswald-drummond-on-the-f38&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135570843,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Aaron's Blog&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b8bac5fd-8dab-4983-ae84-c48b1acb288e&quot;,&quot;caption&quot;:&quot;Listen now (50 min) | Note: the first few minutes got cut due to technical difficulties, so it sounds like we start in the middle of our conversation. Follow Nathan&#8217;s blog Summary by Clong Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. 
These tests assess the resilience of banks to severe economic downturns and the po&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#3: Nathan Barnard on how financial regulation can inform AI regulation&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:4425666,&quot;name&quot;:&quot;Aaron Bergman&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dd2e409-d396-469c-abf3-103024394d0d_826x828.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-07-13T03:13:37.000Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9d5d73ea-f69f-4026-be7a-9e55f37759e1_3000x3000&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.aaronbergman.net/p/3-nathan-barnard-on-how-financial-e81&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135570844,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Aaron's Blog&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;40d68250-373f-4986-9914-6e9822888e69&quot;,&quot;caption&quot;:&quot;Listen now (62 min) | Follow Arjun on Twitter Read and subscribe to his blog Transcript Note: created for free by Assembly AI; very imperfect AARON So welcome. Welcome to the Pigeon hour podcast. Where do you see yourself? Wait, hold on. I need to get out of I literally say this every single time. I say this every single time. 
I always say I need to get out of podcaster mode and &#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:4425666,&quot;name&quot;:&quot;Aaron Bergman&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dd2e409-d396-469c-abf3-103024394d0d_826x828.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-06-30T00:01:48.000Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/033cdf25-3615-4ce1-aad5-87502f165be4_3000x3000&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.aaronbergman.net/p/arjun-panickssery-solves-books-hobbies-108&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135570846,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Aaron's Blog&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;b938203f-61e6-4a23-8847-beb26ef8a4fd&quot;,&quot;caption&quot;:&quot;Listen now (75 min) | Transcript Note: created for free by Assembly AI; very imperfect AARON Cool. So we have no topic suggestions. LAURA You mentioned last night that you have takes about working in the government, and I kind of wanted to hear that. AARON Yeah. Okay. My thoughts are not fully collected, so I have latent takes. Yeah. 
So basically, also, I need to get out of podcast&#8230;&quot;,&quot;cta&quot;:null,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;#1 Laura Duffy solves housing, ethics, and more&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:4425666,&quot;name&quot;:&quot;Aaron Bergman&quot;,&quot;bio&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6dd2e409-d396-469c-abf3-103024394d0d_826x828.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2023-06-17T21:55:19.000Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9ed5b87f-24b7-4586-8ade-a4cc1f6f8247_3000x3000&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.aaronbergman.net/p/laura-duffy-solves-housing-ethics-322&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:135570847,&quot;type&quot;:&quot;podcast&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:null,&quot;publication_name&quot;:&quot;Aaron's Blog&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1447c40-329d-4442-95bc-8ae36fc428d1_1280x1280.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/subscribe?"><span>Subscribe now</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/p/i-regret-to-report-that-ive-started/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/p/i-regret-to-report-that-ive-started/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[#5: Nathan Barnard (again!) on why general intelligence is basically fake]]></title><description><![CDATA[Follow Nathan&#8217;s blog]]></description><link>https://www.aaronbergman.net/p/5-nathan-barnard-again-on-why-general-940</link><guid isPermaLink="false">https://www.aaronbergman.net/p/5-nathan-barnard-again-on-why-general-940</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Fri, 28 Jul 2023 21:42:22 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135570842/1ee3a29c67aac92e47d068ad9cfc9ab1.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<ul><li><p><a href="https://thegoodblog.substack.com/">Follow Nathan&#8217;s blog</a></p></li></ul><h2>Summary from <a href="http://claude.ai">Clong</a>:</h2><ul><li><p>The discussion centers around the concept of a unitary general intelligence or cognitive ability. Whether this exists as a real and distinct thing.</p></li><li><p>Nathan argues against it, citing evidence from cognitive science about highly specialized and localized brain functions that can be damaged independently. 
Losing linguistic ability does not harm spatial reasoning ability.</p></li><li><p>He also cites evidence from AI, like systems excelling at specific tasks without general competency, and tasks easy for AI but hard for humans. This suggests human cognition isn&#8217;t defined by some unitary general ability.</p></li><li><p>Aaron is more open to the idea, appealing to an intuitive sense of a qualitative difference between human and animal cognition - using symbolic reasoning in new domains. But he acknowledges the concept is fuzzy.</p></li><li><p>They discuss whether language necessitates this general ability in humans, or is just associated. Nathan leans toward specialized language modules in the brain.</p></li><li><p>They debate whether strong future AI systems could learn complex motor skills just from textual descriptions, without analogous motor control data. Nathan is highly skeptical.</p></li><li><p>Aaron makes an analogy to the universe arising from simple physical laws. Nathan finds this irrelevant to the debate.</p></li><li><p>Overall, Nathan seems to push Aaron towards a more skeptical view of a unitary general cognitive ability as a scientifically coherent concept. But Aaron retains some sympathy for related intuitions about human vs animal cognition.</p></li></ul><h1><strong>Transcript</strong></h1><p><em>Note: created for free by <a href="http://assemblyai.com/playground">Assembly AI</a>; very imperfect</em></p><p>NATHAN</p><p>It's going good. Finished the report.</p><p>AARON</p><p>Oh, congratulations.</p><p>NATHAN</p><p>Thank you. Let's see if anyone cares on the forum. Let's see... if not, still no one cares on the forum. I think they don't.</p><p>AARON</p><p>Yeah, because they shouldn't, but I know how that can be.</p><p>NATHAN</p><p>It's slowly improving.</p><p>AARON</p><p>Let me see if I can find it.</p><p>NATHAN</p><p>Slowly getting more hits. Okay.</p><p>AARON</p><p>Oh, nice. 14. Oh, wait, I haven't upvoted it.</p><p>NATHAN</p><p>Oh, my goodness.</p><p>AARON</p><p>Wait, actually, hold on. Do I really want to strong upvote this? Yeah, I think I do. At least... I don't know if that's, like... how bad... wait, maybe I'm being... you know what? To preserve the forum's epistemics, I'm going to only normal, like, upvote it for now, and then later, like, decide.</p><p>NATHAN</p><p>That's good.</p><p>AARON</p><p>Okay. And later I might change it to a strong upvote. I'm sorry.</p><p>NATHAN</p><p>I know. Oh, it's got to 20 now. That's exciting.</p><p>AARON</p><p>Oh, not for long. It's going to go back down to 16.</p><p>NATHAN</p><p>Oh, no.</p><p>AARON</p><p>Because I'm switching to, like, a normal upvote. I guess my strong upvote counts for six.</p><p>NATHAN</p><p>Wow.</p><p>AARON</p><p>Yeah. So I say we don't talk about compute governance, bank regulation. I say we talk about literally, almost literally anything else.</p><p>NATHAN</p><p>Yeah, I agree.</p><p>AARON</p><p>What was the hot take that you had? I forget.</p><p>NATHAN</p><p>Yeah.</p><p>AARON</p><p>Go ahead.</p><p>NATHAN</p><p>My hot take is I don't think general intelligence... To be honest, this is actually quite a cold take. My cold take: I don't think general intelligence is real.</p><p>AARON</p><p>So are you talking about... okay. Are we talking more about IQ stuff or, like, AI stuff?</p><p>NATHAN</p><p>AI stuff.</p><p>AARON</p><p>Okay. That's, like, more... okay. I feel like I don't even know where to start. Well, I kind of do, but I feel like it's like an underdefined point. I don't think it's as fundamental as velocity or something in my brain.
It's not super well defined.</p><p>NATHAN</p><p>Yeah.</p><p>AARON</p><p>So can you explicate what you mean by general intelligence, I guess, or what do you not think exists?</p><p>NATHAN</p><p>Yeah, so when I say general intelligence, what I mean is there's this faculty and some set of tasks which can't be done without this faculty, and this faculty can be turned up and down, and by turning it up and down, you get better and worse at tasks. And maybe it's possible you can discover some abilities for new tasks, but once you have the faculty, you'd just be able to learn, like, a much, much broader range of tasks than some intelligence without this general intelligence faculty. I think this is, like, a sort of thing which people say humans have and current AI systems don't have, and probably also squirrels don't have general intelligence, I think, at least in the way it's used, like, colloquially. Yeah, I think that's, like, part one of the general intelligence hypothesis, and then there's, like, a part two of the general intelligence hypothesis. I think this one's even more controversial, which is, like, once you cross the threshold of general intelligence, this is intrinsically tied up with the pursuit of goals.</p><p>AARON</p><p>Yeah.</p><p>NATHAN</p><p>I sort of reject both of these. I also reject the second hypothesis even conditional on the first hypothesis being true.</p><p>AARON</p><p>Yeah. Okay. The first thing, I think I latched onto it because I was just, like, searching for disagreement, which I guess is.</p><p>NATHAN</p><p>Like, how my brain works.</p><p>AARON</p><p>But you said basically it's necessary for certain tasks. I don't think that's how the term is generally used. At the extreme, you can imagine hard coding a program to do something really hard that a general intelligence could learn on its own, or something like that. It's like the old Chinese room thing, I guess.</p><p>NATHAN</p><p>Yeah.</p><p>AARON</p><p>Maybe that deals with qualia, so forget about that. But you could just write down the formula for GPT-4 or whatever, or...</p><p>NATHAN</p><p>Some arbitrary complex but, like, computable task.</p><p>AARON</p><p>Yeah. Are you going to stand by that claim that general intelligence, as most people use it or whatever, is necessary for certain things?</p><p>NATHAN</p><p>I think as most people use it, yes. So in Superintelligence, for instance, there's pretty extensive discussion of tasks which AGI completes. I agree, as a technical point, yes: if there's a task which is computable, you could write down a program and compute it. But I think this is in fact not, like, an actual thing at stake when sort of talking about the hypothesis.</p><p>AARON</p><p>Okay. I basically agree. I guess I was being kind of pedantic or formal or something, but I think we're kind of on the same page. Yeah. Okay. Is there a qualitative difference between humans and squirrels, then? And if so, what is it, if not general intelligence?</p><p>NATHAN</p><p>Yes... I think basically no, I think there isn't a qualitative difference between humans and squirrels. I think the thing which comes closest to this is being able to understand things in a hierarchical structure; that, I think, is probably the thing which comes closest to it. And, like, plausibly episodic memory as well. I don't think episodic memory is, like, particularly critical, though.</p><p>AARON</p><p>Did you say excellent?</p><p>NATHAN</p><p>No, episodic.</p><p>AARON</p><p>Oh, okay.
That's a formal term.</p><p>NATHAN</p><p>This is like a formal term.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>I think it's easiest when you contrast it with semantic memory. So you've got procedural memories, which are, like, memories of, say, how to throw a ball; that would be, like, part of procedural memory. Semantic memory is memory of specific things, divorced of source... divorced of context isn't quite right, but sort of also context. Just like remembering that the capital of the United States is, like, Washington, DC, without remembering where you learnt it. Episodic memory is, like, memory of where you learnt it, for instance, or just...</p><p>AARON</p><p>Like, normal autobiographical memory. Like the memory of what it was like to go on a walk or something.</p><p>NATHAN</p><p>Yeah. I think autobiographical memory is like a subset of episodic memory. I'm not 100% sure of this.</p><p>AARON</p><p>Yeah. So, I mean.</p><p>NATHAN</p><p>I... yeah, go ahead. Sorry, I was checking. I'm right here at autobiographical memory.</p><p>AARON</p><p>Nice.</p><p>NATHAN</p><p>It looks like it's basically just the same as episodic memory, but seems to be, like... yeah. I think the reason is I've been taught autobiographical memory, but it seems like in the cognitive science literature the concept is called episodic memory.</p><p>AARON</p><p>What was the first one again? I already forgot.</p><p>NATHAN</p><p>Oh, semantic memory.</p><p>AARON</p><p>What was the example? Okay. I remember the capital of Washington thing. What was the other example?</p><p>NATHAN</p><p>Procedural memory.</p><p>AARON</p><p>Yeah. Do you have an example for that?</p><p>NATHAN</p><p>Yeah, being able to play the piano. That's procedural memory.</p><p>AARON</p><p>I don't even know if I want... It seems like, just like everything else, you're more well read in the cognitive science, like, psych literature, I guess. But I wouldn't normally call that memory. Well, I guess I kind of would. I don't know what I would call it. It's, like, not a central example, but sure.</p><p>NATHAN</p><p>Within the cognitive science literature, this is, like, one of the things it's called.</p><p>AARON</p><p>Then everything is memory. What's a capacity, like a cognitive capacity, that's not memory?</p><p>NATHAN</p><p>Like, processing facial signals is not a procedural memory.</p><p>AARON</p><p>Even though you remember how to do.</p><p>NATHAN</p><p>It.</p><p>AARON</p><p>Because it's genetically coded, it doesn't count?</p><p>NATHAN</p><p>No, but it's, like, different, as in you can lose your procedural memory. You'd still be able to, for instance...</p><p>AARON</p><p>You could lose your ability to notice faces, like, in principle, completely.</p><p>NATHAN</p><p>Oh, no. Okay. Your eyes take in a bunch of light and your brain processes them.</p><p>AARON</p><p>Into.</p><p>NATHAN</p><p>Various things, so that there's, like, a bit which does movement, there's a bit.</p><p>AARON</p><p>Which does.</p><p>NATHAN</p><p>Depth perception, there's a bit which breaks things down into... builds up objects out of more discrete parts. You'd still be able to do all these tasks even if you lost procedural memory.</p><p>AARON</p><p>Is that because they are a direct consequence of the physical structure of the neurons rather than the behavior of how they interact?</p><p>NATHAN</p><p>Okay.</p><p>AARON</p><p>No, over two.
Okay.</p><p>NATHAN</p><p>Edges, different.</p><p>AARON</p><p>Well, then in that case, why couldn't you lose your ability to notice movement or edges or whatever?</p><p>NATHAN</p><p>So you can... okay, I'm just, like, referencing the cognitive science literature here. I can try being as precise as I can.</p><p>AARON</p><p>No, I feel like this is not that important.</p><p>NATHAN</p><p>But stuff which you wouldn't normally consider learning... You wouldn't normally consider learning how to sit. You would consider learning how to play piano, and it's become part of your procedural memory. You wouldn't normally consider learning how to move your tongue muscles, for instance.</p><p>AARON</p><p>I feel like you probably do, like, in utero. You probably do. I'm not actually sure.</p><p>NATHAN</p><p>I think the tongue thing was a bad example. You have to learn how to see, for instance, or learn how to smell, or learn how to...</p><p>AARON</p><p>You don't learn how to take in photons or whatever. You do learn how to... well.</p><p>NATHAN</p><p>I'm using learn here in the colloquial sense, to try and get across things which would normally be called, like, procedural memory tasks versus non-procedural memory tasks. And we have tasks which you'd colloquially say that you learned how to do, like playing tennis, playing the piano, which would be done by procedural memory, and, as like a rule of thumb, tasks which you'd colloquially say you didn't learn how to do, like seeing, or, what's another good one, regulating your heartbeat.</p><p>AARON</p><p>Okay. Yeah. I feel like that's, like, the most clear cut example of a thing that your brain does... I guess your brain stem does it, but even still, your brain or your nervous system does it, and you definitely don't learn it. It's as close to not learning as you can get. Okay. This is kind of, maybe, why I'm interested in it: do you think it's interesting to try to dissect the difference between learning piano and learning how to detect edges? I don't know what I'm talking about, but as an extreme layperson, it seems like these are kind of the same type of thing, even though they're radically different in terms of difficulty and contingentness, contingency.</p><p>NATHAN</p><p>Yeah. I'm just going to check how much of your procedural memory is in your hippocampus. I think it's not... just looking at lots of procedural memory. Cool. Yeah. So it's not specialized. So the hippocampus... lots of memory is done in the.</p><p>AARON</p><p>Hippocampus. Is that, like, the interesting part of the brain? That's, like, all I know about.</p><p>NATHAN</p><p>Oh, I think it's like one of the interesting parts of the brain. Lots of interesting parts of the brain, yeah. Procedural memory seems to be, like, lots in, like, motor cortex, cerebellum and basal ganglia. So you could fuck up your motor cortex in some... I'm almost 100% sure there'll be people who've had injuries to their motor cortex and lost the ability to play football, but there'd be no effects on their ability to process movements, like see movements.</p><p>AARON</p><p>Okay. Should we bring it back to general intelligence? Wait, so this is like the squirrel thing. Okay.
You were explaining why.</p><p>NATHAN</p><p>One of the things which is thought to be sort of distinct about... again, all my knowledge here comes from me reading cognitive science and neuroscience textbooks.</p><p>AARON</p><p>You do this for fun?</p><p>NATHAN</p><p>No, I do this because I think it's very cruxy for whether you'll die.</p><p>AARON</p><p>Okay. But it's not, like... okay, but I was using fun in a broad sense, not because you majored in neuroscience.</p><p>NATHAN</p><p>Oh, sure. Yeah.</p><p>AARON</p><p>Okay. Congratulations on out-nerding me by far. I don't say that lightly.</p><p>NATHAN</p><p>Um, what was I going to say? Oh, so, yes, it's like one of the things, like, some of the cognitive skills which humans seem to have.</p><p>AARON</p><p>Which.</p><p>NATHAN</p><p>Other animals just seem to not have at all. And one of them is episodic memory. Another seems to probably be this hierarchical ability; it's, like, a hierarchical way of putting language together, and potentially other tasks also get built up in this hierarchical way. And then there's a few other abilities around cooperation and the ability to have joint intentional things, which seem, like, unique to humans. Yes, unique to humans compared to, say, like, chimpanzees.</p><p>AARON</p><p>Yeah. So, like, when I, like, tentatively... when I think about, like, what I naively and, like, maybe legitimately, I guess, think of as general intelligence, it actually isn't, like, any of the things you just listed, and more like, more like just symbolic representation or something like that.</p><p>NATHAN</p><p>Wait, I actually just need the loo. I'll be back.</p><p>AARON</p><p>No problem. Okay. Hello.</p><p>NATHAN</p><p>I am back. Great.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>Yes. This, like, symbolic reasoning stuff. Yeah. So I think if I was, like, making the case for general intelligence being real, I wouldn't have symbolic reasoning, but I would have language stuff. I'd have this hierarchical structure thing, which.</p><p>AARON</p><p>I would probably... so I think of at least most uses of language, and central examples, as a type of symbolic reasoning, because words mean things. They're, like, yeah, pointers to objects or something like that.</p><p>NATHAN</p><p>Yeah, I think I'm pretty confident this isn't a good enough description of general intelligence. So, for instance, the bit in your brain, and I'm using a checklist so I don't fuck this up, I'm not making this up: the ability to use words as pointers, as these arbitrary signs, happens mostly in this area of the brain called Wernicke's area. But very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension and lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And so, conversely, people with Broca's aphasia, who have the Broca's region of their brain fucked up, will not be able to form fluent sentences and have some problems with syntax, but they'll still be able to have very good spatial reasoning. They could still, for instance, be, like, good engineers, and do many problems which, like, come up in engineering.</p><p>AARON</p><p>Yeah, I totally buy that. I don't think language is the central thing.
I think it's like an outgrowth of, like... I don't know, there's, like, a simplified model I could make, which is, like, it's an outgrowth of whatever general intelligence really is. But whatever the best spatial or graphical model is, I don't think language is cognition.</p><p>NATHAN</p><p>Yes, this is a really big debate in psycholinguistics, as to whether language is, like, an outgrowth of other abilities the brain has, or whether there are very specialized language modules. Yeah, this is just, like, a very live debate in psycholinguistics at the moment. I actually do lean towards... the reason I've been talking about this... actually, I'm just going to explain this hierarchical structure thing. Yeah, I keep talking about it. So one theory for how you can comprehend new sentences, like, the dominant theory in linguistics for how you can comprehend new sentences, um, is you break them up into, like, chunks, and you form these chunks together in this, like, tree structure. So something like, if you hear, like, a totally novel sentence like, the pit bull mastiff flopped around deliciously, or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is, yes, this can be broken up into this tree structure, where the different, like, bits of the sentence... so, like, the mastiff would be, like, one bit, and then you have, like, another bit, which is, like, the mastiff, I can't remember, I said rolled around, so that'd be, like, another bit, and then you'd have connectors to...</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>So the mastiff rolling around... one theory is that one of the sort of distinctive abilities that humans have is, like, this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but it's broadly an area which... I broadly buy it, because we do see harms to other areas of intelligence. You get much worse at, like, Raven's Progressive Matrices, for instance, when you have, like, an injury to Broca's area, but, like, not worse at, like, tests of, like, spatial reasoning, for instance.</p><p>AARON</p><p>So what is, like... is there, like, a main alternative to, like, how humans.</p><p>NATHAN</p><p>Understand language? As far as this specificity of how we parse completely novel sentences, as far as I'm aware, this is just, like, the academic consensus. Okay.</p><p>AARON</p><p>I mean, it sounds totally, like, right? I don't know.</p><p>NATHAN</p><p>Yeah. But yeah, I suppose going back to saying, how far is language, like, an outgrowth of general intelligence versus having much more specialized language modules? Yeah, I lean towards the latter, despite... yeah, I still don't want to give too strong of a personal opinion here, because I'm not a linguist... this is a podcast.</p><p>AARON</p><p>You're allowed to give takes. No one's going to say this is, like, the academic... we want takes.</p><p>NATHAN</p><p>We want takes. Well, the take in my head is.</p><p>AARON</p><p>I.</p><p>NATHAN</p><p>Think language is not an outgrowth of other abilities. I think the main justification for this, I think, is the loss of other abilities we see when you have damage to Broca's area and Wernicke's area.</p><p>AARON</p><p>Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is I think outgrowth can mean a couple of different things.
<p>AARON</p><p>Okay, cool. So I think we basically agree on that. One thing to highlight is that "outgrowth" can mean a couple of different things. I definitely think it's plausible — I haven't read about this in a while — but "outgrowth" could mean temporally prior, or whatever. I'm inclined to think it's not that straightforward. You could have coevolution, where language per se encourages both its own development and the development of some general underlying trait.</p><p>NATHAN</p><p>Yeah. Which seems likely.</p><p>AARON</p><p>Okay, cool. So why don't humans have general intelligence?</p><p>NATHAN</p><p>Right. Yeah. As I was talking about previously.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>I'd like to go back to a high-level argument: there appear to be much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. A famous example is face blindness, actually — you lose the ability to specifically recognize things which you're an expert in.</p><p>AARON</p><p>Who loses this ability?</p><p>NATHAN</p><p>If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>And there's this general pattern that you can lose much more specific abilities than you'd expect. So, for instance, if you have damage to your ventromedial prefrontal cortex, you can state the reasoning for why you shouldn't compulsively gamble but still compulsively gamble.</p><p>AARON</p><p>Okay, I understand this — not gambling per se, but executive function stuff — at a visceral level. Okay, keep going.</p><p>NATHAN</p><p>Yeah. Some other nice examples of this: I think memory is quite intuitive. There's a very famous patient called Patient HM who had his hippocampus removed and, as a result, lost all declarative memory — all memory of specific facts and things which happened in his life. He just couldn't remember any of these things, but was still perfectly functioning otherwise. At a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. Fundamentally, the general intelligence hypothesis implies that if you harm a piece of your brain — if you have some brain injury — you should generically get worse at all the tasks that use general intelligence. But what we see instead is that you lose specific abilities: the ability to write, the ability to speak, the ability to do math.</p><p>AARON</p><p>You do have that — it's just not as easy to analyze in a cog-sci paper as IQ or whatever. So there is something where, if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does, but non-spatial modification is just harder to study and analyze. I guess we could give people drugs, right? Set aside the psychometric stuff, but suppose that general intelligence is mostly a thing and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of various drugs — laughing gas, probably weed, 
I don't know.</p><p>NATHAN</p><p>So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory can generically make you much better at lots of tasks.</p><p>AARON</p><p>Yeah.</p><p>NATHAN</p><p>Sorry, but this is just a specific ability — it's specifically your working memory that is improved if you take memory-enhancing drugs. I think there are a few things — memory, attention, maybe something like decision making — which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention and working memory, but you had some brain injury which meant you'd lost the ability to parse syntax, you would not get better at parsing syntax. And you can also improve things separately — improve attention and working memory separately. It's not just this one dial which you can turn up.</p><p>AARON</p><p>There's good reason to expect that we can't turn it up, because evolution is already sort of maximizing given the relevant constraints, right? So you would need to be looking at injuries. Maybe there are studies where they try to add a cubic centimeter to someone's brain, but normally it's the opposite: you start from some high baseline and then see what faculties you lose. Just to clarify, I guess.</p><p>NATHAN</p><p>Yeah, sorry, I think I've lost the thread — you still think there probably is some general intelligence ability to turn up?</p><p>AARON</p><p>Honestly, I haven't thought about this nearly as much as you. I kind of don't know what I think. At some level, if I could just write down all of the different components — and there are like 74 of them — that what I think of as general intelligence consists of, does that make it less of an ontologically legit thing? I guess in some sense, yeah, it does. The motivating thing here is that with humans, we know humans range in IQ, and, setting aside a very tiny subset of people with severe brain injuries or developmental disorders, almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas the smartest squirrel — maybe I'm wrong about this, but as far as I know — is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?</p><p>NATHAN</p><p>Yeah, I know what you're gesturing at. I think there are two interesting things here. One is: could a squirrel do this? I'm guessing a squirrel couldn't, but a dog probably can, and a chimpanzee definitely can.</p><p>AARON</p><p>Do what?</p><p>NATHAN</p><p>Chimpanzees can definitely learn to associate things in the world with arbitrary signs.</p><p>AARON</p><p>Yes, but maybe I'm just adding on epicycles here — correct me if I'm wrong, maybe I'm just wrong about this — but I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've learned it in. Right? 
So a dog will know that a certain sign means "sit" or whatever, but maybe that's not a good —</p><p>NATHAN</p><p>I think this is basically not true.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>And we sort of know this from teaching.</p><p>AARON</p><p>Teaching.</p><p>NATHAN</p><p>Famously, Koko the gorilla — and also a bonobo whose name I can't remember — were taught sign language. And the thing they were consistently bad at was putting together sentences. They could learn quite large vocabularies — by large, I mean in the low hundreds of words — which they could consistently use correctly.</p><p>AARON</p><p>What do you mean — in what sense? What is the bonobo using?</p><p>NATHAN</p><p>A very famous and quite controversial example is that Koko the gorilla saw a swan outside and signed "water bird." That's a controversial example — but the controversial part is the syntax, the putting of "water" and "bird" together. It's not controversial that she could see a swan and call it a bird.</p><p>AARON</p><p>Yeah, this is kind of making me think, okay, maybe the threshold for g is just at the chimp level or something — sure, if a species really can generate, from a prefix and a suffix or whatever, a concept that it hadn't learned before.</p><p>NATHAN</p><p>Yeah, and this is a controversial example of that — the putting-together is the controversial part. I suppose this brings us back to why I think this matters: will there be this threshold which AIs cross such that their reasoning after it is qualitatively different to their reasoning previously? That would mean two things. One, a much faster increase in AI capabilities; and two, alignment techniques which worked on systems which didn't have g will no longer work on systems which do have g. That's why I think this actually matters. But if we're saying g is at the level of chimpanzees — chimpanzees just don't look that qualitatively different to other animals. Lots of other animals live in similarly complex social groups. Lots of other animals use tools.</p><p>AARON</p><p>Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point in humans' evolution from the first prokaryotic cells — or eukaryotic, one of those two, or both, I guess. My train of thought: yes, I know it's controversial, but let's just suppose the sign language thing with the water bird was legit and not a random one-off fluke. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees, which they stumbled their way into, and it enabled evolution to bootstrap chimp genomes into human genomes. Honestly, I don't have a great grasp of evolutionary biology at all. But it could just be not that helpful for chimps, and helpful for an extremely smart chimp that looks kind of different, or something like that.</p><p>NATHAN</p><p>Yeah. 
So I suppose the other thing going on here — I don't want to keep banging on about this — is that you can lose linguistic ability. This happens in stroke victims, for instance; it's not that rare. You just lose linguistic ability but still have all the other abilities which we think of as part of general intelligence, which I think tells against the general intelligence hypothesis.</p><p>AARON</p><p>I agree that's evidence against it. I just don't think it's very strong evidence, partially because there is a real school of thought that says that language is fundamental — language drives thought, language is primary to thought, or something. And I don't buy that. If you did buy that, I think this would be more damning evidence.</p><p>NATHAN</p><p>Yeah, I guess. Cool. Okay, so maybe it's worth moving on from this to another piece of evidence which I've been thinking about.</p><p>AARON</p><p>Yeah, go ahead.</p><p>NATHAN</p><p>So the other piece of evidence is evidence from AI models. There are two consistent patterns we see. One, AI systems being able to get very good at specific tasks without getting good at other tasks — and people consistently predicted that you won't be able to do X unless you have AGI, and they've consistently been wrong about this. And two, this pattern inversion where tasks which seem hard for humans, like multiplying two ten-digit numbers, are consistently easy for AI systems, and vice versa: tasks which are easy for humans, like loading the dishwasher — a classic example — are very hard. You don't yet have an AI system which can go into a random kitchen and load a dishwasher.</p><p>AARON</p><p>Yeah, I think this is probably the weakest point you've mentioned, because — that's my intuition. Maybe there's a thing where — correct me if I'm wrong, but you kind of think that general intelligence, insofar as it's a useful or true concept, should be discrete — not qualitative, but a discrete ability that develops or something. It shouldn't just be a smooth continuum. You know what I mean?</p><p>NATHAN</p><p>Yeah, no, I think it definitely can be a smooth continuum, if you get sort of the first spark of general intelligence and can just turn it up by throwing more compute and data at it, for instance. But the core thing I'm trying to get at is that general intelligence, the way it was used in Superintelligence, for instance, and the way it's used basically quite informally in the AI safety community, isn't used in this way in cognitive science, as far as I'm aware. This isn't a nice, clean, defined concept I'm trying to argue against — I'm trying to infer meaning here. But I think a core feature of it is that there are some tasks which you only get together with other tasks, and as tasks get more complex, eventually there'll be a point where a task is so complex that you will need this general intelligence faculty — if general intelligence actually exists. 
I still take your point that you could actually write down a program with dozens of components. But eventually they'll need to have this general intelligence — sorry. Oh, good, go ahead. Sorry. They'll need to have a general intelligence faculty. AI examples are evidence against both of these things. Chess was a thing which people thought would require general intelligence capacity, and it doesn't, and it didn't. And people would not intuitively say that — people would say that solving differential equations, for instance, would be a thing that requires this general intelligence capacity in humans. It's very easy to get computers to do that, and very hard to get them to do many motor tasks — or language, until recently — which we definitely don't think of as general intelligence; we think of them as quite simple tasks. This is maybe less true for language, but definitely true of motor tasks. We definitely don't think of motor tasks as requiring general intelligence. It's actually just an incredibly complex optimal control problem which you have to solve, and that's part of the reason why it's so difficult to get AI systems to have motor skills anywhere close to the level humans or other animals currently have.</p><p>AARON</p><p>Yeah, the chess thing — people were saying that in the 70s or something. I'm honestly not sure I would have predicted that chess would be something you could get via whatever Deep Blue's architecture was, for one thing. I think other people disagree, but I personally just don't think there's any given task that you need general intelligence for.</p><p>NATHAN</p><p>This is clearly true. This is clearly true.</p><p>AARON</p><p>One question. Okay, how do you think about GPT-4? Do you think GPT-4 — what am I even asking here? Because you don't think general intelligence is a thing, right? So how stupid do you think I am for thinking that GPT-4 is basically more like a human than like a squirrel in the general intelligence game?</p><p>NATHAN</p><p>Yeah. So I suppose how GPT-4 fits into my current worldview around general intelligence is: I think what it tells me is that within human language there's lots and lots of structure, and lots of structure about the world. And from this structure about the world, there are lots of reasoning tasks you can do. There are also lots of reasoning tasks you have absolutely no hope of doing — anything very far away from that, like motor control or visual recognition, or other core human tasks which are really difficult for AI systems to do. Sorry, visual stuff and motor control are my two go-tos.</p><p>AARON</p><p>Yeah. So I kind of just think, like, if you — oh, sorry, go ahead.</p><p>NATHAN</p><p>No, yeah, go.</p><p>AARON</p><p>I think the exact same system, just GPT-4 scaled up — the system isn't currently attached to a robotic arm or whatever, but say you emulated a human hand really well as a robotic arm, and then you gave a GPT-N a PDF describing the whole setup and a description of a task, with the exact specifications as like a CAD file. I think, yeah, it would be able to, like, juggle or whatever.</p><p>NATHAN</p><p>So I think it has no chance of doing that. No chance it does that. Sorry. 
No, go ahead.</p><p>AARON</p><p>What did you think I was going to say?</p><p>NATHAN</p><p>The thing I thought you were going to say — which I think I'd agree with — is that it seems like Transformers can generically learn any sequence and then do sequence modeling extremely well. There's some sense in which the transformer architecture is like general intelligence for sequence modeling: anything which can be presented as a sequence can be modeled well by Transformers; it just depends on what data they're trained on. That's the thing I thought you were going to say. No — I think there's no chance it does this from reading the PDF. Do I actually think this?</p><p>AARON</p><p>I want to bet. Not very much.</p><p>NATHAN</p><p>I think it depends on the specificity. If it was: okay, I need you to output a string of numbers which tells these 5,000 motors controlling a human body what power to run at. I need you to walk up —</p><p>AARON</p><p>Up, like —</p><p>NATHAN</p><p>A rocky hill, and your only input is numbers from the sensors on the robot, and the only output string you can give is what number of watts to send to each motor. I think there's no chance GPT-N does this.</p><p>AARON</p><p>I'm surprised to hear that. Why?</p><p>NATHAN</p><p>Because this is one of the grand challenges in robotics, and currently only very specialized systems have had success in doing this. I don't see how GPT-N could learn how to do this from the corpus of text which is on the Internet. It seems to imply an extremely strong hypothesis about emergent abilities, for something which is just incredibly complex — so much of the brains of animals and humans is devoted to motor control. It's just one of the big challenges in robotics to do RL-based motor control.</p><p>AARON</p><p>Well, RL-based motor control — I feel like that's actually easier than what I'm proposing, because I always forget the difference between the types of learning. But in broadly ML-based motor control, there's a period of trial and error. And I'm proposing that for a novel task — say, juggling three balls for ten minutes in an actual physical room via two robotic arms, like hands — it would be able to do this, at least in principle and probably in reality, even if you scrubbed all the training text of information about juggling, but not all the information about physics. Then an emergent ability would be that, on the very first try, an arbitrarily strong GPT-4 juggles correctly.</p><p>NATHAN</p><p>It's only trained on the corpus, like the current Internet.</p><p>AARON</p><p>Yeah.</p><p>NATHAN</p><p>I'd be very happy to take this bet.</p><p>AARON</p><p>Okay. We can finalize the details later. I'm kind of risk averse, but I still want to actually do it.</p><p>NATHAN</p><p>We don't have to do it for very much money at all.</p><p>AARON</p><p>Yeah, okay.</p><p>NATHAN</p><p>It would just seem bizarre to me — many very smart teams, including teams at DeepMind, are working on trying to get robots to do motor control to a degree similar to what humans do, and they are not doing it using text corpuses. It's just very hard.</p>
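<p><em>To make the framing in this exchange concrete, here is a minimal, hypothetical sketch of the control problem being described — sensor readings in, motor wattages out, repeated every timestep. None of the names refer to a real robotics API; the point is just that the "policy" mapping is the part specialized RL systems are trained to learn, rather than something looked up in a text corpus.</em></p><pre><code>from typing import Callable, Sequence

# A policy maps one vector of raw sensor readings to one power level per motor.
Policy = Callable[[Sequence[float]], Sequence[float]]

def run_control_loop(
    read_sensors: Callable[[], Sequence[float]],          # e.g. joint angles, IMU values
    send_motor_watts: Callable[[Sequence[float]], None],  # one wattage per motor
    policy: Policy,
    steps: int = 1000,
) -> None:
    """Repeatedly turn raw sensor numbers into raw motor powers."""
    for _ in range(steps):
        observation = read_sensors()
        action = policy(observation)  # the hard part: a learned mapping
        send_motor_watts(action)
</code></pre>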
<p>AARON</p><p>Maybe juggling is not a great example — I stand by it, but maybe it's not great because it's relatively difficult even for a human. Maybe marbles or something.</p><p>NATHAN</p><p>Playing football is the kind of benchmark which is being used.</p><p>AARON</p><p>Although then the question is, what does that even mean? There's no discrete yes/no type of thing — did it play football?</p><p>NATHAN</p><p>No. But you could see how competently it plays.</p><p>AARON</p><p>Yeah. Intuitively, I feel like appearing to be a competent soccer — for us Americans, football — player is actually a harder ask than juggling three balls, because, I don't know.</p><p>NATHAN</p><p>I suppose doing keepy-uppies or something. Or doing a cross or a turn. Sorry — you Americans.</p><p>AARON</p><p>Okay. I feel like a good example would be tossing a ball from one hand to the other such that it reaches a height of, like, three feet above the hand, or something like that. I do know how to juggle, and this is like step one — you practice juggling, but with one ball instead of three. And yeah, it actually seems not that far-fetched. If you describe via text how the robotic arms work, what type of inputs they take and how those correspond to physical movement, and you have the dimensions of the ball and give it g, gravity, 9.8 meters per second squared — yeah, it just does seem to be a text prediction task: what output, when given to the robotic arms, makes this ball toss work, or something.</p><p>NATHAN</p><p>I can completely agree that if this text was in its corpus — if in its corpus there was lots of that data —</p><p>AARON</p><p>Yeah, but a physics textbook doesn't count.</p><p>NATHAN</p><p>No, not anywhere close. Not anywhere close.</p><p>AARON</p><p>Not one physics textbook. I guess, empirically, it depends — I could just be mistaken about how hard it is, how many parameters the input to toss a ball actually necessitates. I do stand by the claim that, given some large enough body of text describing how the world works, this would be an emergent capability.</p><p>NATHAN</p><p>Yes, I agree with this. I don't think this emerges unless this is in its training corpus — "this" being, as an example, data of the specific watts which need to go into motors in response to, say, image data and maybe even tactile data, to do motor tasks. It doesn't have to be juggling in particular. But it has to be that kind of data, I think, to do that.</p><p>AARON</p><p>I feel like the emergent capabilities of GPT-3.5 and 4 are pretty strong evidence that you don't need data that concrete — it doesn't have to be that analogous for a strong system to make use of it.</p><p>NATHAN</p><p>What examples do you have?</p><p>AARON</p><p>The thing that popped into my mind is GPT — no, that's not a good example. I was going to say GPT-2 playing chess, but there's totally chess in the training data. I guess something I'd want to test — I don't have access to the GPT-4 API, but something to test would be: make up a game that's as simple as you can reasonably come up with while still being pretty confident it doesn't exist on the Internet — some variation of cards and tic-tac-toe or something — describe it in as much detail as you possibly can, and see if it seems to get it. You know what I mean? And I think the answer is, like, yes.</p><p>NATHAN</p><p>Yeah. 
So I think the disconnect here is: I view this motor control task as much more similar to "you have a sequence of T's and G's — what shape does this protein fold into?" than to "I have this simple game, can you play it competently?" Roughly half the neurons of the human brain are devoted to motor control.</p><p>AARON</p><p>Okay, so maybe half of the neurons in GPT-11 will be devoted to motor control.</p><p>NATHAN</p><p>If you had that sort of training corpus, then totally, yes, I completely believe this. If you had this — wait, no, sorry.</p><p>AARON</p><p>Actually, I don't stand by that proportion. I think there are probably some proto-motor-control systems in GPT-4, in the sense of being able to play some absurdly simple version of Pong if you just give it pixel representations or something, in like a 32 by 32 square — tell me if you disagree, but that's sort of a proto-motor type of thing. Maybe that's like 1% of the total informational content, but eventually 1% of a gigantic amount of informational content just does contain enough to do, like, human-level motor tasks.</p><p>NATHAN</p><p>I don't think any of the information you need — well, that's probably not true, it's probably in a textbook somewhere — but why should I even think this? Are there even files of this anywhere? I don't even know if there are files for this anywhere. Okay, so what do we think is the crux here? Yeah.</p><p>AARON</p><p>Honestly?</p><p>NATHAN</p><p>Sorry.</p><p>AARON</p><p>No, I honestly don't know.</p><p>NATHAN</p><p>Yeah, so I wonder if the crux is — I think you need specific data on — okay, I —</p><p>AARON</p><p>I think this is the crux.</p><p>NATHAN</p><p>I don't think that abstract knowledge of the equations of motion is anywhere near enough to do motor control. I think it would need, in its corpus, text representations of examples of a robot doing composite motor control in order to learn composite motor control.</p><p>AARON</p><p>This is extreme galaxy brain, but I want to say the universe itself is, like, proof of concept — you just give some system the laws of physics, or some system just is the laws of physics, and the output is humans juggling.</p><p>NATHAN</p><p>No, this is wrong. If you do RL — sure.</p><p>AARON</p><p>Like, the universe itself — presumably the informational content of the universe's generating function is, I don't know, a kilobyte or less or whatever. Not very much.</p><p>NATHAN</p><p>I have no idea how to respond to this claim. I'm sorry.</p><p>AARON</p><p>Wait, I feel like this is not that original — other people have used this type of analogy. But I don't know, I guess quantum stuff does complicate it a little bit.</p><p>NATHAN</p><p>Has this analogy made a prediction which has been proven correct?</p><p>AARON</p><p>I don't think so, but that type of thing is overrated.</p><p>NATHAN</p><p>Okay. I love testing predictions. It's my favorite thing to do.</p><p>AARON</p><p>So do I. I just think it's overrated — like, some people dismiss any type of claim that can't be settled empirically as just devoid of meaningful content. I am anti this.</p><p>NATHAN</p><p>Yeah, I'm also anti this. Okay. But I suppose — I don't know — when 
I have lots of evidence from current and past AI systems — what's been hard for them to do, the degree to which they've generalized, how hard it's been to do certain tasks — and also all the evidence from cognitive science, and I compare that to "what is the generating function of the universe?" — I'm like, okay, I don't know. I know which one I'm putting my stock in. I know which one I bet on.</p><p>AARON</p><p>I think I sort of lost you there. But the universe thing — I don't know. I guess I want to know what somebody like Sean Carroll thinks about this. Who's Sean Carroll? He's a podcast guy — if he's listening, which he's not: hello. He's a physicist who's turned more toward philosophy, knowledgeable about both physics and philosophy. I guess there's definitely a strain of naive empiricism in physics which is just "shut up and calculate" or whatever — like, these things don't really exist.</p><p>NATHAN</p><p>Yeah. I feel like I can't really evaluate this argument.</p><p>AARON</p><p>Yeah, it's not really fair. I guess I can't either. I don't know.</p><p>NATHAN</p><p>It feels like — I don't know. I'm going to wait till it makes a prediction before I feel obligated to think about it.</p><p>AARON</p><p>I mean, one prediction — I don't know if this counts as a prediction, but not the simulation argument exactly: insofar as you think this universe could be simulated, it's like a virtual machine or whatever. It's like running on the laws of physics.</p><p>NATHAN</p><p>I just don't get the connection between "GPT-N, without data on what velocities to run motors at, can learn how to do fine motor control" and this argument.</p><p>AARON</p><p>I guess I'm willing to stand by it in the sense that a relatively small amount of description — basically the equations for the laws of physics plus a starting state of the universe, or not even the universe, just, say, the starting state of my chair or something — you know what I mean? There's a lot of prediction there.</p><p>NATHAN</p><p>If you have a lot of time to fuck around and find out, then yeah, maybe. But now we're just simulating evolution. The amount of computation it takes to simulate evolution is now what's required.</p><p>AARON</p><p>Yeah.</p><p>NATHAN</p><p>I just don't think this has any bearing on the general intelligence hypothesis.</p><p>AARON</p><p>Yeah, you're probably right.</p><p>NATHAN</p><p>I agree that an RL system can learn how to do this. I agree that a transformer can do sequence prediction if it has the data. And what is fucking around and finding out, if not generating data?</p><p>AARON</p><p>Fair enough. Okay. How long has it been? Okay, it's been a while, I think. I kind of want to call it a stalemate. I don't even know, actually. Maybe you think that's too generous to me, because you clearly know much more about all this.</p><p>NATHAN</p><p>Knowing more things is not sufficient to be right. I think we should let the audience decide how much this has, in fact, changed your views.</p><p>AARON</p><p>No, you've definitely pushed me in the direction of "general intelligence is maybe just a smooshed-together bunch of modules" or something. Sure, yeah. Should I make a Manifold market about, broadly, who's right about general intelligence — Aaron or Nathan? 
I might just do it regardless.</p><p>NATHAN</p><p>Sure, go ahead. My guess is I will lose this Manifold market.</p><p>AARON</p><p>I mean, there are going to be, like, four people who listen to this, and maybe one of them — I'm not going to say who it is. I think he knows.</p><p>NATHAN</p><p>I kind of know who it is. Cool.</p><p>AARON</p><p>Okay. I hope you come back on Pigeon Hour, because you know a lot of random shit. Not random — well-selected shit.</p><p>NATHAN</p><p>I know some well-selected shit. Yeah, I'd love to come back on at some point. This was also a very enjoyable Pigeon Hour. Cool.</p><p>AARON</p><p>Thanks. See ya.</p><p>NATHAN</p><p>Thanks, Aaron. Bye.</p>]]></content:encoded></item><item><title><![CDATA[#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more]]></title><description><![CDATA[Note: skip to minute 4 if you&#8217;re already familiar with The EA Archive or would just rather not listen to my spiel]]></description><link>https://www.aaronbergman.net/p/4-winston-oswald-drummond-on-the-f38</link><guid isPermaLink="false">https://www.aaronbergman.net/p/4-winston-oswald-drummond-on-the-f38</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Mon, 17 Jul 2023 18:15:47 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135570843/bffade430c82d16fa8c54334d974b47e.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><em>Note: skip to minute 4 if you&#8217;re already familiar with <a href="https://forum.effectivealtruism.org/posts/DndmvDGStD3gTfhXk">The EA Archive</a> or would just rather not listen to my spiel</em></p><p><strong>Summary </strong>(by Claude.ai)</p><ul><li><p>This informal podcast covers a wide-ranging conversation between two speakers aligned in the effective altruism (EA) community. They have a similar background coming to EA from interests in philosophy, rationality, and reducing suffering. The main topic explored is reducing s-risks, or risks of extreme suffering in the future.</p></li><li><p>Winston works for the Center for Reducing Suffering (CRS), focused on spreading concern for suffering, prioritizing interventions, and specifically reducing s-risks. He outlines CRS's focus on research and writing to build a moral philosophy foundation for reducing suffering. Aaron is skeptical s-risk reduction is tractable currently, seeing the research as abstract without a clear theory of change.</p></li><li><p>They discuss how CRS and a similar group CLR are trying to influence AI alignment and digital sentience to reduce potential future s-risks. But Aaron worries about identifying and affecting the "digital neural correlates of suffering." Winston responds these efforts aim to have a positive impact even if unlikely to succeed, and there are potential lock-in scenarios that could be influenced.</p></li><li><p>Aaron explains his hesitancy to donate based on tractability concerns. He outlines his EA independent research, which includes an archive project around nuclear war. More broadly, the two find they largely ethically agree, including on a suffering-focused ethics and "lexical negative utilitarianism within total utilitarianism."</p></li><li><p>Some disagreements arise around the nature of consciousness, with Aaron arguing rejecting qualia implies nihilism while Winston disagrees. 
They also diverge on moral realism, with Aaron defending it and Winston leaning anti-realist.</p></li><li><p>As they wrap up the wide-ranging conversation, they joke about convincing each other and make predictions on podcast listens. They thank each other for the thought-provoking discussion, aligned in ethics but with some disagreements on consciousness and metaethics. The conversation provides an insider perspective on efforts to reduce s-risks through research and outreach.</p></li></ul><h1><strong>Transcript</strong></h1><p><em>Note: created for free by<a href="http://assemblyai.com/playground"> Assembly AI</a>; very imperfect</em></p><p>AARON</p><p>Hi, this is Aaron, and before the main part of the podcast, I'm going to read out an EA forum post I put out about a week ago, outlining a project I've been working on called The EA Archive. If you're already familiar with the post that I'm talking about, or would just rather skip ahead to the main part of the podcast, please go to four minutes in. The EA Archive is a project to preserve resources related to effective altruism in case of sub existential catastrophe such as nuclear war. Its more specific downstream. Motivating aim is to increase the likelihood that a movement akin to EA I E one that may go by a different name and be essentially discontinuous with the current movement but share the broad goal of using evidence and reason to do good survives, re emerges and or flourishes without having to reinvent the wheel, so to speak. It is a work in progress, and some of the subfolders at the referenced Google Drive, which I link, are already slightly out of date. The theory of change is simple, if not very cheerful, to describe. If copies of this information exist in many places around the world on devices owned by many different people, it is more likely that at least one copy will remain accessible after, say, a war that kills most of the world's population. Then I include a screenshot of basically the Google Drive folder, which shows a couple three different folders on it. And as shown in the screenshot, there are three folders. The smallest one main content contains HTML, PDF and other static text space files. It is by far the most important to download. If, for whatever reason, space isn't an issue and you'd like to download the larger folder, sue, that would be great. I will post a quick take, which is like a short EA forum post when there's been a major enough revision to warrant me asking for people to download a new version. How you can help one download, and I give some links to basically download either the two gigabyte version or up to all three folders, which works out to 51gb. This project depends on people like you downloading and storing the archive on a computer or flash drive that you personally have physical access to. Especially if you live in any of the following areas. One. Southeast Asia and the Pacific. Especially New Zealand. Two south and Central Africa. Three northern Europe, especially Iceland. Four latin America, Mexico City and south. Especially Ecuador, Colombia and Argentina. And finally, five. Any very rural area anywhere. If you live in any of these areas, I would love to buy you a flash drive to make this less annoying and or enable you to store copies in multiple locations. So please get in touch via the Google form which I link DM or any other method. Two suggest submit and provide feedback. 
Currently, the limiting factor on the archive's contents is my ability and willingness to identify relevant resources and then scrape or download them. I e. Not the cost or feasibility of storage. If you notice something ought to be in there that isn't, please use this Google form again, which I link to do any of the following one, let me know what it is broadly, which is good. Two, send me a list of URLs containing the info better, three, send me a Google Drive link with the files you'd like added best and four, provide any general feedback or suggestions. I may have to be somewhat judicious about large video and audio files, but virtually any relevant and appropriate PDF or to other text content should be fine. And finally, the last way you can help, which would be great, is to share it. Send this post again, which I'm linking in the podcast description. Send this post to friends, especially other EA's who do not regularly use or read the EA forum. So without further ado, the actual main part of the podcast.</p><p>AARON</p><p>So you're at the center for Reducing suffering, is that right?</p><p>WINSTON</p><p>That is correct, yeah.</p><p>AARON</p><p>Okay, I got it. Right, but there's like one of two options.</p><p>WINSTON</p><p>Yeah, there are two SRISK orgs basically, and they sound really similar. Center on long term risk is the other one.</p><p>AARON</p><p>Let's skip the I feel like anybody is actually listening to this is going to have heard of Srisks. If not, you can just go to the EA forum or whatever and type in SRISK and you'll get like a page or whatever. Do you think it's okay to skip to high level stuff?</p><p>WINSTON</p><p>Yeah, I think that sounds good. I do think a lot of people hear the same kind of abstract stuff over and over and so yeah, it'd be good to get deeper into it.</p><p>AARON</p><p>Okay, convince me. I think we come from a pretty similar ethical background or normative ethical standpoint. I don't know if you consider yourself like a full on negative utilitarian. I don't quite well, actually. Yeah, I guess I should ask you, do you or is it more just like general suffering focused perspective?</p><p>WINSTON</p><p>Yeah, I also don't consider myself like full negative utilitarian. I think I used to be more so, but yeah, I'm still overall more suffering focused than probably like the average.</p><p>AARON</p><p>EA or yeah, yeah, it's literally like my spiel, like I always say. Like also I was thinking somebody like as I was talking on Twitter a couple of days ago and I was thinking I don't actually know of any human being who's actually a negative utilitarian. Like a full on thinks that literally the only thing that matters is or positive experiences or whatever have no don't count for anything whatsoever. Yes. So convincing that there's like a theory of change for reducing s risks, at least given the current state of the world, I guess you know what mean. Like it seems to me there's really high level abstract research going on. And honestly, I haven't actually looked into what CRS does. So like maybe I'm like straw manning or something. I remember I applied for something at the center on Long Term Risk a while ago and it seemed like all their research was really cool, really important, but not the kind of thing where there's a theory of change. If you think that transformative AI is coming in the next decade, like maybe in the next century, but not the next decade. 
So you think there's a theory of change for any of this?</p><p>WINSTON</p><p>Yeah, I do think it's hard and often abstract, but yeah, I certainly think there's some very real concrete plans. So yeah, part of it depends on, like you mentioned, if you think transformative AI is coming soon. So part of it is how much you think there's going to be some lock in scenario soon where most of our impact comes from impacting. So then AI is like the big example there. And that is something that Clr is also doing or center on long term risk more internally. So there might be some work. I mean, there is some work on looking at which training environments lead to increased risk of conflict between AIS or maybe which type of alignment work is more likely to backfire in ways. Like, you might get like a near miss scenario where if you get really close to alignment, but not quite all the way, that actually increases the risk of lots of suffering compared to just not having alignment at. Yeah, and you just maybe have to talk to Clr more about that because I also don't know as much about what they're doing internally, but I can't talk about CRS. So then there's a different strategy, which is broad interventions, so less narrowly focused on a specific lock in scenario. And the idea there is you can look at risk factors, maybe you say it's just really hard to predict how SRISK will happen. There could be lots of different Sris scenarios and all the details are just kind of any particular details. Maybe unlikely that you're going to predict it correctly, but you could look at general features of the world that you can affect, where they can reduce SRISK across many different ways the future could play out. And so yeah, this idea of risk factors, I guess that's used in medicine, so like, a poor diet is not a poor health outcome in itself, but it's a risk factor for lots of other for depression and heart disease and all these things. For Sris, it might be easier to focus on risk factors. And then one basic example is maybe increasing society's concern for suffering is like a way that you can reduce Sris even if you don't know any of the details about how Sris will play out. So future people would then be in a better position, like they'll be more motivated to reduce suffering and then once they know more about the specifics of what to implement. Maybe this could also be related to the AI stuff as well. But if you think there's going to be a big crunch time or something and it's going to be like, you can have more impact and maybe it's also more clear how to have impact. It's better. If more of those people are motivated to reduce suffering, then that could be a way to sort of punt things to the future a little bit.</p><p>AARON</p><p>Yeah, I'm actually glad to hear especially about the particular AI, like the more technical things, like which training environments are likely to lead to, I guess esque risk prone misalignment or something like that. Because that's like a little bit I don't know if I've actually written it anywhere, so I'm not allowed to call it a hobby horse, but this is something I've been thinking about and it's like suffering focused alignment research, or at least SRISK aware alignment research. And it's not something that I really hear very much in the discourse, which is just my podcast feed the discourse. That's what the discourse is to me. Of course. Yeah.</p><p>WINSTON</p><p>I do think it's like neglected. It's often kind of like forgotten about a little bit.</p><p>AARON</p><p>Yeah. 
Is that something that Cr also the audio for me cut out for a few seconds, like a little while ago, so I might have missed something. But is that something that you think is like, Clr is doing more or is that something that CRS has also been researching?</p><p>WINSTON</p><p>Yeah, Clr does more on AI specific stuff. CRS generally does more broad, like value spreading moral philosophy, like improving political.</p><p>AARON</p><p>Are.</p><p>WINSTON</p><p>Both care about both, I think to some extent.</p><p>AARON</p><p>What is CRS like? Maybe I should probably have checked into this. I'm going to google it. I have my split screen open. But what is CRS up to these days? The center for Reducing.</p><p>WINSTON</p><p>Yeah, no, somewhat. I'm interested in all these things. I know some of the other AI stuff as well, but yeah, CRS, there's a lot of just like writing, doing research and writing books and things like this. So it's mostly just a research organization and there's also outreach and sometimes I give talks on asterisk things like this.</p><p>AARON</p><p>Yeah.</p><p>WINSTON</p><p>It'S very broad. There's basically spreading suffering, focused ethics, doing cause, prioritization on how to best reduce suffering, and then specifically looking at ways to reduce Sris are like the three main pillars, I guess.</p><p>AARON</p><p>Cool. Yeah. I'm checking out the books page right now. I see avoiding the worst suffering books. Three by Magnus Binding and one by to. Actually, I think I kind of failed. I tried to make an audiobook for Avoiding the Worst a while ago. I think it was the only audio version for A, but like, it wasn't very good. And then eventually eventually they came in and figured out how to get actual audio.</p><p>WINSTON</p><p>Yeah, that was great to have that for a few months or something.</p><p>AARON</p><p>Honestly, it would have been quicker.</p><p>WINSTON</p><p>Thanks for doing that.</p><p>AARON</p><p>I think. So, like, oh, well, next time I'll try to put my resources, line them up better. Anyway, other people got to listen to.</p><p>WINSTON</p><p>It too, so I think it seemed pretty good.</p><p>AARON</p><p>Okay, cool. Yeah. So maybe what do you personally do at CRS? Or I guess, how else have you been involved in EA more generally?</p><p>WINSTON</p><p>Yeah, I kind of do a bunch of different stuff like CRS, a lot of it has been just operations and things like this and hiring and some managing, but yeah, also outreach, some research. It's been very broad. And I'm also separate from CRS, interested in animal ethics and wild animal suffering and these types of things.</p><p>AARON</p><p>Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page, I feel like.</p><p>WINSTON</p><p>Yeah. Is your disagreement mostly tractability? Then? Maybe we should get into the disagreement.</p><p>AARON</p><p>Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's trapped ability. This is the reason why I haven't donated very much to anywhere for money reasons. But insofar as I have, I have not donated to Clrcrs because I don't see a theory of change that connects the research currently being done to actually reducing s risks. And I feel like there must be something because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they thought about this and maybe the answer is it's very general and the outcome is just so big in magnitude that anything kind.</p><p>WINSTON</p><p>Of that is part of it, I think. 
Yeah, part of it is like an expected value thing and also it's just very neglected. So it's like you want some people working on this, I think, at least. Even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there's people at CRS and Clr, like talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations and I think obviously people have to also be willing to listen to you, but I think there is some work being done on that and research is partially just like a community building thing as well. It's a credible signal that you were smart and have thought about this, and so it gives people reason to listen to you and maybe that mostly pays off later on in the future.</p><p>AARON</p><p>Yeah, that all sounds like reasonable. And I guess one thing is that I just don't there's definitely things I mean, first of all, I haven't really stayed up to date on what's going on, so I haven't even done I've done zero research for this podcast episode, for example. Very responsible and insofar as I've know things about these. Orgs. It's just based on what's on their website at some given time. So insofar as there's outreach going on, not like behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes. I just don't have reason to, I guess, know about that. And I guess, yeah, I'm pretty comfortable. I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody but with just like yeah, saying a very small change for a very large magnitude, just, like, checks out. You can just do expected value reasoning and that's basically correct, like a correct way of thinking about ethics. But even I don't know how much you know specifically or, like, how much you're allowed want to reveal, but if there was a particular alignment agenda that I guess you in a broad sense, like the suffering focused research community thought was particularly promising and relative to other tractable, I guess, generic alignment recommendations. And you were doing research on that and trying to push that into the alignment mainstream, which is not very mainstream. And then with the hope that that jumps into the AI mainstream. Even if that's kind of a long chain of events. I think I would be a lot more enthusiastic about I don't know that type of agenda, because it feels like there's like a particular story you're telling where it cashes out in the end. You know what I mean?</p><p>WINSTON</p><p>Yeah, I'm not the expert on this stuff, but I do think you just mean I think there's some things about influencing alignment and powerful AI for sure. Maybe not like a full on, like, this is our alignment proposal and it also handles Sris. But some things we could ask AI labs that are already building, like AGI, we could say, can you also implement these sort of, like, safeguards so if you failed alignment, you fail sort of gracefully and don't cause lots of suffering.</p><p>AARON</p><p>Right?</p><p>WINSTON</p><p>Yeah. Or maybe there are other things too, which also seem potentially more tractable. 
Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, then you can also get the issue that malevolent actors can take control of the AI and then what they want also causes lots of suffering that type of alignment wouldn't. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of like it'll just figure out our values and do the right thing. So, yeah, there's some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.</p><p>AARON</p><p>No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true. Okay. I was talking to somebody yesterday and I mentioned doing this interview and basically they said to ask you about the degree to which there's some sort of effort to basically keep asterisks out of the EA mainstream. Do you want to talk about that? Comment on it? And we can also think about later if we want to keep it in or not.</p><p>WINSTON</p><p>You mean like from non as risky.</p><p>AARON</p><p>A'S yeah, and then I think they use the word. This person used the word conspiracy. I have no idea how facetious that was if it was decentralized conspiracy or like a legit conspiracy, you know what I mean? So is there an anti asterisk conspiracy, yes or no?</p><p>WINSTON</p><p>The deep state controlling everything.</p><p>AARON</p><p>The deep EA state.</p><p>WINSTON</p><p>Yeah, actually, I'm not sure to what extent there's, like I tend to have a less cynical view on it, I.</p><p>AARON</p><p>Guess.</p><p>WINSTON</p><p>And I think maybe Easatize Sris less than they otherwise should, but potentially due to just, like, biases and maybe just, like, founder effects of the movement. And it's not nice to think about extreme suffering all the time, and you could mention some potential biases, but yeah, it's hard to say. I can't say I've personally had anyone really actively excluded me because of the Sris thing explicitly or something like that, but maybe it's like behind the scenes or something going on. But yeah, I guess I tend to think it's not so bad.</p><p>AARON</p><p>Okay.</p><p>WINSTON</p><p>And I think also there's been a lot a big push in the suffering focus asterisk communities to find common ground and find cooperative compromises and gains from trade. And I think this has probably been just good for everyone and good for other EA's perception of SRISK reducers as well.</p><p>AARON</p><p>Yeah, that's like something I want to highlight. This is, like, the most maybe, like the most. I feel like I've been casting, like, a negative light on sort of like oblique negative light on suffering both community. But, yeah, the gains from trade thing and cooperation is something that I did not expect to find, I guess, as much of diving in, and it actually makes total sense, right, because you know that you're up again. It's like the kind of thing that once you read about it, it's like, oh yeah, of course, once you incorporate how people are actually going to treat the movement, it makes sense to talk a lot about gains from trade. Gains from trade? Isn't cooperation is like a better term to use? 
I feel like most social movements, I guess even other subparts of EA that I've encountered, just haven't fully modeled themselves as part of an altruistic community to the extent that the suffering-focused community has. That's something I've been very impressed with, and I also just think it's object-level good.</p><p>WINSTON</p><p>Yeah, I think there are a lot of benefits. And it's not just reputation and gains from trade: if you have moral uncertainty, for example, then that's just another reason not to go all in on what you currently think is best.</p><p>AARON</p><p>Yeah, for sure.</p><p>WINSTON</p><p>Do you know about acausal stuff? That's another thing some s-risk people are kind of into.</p><p>AARON</p><p>I'm into it — I think I kind of buy it. I don't know how it relates to s-risks specifically, though. So how does it —</p><p>WINSTON</p><p>Well, there's one idea called evidential cooperation in large worlds. It used to be called multiverse-wide superrationality — it's a lot of syllables. Acausal trade is typically like you're simulating some other trade partner, right, or predicting them probabilistically. But with this, the idea is that your acting just gives you evidence: if the universe is really big or potentially infinite, there are maybe near-copies of you, or — this depends on your decision theory, of course — other agents whose decision making is correlated with yours. So you doing something gives you evidence that they're also going to do something similar. If you decide to cooperate and be nice to other value systems, that's evidence that other people with other value systems will be nice to you, and so you can potentially get some acausal gains from trade as well. Obviously somewhat speculative. And this maybe runs into the issue that if someone has different values from you, they might not be similar enough to be correlated — their decision making isn't correlated with yours enough to be able to do these compromises. But yeah, that's another thing you could get into. Maybe I should have explained the acausal stuff first, but it's not that important.</p><p>AARON</p><p>I mean, we can talk about that. Should we talk about acausal trade?</p><p>WINSTON</p><p>We don't have to. I just thought maybe it's confusing if I just threw that in there, but I think it's also fine.</p><p>AARON</p><p>Okay — do you want to give, like, a 20-second short version?</p><p>WINSTON</p><p>Okay. Imagine you're in a prisoner's dilemma with a copy of yourself. You probably shouldn't defect, because they'll defect back on you. That's the intuition for acausal interactions. You can't just model it as "as long as I defect, no matter what they do, that's the better option" — that would be the causal way to look at things. You also have to consider that your decision making might be correlated with your cooperation partner's, and that can affect your decisions. Obviously it can get more complicated than that, but that's the basic idea.</p><p>AARON</p><p>Yeah, I'd heard the term, but I learned about it substantively from Joe Carlsmith's 80,000 Hours podcast episode not that long ago. So I guess I'll mention that as where to get up to speed — insofar as I'm up to speed, that's how to get up to speed.</p>
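<p><em>A minimal sketch of the correlated-decision intuition Winston describes above, with made-up payoff numbers: if your move is strong evidence about what a near-copy of you does, cooperating can come out ahead even though defecting would dominate under purely causal reasoning.</em></p><pre><code># Hypothetical prisoner's dilemma payoffs, written as (my payoff, their payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def expected_payoff(my_move: str, p_partner_copies_me: float) -> float:
    """Evidential-style expected value: my choice shifts the probability
    that a correlated partner makes the same choice."""
    same = PAYOFFS[(my_move, my_move)][0]
    other_move = "D" if my_move == "C" else "C"
    diff = PAYOFFS[(my_move, other_move)][0]
    return p_partner_copies_me * same + (1 - p_partner_copies_me) * diff

# With high correlation, cooperating beats defecting:
print(expected_payoff("C", 0.9))  # 0.9*3 + 0.1*0 = 2.7
print(expected_payoff("D", 0.9))  # 0.9*1 + 0.1*5 = 1.4
</code></pre>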
<p>And it's interesting, like, he talks about how you can kind of affect the past, in one way you could think about it, if you're correlated with, because these correlated agents can also be in different time periods. Obviously this is all just, like, more speculative. This is just interesting, but I don't want to make it sound like this is the main s-risk thing or anything like that.</p><p>AARON</p><p>Yeah. Okay. So I don't know, do you want to branch out a little bit? How long has it even been? I don't know what else there is. I feel like I don't even know exactly what questions to ask, which is totally my fault, you know what I mean? So are there any cutting-edge s-risk directions or whatever that I should be knowing about?</p><p>WINSTON</p><p>Probably, but I don't know. Yeah, also happy to branch out, but I guess there's a lot one could say. So, other objections might be: this whole s-risk focus partially relies on having a long-term focus, and obviously that's been talked about a lot in EA, and then on caring about reducing suffering, having a focus on reducing suffering. So you could also talk about why one might have that view. I guess I'll just say there's also one more kind of premise that I think goes unnoticed more, which is a focus on worst-case outcomes. So you could also, instead of working on s-risks, if you were a suffering-focused longtermist, you could focus on just eradicating suffering entirely. The Hedonistic Imperative from David Pearce is an example of this, where he wants to use genetic engineering to make it so that people are just happy all the time; they have different levels of happiness, and there's a lot more detail on that. But that's a different focus than trying to prevent the very worst forms of suffering, which is what an s-risk is.</p><p>AARON</p><p>Yeah, I don't know. I feel like EA in general, this is like a big high-level point, EA in general seems like there's a lot of focus on weak-ass criticisms like, oh, maybe future people don't matter. Like, shut up. Come on, man. Yes, they do. I'm not exactly doing that justice. And then there are esoteric weird points that don't get noticed or whatever. So are there any esoteric weird cruxes that are, like, the reason? I don't know. One thing is, I guess, how much do you think that the answer or the nature of consciousness matters to the degree to which s-risks are even a possibility? I guess they're always a conceptual possibility, but a physical possibility.</p><p>WINSTON</p><p>Yeah. I think if artificial sentience is possible or plausible, then that raises the stakes a lot and you can potentially see a lot more suffering in the world, but I think without that, you still can have enough suffering for something to be considered an s-risk, and there's a non-negligible likelihood of that happening. So I wouldn't say working on s-risks hinges on this, but at least the type of s-risk you work on and the way that you do it may depend on it. Maybe getting involved with, more and more people are talking about digital sentience, and maybe pushing that discussion in a good direction could be a promising thing to do for s-risks.</p><p>AARON</p><p>Yeah. Do you have takes on digital sentience?</p><p>WINSTON</p><p>Well, I think I'm quite confused about consciousness still.</p><p>AARON</p><p>Good. Anybody who's not, I think, doesn't understand the problem.</p><p>WINSTON</p><p>It seems like a tough one, but I think that overall it's plausible.
Lots of views of consciousness allow it, and it also just would be so important if it happened. So there's another sort of expected value thing, because I think you can have many more digital beings than biological beings. They're more energy and space efficient, and they could expand to space more easily, and they could be made to, well, it seems unlikely to me that evolution selected for the very worst forms of suffering you could create, so digital sentience could be made to experience much more intense forms of suffering. So I think for these reasons, it's kind of just worth focusing on, since I think it's plausible enough. And there might just be a precautionary principle where you act as if they're sentient to avoid causing lots of harm, to avoid things that have happened in the past with animals, I guess, and currently, where people don't care or don't think they can suffer. And I tend to think that the downside risk from accidentally saying that they're sentient when they're not is lower than the reverse. So I think you can just get much more suffering from their actually being sentient and suffering while we just don't care or know, rather than the opportunity cost of accidentally giving them moral consideration when we shouldn't have. So I tend to err on the side of, like, we should be careful and act as if they're sentient.</p><p>AARON</p><p>Yeah, I think my, not objection, like, I literally agree with everything you just said, I'm pretty sure, and definitely agree with the broad strokes. My concern is that we have no idea what the digital neural correlates of suffering, as far as I know we have no idea what they, would look like or are. And so it seems especially intractable. If you just take two computer programs, I feel like the naive thing, where it's like, oh, if you're not giving an ML model reward, then that is the case in which the thing might be suffering, that just doesn't check out under inspection, you know what I mean? I feel like we have no grasp whatsoever on what digital subprocesses would correspond to suffering. I don't know if you agree with that.</p><p>WINSTON</p><p>I agree. I have no grasp of it. At least, maybe someone does. Yeah, I'm not sure I can say that much. It seems hard. I also have the feeling it's a lot more than that. I think you can at least get evidence about this type of thing still. First off, you could look at how evolution seems to have selected for suffering: it was maybe to motivate moving away from stimuli that are bad for genetic fitness and to assist in learning or something like this. So you can try to look at analogous situations with artificial sentience and see where suffering just might be useful. And yeah, maybe you could also look at some similarities between artificial brains, in some sense, and human brains when they're suffering. But potentially, I think likely, artificial sentience would just look so different that you couldn't really do that easily.</p><p>AARON</p><p>Yeah, I feel like all these things that I've been bringing up, sort of, maybe I'm just being irrational or whatever, but they sort of seem to stack on top of one another or something. And so, I don't know, I have maybe an unjustified intuitive skepticism, not of the importance, but of the tractability, as I've said a bunch of times. And maybe the answer is just like, it's a big number at the end, or you're multiplying all this by a really big number, and, like, I guess I kind of buy that too.
I don't even know what to say, also.</p><p>WINSTON</p><p>I also worry about Pascal's mugging, and I do think it's fair to worry about tractability when there's a bunch of things adding up like this. But I also think that s-risks are disjunctive, and so there are lots of different ways s-risks could happen. So, like I said earlier, we're kind of talking about specific stories of how s-risks could play out, but, obviously, the details of predicting the future are just hard. So I think you can still say s-risks are likely and influencing s-risks is possible, even if you think any specific s-risk we can talk about is kind of unlikely. Mostly it might be unknown unknowns. And I also think the other big thing I should say is the lock-in stuff that I mentioned earlier too. So influencing AI might actually not be that intractable, and space colonization might be another lock-in in some sense. Once we've colonized space to a large degree, it would be hard to coordinate because of the huge distances between different civilizations. And so getting things right before that point seems important. And there are a couple of other lock-ins or attractor states you could imagine as well that you could try to influence.</p><p>AARON</p><p>Yeah. Okay, cool. Do you want to branch out a little bit? Maybe we can come back to s-risks, or maybe, we'll see. Okay.</p><p>WINSTON</p><p>Are you convinced then?</p><p>AARON</p><p>Am I convinced? About what?</p><p>WINSTON</p><p>Exactly. Are you donating to CRS now?</p><p>AARON</p><p>I guess I actually don't know what your funding situation is. So that's one thing I would want to look at. Actually, I probably will do this, so I would want to, and hopefully will, look at the more specific differences between CLR and CRS, and also my current best guess, which is Rethink Priorities, in terms of the best utils-per-dollar charity. And a lot of this comes from the fact that I just posted a Manifold market that said, under my values, where should I donate? And they're at 36% or something. Okay. I would encourage people to do this. I feel like I should not be the only one doing it. It doesn't even matter if I do it, because I don't have, at least at the moment, a very large amount of money to donate, whereas some people, at least in relative terms, do.</p><p>WINSTON</p><p>Yeah, sorry, I was also being tongue in cheek.</p><p>AARON</p><p>But no, it's a good question, because it's easy to just nod along and say, like, oh yeah, I agree with everything you just said, but at the end of the day not actually change my behavior. You know what I mean? So the answer is, like, I'm really not sure. I think it's like, yeah...</p><p>WINSTON</p><p>There can be like an opposite thing, where often you can have kind of good reasons, but it's just hard to say it explicitly in the moment and stuff. So I think forcing you to commit is not totally reasonable.</p><p>AARON</p><p>Don't worry, I don't think you have the ability. I think there are people in the EA sphere who, when they say any proposition, in order to uphold their credibility they think they really need to follow through, or else they're going to be considered a liar or whatever, and nothing they ever say will be considered legitimate. And I think that's an important consideration.
But also, if I say some bullshit on a podcast and then I don't confirm it, I don't think you have the ability to make me commit to anything, in fact, via this computer connection or via this WiFi connection.</p><p>WINSTON</p><p>Yeah, that sounds...</p><p>AARON</p><p>Mostly joking, I guess partially joking, but saying it in a joking way or something.</p><p>WINSTON</p><p>I know what you mean. I think that's a good attitude to have.</p><p>AARON</p><p>Yeah. What's your story? What's your deal? I don't know, either, I guess, intellectually. How did you get into EA stuff? What's your life story?</p><p>WINSTON</p><p>I guess I got into it through a few different routes simultaneously. It's kind of hard to go back and look at things objectively and know how I ended up here. But I was into animal ethics for a long time, and I was also just into philosophy and kind of into rationalism, and those sort of pushed in a similar direction. And, I think, hearing, I took a philosophy and ethics class in college and I heard Peter Singer's shallow pond analogy there, and that was very convincing to me at the time. So that always kind of stuck with me, but it didn't change my actions that much until later. And yeah, I guess all these things sort of added up to prioritizing factory farming and wild animal suffering later. And again, not doing tons about this, but just kind of becoming convinced and thinking about it a lot and thinking about what I should do. And then s-risks came after that, I guess, after I got more into EA and heard about longtermism, and so I added that component. And that's the rough overview.</p><p>AARON</p><p>Okay. We have a very similar story, although my Intro to Ethics class was pretty shitty, but I had a similar situation going on. So what else do you think we disagree about, if anything, besides very niche topics we've talked about?</p><p>WINSTON</p><p>Yeah, I don't know. That's a good question. I guess I'm also curious what you do, typically, what is your priority? You said Rethink, and, I don't know, do you go to Georgetown still?</p><p>AARON</p><p>I guess this is an extremely legitimate question. So no, I don't go there anymore; I graduated about a year ago.</p><p>WINSTON</p><p>Okay.</p><p>AARON</p><p>And then I got a grant from the Long-Term Future Fund to do independent research. And I always say that with air quotes, because there has been some of that, but I've also done a bunch of miscellaneous projects, some of which have been supportive of other EA projects, where maybe the best descriptive phrase isn't independent research. So I helped with an outreach project called Non-Trivial, did some data analysis for CEA, and that's been going on for a year or whatever. So I'm trying to complete some projects. Actually, just yesterday I posted on the EA Forum about the EA Archive. I guess I'll give a shout out to that, or I'll encourage people to look at that post.
And I'll probably put that in the show description. Basically, I collected a bunch of, so, about a year ago, little tangent, but about a year ago Putin invaded Ukraine, and people were freaking out about nuclear war, and so I did some research and basically became, not convinced, but thought there was a pretty decent chance that in the case of a realistic nuclear war scenario, a lot of information on the Internet would just disappear, because it's physically stored in two to four data centers in NATO countries, and those places would probably be targets, et cetera. And so basically I collected a bunch of EA and EA-related information and put it in a Google Drive folder. And I'm just asking people, especially people who don't live in places like Washington, DC, where I live, I think Iceland is a great place, New Zealand, there are a couple of other places, like Colombia, to download it, so they can be like the designated backups. That's my most recent thing, I guess, or whatever. I have no life plan right now; I am applying to jobs. In terms of, intellectually, I guess, definitely suffering-focused. Okay, so I have a kind of pretentious phrase that I use, which is, I would say I am a suffering-leaning total utilitarian, in that I think total utilitarianism actually doesn't imply some of the things that other people think it implies. And so in particular, I think that total utilitarianism doesn't imply offsetability. So you can think, even under total utilitarianism, that there's sufficiently bad suffering such that there's no amount of well-being you can create that would justify the creation of that bad suffering. In terms of prioritization, I think I'm definitely bought into longtermism, maybe not all the connotations that people give it, but just the formal description of it: the long-term future matters a lot, probably overwhelmingly; it's the dominant source of moral value. I think I'm definitely more animal-welfare-pilled than other longtermists or whatever. Yeah, I think I'm shrimp-welfare-pilled; I think that's my second, that's also one of the charities on my Manifold market. So that's my five-minute spiel.</p><p>WINSTON</p><p>Nice. Yeah, I think we align on a lot of this stuff. The total utilitarian thing is interesting, because I think these are called lexical views sometimes. Is this what you're talking about? Where you're a classical utilitarian, and then at a certain point of suffering, it just can't be outweighed?</p><p>AARON</p><p>Yes. And then, above and beyond that, I think I have a very niche view, which is that, in particular, lexicality does not conflict with total utilitarianism. I think the general understanding is that it does, and I want to claim that it doesn't, and I have philosophical reasons. This is actually the first thing that I worked on this year or whatever; it was part of a longer post that I wrote with some other people on the EA Forum, and I've been meaning to clean up my part and emphasize it more or something like that. Do you have takes on this?</p><p>WINSTON</p><p>Yeah, I'm curious, at least, what do you mean by it doesn't conflict? Like, just more happy people would still be good under this? Is that what you mean?</p><p>AARON</p><p>Let me ask you: is it your understanding that total utilitarianism implies that any instantiation of suffering can be justified by some amount of well-being?
Is that your understanding?</p><p>WINSTON</p><p>Well, I think that's the typical way people think about it, but yeah, I guess I don't think, technically, yeah, I do think you can have this lexical view, and it doesn't even have to be...</p><p>AARON</p><p>Maybe we just agree then.</p><p>WINSTON</p><p>Yeah, well, I guess one thing I would say is, in expectation, you might still not ever want more beings to come into existence, because there's some chance they have this lexically bad suffering. And you're saying that can't be outweighed, right?</p><p>AARON</p><p>Yeah, that's like an applied consideration, which is actually important, but not exactly what I was thinking of. So wait, maybe my claim actually isn't as niche, or isn't as uncommon, because it sounds like you might agree with it.</p><p>WINSTON</p><p>Actually, no, I think it is uncommon, but also I agree, unfortunately.</p><p>AARON</p><p>Okay, sweet. Sweet. Okay, cool. We can convert everybody.</p><p>WINSTON</p><p>There's one person at CRS, sorry, I keep cutting you off, my wifi has kind of a delay.</p><p>AARON</p><p>Keep going.</p><p>WINSTON</p><p>But someone at CRS called Teo, his last name is hard to say, has a bunch of posts on this. He has a lot of posts related to suffering-focused ethics, and he has some talking about population ethics. And he examines some views like this, where you can have lexical total utilitarianism, basically, so you can have a suffering-focused version of that. And you could also have one where you also have lexical goods, where no amount of minor goods can add up to be more important than this really high good. I guess there's a lot of interesting stuff from that. They all seem to lead to having to bite some bullets.</p><p>AARON</p><p>Yeah, I actually haven't thought about that as much, but it sounds like sort of a direct, or not direct, but like a one-degree-separated implication of thinking the same thing on the negative side or whatever. And I guess part of my motivation or whatever for at least developing this view initially is that I feel like total utilitarianism just has, the arguments are just strong. And I feel like, at least for some understanding of it, for some general category of total utilitarianism, in fact, I think they're correct and I think they're true. And then I also think sometimes people use the strong arguments to conclude, oh, total utilitarianism is true, and then they take that phrase and draw conclusions that aren't in fact justified or something like that. But I'm speaking in pretty broad terms now; I guess it's hard to specify.</p><p>WINSTON</p><p>Yeah. I also don't know the details, but I think there are some impossibility theorems that have been worked out in population ethics that show you have to accept one of some counterintuitive conclusions, but they rely on, yeah, they rely on axioms that you could disagree with. And I think they typically don't consider lexical views.</p><p>AARON</p><p>Yeah, hopefully I'm going to talk to Daniel Faland, who I think we've actually chatted, and I think he definitely thinks I'm wrong about this, but I think he understands the view. So hopefully I'll get a more critical perspective so we can debate that or whatever. Damn, you're not providing any interesting content. We're just both right about everything.</p><p>WINSTON</p><p>Well, from how you described it, I might understand; it sounded kind of just definitional, like how you define total utilitarianism.
Because obviously you can have this view where more happiness is better but extreme suffering can't be outweighed.</p><p>AARON</p><p>But I guess, yeah, it definitely is kind of, there's like a semantic portion to all of this, man. I think one sort of annoying argument that I make, and I believe is true, is that the claim that total utilitarianism implies offsetability is just not justified anywhere. It's, like, assumed, but actually I haven't seen, or maybe it is, but I haven't found, any sort of paper, any sort of logical or mathematical or philosophical or whatever demonstration of offsetability. And I don't think it's semantically implied; I don't think it's tautological or whatever. Once you say the words total utilitarianism, it's not implied by the semantics or something.</p><p>WINSTON</p><p>Sorry, go ahead.</p><p>AARON</p><p>No, go ahead.</p><p>WINSTON</p><p>I was just going to say, have you looked into objections to lexical views? I think a lot of people just think the problems with lexical views are also just big, and so that's why they don't accept them. So maybe the sequence argument is a big one.</p><p>AARON</p><p>What's that? I don't know the term, but I might be familiar with it.</p><p>WINSTON</p><p>Yeah, it's also called some other things sometimes. But you could take your lexical extreme suffering that can't be outweighed, and then you could take a very slightly less intense suffering that lasts much longer, and then ask which one's worse. And most people would say that the still-torture-level suffering that's just slightly under wherever your lexical threshold is, but happening for much longer, is worse. And then you can just repeat that step all the way down, so something even slightly less bad than that, for even longer, is also worse. If each step is transitive, then you get the conclusion that this lexical suffering can be outweighed by tiny amounts of suffering, if you have a really big amount of it.</p><p>AARON</p><p>This sounds like at least a strike against lexical views, or at least an intuitive strike or something. One cop-out general point is that I think there's a lot of implicit modeling of ethical value as corresponding to the number line, or more specifically, every state of the world corresponds to a real number, and that real number can be scaled up and down by some real-number factor or whatever. So we just say, like, oh, the state of the world right now is x utils, and every other conceivable state of the world corresponds to some other real number. And I think this is what makes the sequence argument tempting: maybe it's true or whatever, but if you have this number line view, then it pretty directly implies that you can just move left or right by some reasonably well-defined amount on the morality axis. And I just feel like there's a lot of unexamined, I guess, formal work that needs to be done to justify that, and also that should be done by me to counter it. Right. So I can't say that I have a formal disproof of this or really solid arguments against it.</p>
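<p><em>[A minimal illustrative sketch, not from the conversation: one way to write down the "lexical total" picture Aaron is gesturing at, in contrast to the single real number line view. The representation and all numbers here are made up for illustration.]</em></p><pre><code># Hypothetical sketch: world value is still a sum over individuals (so it is
# "total"), but it is a pair compared lexicographically: first minimize total
# extreme suffering, then maximize total ordinary welfare.

def value(world):
    """world: list of (extreme_suffering, ordinary_welfare) per individual."""
    total_extreme = sum(s for s, _ in world)
    total_welfare = sum(w for _, w in world)
    return (-total_extreme, total_welfare)  # tuples compare lexicographically

status_quo = [(0, 5), (0, 5)]
# Add one instance of extreme suffering plus an arbitrarily large amount of
# ordinary well-being:
offset_attempt = [(1, 5), (0, 5), (0, 10**9)]

print(value(offset_attempt) > value(status_quo))  # False: no finite amount of
# ordinary welfare offsets the added extreme suffering, yet adding purely
# happy lives, e.g. [(0, 5)] * 3, still makes the world better on this view.
</code></pre>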
<p>It just feels like a sort of implicit mental model that isn't formally justified anywhere, if that makes sense.</p><p>WINSTON</p><p>I think, um, I think, sadly, we just agree too much, because that sounds kind of right to me.</p><p>AARON</p><p>God damn it. Okay.</p><p>WINSTON</p><p>I guess Magnus Vinding from CRS has also written about lexical views and kind of other versions of them, but yeah, it sounds like you've just got it all figured out. You're just right about everything.</p><p>AARON</p><p>What about consciousness? What's your view?</p><p>WINSTON</p><p>Like I said, I guess I'm just maybe too confused. I'm reading Consciousness Explained by Dan Dennett right now. I guess he has sort of an illusionist view that I'm trying to wrap my head around, among different views. I think I intuitively have this basic sense that the hard problem of consciousness seems really mysterious, and that we should be physicalists; you don't want dualism or something. But, yeah, trying to work all that out doesn't seem to go well, so I give substantial weight to things like, well, some forms of panpsychism, for example, which is something I've given more weight to over time, and which I never would have thought was anywhere near plausible before. But I don't know. I'm just not the person to ask about this.</p><p>AARON</p><p>Okay, I thought you might have, since, like you said, you started by getting into philosophy, kind of. Is that what you studied in college?</p><p>WINSTON</p><p>No, I studied computer science.</p><p>AARON</p><p>Okay.</p><p>WINSTON</p><p>I just really wanted to take this philosophy course.</p><p>AARON</p><p>Okay, got you.</p><p>WINSTON</p><p>Yeah, I've been into philosophy. I should have clarified, I just meant on my own time. I've been into philosophy, but mostly it's been moral philosophy, and maybe other things like personal identity, metaphysics, ontology, a lot of which I also don't know a ton about. Philosophy of mind is another really interesting thing to me, but it's not something I have a take on at this point.</p><p>AARON</p><p>Okay. This is like the same thing for me. I mean, I technically got a minor, but mostly I've just been into it on my own also. Same, I guess, in terms of interest.</p><p>WINSTON</p><p>And uncertainty about consciousness also, for sure, yeah.</p><p>AARON</p><p>Honestly, I don't have a good understanding of all the terms or whatever. I feel like, really, there are a couple of questions. One is, are qualia real? And I think the answer is yes, but I don't think it's like 100%; I think it's like 90%. And if not, then nihilism is true. What else? And then there's also just the question of, okay, if qualia are real, what is the correspondence between physical configurations of particles and qualia? And that's just, I don't know. It's hard. Right?</p><p>WINSTON</p><p>Yeah. I do disagree that nihilism follows from qualia not being real, I guess.</p><p>AARON</p><p>Really?</p><p>WINSTON</p><p>Well, yeah, I think I would be more inclined to take it the other way and say, like, oh, I guess it turns out qualia weren't the thing that I care about; it's just whatever this thing is that I've been calling suffering.</p><p>AARON</p><p>Okay, finally a disagreement. Finally. Okay.
Yeah, I've heard a lot of, well, not a lot, like maybe two, which is a lot, of illusionists basically gesture at this view: if there's no genuine subjective conscious experience as we intuitively understand it, then actually something else matters. And I think that's cope. Actually, no, the arguments for hedonism, or some view of hedonic value being important and really the only thing that fundamentally matters, at least in some sense, are very strong. In fact, they're so strong, and they're true, such that if hedonic value just isn't a thing, then no, there's no such thing as functional suffering or functional pain; that's not a thing that can exist. If qualia don't exist, then it's just like, whatever, we're all just trees.</p><p>WINSTON</p><p>Well, I think that might be right in some sense, but I think if we're making the assumption that qualia are not real, then what's the most plausible world where this is true? I know that I still have the experience of what I call suffering.</p><p>AARON</p><p>I would disagree with that, for what it's worth.</p><p>WINSTON</p><p>Like, you're just saying, in this example, no one suffers ever.</p><p>AARON</p><p>Or you are mistaken about having that experience.</p><p>WINSTON</p><p>Right? Well, in the world where qualia, yeah, I could be, but I guess maybe it depends what we mean here. And then you might also have a wager argument where, this is kind of a separate, more meta point, but no matter how certain you are that suffering is not real, you should act as if it's real, because just in case it is, then it really matters.</p><p>AARON</p><p>Oh, yeah, I kind of buy that, actually.</p><p>WINSTON</p><p>But, I don't know, I think someone like Brian Tomasik has this kind of illusionist view, and he obviously cares about suffering a lot.</p><p>AARON</p><p>That's something I've been really confused about, actually, because I just, I think he's just smarter than me. So in one sense, I kind of want to defer, but I don't think he's so much smarter than me that I have to defer or something like that, or that I can't wonder what's going on or something like that.</p><p>WINSTON</p><p>You're not allowed to question Brian Tomasik.</p><p>AARON</p><p>I respect the guy so much, but it just doesn't make sense to me. There's like a really fundamental conflict there.</p><p>WINSTON</p><p>I don't know. I also tend to think probably they're just kind of right. Like, illusionists don't seem to just be nihilists all the time. They seem like they just think we're confused about what we're talking about, but we're still talking about something that matters. He might just say, I just care about all these processes, like being averse to stimuli and screaming and all this stuff. And I also agree that it's not at all how it feels, like, that's not the thing that I care about. But I still think, if I'm totally wrong, I still clearly care about something that seems really bad to me. I guess I get where you're coming from, though.</p><p>AARON</p><p>Well, I guess one thing is our personal values, or not personal, but, we care about values and stuff like that. In terms of metaethics, are you a moral realist?</p><p>WINSTON</p><p>I tend to be more anti-realist.</p><p>AARON</p><p>Okay. Another disagreement, finally. Okay, cool.</p><p>WINSTON</p><p>Yeah. I'm not totally sure, but yeah.</p><p>AARON</p><p>Okay.
I feel like this has been debated so much, there's, like, no new ground to cover.</p><p>WINSTON</p><p>We're probably not going to solve it here, unfortunately. It is interesting. I guess I could just say maybe the reasons are, yeah: it just seems maybe more parsimonious. Like, you don't need to posit moral realism to explain all our behavior, though I guess that could be disputed. And then it also just explains why a lot of our moral intuitions sometimes seem to be just kind of arbitrary or inconsistent, inconsistent with each other and with moral intuitions that other people share. I don't know, how would you figure this out? How would you figure out where the exact lexical threshold is on your view, for example? It does seem like it makes more sense to just say, well, that's just kind of how I feel.</p><p>AARON</p><p>Oh, no, I don't think that's tractable. I don't think figuring out what that is is a tractable question at all. I do think that there are statements that are just observer-independent, like moral claims that are independent of any human's beliefs or anything like that, or any beliefs at all.</p><p>WINSTON</p><p>But you seem to be thinking you can get evidence about it, if you believe there is that threshold.</p><p>AARON</p><p>Like lexicality? I feel like that's a very specific, I haven't actually thought very much about the intersection between moral realism and lexicality. In fact, that's not at all a central example of the kind of thing. I do entertain pretty seriously the notion that there are some moral claims that have truth values and some that don't. And I feel like lexicality is one that actually might not, or something; or there might be a broad structure of moral realist claims, and then sub- or more nuanced particulars that just don't have a well-defined answer above and beyond people's beliefs.</p><p>WINSTON</p><p>Interesting. Yeah.</p><p>AARON</p><p>I don't actually think it matters very much. I, like, it's, I don't know. I mean, it's super interesting, but, like, especially...</p><p>WINSTON</p><p>If people agree.</p><p>AARON</p><p>I think it matters insofar as it interacts with normative ethics, which I think it does, actually, sorry, I sort of misspoke. I think it definitely can, and I think it does, interact with normative ethics, but once you control for that and you discuss the normative ethics part, above and beyond that, it...</p><p>WINSTON</p><p>Doesn't matter, I guess, right? Yeah. What matters is, well, this just depends on your moral view, so it all gets kind of messy. Yeah, I think that's true. But I do think it probably interacts in lots of ways. You might expect more moral convergence over time if there's moral realism, and that would make me a bit more optimistic about the future, though still not that optimistic.</p><p>AARON</p><p>Yeah, that's a good point. I haven't really thought much about that.</p><p>WINSTON</p><p>And maybe with moral uncertainty, it's more clear what's going on. It's really hard to find a way to do moral uncertainty in a well-defined manner, and it would be more like just regular uncertainty, I think, otherwise. Well, you might still run into lots of issues, but yeah, potentially that would change things, and I don't know.
I think there are others I could probably come up with, but I haven't thought about it that much.</p><p>AARON</p><p>Okay, so are there any topics you want to hit?</p><p>WINSTON</p><p>I guess I mostly was interested in talking about s-risks, so we did that. Yeah, I don't know.</p><p>AARON</p><p>Okay.</p><p>WINSTON</p><p>There are lots of philosophy topics I'm somewhat interested in, but I just feel like, I've heard Robin Hanson say people should just stop having so many opinions. So when I feel myself talking about something I don't know, that I'm not an expert on, I'm like, yeah, I probably shouldn't.</p><p>AARON</p><p>This sounds like a smart take. And then I think about it and I'm like, wait, no, you're totally allowed to. I don't have a latent list of opinions on stuff; I have a latent world model and a latent ethics that I can apply to just about any particular scenario. Right? Maybe it's too confusing to apply on air or something, but if somebody says, oh, what do you think about this new law to ban deodorant, I'm like, I don't know, sounds bad. Even though the opinion didn't exist before, I just thought about it, you know what I mean? But I have a generic ideology.</p><p>WINSTON</p><p>No, I think that's generally fair, but we might also just be picking really hard questions that require things my model hasn't figured out or something.</p><p>AARON</p><p>Okay. So in that case, I'm going to demand that you, not demand, but encourage you to, give like a 90% confidence interval on the number of views or listens or downloads this episode gets.</p><p>WINSTON</p><p>Yeah, I listened to your last episode, actually, and this was, is this a recurring thing?</p><p>AARON</p><p>It has been recurring. It's recurring until somebody convinces me to stop.</p><p>WINSTON</p><p>No, I like it. I feel like it's cool; maybe in like a year you can graph everyone's guesses over time.</p><p>AARON</p><p>Yes.</p><p>WINSTON</p><p>Okay, so, I don't know. What was the confidence interval?</p><p>AARON</p><p>What did you, oh, wait, hold on. Let me see if I can pull up Spotify real quick, so I can get better and better at predicting.</p><p>WINSTON</p><p>Should keep it consistent.</p><p>AARON</p><p>Yeah, although I guess I'll update depending on how well past episodes do. Okay, wait. Analytics: between both episodes that are up, 59 views on Spotify, or plays on Spotify. So maybe, I don't know, 80 total, or like 100 total over other platforms.</p><p>WINSTON</p><p>Am I guessing total?</p><p>AARON</p><p>Let's go with total for this episode.</p><p>WINSTON</p><p>And sorry, the confidence interval was like 95% or something, or what did you say?</p><p>AARON</p><p>Yeah, sure. I was thinking 90%, but you can choose if you want.</p><p>WINSTON</p><p>No big difference, 90. I've got to get it right, I would say. Yeah, I don't know, let me think. Oh yeah, also, when is the cutoff point?</p><p>AARON</p><p>Because this could just go until the end of time. It's not a falsifiable thing.</p><p>WINSTON</p><p>Yeah, I guess like 80 to 1,500.</p><p>AARON</p><p>Okay, that makes sense. I want to say a little bit more than 80, but probably not 200. I don't know, like 140 to, yeah, 1,500 sounds right. I'll go with 140 to 1,500.</p><p>WINSTON</p><p>Okay.</p><p>AARON</p><p>All right. We agree too much. All right.</p><p>WINSTON</p><p>Yeah, that's good.</p><p>AARON</p><p>Okay. I'm glad that I found somebody who has all of my opinions. Well, yeah, me too.
It's been lovely.</p><p>WINSTON</p><p>Thanks for doing this.</p>]]></content:encoded></item><item><title><![CDATA[#3: Nathan Barnard on how financial regulation can inform AI regulation]]></title><description><![CDATA[Note: the first few minutes got cut due to technical difficulties, so it sounds like we start in the middle of our conversation.]]></description><link>https://www.aaronbergman.net/p/3-nathan-barnard-on-how-financial-e81</link><guid isPermaLink="false">https://www.aaronbergman.net/p/3-nathan-barnard-on-how-financial-e81</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Thu, 13 Jul 2023 03:13:37 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135570844/4efd011043eac45eeddfbd8b50c36220.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p></p><p><em>Note: the first few minutes got cut due to technical difficulties, so it sounds like we start in the middle of our conversation.</em></p><ul><li><p><a href="https://thegoodblog.substack.com/">Follow Nathan&#8217;s blog</a></p></li></ul><h2>Summary by Clong</h2><ol><li><p>Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. These tests assess the resilience of banks to severe economic downturns and the potential for a domino effect if one bank fails. They believe that lessons from this process can be applied to AI regulation. Aaron agrees, but also highlights the need for a proactive approach to AI regulation, as opposed to the reactive measures often seen in banking regulation.</p></li><li><p>The Role of Central Banks in AI Regulation: Nathan suggests that institutions structured like central banks, staffed with technical experts and independent from government, could be beneficial for AI regulation. They believe such institutions could respond quickly and effectively to crises. However, they acknowledge that this approach may not be effective if AI development leads to rapid, uncontrollable self-improvement.</p></li><li><p>Compute Governance: The conversation then shifts to compute governance, which Nathan sees as a promising area for AI regulation due to the obviousness of someone using large amounts of compute. They believe that this could provide governments with a control lever over cutting-edge AI labs, similar to how central banks control banking loans and affairs.</p></li><li><p>AI Regulation and the Role of Public Actors: Nathan acknowledges that the leaders of major AI labs seem sensible and aligned with AI safety principles. However, they argue that regulation and public actors can play a crucial role in creating common knowledge between labs and preventing a race to the bottom. They also discuss the potential benefits and drawbacks of different regulatory approaches.</p></li><li><p>Financial Regulation as a Model for AI Regulation: Nathan believes that post-crisis financial regulation, such as the Dodd-Frank Act, has generally been effective. They suggest that AI regulation could follow a similar path, especially if AI becomes a significant part of the economy. However, Aaron expresses skepticism about the ability of political processes to produce effective AI regulation.</p></li><li><p>Regulation Before and After Crises: The speakers agree that pre-crisis regulation has generally been less effective than post-crisis regulation. 
They discuss the potential for AI regulation to follow a similar pattern, with effective regulation emerging in response to a crisis.</p></li><li><p><strong>Regulatory Arbitrage:</strong> The conversation concludes with a discussion on regulatory arbitrage, where banks shift activities to where it's cheapest to do business. Despite evidence of this behavior, Nathan notes that there was no race to the bottom in terms of regulation during the financial crisis.</p></li></ol><h1><strong>Transcript</strong></h1><p><em>Note: created for free by <a href="http://assemblyai.com/playground">Assembly AI</a>; very imperfect</em></p><p>AARON</p><p>I guess one thing is like, okay, so with the Fed, I honestly don't know how the mechanics of, for example, the federal, whatever, the funds rate...</p><p>NATHAN</p><p>The federal funds rate?</p><p>AARON</p><p>Yeah, there are various things like that. Okay, so somebody like Jerome Powell signs a piece of paper and it says the federal funds rate moves by a little bit. Okay, I don't know how the mechanics work after that. One general impression I have, and you can correct me, is that obviously the US federal government at large just has a bunch of stop-and-go levers within the banking system, such that if the US wants to basically stop most commercial bank lending pretty quickly, which it obviously wouldn't want to do, but if it did, it just could or something. Whereas right now I just own a computer and I can just run PyTorch on my computer, and the military would have to break into my house to stop me.</p><p>NATHAN</p><p>Yeah, I think this is where compute governance stuff seems really important. At least in the sort of current paradigm, having very large amounts of compute is important for training state-of-the-art models, and this is much more controllable. It's true that if you're in the world where just anyone on a laptop could run a state-of-the-art model, then I think you'd be in a lot of trouble. I think the bull case for compute governance as a way to do AI regulation is: amongst the three inputs into the AI production function, compute, data, and algorithms, compute has a lot of choke points. It's obvious when someone's making a large amount of it, and it's obvious when someone's using a large amount of it. And this makes it the most obvious lever by which governments could have the sort of control over labs working on the cutting edge that central banks have over banking loans and affairs.</p><p>AARON</p><p>Am I correct to assume that you listened to the 80K podcast with Lennart Heim?</p><p>NATHAN</p><p>No, I've read...</p><p>AARON</p><p>Oh, okay. Because, I figure, then I'm very impressed, because you're hitting all the points; you're basically summarizing the podcast right now.</p><p>NATHAN</p><p>Okay.</p><p>AARON</p><p>So I would recommend that to my fellow uneducated people.</p><p>NATHAN</p><p>Yeah, I suppose as an aspiring AI governance researcher, this is the sort of professional knowledge which I'm sort of required to have.</p><p>AARON</p><p>I'm glad. It would be bad if you guys were, I mean, not going to lie, sorry, dude, I kind of trust, I don't know, Lennart seems like a little bit more senior or experienced.
Yeah, but I'm glad you guys are basically converging; that's evidence.</p><p>NATHAN</p><p>This is from stuff I've read from within the community.</p><p>AARON</p><p>So it's not really converging, like, by accident.</p><p>NATHAN</p><p>Yeah, I think lots of people have been thinking about this for a long time. But yeah, Lennart Heim is much more senior than Nathan Barnard.</p><p>AARON</p><p>Okay, so what's your, slash, you can also just carry on, I don't know if you had a thing, but what's your vision for, I don't know, compute governance?</p><p>NATHAN</p><p>I'm afraid I'm not a compute governance guy. This was more just a reference to why compute governance is exciting, and why, if we sort of try to do AI regulation, compute governance seems like such a promising thing to work on to make the full stack of AI regulation sort of work.</p><p>AARON</p><p>Well, yeah, I guess one thing, and I feel like a bad liberal saying this, for extremely contingent reasons, I think it seems plausible that four guys across some AI labs are just maybe not, in fact, responsible enough to have a really good shot at controlling their own, I guess, self-regulating cutting-edge AI or whatever. But it seems like an unusual situation, in that plausibly just a couple of people at the big labs would in fact do a better job than whatever the fucked-up political process in the US would generate, like an agency.</p><p>NATHAN</p><p>Yeah, I think this is a thing that we have to really grapple with: that Sam Altman does, in fact, seem like quite a nice guy and pretty sensible and has reasonably good alignment takes. And similarly, Demis Hassabis, and similarly Dario and Daniela Amodei, do just in fact seem like quite sensible people and probably much more bought in, much more AI existential safety pilled, than I'd expect a regulator to be. I think this is just a thing you have to grapple with. Yes, I agree. Yeah, sorry, go ahead. I think, over the longer run, how much churn do I expect there to be in which company is leading on state-of-the-art AI systems? I suppose if one has, like, five-year timelines, then probably not very much churn, but if one has 50-year timelines or even 30-year timelines, I think we should expect a lot more churn. And in these worlds, these sort of medium to longer timelines worlds, yeah, I think I'm much less confident that the particular individuals leading cutting-edge AI labs will have alignment takes and making-AI-go-well takes that I'd be happy with. I'm more bullish on regulation in these worlds.</p><p>AARON</p><p>Yeah, I guess hopefully they don't necessarily compete with one another. I mean, maybe they kind of do, but it seems like they can kind of stack, so hopefully we can have both.</p><p>NATHAN</p><p>Yeah, I think this would be good. I think there's also maybe another role that regulation and public actors can play, like creating common knowledge between labs that none of them are engaging in a race to the bottom. It sort of allows them a way of all binding their hands. Maybe this won't matter very much, but I think that could be another case for regulation, even if you think that the specific takes of the specific individuals with power leading AI labs are quite good.</p><p>AARON</p><p>Yeah, I don't know.
I just get gloomy, for I think justified reasons, when people talk about, oh yeah, here's the nine-step process that has to take place and then maybe there's like a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.</p><p>NATHAN</p><p>I think this is pretty radically different to my mainline expectation.</p><p>AARON</p><p>What's your mainline expectation?</p><p>NATHAN</p><p>I suppose I expect AI to become of increasing importance to the economy, and to come to be a really very large fraction of the economy, before really crazy stuff starts happening. And in this world, anomalous, I'm not sure that's the word, it'd be very unusual if this extremely large sector of the economy, which impacted a very large number of people's lives, remained broadly unregulated.</p><p>AARON</p><p>It'll be regulated, but just maybe in a stupid way.</p><p>NATHAN</p><p>Sure, yes, maybe in a stupid way. I suppose, critically, do you expect the stupid way to be, like, too conservative or too lenient on the specific question of AI existential risk, or to just not interact with it?</p><p>AARON</p><p>I guess generally too lenient, but also mostly on a different axis, where, just, I don't actually know enough. I don't feel like I've read or learned about various governance proposals enough to have a good object-level take on this. But my broad prior is that, for anything, there are a lot of ways to regulate it poorly, and the reason, insofar as anything isn't regulated poorly, is because of a lot of trial and error.</p><p>NATHAN</p><p>Maybe.</p><p>AARON</p><p>I mean, there are probably exceptions, right? I don't know. Pax Americana, like, maybe we just kept winning wars starting with World War II. I guess that's maybe a counterexample or something like that.</p><p>NATHAN</p><p>Yeah, I think I still mostly disagree with this. Oh, cool. Yeah. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree the space of optimal regulation is very small, but I don't think we have to hit that exact space for regulation to be helpful. Especially if you consider that, if you sort of buy AI existential safety risk, then it's not this quite fine balancing act between consumer protection on the one hand and stifling competition and stifling innovation on the other. It's trying to prevent this quite specific, very bad outcome, which is much worse than somewhat slower economic growth, particularly if we think we're going to get very explosive rates of economic growth really quite soon. And the cost of slowing down economic growth, even by quite a large percentage, is very small compared to the cost of sort of an accidental catastrophe. I sort of think of slowing economic growth as the main cost, the main way regulation goes wrong, currently.</p><p>AARON</p><p>I think in an actual sense that is correct. There's the question of, like, okay, Congress in the States, it's better than nothing. I'm glad it's not anarchy, in terms of, like, I'm glad we have a legislature.</p><p>NATHAN</p><p>I'm also glad about the United States.</p><p>AARON</p><p>How reasons-responsive is Congress?
I don't think it's reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one, or if there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every axis, maybe not every axis, but at least some relevant axes.</p><p>NATHAN</p><p>Yeah, that doesn't seem obviously true to me. I think Dodd-Frank was quite a good law.</p><p>AARON</p><p>That came after 2008, right?</p><p>NATHAN</p><p>Yeah, correct. Yeah, there you go. No, I agree. I'm not especially confident about doing regulation before there's a quite bad warning shot, and yes, if we're in a world where we have no warning shots and we're just blindsided by everyone getting stripped of their atoms within 3 seconds, this is not good. But in worlds where we do have one of those shots, I think Glass-Steagall is good law, not "good law" as a technical term, I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Asset Relief Program was a good piece of legislation.</p><p>AARON</p><p>I recognize these terms, and I know some of them, and others I do not know the contents of.</p><p>NATHAN</p><p>Yeah, so Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Asset Relief Program was passed in, I think, 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the sort of landmark post-financial-crisis piece of legislation, passed in 2011. I think these are all good pieces of legislation. I think financial regulation is probably unusually good amongst US legislation. This is like a quite weak take, I guess: it's unusually good.</p><p>AARON</p><p>So, I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st-century era is, what was the regulatory regime in 1925 or something? I just don't know.</p><p>NATHAN</p><p>Yeah, I know a bit. I haven't read this stuff especially deeply, so I don't want to be overconfident here, but sort of the core pieces which were important for the Great Depression going very badly were, yeah, no distinction between commercial banks and investment banks, so a bank could take much riskier, much riskier, actions with customer deposits than they could from 1933 until the repeal of Glass-Steagall. And combine that with no deposit insurance; if you have the combination of banks being able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous, as is now known. And Glass-Steagall's repeal...</p><p>AARON</p><p>I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there were a bunch of things. Basically, there's the First Bank of the United States, there's the Second Bank of the United States, there's the free banking era, there was the era of national banks, yada, yada, yada. It looks like in 1907 there was some panic. I vaguely remember this from, like, AP US History, like seven years ago or...</p><p>NATHAN</p><p>Yes, I suppose, in short, I sort of agree that the record of non-post-crisis legislation is not very good, but I think the record of post-crisis legislation, at least in the financial sector, really is quite good.
I'm sure lots of people disagree with this, but this is my take.</p><p>AARON</p><p>Yeah. Now, I have no idea if this is a productive direction, but I feel like maybe there's almost something resembling anthropics going on, which is like, if you hadn't had effective financial regulation, we wouldn't be talking about transformative AI. The economy that supports the development of transformative AI depends on a strong economy that's robust to banking failure. And so there are, like, anthropic effects going on.</p><p>NATHAN</p><p>So I suppose one thing is, I'm, I don't know, quite skeptical of bringing up anthropics in these kinds of areas. We then got quite bad financial, well, the US got quite bad financial regulation from the mid-90s until, well, this resulted in the 2008 financial crisis, I guess, when...</p><p>AARON</p><p>The golden era of deep learning was not during that time.</p><p>NATHAN</p><p>Sorry, the US did in fact have quite bad financial regulation for a 20-year period.</p><p>AARON</p><p>Right. What I'm saying is the era of transformative AI was not, either during or directly following, this period of bad regulation.</p><p>NATHAN</p><p>I think that's got nothing to do with it. Like, the US economy was doing very well in 2005 and in 1997. Okay.</p><p>AARON</p><p>This is like a very speculative, off-the-cuff thing. Yeah, I don't necessarily endorse anything I said in the last bit. Yeah, no, I don't know. I feel like maybe I'm just also being, I really just don't know. I'm slinging takes based on broad impressions. We love some takes, but I basically defer to the median EA-ish, or I guess not-ish, EA governance researcher.</p><p>NATHAN</p><p>No, there are some selection effects: governance researchers are selected for people who think governance has a chance of working. I'm not sure how strong a selection effect this is. Probably quite strong. I think it's really worth pushing back on.</p><p>AARON</p><p>You're such a good representative of the effective altruism community.</p><p>NATHAN</p><p>I endeavor to be.</p><p>AARON</p><p>Sorry. Okay. Is there anything more to talk about on this matter?</p><p>NATHAN</p><p>Oh, many, many things.</p><p>AARON</p><p>You mean we didn't fully solve compute governance just now?</p><p>NATHAN</p><p>Many things. Just on the financial regulation case studies.</p><p>AARON</p><p>Oh, right, okay.</p><p>NATHAN</p><p>Yes, keep going. Might have exhausted people's appetites, Pigeon Hour listeners.</p><p>AARON</p><p>Fuck them, I guess. I guess I could stop you, but keep explaining the analogy.</p><p>NATHAN</p><p>Cool. I think the analogy is not terribly weak, but I don't want to overstate how strong I think the analogy is. I think most of the value here just comes from looking at how complex regulation works. I think maybe the second key takeaway from looking at stress tests in particular is that there were not any race-to-the-bottom dynamics, despite the fact that there is, I think, really quite robust evidence that banks do engage in regulatory arbitrage.
It's very hard to do causal inference on whether banks do engage in regulatory arbitrage.</p><p>AARON</p><p>Can you explain regulatory arbitrage? Yeah, the concept?</p><p>NATHAN</p><p>Yeah, just basically trying to shift assets, to shift activities, to where it's cheapest to do business.</p><p>AARON</p><p>Right.</p><p>NATHAN</p><p>This can either be in terms of moving assets, like, off balance sheet, so keeping assets under whatever legal restriction they're in but shifting them to a different part of the bank. Using the repo markets instead of other ways of doing interbank borrowing would be one example of this. But also, just more obviously, moving assets out of jurisdictions: moving assets from a US bank to its UK branches, for instance, so they come out from under the regulation.</p><p>AARON</p><p>I mean, it's always, oh yeah, Ireland.</p><p>NATHAN</p><p>Yes, Ireland. So we have quite robust evidence that the banks do do this. But despite this, there was no... it's been the case that basically all central banks, all central banks that matter, have adopted really quite stringent stress-testing regimes after the financial crisis. Despite the fact that, it seems to me, their domestic financial system probably could have gained some advantage by not adopting these, at the expense of the global financial system. And also, at least in the literature I've read, in the interviews I've read, there's not any discussion of fear of regulatory arbitrage as something constraining the sorts of regulation which the central bank could adopt. I think this is a pretty interesting takeaway.</p><p>AARON</p><p>I mean, how much of this is just because of correlated economies?</p><p>NATHAN</p><p>Not just... wait.</p><p>AARON</p><p>So the theory behind why there would be a race to the bottom is that, for example, the UK makes it cheaper to do business in the UK, and banks move capital. Like, you can imagine, on one end of the spectrum, a perfectly global, like a super globalized economy. Like total free borders, economically, like no borders or whatever.</p><p>NATHAN</p><p>Yeah.</p><p>AARON</p><p>In which case I'm pretty sure that reasoning falls apart, right? Under this hypothetical perfectly globalized economy, because everything equalizes, or the economic benefit just equalizes throughout the world without respect to borders or whatever.</p><p>NATHAN</p><p>And then, like, can you be more concrete?</p><p>AARON</p><p>Okay, yeah, that was a terrible explanation. So, okay, imagine two different states of the world, both of which are way more extreme than the actual world right now. On one hand, you can have perfectly closed economies, like every country, all economic activity happens within their borders or whatever. And on the other extreme, you have a perfectly globalized economy, in which case rates...</p><p>NATHAN</p><p>...are the same across, like, interest rates are the same across all countries.</p><p>AARON</p><p>Sure, yes, interest rates and also, I guess, returns to everything.</p><p>NATHAN</p><p>Both wages and interest rates the same. Right.</p><p>AARON</p><p>You're good. Okay. Man, this guy's sharp. Okay.</p><p>NATHAN</p><p>Doing a British undergraduate degree... 
You learn the fucking subject you study.</p><p>AARON</p><p>In America, we get to study more than one thing.</p><p>NATHAN</p><p>Ha.</p><p>AARON</p><p>These are both, like, okay, in the libertarian paradise world...</p><p>NATHAN</p><p>Yeah.</p><p>AARON</p><p>So central banks only have... I'm very much thinking and talking at the same time, as you can probably tell, but that's fine.</p><p>NATHAN</p><p>This is what Pigeon Hour is for.</p><p>AARON</p><p>Exactly.</p><p>NATHAN</p><p>Writing is hard. Talking is fucking easy.</p><p>AARON</p><p>Man, I'm realizing how poor a grasp I have on what a central bank does. And realistically, I think I kind of do know. They do a bunch of things. I know the Fed, like, adjusts the federal funds rate. They control the money supply. They're like the bank for banks. Above and beyond that, I have no idea.</p><p>NATHAN</p><p>They also do financial regulation, which is the relevant bit for our discussion. That's their key other function.</p><p>AARON</p><p>Right. And so any financial regulation in this libertarian paradise... it might adjust the interest rate, for example.</p><p>NATHAN</p><p>Everything should equilibrate, right?</p><p>AARON</p><p>So there's no race to the bottom, because everybody's, by definition or by assumption, in the same position.</p><p>NATHAN</p><p>There are no supernormal profits, perfectly efficient markets, and so forth. Okay. I suppose the key way this is not like our current world is that interest rates have in fact not equalized across countries. And also we empirically see, again, I want to emphasize causal inference is very difficult in this area, but insofar as we can do causal inference here, as well as looking at more basic measures, like much more basic regressions, we do in fact see that when financial regulation gets more stringent in country A, the banks in country A move assets to country B, where financial regulation is less stringent.</p><p>AARON</p><p>Meta-level aside: didn't you make a thread that's, like, all your rejections from EA places?</p><p>NATHAN</p><p>Oh, I did? Yeah. Okay. And also non-EA places. Mostly non-EA places.</p><p>AARON</p><p>Okay. I'm getting so blackpilled on everything, because you know your shit for a bunch of random things, like random important things. Not random, but important things. And so, yeah, man, there's no hope for the rest of us. Anyway. Okay, back to the object level. Keep going. Yeah. Interest rates haven't equalized.</p><p>NATHAN</p><p>Yeah, I think you should be less blackpilled, just very dark gray. Well, we can come back to that later.</p><p>AARON</p><p>I mean, whatever you want. Yeah. Now or later.</p><p>NATHAN</p><p>Yeah, let's just finish off this thread and then we can maybe come back to that discussion. Yeah. Interest rates haven't equalized, and we do see this empirical phenomenon of banks moving assets towards countries where it's cheaper to do business, assuming that other country is also a high-income country. I.e., banks don't move from... like, you can make your regulation make it as cheap to do business as you want in the Congo, you know, Wells Fargo is not moving any of its assets to the Congo. And so in this context, I think it is on its face interesting, I'd say actually not surprising, but I think it is on its face interesting, that we don't see this race to the bottom between central banks in terms of post-2008 financial regulation. I think maybe the analogy here to AI is that, yeah, we might have reason to think... 
I think maybe it gives us some reason to think that international competition is, like, less of a constraint on regulation than you'd expect, even without any explicit coordination between countries and without the US having to use either its soft or hard power to enforce it. So the Fed played no part in forcing the ECB or the Bank of England or the People's Bank of China to adopt stress tests. I think this is actually quite encouraging for thinking about whether a lack of international coordination will doom AI regulation.</p><p>AARON</p><p>Yeah. At least for a while, for the last 10 or 20 minutes or whatever, I lost track of what this analogy was for. No, but I think it's pretty direct, right? So we're talking about country-level regulation versus country-level regulation in these two domains. And I guess my impression is that the financial systems across the developed world are much more equal than the AI infrastructure or, like, AI economies.</p><p>NATHAN</p><p>Yeah, I think that's very clearly true.</p><p>AARON</p><p>Yeah. Okay, so maybe this is just, I guess, good, because you only have to worry about one government instead of...</p><p>NATHAN</p><p>At least, I think, if you create a good model of regulation, which is what happened with stress tests: stress tests really proved their worth in 2009, they really restored confidence in the US banking system. If you have this good model of regulation, then it sort of becomes the thing that gets adopted very widely, which is what happened with stress tests in particular.</p><p>AARON</p><p>Yeah, cool. Yeah, sounds good.</p><p>NATHAN</p><p>Yeah. I think maybe we'll come back to stress tests some other time. I've got six more takeaways. No, I've got four more takeaways. That might be a little excessive.</p><p>AARON</p><p>You can at least list them. I don't know how much, maybe we won't have, like, an hour for each or something, but you can do the short version at least. What are they? And then maybe we can figure out which ones to dive into.</p><p>NATHAN</p><p>Sure, let's find the doc. Let's find the doc. Here's the rest of the doc. Yeah, I think the third and maybe other really big one is: credit rating agencies, I think, played a pretty major part in the financial crisis. Yeah, I think, yes, credit rating agencies played a pretty major part in making the financial crisis. And at least some of the ways in which we're doing AI regulation now seem to me to have similarities to how credit rating agencies were structured. I think that's maybe a second takeaway. I think the other takeaway is that industry standards were just very important in influencing what regulation was. It's been a big question in thinking about how much sway interlab agreements have currently, how much influence can we have on long-term AI regulation via getting interlab agreements, interlab standards? I think the lesson from regulation, especially in the US over the past century or so, is: a lot. It could matter a lot, and could even be the most important thing. I think the final one, the third takeaway I still haven't chatted about, is that banks mostly haven't been able to game the stress-testing system. Those are the three we haven't covered.</p><p>AARON</p><p>I mean, these all seem like good analogies. 
Which one of these jumps out to you as the most insight-rich?</p><p>NATHAN</p><p>Yeah, I think the industry standards one might be worth talking about. I think it really is striking just how influential industry-level self-regulation has been in influencing the long trajectory of US financial regulation.</p><p>AARON</p><p>We're talking about the laws, or non-law agreements, norms within companies doing the... yeah.</p><p>NATHAN</p><p>Various ways in which banks and other financial institutions have regulated themselves through non-legal channels have had really profound influences on the legal ways in which they've been regulated.</p><p>AARON</p><p>Okay, no, I don't know anything about this, and I'm kind of surprised, and also, okay, yeah. So what are some of these norms?</p><p>NATHAN</p><p>Yeah, so stress tests are one. Stress tests started out as something done within the mortgage industry, and then the methods of stress testing were adopted pretty wholesale by central banks after the financial crisis. There's not much more of a flashy takeaway here. This is just the first takeaway: this method which they used is now just used in a very similar way, and it now has the force of law. That's the first one. The second one, I think this is maybe the most striking, is that credit rating agencies sort of pop up in the US in the early part of the 20th century.</p><p>AARON</p><p>I'm sorry, there's something with the audio. Could you say that again?</p><p>NATHAN</p><p>Yeah. So stress tests were first used by the mortgage industry in the 1990s, okay, and have now just been adopted by all the major central banks.</p><p>AARON</p><p>Okay. So first there's, like, industry proliferation, and now you've added international legal...</p><p>NATHAN</p><p>Yeah. And now it's just international legal proliferation. Yeah. Wow. Yeah, that's it. That's the takeaway for that one. I think the second really striking example is credit rating agencies. Credit rating agencies sort of pop up in the early 20th century in the US and went, right, we're going to rate bonds. And so by rating bonds, investors, and others who mostly buy bonds, will know how much they can trust them. And then both the specific organizations doing the credit rating and the credit ratings themselves became enshrined in US law in 1975. So a large number of financial institutions, pension funds I think most importantly, are only allowed to invest in bonds rated prime, or best in the prime category, by credit rating agencies. And the only credit rating agencies which count, this isn't quite true anymore, but it's pretty close to being true, are the private credit rating agencies established in the early part of the 20th century. Yes, I can be much more concrete. So, like, maybe the US teachers' pension fund wants to buy some bonds, and the US government says, cool, you can only buy bonds rated prime or better, and rated prime or better by Moody's or Standard and Poor's; there are a few others, but basically just those.</p><p>AARON</p><p>Kind of sounds like just regulatory capture by Moody's. 
Do you think that's the right read, or is it actually, like, wise... I think it's like philosopher-king regulation.</p><p>NATHAN</p><p>I think it's regulatory laziness.</p><p>AARON</p><p>Okay.</p><p>NATHAN</p><p>They were all very established by 1975, and it was just much easier to piggyback off them than to develop their own.</p><p>AARON</p><p>Object level, though: is it even good for the government to do this? I honestly don't have a formed take at all on whether you should expect the market to be able to handle risk appropriately, or whether you want regulations on what types of bonds, like, the teachers' union or whatever can buy.</p><p>NATHAN</p><p>Yeah, I have no especially strong take here. My weak take is yes. I think there are actually reasons to think that this could have quite bad unintended consequences, because basically something quite similar did have quite bad unintended consequences, and the bad unintended consequence was the financial crisis. Weakly, yes. But the reason I wouldn't characterize it as regulatory capture by Moody's is that, from looking a little bit at the history of this, it didn't seem to me there were lots of lobbying efforts by the big credit rating agencies to have themselves enshrined in law like this. And they're also quite small organizations. They're many times smaller, both in terms of revenue and staff, compared to the large banks. So I would be very surprised if their lobbying power were able to compete with that. This is quite a weak take; I haven't looked that deeply into the history of this.</p><p>AARON</p><p>I feel like I'm going to add you next to Carl Shulman on people I can call upon to have a well-informed take on arbitrary matters.</p><p>NATHAN</p><p>Sorry, I missed that. Someone called...?</p><p>AARON</p><p>I'm mostly joking, but I said I'm going to add you to my list of people that I can call upon to have a well-informed take about any arbitrary matter.</p><p>NATHAN</p><p>That's very kind of you. Do bear in mind I have been working on this full time for the last, like, three weeks or something.</p><p>AARON</p><p>Whatever you say. I bet you still know random shit about other stuff too. Anyway. Yeah. Okay. I feel like, coming back to AI, I'm just generally more optimistic about industry-sourced... yeah.</p><p>NATHAN</p><p>Regulation. I think I am as well. On the current trajectory, I think it's very plausible that ARC Evals gets enshrined into law.</p><p>AARON</p><p>That's wild. Sorry? It's wild. I don't know, maybe it's not wild. It's wild to me. I don't know. It's just a little bit... it makes total sense, actually. It sounds right. It's just that putting Paul plus two other people in the same bucket as Moody's is, I don't know, a good illustrative vignette, I feel like, for the new AI era. Anyway, maybe I'm reading too much into this.</p><p>NATHAN</p><p>Yeah. I also haven't looked really deep. I've only looked into financial regulation; I haven't looked into other industries here. So again, I don't want to put too much weight on the financial regulation case alone.</p><p>AARON</p><p>And also, I feel like Paul always gets the credit. Paul's great. We know. There's also Beth Barnes and Mark.</p><p>NATHAN</p><p>Yeah. Beth Barnes is awesome.</p><p>AARON</p><p>We'll give them a shout-out. Okay. 
I'm probably missing more people. I'm sorry, other ARC people, on this podcast that gets dozens of downloads.</p><p>NATHAN</p><p>Dozens of downloads.</p><p>AARON</p><p>Dozens. Yes. Okay.</p><p>NATHAN</p><p>You just want to wrap it up. Maybe you just want to wrap us up. So, just one more take. It's that banks mostly haven't been able to game the system, haven't been able to game the stress-testing system. I think they have a bit, but the take I have, both from qualitative work, interviews with individuals working at banks, and from looking at stats on whether large banks have been able to get around regulation in various ways, for instance by not having to recapitalize themselves when other, less powerful banks would have had to: yeah, it seems like banks mostly haven't been able to game the system, and it's not just a tick-box exercise. Part of how this is achieved is that the Fed's, ECB's, and Bank of England's models are all private. The banks don't know them. And also the specific scenarios change year to year. I think this is not that important. I think it's mostly telling us that, yes, institutions as powerful as the Fed, the ECB, and the Bank of England are able, even in the face of very powerful and very sophisticated actors, probably the most powerful and sophisticated actors in the world economy, the large banks, to effectively use their regulatory power to constrain them. I think that's the takeaway here. Mostly, not entirely, but I think mostly.</p><p>AARON</p><p>This is actually pretty surprising. Yeah. I feel like if this wasn't the case, we would be seeing more financial crises. But, like, object level: there's so much incentive to basically just game a metric, and I don't know, usually when there are heavy incentives to game a metric, people are pretty good at gaming the metric.</p><p>NATHAN</p><p>Yeah. I think you shouldn't update too much on this, because large financial crises are, like, structurally rare. So, yeah, I think we don't have that much evidence to draw from here just from looking at base rates of financial crises. Okay, so I wouldn't update too much on this. Yeah. I think the qualitative evidence here looks quite strong, as in, when you interview bankers, they say it isn't just a tick-box exercise. And, yeah, secondarily, very large banks have been forced to not pay out dividends and not do share buybacks in order to recapitalize. Like Citigroup: in 2014, their share price dropped 6% when it was found they had failed the stress test. This is a serious financial penalty. This is, like, a no fucking...</p><p>AARON</p><p>Yeah, that's probably, what, billions of dollars in market cap. Yeah.</p><p>NATHAN</p><p>And they just weren't allowed to pay dividends after that.</p><p>AARON</p><p>I didn't even know that was, like...</p><p>NATHAN</p><p>...a thing. That's, like, a no-fucking-about test.</p><p>AARON</p><p>Yeah. Okay.</p><p>NATHAN</p><p>So I suppose, maybe, just maybe, I just want to sum up how this has very broadly updated me in terms of how I'm feeling about AI regulation. I think it's made me think that in worlds where we don't have warning shots, I'm a bit more scared, even quite a lot more scared. It really does seem like the financial crisis was really quite critical, and the Great Depression before that really was quite critical, in getting better regulation. 
I think that's, like, the major pessimistic update from this. I think the major optimistic update is that if you have powerful institutions like central banks, they can do really good regulation, what I judge to be really quite good regulation, against the most powerful actors, and the various, to use the sort of rationalist term, Moloch-like forces do not constrain them here.</p><p>AARON</p><p>Yeah. And if anything, I feel like, at least in this particular domain, there are various factors that actually push in the optimistic direction, like individuals being alignment-pilled, whereas bankers are maybe not banking-alignment-pilled.</p><p>NATHAN</p><p>Yeah, I think that's basically right. I think that's basically right.</p><p>AARON</p><p>Okay, cool. So we will resume at some point in the future.</p><p>NATHAN</p><p>All right. Okay. Awesome. Aaron, thanks for having me on.</p><p>AARON</p><p>Thank you.</p>]]></content:encoded></item><item><title><![CDATA[#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one]]></title><description><![CDATA[Follow Arjun on Twitter]]></description><link>https://www.aaronbergman.net/p/arjun-panickssery-solves-books-hobbies-108</link><guid isPermaLink="false">https://www.aaronbergman.net/p/arjun-panickssery-solves-books-hobbies-108</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Fri, 30 Jun 2023 00:01:48 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135570846/290d68b19c68ee9c492b68370e14ae11.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<ul><li><p><a href="https://twitter.com/panickssery">Follow Arjun on Twitter</a></p></li><li><p><a href="https://arjunpanickssery.substack.com">Read and subscribe to his blog</a></p></li></ul><h1><strong>Transcript</strong></h1><p><em>Note: created for free by <a href="http://assemblyai.com/playground">Assembly AI</a>; very imperfect</em></p><p>AARON</p><p>So welcome. Welcome to the Pigeon Hour podcast. Where do you see yourself? Wait, hold on. I need to get out of... I literally say this every single time. I always say I need to get out of podcaster mode and into conversation mode. I've said that at the start of every single episode, but it's still true. So what's on your mind?</p><p>ARJUN</p><p>Oh, you were in the book chat, though. The book rant group chat, right?</p><p>AARON</p><p>Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?</p><p>ARJUN</p><p>Yeah, it's a group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book and then, like, a few months later, if you ask them about it, they'll just say one page's worth of information or maybe, like, a few paragraphs. The other is that, what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then just forget the information. So it's actually fine. Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that.</p><p>AARON</p><p>Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. 
I don't fully remember exactly what it says, but at least one of the things that it said was and that I still basically stand by, is that it's basically just like it's increasing the salience of a set of ideas more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of like the intellectual core of it. It would be really hard. Okay, so this is not a book, but okay. I've talked about how much I love an 80,000 hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. And no, you're just, like, not going to definitely I've forgotten most of the actual almost all of the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting either model of the world or stuff that I know or whatever. I mean, there are facts that I could list. I think maybe I should try.</p><p>ARJUN</p><p>Sure.</p><p>AARON</p><p>Yeah. So what's your take on book other long form?</p><p>ARJUN</p><p>Oh, I don't know. I'm still quite confused or I think the impetus for the group chat's creation was actually Hanania's post where he wrote the case against most books or most was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are sort of not closely related. Like, he goes off against great. I mean, this is not the exact take he gives, but it's something like the books that are considered great are considered great literature for some sort of contingent reason, not because they're the best at getting you information that you want.</p><p>AARON</p><p>This is, like, another topic. But I'm, like, anti great books. In fact, I'm anti great usually just means old and famous. So insofar as that's what we mean by I'm like, I think this is a bad thing, or, like, I don't know, aristotle is basically wrong about everything and stuff like that.</p><p>ARJUN</p><p>Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that.</p><p>AARON</p><p>Yeah. Is there one that comes to mind as, like, an? Mean, I think of Malcolm Gladwell as, like, the kind of I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction or with any sort of intent, I read. A bunch of his stuff or whatever and vaguely remember that this is basically what he like for better or.</p><p>ARJUN</p><p>Um yeah, I guess so. But he's almost, like, trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like talib. Right? It's just him just ranting. I'm thinking, I guess, of books that are about something. So, like, if you have a book that's know negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant. Maybe I can think of an example.</p><p>AARON</p><p>Yeah. 
So the last thing I tried to read, maybe I am but haven't in a couple of weeks or whatever, is like, the Derek Parfit biography. And part of this is motivated because I don't even like biographies in general for some reason, I don't know. But I don't know. He's, like, an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.</p><p>ARJUN</p><p>Sure, I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right. It's the claims that he read, like, a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or I assumed no, I also saw.</p><p>AARON</p><p>That tweet and this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess. I don't think that's necessarily false or whatever, but this is probably not the most important.</p><p>ARJUN</p><p>Maybe it takes long time to brush his teeth.</p><p>AARON</p><p>Yeah, maybe. Also.</p><p>ARJUN</p><p>There'S a lot of books, actually.</p><p>AARON</p><p>And one weird thing. I think I tweeted about this a long time ago, and I think it got, like, one like or, like it's actually surprising how many books there are. Like, if you go into a library, at least I remember at Georgetown, just, like, walking in or whatever. Georgetown? Like the library Georgetown? Yes. No, it's not like a particularly big library, but there's so many books. I can't believe actually somebody wrote all those. It's actually kind of surprising. It kind of breaks my brain a little bit. And so maybe I don't know, maybe people are talking about different sets of books or something like that. And that explains some of the disagreement here.</p><p>ARJUN</p><p>Sure, I guess. I have a friend who told me once that every time he passes the bookstore, he gets kind of depressed briefly because he's like, wow, look at all these utterly useless books that no one should ever read.</p><p>AARON</p><p>No. Yeah. I don't know.</p><p>ARJUN</p><p>I guess there's no reason ever, for anyone, basically, to read most of these books. Also because ultimately, you can only read so many books. Right. Even if you read a book every month, which is more than basically anyone reads in practice, that's what, twelve per year. And then if we each live for another 60 or 70 years, then that gives us definitely less than 1000 books. And so basically, most books are not worth reading at all, right?</p><p>AARON</p><p>Oh, yeah. Definitely more realistically.</p><p>ARJUN</p><p>People read, like, five books a year, and then after six years, they get, like, 300 books.</p><p>AARON</p><p>Yeah, I think it's like, the vast majority of I'm literally just, like, picturing. I think, like, sometimes I'd actually, like, just, like, try to, like, find a random book and yeah, they're on, like, the most random shit ever. It's like, I don't know, like, 1970 is, like, a study of, like or, like, you know, anthropology of, like, some Icelandic, like, motherhood, like, ritual practice or something. Like, I'm totally making this up. But, yeah, maybe not 99.9. Probably 99% of books are not worth reading. But that still leaves, like, I don't know, a lot. A million? Probably not a billion. Several, like, more than 1000. 
A lot of books. Probably 100,000 or a million or so.</p><p>ARJUN</p><p>Yeah, I'm not sure. I mean, also, you could just read way more than other people read or whatever. I think the great books of the Western world it's like the thing that you see in libraries are similar. That's like a volume of books in that leather looking thing. If you're familiar. I think that would take, like, a couple of years to read if you read for, like, an hour or two every day. But then most people obviously would never do that.</p><p>AARON</p><p>Fun facts.</p><p>ARJUN</p><p>You can't really oh. What?</p><p>AARON</p><p>I won some award in high school for, like it was for intellectual curiosity. It's like the St. John's College Book Award, and they gave me a bunch of old books, and I think they're under my bed somewhere because I had zero interest whatsoever.</p><p>ARJUN</p><p>Sorry.</p><p>AARON</p><p>It's like, neither here nor there.</p><p>ARJUN</p><p>Oh, wait, st. John's College is that place in Annapolis where they just read old books. Right. Instead of getting normal.</p><p>AARON</p><p>I'm like, that's, like, the last place I would ever consider.</p><p>ARJUN</p><p>Yeah, or returning to that topic where you said the great books are bad, the great books are bad, thing becomes or, like, their strongest cases when people actually read mathematical or medicinal or scientific texts that are just out of date for no reason. Right. This is like an absurdity to read Euclid's elements for basically no reason.</p><p>AARON</p><p>Yeah, I think that's pretty clear. But then for some reason I mean, not just for some reason, for identifiable reasons, I guess once you move into philosophy and even more so, like literature, it just assumed that even above and beyond the historical value of knowing what previous philosophers thought, people just assume, I guess, like Aristotle and Plato. I'm just like naming old people. I don't know who else are just, like, worth reading. I just disagree.</p><p>ARJUN</p><p>Yeah, I suppose you could divide it into two categories. There are disciplines where people make progress, like mathematics, where it doesn't really make sense to read some sort of old mathematician because he's just going to be worse than a modern mathematician. And then there's disciplines where people make no progress, like philosophy or art or literature.</p><p>AARON</p><p>I'll fight you on that. I think philosophers make progress.</p><p>ARJUN</p><p>Okay, maybe. But you agree that whatever novelists don't make so much progress. They just sort of change their so. You know, a book that's really old or a book that's considered, you know, tolstoy is probably just the best, because it's not like between Tolstoy and now, people got really good at doing that kind of thing at whatever. It is that, you know, display human nature in a like I agree that.</p><p>AARON</p><p>There'S not progress per se, but there's a lot more people and a lot more books. There's, like, the Lindy thing, so maybe it's hard to identify them, but I would be surprised if, just like, the in some sense, most of the best works of literature are, like, at least post 19, I don't know, 1950, 1970 or something. 
Just because the number presumably.</p><p>ARJUN</p><p>I guess I could count, but I don't think the most of the great works of literature are past 1950 or like great according to me.</p><p>AARON</p><p>Great, according to me.</p><p>ARJUN</p><p>Wait, name some great novelists born after 1950 or who were working after 1950.</p><p>AARON</p><p>I don't read fiction. You can't put me on the spot like that.</p><p>ARJUN</p><p>Oh, interesting. Wait, what did you mean by great books then? You meant like Derek Parfit?</p><p>AARON</p><p>No, what's the word? I forget about the word great. Why do I not read fiction? I don't know. Just because I find narrative nonfiction entertaining. And also, I don't know, I don't read a ton. Anyway, like books, that is. But I could, right? And I still think I would have aesthetic judgments about what books are better than others in all things considered sense. So if I just imagine myself doing that and ranking all of them, probably maybe the average quality is even getting worse over time. But the top 100 are going to be almost uniformly distributed across the space of books, not across the space of time or whatever. And the space of books is really concentrated in the last century at least. Presumably even more so, like post internet, although I'm not sure about that. Yeah.</p><p>ARJUN</p><p>Maybe relatively speaking.</p><p>AARON</p><p>Yeah. Should we move on to a more interesting topic?</p><p>ARJUN</p><p>I guess wait, I guess there are a couple of reasons to read books, right? Or one solution is that you could just pretend to read books. So like wait, I think I count on this in the group chat. There are three reasons to read books, right? One is to signal your high status by just credibly claiming that you read the book or by making references to it that indicate that you're familiar with it or read it. And the second thing is to learn some kind of specific information, the goal for which you decided beforehand. And the third is to just learn lots of stuff miscellaneously and then hope that this helps you. The first one, what would they call it? Goal factoring. You could replace the first one by just sort of pretending that you read these books. I played a lot of Quiz Bowl in university and a lot of my friends there, they read a lot of Wikipedia summaries of books and movies and ultimately in conversation. There's not a huge difference in their ability to make the funny references to such and such person or event in the movie because most people don't remember very much anyway, right?</p><p>AARON</p><p>Yeah, for sure.</p><p>ARJUN</p><p>So you could just pretend and then the distinction between the second and third is more puzzling or whatever. Like, Holden has a blog post where he says that he only thinks that reading is useful if you're reading with some sort of hypothesis that you're trying to disprove or get evidence for or some kind of question. Otherwise he just pretty much forgets anything else that he reads. But then, I don't know. There's the thing about just getting more intuitions or impressions or like a better general model even if you forget specific.</p><p>AARON</p><p>Information, just like, fair warning, I'm going to hop on video, at least for a little while. Let me make hold on. Make sure the right kit. Okay. Hi. Hello. That was dumb. I was going to ask, but can you see me? Obviously you can see me. 
Yeah, no, I'm pretty sympathetic to Holden's point, and actually I was just thinking... so one thing I did read, and maybe this is actually an exceptional example: I did read What We Owe the Future with the intent, and this actually might still happen, along with 20 gazillion other projects that might still happen, to review What We Owe the Future. And a lot of it was like, oh yeah, I'm very much convinced by some of MacAskill's arguments or whatever. But I was especially on the lookout, especially because I'm kind of contrarian and my review couldn't just be like, oh yeah, it sounds good, or whatever. Or it could be, but you want to be on the lookout for things that maybe don't check out or whatever. I feel like a lot more stuck with me because of that, because I was reading with an eye out for takes I could jump in with or something.</p><p>ARJUN</p><p>Yeah, I'm not sure. I used to play a lot of chess, and when you're playing chess, one of the most common ways to study, probably, besides doing tactics exercises, is to just review your games in depth. Like, to go move by move and think about which moves you could have played instead. Maybe first by yourself, just taking advantage of having more time than you had in the game, and then afterwards with a computer to see what the actual objectively correct move was, basically. But then another kind of study that people undertake, maybe when they reach Class A tournament player level or something, is to just skim through hundreds of games at a rate of, I don't know, one move per second, that's about one game per minute, and just click through them. I've done this for a lot of games and it's pretty common advice. I think most, or a lot of, people have done this. And the idea is that you just get an intuition for what move tends to be right or what position tends to be good, and you just click through them.</p><p>AARON</p><p>I have an idea, actually. Now, all of a sudden, I'm kind of tempted to figure out how to do this, because the second thing sounds better than the first. Because I definitely think building up intuitions via brute force and direct feedback... and direct feedback, I guess, in some sense here is coming from knowing who won, and also just predicting what excellent players play, probably more so the second one. But presumably now you could just play a game where on one side of your screen you have the game, and on the other side you have the immediate feedback, being the probability that the computer thinks you're going to win, and so you can see it moving up and down as you play. And this seems like maybe... I'm totally epistemically trespassing here, because I haven't played chess. I think I've played one game of chess in the last ten years or something, but this maybe seems like a good way to get better. It sounds kind of fun also.</p><p>ARJUN</p><p>Yeah, this is common on Lichess. I think there's a guess-the-move kind of thing where you play through a game and each move, it just tells you how much worse your move was than the best move. I haven't done this in a while, but it's kind of different. But yeah, you're saying something like: you just play a game, but then it just tells you whether your move was good or bad without saying what the better move was.</p><p>AARON</p><p>Or it tells you what the probability is that you're going to win, and you see that going up. And then next to that could be the change since the previous move. So it goes from, like, 0.60 to 0.62 or whatever, and you see, oh yeah, that went up, decent move or something like that. And then maybe you make a blunder and it goes down or whatever.</p>
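<p><em>A minimal sketch of the kind of feedback loop Aaron is describing, assuming the python-chess library and a Stockfish binary on the system path (both assumptions; no specific tool was named on the episode). The centipawn-to-win-probability conversion is a rough logistic approximation, not the engine's own win/draw/loss model:</em></p><pre><code># Sketch: after each of your moves, print the engine's estimated win
# probability (from White's point of view) and the change from the
# previous move. Assumes `pip install chess` and Stockfish on PATH.
import chess
import chess.engine


def win_probability(centipawns: int) -> float:
    # Rough logistic mapping from centipawns to expected score,
    # the same shape used in Elo expected-score formulas.
    return 1 / (1 + 10 ** (-centipawns / 400))


def play_with_feedback(depth: int = 12) -> None:
    board = chess.Board()
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    prev = 0.5  # the starting position is roughly even
    try:
        while not board.is_game_over():
            board.push_san(input("Your move: "))  # e.g. "e4", "Nf3"
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            cp = info["score"].white().score(mate_score=10000)
            prob = win_probability(cp)
            print(f"White win prob: {prob:.2f} (change {prob - prev:+.2f})")
            prev = prob
            if board.is_game_over():
                break
            # Let the engine play the reply so the game continues.
            board.push(engine.play(board, chess.engine.Limit(depth=depth)).move)
    finally:
        engine.quit()


if __name__ == "__main__":
    play_with_feedback()
</code></pre><p><em>A fuller tool would probably use the engine's built-in WDL output and show the bar from the human player's side rather than always White's; this is just the "number next to the board that moves when you move" idea in its simplest form.</em></p>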
<p>ARJUN</p><p>Yeah, sure. I mean, then this seems like less feedback than if you just learned which move you should have played, because all you learn is that your move is not good.</p><p>AARON</p><p>Maybe if you're an excellent player. But if you're not an excellent player, then you're not going to be choosing among four plausible best moves, one of which is definitely going to be the best move. It's going to be like you often have a choice of 15 moves or something. And so whether you play the second best or the 15th best...</p><p>ARJUN</p><p>Those should...</p><p>AARON</p><p>...be meaningfully distinguished, I guess.</p><p>ARJUN</p><p>Wait, say that last part again.</p><p>AARON</p><p>So you can imagine, like, 15 possible moves. And if you're an amateur player, which I guess I implicitly am because I don't really play, maybe I should... anyway. Yeah, you have the best move or something, which is ranked one out of 15, and the worst move is ranked 15 out of 15. But you want feedback that distinguishes between making move two out of 15 and move ten out of 15 and move 13 out of 15 and stuff. Instead of just saying, oh yeah, this is the best move or not the best move.</p><p>ARJUN</p><p>Yeah, I guess it's a bit like, I don't know. I mean, I guess you could do this because, I don't know, most of your thinking happens in important spots anyway. So for most of these moves you would sort of be playing the obvious move, and then in a few moves you would just want to know which move you should have played, right?</p><p>AARON</p><p>Yeah. I'm guessing this is probably because you'd have a higher chance than I would, at the current moment, of just playing the best move. And I would think this is, like, a rarity rather than a plausible outcome or whatever, because...</p><p>ARJUN</p><p>Was it Alekhine or Morphy or somebody? Some journalist asked some world chess champion from the past how many moves he thinks ahead. This is a question people love to ask: how many moves do you think ahead? And usually people like to jokingly answer "one," because most of the time they just play the first move that comes into their head or whatever, because it's the only move that seems to make any sense. And it's only occasionally that they have to calculate, whatever, ten moves ahead, which they're able to do.</p><p>AARON</p><p>Ten moves? Sounds insane.</p><p>ARJUN</p><p>Right? Some of those moves will be forced moves, and so they don't actually branch out, and stuff like that. Sure, yeah.</p><p>AARON</p><p>Do you think chess was, like, an enriching experience or, like, a dumb signaly experience, or both?</p><p>ARJUN</p><p>I mean, it was pretty good.</p><p>AARON</p><p>Okay, cool.</p><p>ARJUN</p><p>Not the best, neither the worst.</p><p>AARON</p><p>I don't know. It's the kind of thing that I feel like I'd have a hard time, for some reason, I feel like I have a hard time getting into, or whatever. It's like, oh, what's the point, dude? Whereas maybe that's not true. I don't know. Maybe I'm, like, checking stuff out here.</p><p>ARJUN</p><p>Probably not the optimal hobby.</p><p>AARON</p><p>I mean, no, it's fine. I'm pro hobby. It's fine.</p><p>ARJUN</p><p>No, I think I like optimal hobbies. Have you read Richard Ngo's post on the optimal hobby? 
I could just pull it up and then read from it. But he's trying to pick different Hobies, but he would just like to pick the ones that are best. Some activities are kind of dumb, right. Like, people do ballet as children, but then you can't really do ballet as an adult, and so it just means that it's kind of like a silly choice or whatever, right? I don't know. I played the trombone as a kid, and then it was good to play in marching band. But then I suppose that if I had played the piano, this would have been more practical. I don't even have my trombone with me. It's at my parents house in New York.</p><p>AARON</p><p>I relate to that. I mean, I played trumpet and stuff. I didn't even like it, but I don't know. Yeah, definitely a waste.</p><p>ARJUN</p><p>Yeah. Or whatever. And then he talks about sports or whatever, cardio activities, and then he goes through a bunch of them and then rules some of them out, and then he concludes that salsa dancing is the best. Maybe I did cardio activity.</p><p>AARON</p><p>Yeah. Wait, that sounds like okay. Yeah. I actually respect this type of analysis. I wouldn't go too hard. I wouldn't commit oneself x ante to doing whatever your analysis comes out with. But yeah, this seems like pretty valuable.</p><p>ARJUN</p><p>Here it is. Here, I can just read a quote from it for some time. Quote at least one of should be a cardio intensive sport for the sake of my health. Some people enjoy running, but I find it very difficult to motivate myself to do endurance sports. I just don't get any runners high. And then he says that he rules out cycling, swimming and rowing because he gets bored or thinks that he has more fast twitch muscles. He says that he could do hiking, scuba diving, kayaking, et cetera, but he doesn't enjoy nature enough for this. The team sports are good, but basketball and volleyball are for tall people, which Richard is not. And then he says that he could play rugby, football or hockey, field hockey. But he says that rugby is too physical for him for his taste. Field hockey is not very common. He might play oh, by football I guess he meant soccer. So he says that he might play soccer casually, but he doesn't want it to be his main sport because if he changes jobs or cities, he would have to find a new team and stuff like that. And second, because he would prefer in the abstract, to have some sort of individual sport where he can take individual responsibility for his win or a loss. He also claims that, thirdly, soccer is not as intense as other sports because you spend most of your time without the ball. That seems kind of a silly take to me, someone who played soccer in high school. He memes on cricket and baseball as just like joke sports. He says he can't imagine himself playing them and then doesn't elaborate. He says that racket sports are good because it's like an individual sport and then you're playing the whole time and so on. Oh, maybe he might have meant, actually, that he's on the bench. That is what he means by not running all the time. Not that he's not running all the time when he's on the field. He says that tennis is the most popular, but quote I never enjoyed it as much as the others because it's easier to hit the ball out, making the average rally significantly shorter. I guess this is a concern, or I don't know, maybe it increases the upfront cost of improvement. 
He says that squash is his bracket sport of choice, but then after a few decades, he would accumulate knee injuries, so maybe he could play it and then phase it out or something. Then, yeah, water sports and sorry, ice hockey and skiing and stuff, he rules out because it's like he doesn't like these expensive, dangerous, sort of exotic sports that require you to travel to a cold place or water or other strange things. He lists juggling as a cardio sport that he rejects. I'm confused about that. A little bit, but anyway, he ends up with dancing, and then he picks a specific dancing type or whatever you call it based on their prevalence, and then concludes that this is the best thing.</p><p>AARON</p><p>I'm slightly in the market for a hobby. Not even slightly, I guess, like, moderately. So I'm going to look into salsa dancing now.</p><p>ARJUN</p><p>You could build AI projects and post them on Twitter.</p><p>AARON</p><p>That's not a physical. I'm not in need of, like, nerd snipey computer hobbies. I'm in need of actually do something in the world, like touch grass.</p><p>ARJUN</p><p>Of course, you could pick up the guitar, do some power lifting. Real hobbyists.</p><p>AARON</p><p>Yeah, true. I was into juggling for a long time, actually. I don't know why. Didn't even give it up. I just sort of stopped doing it gradually or something.</p><p>ARJUN</p><p>Juggling is not so bad. I can juggle three balls and then do maybe a couple of tricks, like tossing it between my leg or off my head or whatever. I never bother with trying to learn four. I would definitely never bother with trying to learn five because people would not be more impressed. Oh, you could do five.</p><p>AARON</p><p>No, people are definitely more impressed.</p><p>ARJUN</p><p>Are they really? I feel like definitely not proportionally to the actual increased difficulty. Maybe they're not even twice as impressed, I feel like. And meanwhile, it's, like, so much harder.</p><p>AARON</p><p>No, because three is normal. Four is, like, abnormal. I wouldn't be shocked. I'm not that surprised that you can do three. I would be surprised to hear you say you can do four because I don't know, you can't just casually pick up four.</p><p>ARJUN</p><p>Right, okay. Yeah, maybe you're right.</p><p>AARON</p><p>I could juggle something up right now. I think I actually tried busking once. Like, $20. It's not worth it over, like, 4 hours or something. This is like in high school.</p><p>ARJUN</p><p>Yeah. Hobbies. Yeah. Wait, going back to books. Going back to pretending to read books in addition to pretend. So I just said pretending to read them as one way that you could get the social status associated with being a guy who reads lots of books and references them. But I guess you could also optimize for this by reading books that are shorter or like, reading poetry, because poetry is less content. There's less of it.</p><p>AARON</p><p>Yeah. I don't know. I feel like this is just, like, not people should not I'm pro signaling or signaling is, like, a fact of life. I'm like signaling pills, but people should do less of it. And just, like I wish I don't know. I, like, anti all this.</p><p>ARJUN</p><p>Fine. 
Yeah, go ahead.</p><p>AARON</p><p>Keep going.</p><p>ARJUN</p><p>Oh, I think I happen to believe that whatever literary or critical rankings of works of literature is basically accurate and not super my I was discussing on Twitter a few weeks ago with somebody about Casablanca, which I rewatched on the train about the degree to which because when I read the Wikipedia page after. It appeared that a bunch of contingent factors made it famous. Like it just happened to be screened at Harvard every year before finals after the war, and this led to people knowing what it was, and then it sort of regained its fame later than after.</p><p>AARON</p><p>I have, like, zero feel like I've seen posters but never watched it. Don't really know what it's about.</p><p>ARJUN</p><p>Sure. Wait. I guess the point at the end would have been that this is widely considered the best movie, but then there are a bunch of the plot is kind of OD in some ways, and there are a bunch of factors that seem sort of random that are related to it being famous. Yeah, but I don't think this is true in general. I guess there's a category of books or movies that are famous because they were the first in some respect, and not because they're really good or because they were relevant to some local event that happened around that time, and not because they're really good. Aesthetically. I don't know. I can't think of examples.</p><p>AARON</p><p>Yeah, I don't know. I feel like for fiction stuff, I'm just extremely in the camp of enjoy whatever you like, don't really care. Maybe you can get into the philosophy of aesthetics and say, but don't really find that that interesting or important. I don't know if you like the movie, it's like you should watch the movie, who cares? Whereas nonfiction, I think, is different. It's like plausibly has externalities in both directions.</p><p>ARJUN</p><p>Sure. Yeah.</p><p>AARON</p><p>Are there any nonfiction books that you really do appreciate having read? Or fiction? I guess.</p><p>ARJUN</p><p>For nonfiction I was going to say the Elephant in the Brain, of course. But then could it have just been a blog post? I mean, I don't know, it's kind of dense, but then I don't know. Some books are dense. Maybe textbooks are the best books. People should just read textbooks. Forget other kinds of books, just read more.</p><p>AARON</p><p>Yeah, I've never actually done this outside of a class, but I actually kind of think it does, at least for some, for quantitative or I guess formal subjects or whatever, you can't just read them or you probably can't just read them and take notes. Either you need to do stuff or exercises or whatever. I don't know. For some of the social sciences and other stuff. Yeah, it does actually sound I don't know. Yeah, maybe I just should read Intro to Sociology textbook or something. I don't know.</p><p>ARJUN</p><p>Yeah, there's like an old school less wrong post from like, 2011 or 2013 where the guy tries to curate the best textbooks on every subject and says that reading blog posts about stuff like economics is cringe. 
You should just read more, like, real textbooks.</p><p>AARON</p><p>I'm pro cringe.</p><p>ARJUN</p><p>It's a lukeprog post from 2011 where he's just like, why do you read blog posts, Wikipedia articles, podcast episodes, when you could just read a textbook, do the exercises, and then proceed with your life normally?</p><p>AARON</p><p>Well, okay, so the actual answer to that is that I like listening to podcasts and I don't like reading textbooks, which, I feel like, is actually very important. I don't know, something that you legit like doing, both intrinsically for your quality of life and also for how much you actually do it, is, like, a huge factor or whatever.</p><p>ARJUN</p><p>Yeah, I guess so. Yeah, I guess it also depends on whether you're trying to consume information with some sort of goal, like whether you're like, oh, I want to learn how to write better, so I'll read a bunch of books on this topic, or, I want to understand what Canada is like.</p><p>AARON</p><p>Yeah, there are different degrees. On one extreme, I have some experience with this from college: there might be part of a class just designed around the content of a book. So one of the books I was made to read that I do appreciate reading, which was very dense, is A Secular Age by Charles Taylor. I actually didn't read the whole thing, but the class was largely about that book. And then on the other side, you could just have totally undirected, purely aesthetic concerns. And I guess, I think for me, at least, the things I actually listen to fall in between. I subscribe to things whose content I do find generally interesting and, I guess, useful in some sense. But it's not, like, super goal-directed.</p><p>ARJUN</p><p>I don't know. Yeah. And then there's the balance, like you said earlier, between optimizing for getting as much information as you can and doing something that's actually pleasant or is compatible with your drive to work or whatever.</p><p>AARON</p><p>Yeah.</p><p>ARJUN</p><p>Oh, speaking of audio content, I've been struggling for... I'm not struggling anymore. I basically gave up on trying to become an audio content consumer who is always consuming audio content all the time.</p><p>AARON</p><p>I'm an audio content consumer. Consumer with multiple o's.</p><p>ARJUN</p><p>Yeah, I was going to say the same thing. I was going to say consumer with multiple o's. It's like, oh, look, I'm consooming. But no, I can't seem to get the hang of it. I tried to listen to audiobooks, like novels, but this didn't... I don't know, it was just hard to focus or something when I was walking around from place to place or in a car. Even nonfiction was slightly better. Some books are okay, possibly because of the way the book is written or merely because of the way the voice sounds specifically. But I couldn't really get into it. Um, like, Rob Wiblin, I don't listen to his podcast.</p><p>AARON</p><p>I fucking love the 80K podcast. Okay, keep going.</p><p>ARJUN</p><p>I got Natural Reader and tried to listen to my daily Substack skimming through Natural Reader, and it's, like, fine, I guess. I don't know. I can also read faster than I can consume audio content, even at, like, 3x.</p><p>AARON</p><p>Okay, I don't believe that anybody understands audio at 3x. I know.</p><p>ARJUN</p><p>I'm saying that I can read words on a page at whatever speed.</p><p>AARON</p><p>I wish I was in your position. Honestly, I'm the total opposite. I have no reading comprehension. 
I have only audio comprehension. That's not literally true, but directionally it is. I don't know, it seems like that's fine. You don't have to consume audio if you don't want to.</p><p>ARJUN</p><p>But it would be so optimal. I could consume so much. I could just consume all the time.</p><p>AARON</p><p>I don't know, depends how much walking toward type stuff, right? Probably not all the time.</p><p>ARJUN</p><p>All the time, right. I could also just sort of but I find podcasts okay to listen to, possibly because podcasts are given in a style that is conducive to listening, like with repetition or certain kinds of speech or something. Or maybe it's just because I'm more tolerant of just not paying close attention to podcasts and not knowing exactly what's going on all the time than I am with a book.</p><p>AARON</p><p>Yeah, no, I think this is true, but I think it's largely just like our brains are really well designed to pick up conversation and less so. Yeah, books are written not to be read out loud and so there's a fundamental disconnect there.</p><p>ARJUN</p><p>Yeah, some of it is really like when I first started trying to listen to audio content, it was really tedious to listen to parentheticals that I don't sub vocalize when I read them or whatever. Yeah. Also, I'm pretty sure that when I read text, I jump back right. Briefly, momentarily.</p><p>AARON</p><p>I'm illiterate. I don't know what you're talking about.</p><p>ARJUN</p><p>I mean, like when you're reading, your eyes sort of dart back to previous paragraphs or they sort of look at the shape of them or something.</p><p>AARON</p><p>It's been so long since I've intentionally read something.</p><p>ARJUN</p><p>Wait, surely you read blogs that you follow at least?</p><p>AARON</p><p>Which blogs?</p><p>ARJUN</p><p>Oh, I don't know, I just assumed you were a substac consumer.</p><p>AARON</p><p>Yes, so I will skim blog posts in terms of actually genuinely making semantic content pass through my brain. It is all in audio form.</p><p>ARJUN</p><p>Oh, okay. Yeah, well, I was thinking of actually just taking my substac posts when I in the very near future. And this is a commitment post lots in July, and then just sort of say them into a mic. And then a bunch of people said that they would actually consume the content this way when they wouldn't read the post.</p><p>AARON</p><p>Otherwise I should do this. Actually, maybe I just will today. Like one of them. I don't know. Yeah, no, I also want to keep saying I want to blog more. I don't actually do it. Yeah, I need to figure out how to lower my standards.</p><p>ARJUN</p><p>Right, that's important because it's like high.</p><p>AARON</p><p>Quality if I do say so myself.</p><p>ARJUN</p><p>And so it's like costs are good.</p><p>AARON</p><p>Yeah, but a lot of this took like, a long time, so it's like a real cost.</p><p>ARJUN</p><p>People often mention to me my post on taking more stimulants, but not many of the others.</p><p>AARON</p><p>Do you want to talk about that?</p><p>ARJUN</p><p>Could I guess? Or maybe after I finish this line, which is that my inclination to write more posts goes up. One, the more I see that posters that are well known or have a lot of followers just write things that are not very good. I can't really think of examples, but I'm just like, I often see post and like, oh, wow, this is it. This is just what he thought up the top of his head and then wrote down. Yeah. And then second, the degree to which I think that it's actually just quantity over quality. 
If you just post more, you get more clout, irrespective of how good the posts are, if you just churn out more. And so I'm like, oh, I should do this. I should just post three times a week. I'll just post a book review every Monday. I mean, I read enough books, I'm sure, or could. Yeah, I don't know. I don't finish books. But if you think of books, like published books, I read a book's length of pages pretty frequently. And I think it's good not to finish books, because many of them, as we have discussed, are not very useful.</p><p>AARON</p><p>Oh, wait, we have to talk about the Sleeping Beauty problem. I'm sorry, that's like a requirement. Unless you really don't want to.</p><p>ARJUN</p><p>Sure, that's fine.</p><p>AARON</p><p>Okay. But sorry, I was, like, pulling up my Substack, which is why that reminded me. I cut you off.</p><p>ARJUN</p><p>Oh, what was I going to say? Oh, yeah, I was going to say my inclination to post more increases, basically. I mean, it makes sense that it increases as I become more confident that I could do it well, which is increasing because it seems easier than I thought to just post more, be more energetic about it.</p><p>AARON</p><p>Yeah, I explicitly endorse this.</p><p>ARJUN</p><p>Yeah. Wait, what if I actually just post tons, right? Because people post different kinds of things. People post insight porn, right, which is where they have some sort of idea that's, like, novel, and then a toy model of it. But then people also post book reviews. People also post just lists of things that they saw on the Internet that week. Right? And so if you write these kinds of posts, you can really just churn them out, and then all you need is, I mean, you don't need it to be a large percentage of all the people who look at your blog, but even a small percentage of them who just sort of like all of your content and just like hearing you talk. I mean, I like hearing myself talk, so it stands to reason that other people also like hearing me talk.</p><p>AARON</p><p>Of course, no, but yes. Let me look at what my top posts are. I'm glancing at this right now.</p><p>ARJUN</p><p>Is it the Ivy League thing? Yeah. Did you see my "stuff I buy and use" post? It starts with a really long list of other such posts.</p><p>AARON</p><p>Wait.</p><p>ARJUN</p><p>It's an absurdly long list of every other such post that I found on the Internet.</p><p>AARON</p><p>Okay. This is such an effort post, though. So this is kind of, I mean, it is what it is, right? I guess effort posts, all else equal, are better. Most Ivy-smart students are not at Ivy-tier schools. Wait, how many views does this have? Substack keeps trying to get me to ask people for money. Nobody wants to pay for my writing?</p><p>ARJUN</p><p>No, you have to sell them associated products.</p><p>AARON</p><p>Right?</p><p>ARJUN</p><p>You have to be like Richard Hanania and charge people $150 to just speak to him for 30 minutes.</p><p>AARON</p><p>If anybody wants to talk to me for 30 minutes, I will charge only $149.</p><p>ARJUN</p><p>Wow.</p><p>AARON</p><p>Actually, realistically, I don't know, I feel like that's what I will charge, as long as you're willing to accept that I might cancel within an hour before.</p><p>ARJUN</p><p>That's the only cost. The only cost is that it might not happen, but otherwise it's free.</p><p>AARON</p><p>No, honestly? Yes. I don't know, unless it's something where, maybe there's another category of things where it's boring or I actually have reason to squeeze out money or something.
And then I would charge like, $30 a half hour or something like that. This has, like, 3.4 thousand views. It's just, like, not very many. What's my minimum? I think I see 51. No, but those are podcast things that I just posted subsequently for no reason. 135, 100, and 779. I have 69 views. Yeah, but this is like, my second post is, like, terrible. Not deleting it. That's 69 views. Zero new subs. Okay, sorry, this is, like, not relevant. Do you want to talk about the Sleeping Beauty problem? And also why I think that my reply to you on my Substack was actually bad and wrong, but I'm still right overall.</p><p>ARJUN</p><p>Okay, yeah. Wait. I posted a comment and you replied, and then I think that the conclusion was something like, oh, to simplify: if you're the experimenter and you have two subjects, and suppose that for one of them the coin flips heads and for the other one the coin flips tails, and then you take even money on the bet, then you'll just lose money. Because the second person, I don't remember which is which in the Sleeping Beauty problem, but the second one will get paid twice, and then you will only get paid once.</p><p>AARON</p><p>Yes, I think I might have a more formalized response written somewhere, but basically, betting odds just are not the definition of probability. In most situations in life, they go together. In this one, they don't. That's, like, the actual thing I should have said, and I made, like, a fake reply. So ignore what I said before. I'm like, actually yes, the coin flip, it's a fair coin, it's one half, it doesn't really matter. So I think the analog I was thinking of is like, okay, so let's say we have a bet and then I'm going to flip a fair coin, right? Now, if it lands, whatever. Basically, you can choose which side you want to bet on. But if you lose, I point a gun at your head and I make you wager twice the amount, retroactively. It's like, okay, well then it's not fair anymore. The probability is still one half either way, but I could just impose some condition that makes the betting odds just not work out anymore. And in particular, when the outcome is correlated with the amount being wagered, which in this case it effectively is, it just doesn't check out. Sorry, I've also thought way more about this than probably every other thing.</p><p>ARJUN</p><p>I haven't thought about this very much and I didn't quite follow your remarks just now, but I guess my position, my unconsidered position, is something like: I'm not sure what exactly probability means, independent of expected utility or whatever, or independent of what you would bet. So if you agree that you wouldn't take even money if you walked into the room and saw somebody wake up, then I'm satisfied with whatever other statements you want to make about the probability.</p><p>AARON</p><p>I think there's a lot of things besides that. There's the Everettian, so that's, like, multiverse interpretations, which I think actually works out really well. And there's just, like, an extremely common sense thing in which, yeah, I am expecting heads as much as I'm expecting tails, deep in my bones. And you know what? You're not going to convince me that's, like, fake or not legit. I mean, I don't know about legit, but it's reflecting some real, deep, fundamental thing about probability. Oh, and also, I forgot about this: you just plug it into Bayes' formula, get one half. That's just how it works. I don't know, Bayes' formula seems pretty good.
Or Bayes' theorem.</p><p>ARJUN</p><p>Wait. So to clarify, suppose you knew how this experiment was being run and that it was being run here in my house, and then you came to Berkeley, and then you walked into the room, but you didn't know what day it was. You just walked into the room knowing that the experiment was taking place and that you showed up on either the first day or the second.</p><p>AARON</p><p>So is this just like the Sleeping Beauty thing where I am Sleeping Beauty?</p><p>ARJUN</p><p>Or, yeah. Wait, this is a simpler way to say it. I was just trying to be more concrete.</p><p>AARON</p><p>No, it's fine. Wait. Yeah, I guess the answer to what is my actual belief about the probability that the coin landed heads is... wait, this doesn't matter.</p><p>ARJUN</p><p>Let's say you're participating in the experiment. Yeah, and then you wake up and then I offer you a bet on what day it is.</p><p>AARON</p><p>Well, where does the bet come in? The bet isn't part of the problem.</p><p>ARJUN</p><p>But then wait, even though it's not part of the problem, could you say what odds you would accept on the bet? You could say that it's one third and then still say something else in some other meaningful sense.</p><p>AARON</p><p>I guess the probability is actually... I think that's basically it. That's basically it. But here, wait, my comeback is that the analog here is: okay, instead of waking up twice, you wake up once, but if you lose the bet, I just force you to double the amount, and that is isomorphic to what is going on in the Sleeping Beauty problem. Or like, I force you to double the amount that is wagered if and only if you lose. It's like, okay, I don't know, what am I even trying to say here? That's definitely analogous in some sense. I'm having a little trouble formalizing exactly how.</p><p>ARJUN</p><p>I'm not sure. I haven't thought about this very much. The vague comment that I made earlier, that besides the bet question I'm not sure what the meaningful question is, is basically the entirety of my intuition. And then also, I guess, a third is the answer to that question, so it makes sense to me that it would be the answer to this question.</p><p>AARON</p><p>Yeah, okay, we're probably not going to solve this. One thing I did find interesting was that people just had the opposite intuition about the infinity, or the countable case. Actually, somebody pointed out to me that the countable infinity thing gets weird, because there's the fact that you never actually wake up, or the experiment never actually ends. But if you just say, okay, instead of two versus one wake-up days, we use ten to the ten to the ten wake-up days versus one, some people really have the intuition that, okay, waking up, you should basically just be absolutely certain that it landed tails. Like, that you're in the world in which you're in one of the ten to the ten to the ten days where you wake up. And I do not share that intuition at all. No, there's some very intuitive sense in which there's, like, a one half chance that the coin landed heads, and then you're just in the other world, you're not in the ten to the ten to the ten world. But I don't know, it's hard to argue it in formal or propositional terms.</p><p>ARJUN</p><p>I guess. Is there a close analogy that I can think of in a matter of seconds that would cause you to have the opposite intuition? I mean, I could try to think of one.</p>
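<p><em>A minimal sketch of the betting asymmetry described above, assuming the standard setup (a fair coin, one waking on heads, two on tails, and a one-unit even-money bet on heads at every waking). The function and variable names are made up for illustration:</em></p><pre><code>import random

def run_experiments(n=100_000, seed=0):
    """Simulate the Sleeping Beauty setup: heads = one waking, tails = two wakings."""
    rng = random.Random(seed)
    heads_experiments = 0   # experiments in which the fair coin lands heads
    heads_wakings = 0       # wakings that happen inside a heads experiment
    total_wakings = 0
    profit = 0              # Beauty bets 1 unit on heads, at even money, every waking

    for _ in range(n):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2
        heads_experiments += heads
        total_wakings += wakings
        if heads:
            heads_wakings += 1
            profit += 1     # one waking, one winning bet
        else:
            profit -= 2     # two wakings, two losing bets

    print(f"Coin lands heads in {heads_experiments / n:.3f} of experiments")                # ~0.5
    print(f"Share of wakings that are heads-wakings: {heads_wakings / total_wakings:.3f}")  # ~1/3
    print(f"Mean profit per experiment at even money: {profit / n:.2f}")                    # ~-0.5

run_experiments()
</code></pre><p><em>The coin lands heads in about half of the runs, yet only about a third of wakings happen under heads, so always taking even money on heads loses about half a unit per run. That gap between the coin's probability and the acceptable betting odds is the point about the wager being correlated with the outcome.</em></p>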
<p>Like, if there's a 50% chance that I give you a penny, and a 50% chance, if the coin flips the other way, that I give you, whatever, a million pennies, and then later somebody sees a penny blow out of the room, because there's a strong wind that tends to sometimes blow pennies out of the room.</p><p>AARON</p><p>No, that is not analogous anymore. That is only analogous if you assume that the world is such that you must, at exactly that moment, be observing a penny blowing out of the room. Which is, like, not true.</p><p>ARJUN</p><p>But you are. Well, I mean, in your case, you exist or whatever.</p><p>AARON</p><p>My brain is, like, tangled up. I don't even remember. I lost track of everything. Okay, philosophical question.</p><p>ARJUN</p><p>So I made the claim, as I am wont to do, I mean, making claims in general, not specifically claims about stimulants, that taking stimulants is arguably one of the best longevity interventions that you could have. Because suppose that each day you spend 90 minutes out of your, whatever, 16 waking hours sort of not really doing anything on purpose. You just sort of wonder where the time went. You're not really getting much done. But then you take some sort of stimulant and then you recover that time, in the sense that you're able to act purposefully, to do some sort of thing on purpose that you wanted to do. Then you could argue that essentially your life got 9% longer or whatever. What's 90 minutes over 16 hours? I think it's like nine, nine and a half percent. But then some people at this dinner party or similar event had some sort of philosophical objection where they agreed with the facts that I said, but they said that they don't consider their life as being composed of the things that they do on purpose, but just sort of the fact that they are having experiences.</p><p>AARON</p><p>Yeah, I think from an altruistic perspective, or from an extrinsic perspective, like your effect on the world, the former seems right. But no, I share the latter in terms of how long I actually see myself as living. It seems proportional to quality of time, not intentional time. I don't know. But the former is still important for other reasons.</p><p>ARJUN</p><p>Right. A friend said something like, oh, are there experiences that you routinely have where you would accept a very small amount of money to just, whatever, be a p-zombie for that time? Suppose that you drive to work for 30 minutes every day and you would accept, like, a pretty small amount of money to just not experience it.</p><p>AARON</p><p>Yeah, in fact the opposite: there are painful, bad experiences I would pay to avoid.</p><p>ARJUN</p><p>Sure. Actually, yeah. So I guess this is the more general situation to put it in. So?</p><p>AARON</p><p>Yeah.</p><p>ARJUN</p><p>I don't know. But then also, I think this whole measuring longevity in a sort of contrived way makes a bunch of other calculations kind of weird. Right?</p><p>AARON</p><p>Like, what's the contrived way?</p><p>ARJUN</p><p>Oh, I mean, like where, instead of the normal sense in which you live longer by, for example, living to a longer age or being healthier to a longer age, you say that you effectively live longer because you spend more minutes awake.</p><p>AARON</p><p>It feels very semantic. It's like, I don't know whether you want to call it living longer or just, like, I don't know, living better or something.</p><p>ARJUN</p><p>It's like, yeah.</p>
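<p><em>A quick check of the arithmetic in this framing, using the figures from the conversation (90 recovered minutes out of a 16-hour waking day); the helper name is hypothetical:</em></p><pre><code>def effective_life_gain(recovered_minutes_per_day, waking_hours_per_day=16):
    """Fraction of waking life 'recovered' per day under the framing discussed above."""
    return recovered_minutes_per_day / (waking_hours_per_day * 60)

# 90 purposeful minutes recovered out of a 16-hour waking day:
print(f"{effective_life_gain(90):.1%}")  # 9.4%, matching the "nine, nine and a half" estimate
</code></pre>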
<p>Or, like, there's the Amanda Askell essay somewhere where she says that the way people talk about sleep ignores the huge fact that the more you sleep, the less time you spend awake. Right? If you sleep an hour longer, your life just got 6% shorter. There are benefits to sleeping an hour more, but then you have to weigh them against that. And sleep interventions, in the same light, like, whatever, wearing a sleep mask or sleeping at the same time every day, have benefits not only in your quality of life but also essentially in your lifespan, since you can sleep fewer hours and get the same sleep quality.</p><p>AARON</p><p>Yeah, no, I think that checks out. I think that's important. I do wish that I was just one of the people that could sleep 4 hours or whatever.</p><p>ARJUN</p><p>Oh, yeah, one of those short sleepers. Yeah. They effectively live 25% longer.</p><p>AARON</p><p>Damn.</p><p>ARJUN</p><p>Yeah, that's wild.</p><p>AARON</p><p>It's wild that some people, in an evolutionary sense, that it's possible but not universal, I guess, or something like that.</p><p>ARJUN</p><p>Sure. I don't know, whatever sense. In another sense, this is sort of just chump change, right? Pennies. Because if you sleep 30 minutes more or less, this amounts to, what, a year? What's 30 minutes a day, times 365 days, times 60 years, divided by 60 minutes per hour, divided by 24 hours per day? Like a year or two. Yeah, like nothing. I mean, not nothing, but this is overwhelmed by the compounding effects of having your career be slightly better or something. Or your personal development or whatever.</p><p>AARON</p><p>Yeah. So I can cut this part. I'm, like, slightly running out of steam, so we can also keep recording some other time, but I might say, like, let's wrap up and not go super long. Is that okay?</p><p>ARJUN</p><p>Sounds good.</p><p>AARON</p><p>So are there any topics you're dying to hit, or takes you want to get out into the world?</p><p>ARJUN</p><p>Not really. Yeah, I've been thinking about the book thing and the blogging thing for some time.</p><p>AARON</p><p>Okay, cool. Now I'm making this the thing, because every podcast has to have a thing at the end, they have their special thing.</p><p>ARJUN</p><p>Oh, you mean like overrated or underrated?</p><p>AARON</p><p>Yeah, something like that. As Ezra Klein did, like the top books or whatever. So my thing is: what is your 90% confidence interval on the number of views this episode gets in the limit as time goes to infinity?</p><p>ARJUN</p><p>Oh, wait, let me pull up your Spotify and I can get some data. What is this podcast called? The Pigeon Hour. Here it is. Yes.</p><p>AARON</p><p>What data are you going to get?</p><p>ARJUN</p><p>Who is going to watch this whole thing anyway?</p><p>AARON</p><p>Duffy, I think. Matt. Matt from Twitter. Hello. Shout out to everybody who's listening. If you're listening, shout out to the...</p><p>ARJUN</p><p>Two people we've identified to possibly be listening.</p><p>AARON</p><p>I think when I checked, I checked a couple of days ago, I had like a couple dozen, I think.</p><p>ARJUN</p><p>Does it show on Spotify how many views it has? But you said that the first episode has a couple.</p><p>AARON</p><p>Wait, maybe I can just check right now. Yeah, wait, let me just see if I can check for the podcast.</p><p>ARJUN</p><p>Yeah, I mean, I don't know, off the top of my head, I guess there's a pretty long.</p><p>AARON</p><p>Okay.</p><p>ARJUN</p><p>Tail at the end.</p><p>AARON</p><p>32 plays.
It is a couple of dozen kind of okay.</p><p>ARJUN</p><p>Yeah.</p><p>AARON</p><p>That's only on spotify, though. There are potentially other maybe it's 50 total or something.</p><p>ARJUN</p><p>Okay, so considering I don't know, I would say ten to 200.</p><p>AARON</p><p>Yeah, no, I feel like the tail sound good. Yeah, I would say like ten to ten to like 2000 or something.</p><p>ARJUN</p><p>If your podcast becomes famous, then people will backlash.</p><p>AARON</p><p>It doesn't have to be that famous to get 2000 Views isn't that much.</p><p>ARJUN</p><p>Not for the episode. I don't think the episode is going to become viral for any reason. I was saying more that if your podcast yeah.</p><p>AARON</p><p>For sure. But it doesn't have to become Lunar Society famous to get the boot or to get to 2000 Views or whatever. It could become like mildly other, mildly, slightly better known. Anyway. Okay, it has been a pleasure, Arjun.</p><p>ARJUN</p><p>Yes, likewise. Yeah. I'll catch you around when I'm in DC. Or you should just move here to the bay.</p><p>AARON</p><p>Yes, I will maybe do that. Probably not, but maybe. You never know. Okay.</p><p>ARJUN</p><p>All right, cool. See ya. See ya.</p>]]></content:encoded></item><item><title><![CDATA[#1 Laura Duffy solves housing, ethics, and more]]></title><description><![CDATA[Transcript]]></description><link>https://www.aaronbergman.net/p/laura-duffy-solves-housing-ethics-322</link><guid isPermaLink="false">https://www.aaronbergman.net/p/laura-duffy-solves-housing-ethics-322</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sat, 17 Jun 2023 21:55:19 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/135570847/0430e16f4644b91f4665a8e3ef7ce59f.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<h1>Transcript</h1><p><em>Note: created for free by <a href="http://assemblyai.com/playground">Assembly AI</a>; very imperfect</em></p><p><strong>AARON</strong></p><p>Cool. So we have no topic suggestions.</p><p><strong>LAURA</strong></p><p>You mentioned last night that you have takes about working in the government, and I kind of wanted to hear that.</p><p><strong>AARON</strong></p><p>Yeah. Okay. My thoughts are not fully collected, so I have latent takes. Yeah. So basically, also, I need to get out of podcast mode and get into conversation because then I'm actually more normal and less weird. Okay. I worked at the Department of the Interior and FDIC, that's why you're a statistics major. Okay, cool.</p><p><strong>LAURA</strong></p><p>Well, what remark is that?</p><p><strong>AARON</strong></p><p>No, that's like a positive remark.</p><p><strong>LAURA</strong></p><p>Okay. She can't do math. That's why she's a statistics major.</p><p><strong>AARON</strong></p><p>No, wait, I'm the one that can't do math because I was the one checking it on my calculator. That wouldn't even make sense. Here's, like, a really dumb thing. So the offices just look bad and dark, for example, and this is compared to, I've been into, I guess, two tech looking EA offices, and they're, like, bright. I don't know if they're fake plants or real plants, but there's, like, plants around everything's white. That's, like, the dominant aesthetic. It's not, like, a lot of buildings owned by the federal government. So I'm talking about, like, literally one. Okay, the one sample size, n equals one. But it was like yeah, it's just, like, very much, like, stereotypical, kind of old official, not ugly. 
Just like, not the kind of aesthetic you would imagine if you're coming in to shake things up or something like that. It's kind of, like, obvious or whatever. But I feel like a lot of the to-be-expected stereotypes actually hold up really well, in a way that was kind of, I would have maybe even expected more variance or something.</p><p><strong>LAURA</strong></p><p>Yeah, I can't help but wonder if it's because the Department of the Interior is not a sexy organization. Have you seen, like, the Department of Education building?</p><p><strong>AARON</strong></p><p>No, I don't know. Not, like, intentionally.</p><p><strong>LAURA</strong></p><p>It's over kind of by L'Enfant Plaza. But it is so ugly. It's, like, brutalist architecture. It's probably terrible inside, very dingy and dark. But I have to imagine that other government buildings that get a higher budget are probably much nicer and newer.</p><p><strong>AARON</strong></p><p>Yeah, I don't even want to say it was bad. I'm not even saying it wasn't bad. It didn't look cheap or whatever. It was, like, a nice old building, which is, like, kind of, okay, this is not a particularly important point. It's, like, one side example. I guess, more substantively, I think there are actually, and this is something I have more experience with, I guess N equals at least two, but potentially N equals, like, many, depending on how you count things. There is just, like, at least in my experience, more of kind of like an aggressive conservatism or something, which is kind of what you probably want as, like, an American taxpayer who doesn't want the deep state to come in and ruffle feathers. But yeah, go ahead.</p><p><strong>LAURA</strong></p><p>I feel like that aggressive conservatism can make the deep state even more awful to work with, though.</p><p><strong>AARON</strong></p><p>How so?</p><p><strong>LAURA</strong></p><p>It's the red tape thing, right?</p><p><strong>AARON</strong></p><p>Yes.</p><p><strong>LAURA</strong></p><p>So there's definitely that. Incredibly inefficient. Like, government websites? Yikes.</p><p><strong>AARON</strong></p><p>Oh yeah. They build everything from the, well, not, okay. I'm talking from extremely limited personal experience, but my experience is just, like, the one thing I saw there was building a survey from the ground up instead of using Google Forms or whatever, and spending hundreds of hours instead of, like, zero. That's on the procedural side. But even beyond that, there's also just, like, an ethos of: try to do exactly as we were doing yesterday, and that regresses back forever or whatever, instead of thinking about what is actually the best decision, or something like that. Also, I should just say, for the 25th time, I have exactly one year of experience. The federal government is a big place.</p><p><strong>LAURA</strong></p><p>That confirms my priors on this a bit. Did you at all listen to the Ezra Klein podcast about contracting in the government?</p><p><strong>AARON</strong></p><p>No.</p><p><strong>LAURA</strong></p><p>Okay. So it's pretty bad, because in order to bid for a contract successfully, there's a lot of these suggested criteria that you meet. And it's not just, like, say you want a government website and you're going to contract that out to some private entity. In the bid, they will put together their proposal and say why they're the best for doing the website well and in a cost effective manner.
But there's also all these other things, like DEI requirements or suggestions, or parental leave, and things that are just totally ancillary to the actual effectiveness of the organization at being able to do the job of building the government website. And oftentimes these things are given a lot of weight in the overall scoring as to who's going to win the bid. And it ends up being this long, protracted process, and you end up not getting the best person for the job sometimes.</p><p><strong>AARON</strong></p><p>Yeah, that definitely also fits with my priors. Yeah, I can totally imagine that. I'm, like, speculating, but I would guess it's a bunch of cobbled-together laws. Like there was somebody's hobby horse in, like, 1983 or whatever and somebody else's hobby horse in, like, 1989, and it's added up. So there's, like, a million, um, things like that. Yeah, I can totally see that. On the other side, do you have any positive government takes?</p><p><strong>LAURA</strong></p><p>Shit?</p><p><strong>AARON</strong></p><p>I put you on the spot there.</p><p><strong>LAURA</strong></p><p>You know, I've actually never had really a bad experience at the DMV.</p><p><strong>AARON</strong></p><p>Okay, fair enough.</p><p><strong>LAURA</strong></p><p>Yeah.</p><p><strong>AARON</strong></p><p>Did you get your driver's license on the first try?</p><p><strong>LAURA</strong></p><p>No. Yeah, I couldn't parallel park. In my defense, I was in this neighborhood where the curbs are not very steep, so they're kind of softly sloped. So it's very easy to roll up onto the curb, kill a pedestrian while you're parallel parking.</p><p><strong>AARON</strong></p><p>That's probably fine.</p><p><strong>LAURA</strong></p><p>So I failed the first driving test.</p><p><strong>AARON</strong></p><p>Okay, same. Yeah, that was scary, because failing the first one is fine, but, like, ex ante you don't know whether you're going to fail the second one. If you fail the second one, you're, like, bringing your parents to the DMV with you, or I was 16 or whatever, and so I was very scared about failing. Anyway. Okay. Yeah. Positive government. No, and I think to relate that, to bring that back to the core subject matter, aggressive conservatism goes along with people that are actually very good at, once again, N equals zero, basically very good at what they're actually trying to do. There's like a stereotype. Yeah, maybe this is like a stereotype. I don't actually know how live of a stereotype this is, but I feel like I've vaguely seen stereotypes about incompetent bureaucrats, and that's definitely not the situation in my experience. It's like people very competently doing a lot of bullshit, along with, in the process, doing some important things also, but...</p><p><strong>LAURA</strong></p><p>I don't know about very competently. It's very hard to fire people in the government.</p><p><strong>AARON</strong></p><p>Yeah, it probably depends on, I could definitely see this varying a lot by position. There are some things where it's kind of the same with, I guess, tenured professors or something. They could just not do anything. But if you're selecting for the people who are obsessed with, I don't know, sociology or math or something and who get tenure, they're just not the type of person who's going to do nothing at that point. And so if you're working with more senior people, like professionals, then they're just probably pretty selected for.
But maybe there's other parts. That makes sense.</p><p><strong>LAURA</strong></p><p>I have to say, obviously I went to public schools, but other than that, I haven't had that much experience with the government, which is kind of a plus, I suppose. That could be government operating correctly, when it's not that much in your life.</p><p><strong>AARON</strong></p><p>Yeah, that's also, like, the county. I know it's local in Maryland, but it seems local in Montana also, so there's a lot of different governments. Like, the government is like 17 things. No, but yeah.</p><p><strong>LAURA</strong></p><p>It's privilege too, right? Not having to deal with trying to navigate the welfare process.</p><p><strong>AARON</strong></p><p>Yeah, for sure.</p><p><strong>LAURA</strong></p><p>I think one of the coolest things about being in Montana, at least in the past few years, is that I was in a pretty middle-of-the-road, lean-right town, and they really have been on top of the whole housing crisis situation. So all of the people in the planning department, I feel like they're kind of YIMBYs. They're very tuned into YIMBY planning.</p><p><strong>AARON</strong></p><p>We need to get them into San Francisco. Unfortunately, they're stuck in Montana. I'm sure they're doing a great job.</p><p><strong>LAURA</strong></p><p>In Montana, but they are. It's like, thumbs up to a 432-unit apartment complex or something like that on a big empty lot in the middle of nowhere. And same thing with the planning board and the city commission. So they just keep greenlighting all of these projects, and I was so not expecting this. And they sometimes even talk back to the NIMBY people who show up.</p><p><strong>AARON</strong></p><p>How do you know this? Did you go to meetings? You would go to, oh, of course. Okay. I'm not, unfortunately. That's like a couple of steps above my civic engagement.</p><p><strong>LAURA</strong></p><p>No, it was kind of funny, right? Because this one lady stands up, she's like, I bought my house here 32 years ago, and it was affordable, so I don't know what you guys are complaining about. The person on the planning board is like, well, it was affordable 32 years...</p><p><strong>AARON</strong></p><p>Ago, but not anymore. Nice.</p><p><strong>LAURA</strong></p><p>Yeah. I think that some of the local government people are really getting it.</p><p><strong>AARON</strong></p><p>Nice. Nice. Cool. Thank you for representing the Montana planning boards. Yeah, I have no other connections, as far as I know, to the Montana planning committees. Cool. Yeah. So is there anything else? What's, any hot takes? Hobby horses you've been thinking about recently? Hobby horses, or, that's like my term. I, like, use that in a self-deprecating way, but it can be like anything.</p><p><strong>LAURA</strong></p><p>I'm not sure.</p><p><strong>AARON</strong></p><p>What's the Davis-Bacon wage?</p><p><strong>LAURA</strong></p><p>Yes. So in 1931, there was this congressional act that said that every project, well, it's kind of an oversimplification, but publicly funded projects that get federal money have to pay contractors and subcontractors local prevailing wages. And it's basically like price fixing for minimum wages for different professions within four categories of construction: residential, building, highway, and heavy kind of stuff. And in DC, the distinction between residential and building is extremely arbitrary, and it penalizes building higher.
So any affordable housing complex that has four or fewer floors counts as residential, and the wages that you have to pay on that are much lower than the ones that you have to pay for five and above course, because those fall into the building category and not residential for some reason. I have no idea.</p><p><strong>AARON</strong></p><p>Okay, why do you know all this?</p><p><strong>LAURA</strong></p><p>Okay? Because I was at a YIMBYs of DC event.</p><p><strong>AARON</strong></p><p>Okay, I see.</p><p><strong>LAURA</strong></p><p>And he just kind of like this one developer, affordable housing developer, just kind of threw out there like, oh, yeah, we're just assuming that in this model, we're building four or fewer floors, because if we're building five, then just pay just a completely different wage amount. I'm like, what is this so arbitrary? Cut off. I like, five over ones. And so I went and just Googled the regulations and downloaded the actual policy and then just make a spreadsheet, and it's like, yeah.</p><p><strong>AARON</strong></p><p>Oh, wow, cool. Yeah. Did you do, like, an extended amount of research on all this stuff? How did you how did you come to know everything there is to know about building codes in DC and Montana. Okay. Only more than every other person in the country. Not literally everything.</p><p><strong>LAURA</strong></p><p>It's like an 80 20 thing, right? You spend an hour just looking at the wage structure of DC building. Like common people end up knowing a lot about the thing.</p><p><strong>AARON</strong></p><p>Yeah, but I still fair enough. I haven't done that. But yes, that would happen if I did that.</p><p><strong>LAURA</strong></p><p>I assume it's really interesting. I don't know if there's some economics literature on whether or not that actually increases the construction costs, and some studies say that it does, but they're for school construction, not residential. Sometimes after Hurricane Katrina, george W. Bush suspended the Davis Bacon wages for rebuilding in that area and then brought them back, like, two months later. So it was an interesting kind of natural experiment because he was just getting a bunch of pushback from people in Congress, like, you're creating a race to the bottom and not paying people fairly or something.</p><p><strong>AARON</strong></p><p>Yeah, I'm just going to default to adopting all your views until I know more than anybody. Yeah, cool. Sorry, I literally no opinions on this. I do, but they're all, like, seven layers down. Seven layers? Like more abstracted and less specific.</p><p><strong>LAURA</strong></p><p>I don't have fully formed views on this. I just think it's very interesting. And I had no idea, like, five days ago that going from building, like, four floors to five floors just completely changes the rate structure. And I'm like, wow, interesting. Cool.</p><p><strong>AARON</strong></p><p>Fascinating. Okay. How did you get into EA?</p><p><strong>LAURA</strong></p><p>How did I get into EA? I suppose I'm I guess I started listening to a lot of, like, skeptic slash Rational podcast. Rationalist podcasts? Late middle school, early high school, and about halfway through high school, I found Rationally Speaking. And I got really into philosophy at that point, and I kind of was like, oh, yeah, the Peter Singer Shallow Pond argument. I hate it, but it makes sense. And so I kind of started planning my future life around how can I make enough money so that I can give more than 10% of my income to effective charities? 
And I guess that's, like, EA-lite, right?</p><p><strong>AARON</strong></p><p>Because it seems pretty legit to me.</p><p><strong>LAURA</strong></p><p>Yeah, I guess this is true. But I remember I was working out one time and I was just, like, really bored running on the treadmill, so I was, like, constructing a little budget in my head. How much do I need to make in order to pay rent in San Francisco, in order to buy food, utilities, et cetera, and then just donate all the rest of it to Against Malaria?</p><p><strong>AARON</strong></p><p>Okay, I respect that. That's awesome. Do you remember what the figure was, or, like, the cutoff at which you could start donating everything else? For my own personal interest, I think.</p><p><strong>LAURA</strong></p><p>I ended up needing around $40,000.</p><p><strong>AARON</strong></p><p>Oh, that's, like, really low, but you could make that work somehow. But I think I'm just too selfish for that. But yeah, San Francisco is expensive. I think I could maybe even make that work in Montana. Not that Montana is like a single place.</p><p><strong>LAURA</strong></p><p>I was talking about me living in a crappy apartment in San Francisco, so maybe more like 45,000.</p><p><strong>AARON</strong></p><p>Yeah, okay.</p><p><strong>LAURA</strong></p><p>Well, in nice, vermin-infested areas, but I think with inflation it's definitely gone up since then.</p><p><strong>AARON</strong></p><p>Cool. Yeah, cool. Yeah, cool. Nice.</p><p><strong>LAURA</strong></p><p>I was always EA-adjacent then through the rest of high school, but I didn't get into EA proper. I had never heard about the existential risk, longtermism stuff until I was, like, senior year of college.</p><p><strong>AARON</strong></p><p>Yeah, that's not that late. When did The Precipice come out? Like, not long before that. So it's not like you're that delayed relative to everybody else. I mean, Toby Ord was thinking about it ten years ago or whatever, but for most people it wasn't that salient that long ago.</p><p><strong>LAURA</strong></p><p>Yeah. It is interesting to look back at how it was always around, though.</p><p><strong>AARON</strong></p><p>Yeah.</p><p><strong>LAURA</strong></p><p>It was always somewhat of a bit of a longtermist organization.</p><p><strong>AARON</strong></p><p>Sorry, who is that?</p><p><strong>LAURA</strong></p><p>80,000 Hours.</p><p><strong>AARON</strong></p><p>Oh yeah. I remember I was sorting the EA Forum posts by oldest one time, and I felt like I was in a museum or something where I could just look at all the artifacts, because there are all these Will MacAskill posts with zero upvotes. I didn't know if I was supposed to leave them and not disturb the nature of the posts or whatever by upvoting them or something. But yeah, and I think this is just about exactly ten years ago, maybe like eleven, but one of them was, like, in 2013 or something, and was Peter Singer talking about something, something like animals in the long-term future. I might be making that up, but it was some combination of, like, Peter Singer and MacAskill talking about applying, like, Singer's basic arguments to the long-term future or something. Yeah, I guess it just took a while to manifest anyway.</p><p><strong>LAURA</strong></p><p>Yeah.
And I recently saw something, at least some quote on Twitter, about Peter Singer pushing back a little bit against the longtermism stuff, at least in terms of him just saying, hey guys, let's make sure we don't lose sight of current harms.</p><p><strong>AARON</strong></p><p>Yeah, that fits with my image of him, but I would want to look into what he said more specifically. Because I do think, unless he's changed his mind, he's sympathetic to the core arguments of longtermism or whatever, but that could still leave a lot more to be defined, since saying that doesn't define your views that specifically.</p><p><strong>LAURA</strong></p><p>Yeah, I think this is true, because my biggest worry with EA is that in 30 years we'll just look back at longtermism and all that and be like, we just wasted a bunch of money that could have been spent on saving the lives of humans and animals.</p><p><strong>AARON</strong></p><p>I shouldn't have laughed. That was like a bad laugh, a nervous laugh. Because I think there's a chance that's right. I would have mostly been at least instinctively or intuitively in agreement with that a couple of years ago. Now I'm pretty sure the world in 30 years is not going to be recognizable, and actually the AI weirdos basically had it right. And I don't know if that means maybe things will be fine, maybe they won't, but I think it's very unlikely now that, especially in 30 years and probably in ten, at least for the AI segment, insofar as AI and longtermism interact, the resources dedicated, or the decision to spend a large amount of resources in that area, will be seen as naive or something, or just like a bad ex ante decision. Maybe there's like a 10% chance that the world will basically look the same or something. But I don't know.</p><p><strong>LAURA</strong></p><p>It raises a question: what do you mean by unrecognizable? Right? Because I think of past waves of technological progress. Yes, go from America in the antebellum period, the 1850s, versus America in 1890 or 1920, and there was a huge amount of change in where people lived. I think somewhere around that time half of the population was living in cities, I want to fact check that, but at least by the early 1900s, and you had basically the industrial revolution take off. People are just not farming as much. But I wouldn't say that the world was unrecognizable, in terms of humans basically just doing normal human things. Yes, they're working different jobs, but they still have families and stuff like that. It wasn't unrecognizable. The same thing with the internet, right? If you went back to 1970 and tried to predict what 2020 would look like and you had some knowledge about the World Wide Web, would you say that it would be unrecognizable today? I would say largely no. Right? Yes, we are much better off, I think, than we were in 1970, but the world is pretty much the same in the US, in terms of, probably, people's happiness, except we've made a lot of social progress. But if you were in the upper middle class, say, in 1970 versus now, it's kind of similar. What do you mean by unrecognizable?</p><p><strong>AARON</strong></p><p>Oh no, I agree with literally everything you just said, except I just think AI is different. I feel like this has been litigated on a million different blog posts and podcasts or something.
So we're probably not going to, like, resolve the differing intuitions here, or maybe we will, you never know. But oh yeah, I totally agree with, like, yeah, except that I also think AI is, like, several orders of magnitude, at least two, probably more, how many orders of magnitude bigger of a discontinuity is it than the Internet or than the population moving to the cities? Like, definitely one, probably two, probably not four. Probably like two to three. I don't know, 500 or like 300. Just pulling that out of nowhere. I don't even know what I'm talking about. But yeah, like, a lot more significant. I don't know.</p><p><strong>LAURA</strong></p><p>I think our priors should just be strongly against that, because humans are very status quo biased.</p><p><strong>AARON</strong></p><p>Yeah, I also agree with that.</p><p><strong>LAURA</strong></p><p>It's hard to predict the future.</p><p><strong>AARON</strong></p><p>If you can. Yeah, cool.</p><p><strong>LAURA</strong></p><p>The best argument against my case, at least just based upon observable evidence, is that in the last 15 years, how we interact with other people has changed a lot, at least how I interact with other people. And I really do buy into this whole mental-health-crisis-caused-by-social-media kind of theory. And what is that displacing, right? Because they're not hanging out with their friends as much in person and they're not getting as much sleep. And I think Instagram is terrible. For at least my mental health it was, because it's like, oh yeah, a bunch of hot-looking girls that I know, they're having a good time, I'm not having a good time. That social comparison kind of stuff really dramatically has an effect on mental health. And I think that has been a very substantial change to how people live in society. And so I can see something like that occurring again in the next 30 years.</p><p><strong>AARON</strong></p><p>No, I very much relate to that. I think I still have an Instagram, technically. Not even technically, I definitely do. I haven't used it in a while. But yeah, as a medium, it's definitely just conducive to, I guess, the usual markers of social life and status, very conducive to FOMO and stuff like that. And it's also definitely interesting to think about what the net effect on, I guess, human hedonic well-being is, because I don't know, I think it's really pretty ambiguous, it being, like, the whole cluster of computer things that's come along in the last half century. I don't know. This isn't to dismiss it at all, but I do think that the mental health effects are real and also concentrated in the relatively small group that is, like, female liberal teenagers, and not small in an absolute sense, but it's like several demographic cuts or whatever to get there. And yes, it absolutely sounds like it really sucks for that particular group and this is a big deal. But why liberal? No, I feel like I'm just stealing Matt Yglesias's take on this, but I think it's, like, in the data. I don't know why exactly, but it just pops out of at least self-identified liberalism, if you just take surveys, like, at their word or whatever. Yeah, I don't really know.</p><p><strong>LAURA</strong></p><p>That's fascinating. I wouldn't have thought that.</p><p><strong>AARON</strong></p><p>Yeah, I feel like I wouldn't either.</p><p><strong>LAURA</strong></p><p>...have thought it would be true, but yeah, I just don't know that many young women who are not liberal.</p><p><strong>AARON</strong></p><p>Yeah.
So maybe there's just like a selection effect of weight. Who are the people that aren't? Maybe there are. I'm just like speculating. It could be like disproportionately religious, which is like, I think predicts like happiness and stuff, but I don't know what the actual answer is.</p><p><strong>LAURA</strong></p><p>And probably also involvement in community. Right. If you're a conservative, religious, young teenage girl, you're probably going to youth group and you're not spending as much time on social media.</p><p><strong>AARON</strong></p><p>We'll take your word for it. I assume youth group is like the name for a Christian meet up thing.</p><p><strong>LAURA</strong></p><p>Sorry.</p><p><strong>AARON</strong></p><p>Okay. Sorry. Okay. What's your experience in youth group? Or if you want to talk, you.</p><p><strong>LAURA</strong></p><p>Don'T have to talk to shit. Well, my experience in youth group was like through 6th grade because then I just stopped going to church.</p><p><strong>AARON</strong></p><p>Oh, same. I mean same, but except with Judaism.</p><p><strong>LAURA</strong></p><p>But yeah, and no, it was my mom ran the youth group at Birch and so it was me just eating together a bunch of my friends from school on Wednesday night, coming together and having dinner and then just playing a bunch of games and sounds awesome. It was really fun because we had like this three story tall church which also had like a really creepy basement that definitely haunted. Yeah, we thought it was haunted.</p><p><strong>AARON</strong></p><p>It is, obviously.</p><p><strong>LAURA</strong></p><p>Yeah. And so we just were messing around. I finally told my mom this like a year ago, but we stole the fire extinguisher and just started spraying it all around.</p><p><strong>AARON</strong></p><p>Oh shit.</p><p><strong>LAURA</strong></p><p>Who knew that that stuff is not fun to breathe in?</p><p><strong>AARON</strong></p><p>But it isn't actually. Yeah, I honestly didn't know until you just said that. I have no idea. But cool. Okay.</p><p><strong>LAURA</strong></p><p>I think youth group is a good time for young teenagers. You're in a space where you're getting a little bit of a moral lecture but then you're just kind of like there's guardians around but you're also free to do some fun things.</p><p><strong>AARON</strong></p><p>Freedom. Freedom is slavery or slavery is freedom. Sorry, I'm being deliberately provocative, but that does sound good, basically. But then what was your decision to stop being Christian or stop participating? I don't know. How would you characterize your religious phase transition?</p><p><strong>LAURA</strong></p><p>I think part of it was like I remember screaming and crying several Sundays when I was very small because I didn't want to go to church because it was super boring. And I can't help.</p><p><strong>AARON</strong></p><p>I had the exact same experience. I fucking detested. I don't know why, but like going to synagogue but anyway sorry, keep going.</p><p><strong>LAURA</strong></p><p>No, I mean, I can't help but wonder if I found it so boring because I was an atheist or if I became an atheist just because I found it so boring and I hated going to the thing, so I always hated doing that. I would talk back a little bit in Sunday school.</p><p><strong>AARON</strong></p><p>Yes. I would say.</p><p><strong>LAURA</strong></p><p>There'S some commandment that's like love the Lord your God more than anything else. 
Basically.</p><p><strong>AARON</strong></p><p>I think they say that like a billion times in the Old Testament.</p><p><strong>LAURA</strong></p><p>You're kidding. I'm supposed to love God more than my parents? No, that's not happening, stupid. And it didn't help that it was a bit of a conservative church. So they would sometimes talk about Obama, and I'm like, you're trashing Obamacare? No, absolutely not. So I think there were several factors, and then in 6th grade I was just like, yeah, I don't believe in God. If God exists, then why are children dying from malaria?</p><p><strong>AARON</strong></p><p>Yeah, that's it. The problem of evil is, like, a real thing. Yes, a bunch of people have noticed this and there's some cope to get around it. But no, I think it's a good objection to the whole God thing.</p><p><strong>LAURA</strong></p><p>I can buy that there's this God that created the universe and is kind of hands-off or something. But I cannot believe in a God who answers prayers, because I have to imagine that the parents and the children who are in poor countries and who are not well off are praying to God and hoping God answers their prayers.</p><p><strong>AARON</strong></p><p>Yeah, I think it's legitimately serious and good. And I probably haven't engaged enough with the actual theological takes on this to adjudicate or whatever, but my impression is that, yeah, this is actually a pretty solid objection, or, like, a reason why you should doubt the existence of God. Although now I've become sort of, from a different angle, like, quasi-theological, with the weird simulation stuff. But we could talk about that if you want to. But I didn't mean to cut you off.</p><p><strong>LAURA</strong></p><p>But that's kind of like the deist type of God.</p><p><strong>AARON</strong></p><p>I don't even know, what's deism again?</p><p><strong>LAURA</strong></p><p>Creates the universe but is hands-off.</p><p><strong>AARON</strong></p><p>Yeah, that seems, like, consistent. Yeah.</p><p><strong>LAURA</strong></p><p>During COVID I got super into theology, I guess.</p><p><strong>AARON</strong></p><p>Oh, nice.</p><p><strong>LAURA</strong></p><p>Yeah, especially, like, Jewish theology.</p><p><strong>AARON</strong></p><p>Oh, wait, you definitely know more about Judaism than I do. Okay.</p><p><strong>LAURA</strong></p><p>But there is always that free will theodicy and human responsibility. We are creatures that by our nature are supposed to have responsibility to act towards each other well, and we have freedom to do so or not to do so. And that's what it means to be made in God's image. But I'm sure I'm messing that up a bit. But it's like, either we have free will or God answers our prayers. And I think you can only believe in the free will version of it, in which case God looks kind of shitty. In The Brothers Karamazov, it's like, why would God create a universe in which there's even a single child who is being abused by their parents? This free will defense is just evil. And I'm like, yeah, I kind of agree with that. You can't have both things. You can't have it that he answers your prayers and it's all good, but then also we have free will to torture each other.</p><p><strong>AARON</strong></p><p>Yeah, once again, very boringly, I think this is all a correct take, although this is also, I think, placing God as a concept squarely in the Christian tradition post, like, I don't know, year 50 or something.
Not while Jesus is still around, but, like, one particular line of, I guess, the Abrahamic religions in the last 2,000 years or so. And there's a lot of others; the notion of a benevolent single God, I think, is pretty peculiar to this particular time in history. The people on the podcast won't be able to see me doing air quotes.</p><p><strong>LAURA</strong></p><p>Yeah, that is true. I don't really know why this occurred.</p><p><strong>AARON</strong></p><p>Yeah, I don't think anybody does. But I actually did read most of, or listened to, I never read, but, like, listened to most of a book by Robert Wright called The Evolution of God. And I think, probably, I don't want to claim it, but my takeaway, whether it's true or not or whether it's a good representation or not, is basically that, what's the word for one God again? Isn't there like a single term for that? Anyway, the core Abrahamic theology of a single God whose law applies to all people, or at least sort of all people, depending on exactly what you mean, is just, like, conducive to civilizational flourishing. And so it's kind of like a memetic argument in combination with, I guess, cultural evolution, probably not biological evolution. Yeah, and actually I kind of buy that, relating to the 80,000 Hours podcast, the best podcast on Earth after this one, that Christianity is probably actually, like, a good force overall, compared to the base rate, or the base of what was going on before that, in terms of its actual effects.</p><p><strong>LAURA</strong></p><p>Nietzsche might call it slave morality, but that's good.</p><p><strong>AARON</strong></p><p>Yeah, same with me.</p><p><strong>LAURA</strong></p><p>Yeah, I don't know, I think there's something of a little bit of a Western bias in that. Is Hinduism really all that bad?</p><p><strong>AARON</strong></p><p>Yeah, honestly, of all the things I could not know less about, there's, like, wait, I don't know what I'm, scratch that. But I don't know anything about Hinduism. So maybe. Although I do think it seems, from first principles, that if you have a religion where, like, all human beings are, like, equal in some way, at least, well, yeah, there's, like, some people think it's, like, Calvinism or whatever, or some strain that, like, says, you know, some people are saved and some people are, like, not saved or whatever, but at least the group of people who the divine law applies to is, like, just universal rather than, like, a particular, like, ethnic group, then, like, on first principles, that does seem conducive to not enslaving other people.</p><p><strong>LAURA</strong></p><p>I think that is a rather radical idea of the Abrahamic religions, being created in the image of God. And I think that probably has had some huge influence on the evolution of liberalism. I'm actually interested in hearing from you. How did you get into all of the different things that you did? So you did, like, math-ish stuff and also philosophy. You have read things about Robert Wright and evolution. What's your backstory, in terms of how did you get into all of the things?</p><p><strong>AARON</strong></p><p>Okay, well, look, the three things, that are math, econ and philosophy, I just read the 80,000 Hours advice for undergraduates. It was like two pages in their ebook, like their career advice ebook, that I found late in high school.
And it basically said, yeah, do the most rigorous quantitative majors you can, maybe a communication based minor. And they gave some suggestions and like, okay, then I just picked out math, econ and philosophy because those seemed interesting and that is the top down story of those particular degrees. And then this was doable because I had some AP credits. But yeah, I don't know, just my background, my intellectual background, to put it in pretentious terms. I think I just went vegetarian for not even first principles, but like the normal reasons, like why people do, like I don't know, just like not hurting animals and like at twelve and then vegan at 14 for the same reasons. And then ran into EA in like, I don't remember exactly, like 15 or 16 or something. And I was pretty convinced. I was like, yeah, I was already pretty disposed. I fit all the characteristics of the stereotypical EA, more or less. And so like, both like not just demographically, but like also like ideologically, I guess. And so I was pretty disposed to find it important and interesting. Yeah, I don't even know. The math, I'm not interested in it at all. That's just signaling.</p><p><strong>LAURA</strong></p><p>Okay.</p><p><strong>AARON</strong></p><p>Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could... Aristotle, it's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that?</p><p><strong>LAURA</strong></p><p>Nicomachean Ethics. A guide to how you should live your life. About ethics as applied to your life, because you can't be perfect utilitarians. There's no way to be that.</p><p><strong>AARON</strong></p><p>But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which people care so much about, like, what people thought 2000 years ago, because like the presumption, I think the justified presumption, is that things have iterated and improved since then. And I think that's true. It's like not just a presumption.</p><p><strong>LAURA</strong></p><p>Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.</p><p><strong>AARON</strong></p><p>Yeah, descriptively that's probably true. But I don't know, it seems like the whole body of philosophers has already done the work of, like, compressing the good stuff. Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains, if you just take the state of, I don't know, if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense.</p><p><strong>LAURA</strong></p><p>I think we're talking about two different things, though. Okay. 
In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect just because it's old. There are many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his: humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a bore and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people.</p><p><strong>AARON</strong></p><p>Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there were like two main points of either confusion or disagreement. And like, the first one is that I definitely think, no, Aristotle shouldn't be discounted or like his ideas or virtue ethics or anything like that shouldn't be discounted because they, the canonical texts or something, were written a long time ago. I guess it's just like a presumption, I have a pretty strong presumption, that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts and then in fact, you probably shouldn't because the good stuff will be explained better and not in weird... it looks like weird terms. The terms are used differently and they're like translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you.</p><p><strong>LAURA</strong></p><p>Agree with this because we have certain assumptions about what words like purpose mean now that we're probably a bit richer in the old conception of them like telos or happiness. Right. Eudaimonia is a much better concept and to read the original text and see how those different concepts work together is actually quite enriching compared to how people use these words now. And it would take like I don't know, I think there just is a lot of value of looking at how these were originally conceived because popularizers of the works now or people who are seriously doing philosophy using these concepts. You just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical text.</p><p><strong>AARON</strong></p><p>Yeah, I think that would be true. If you are a native speaker. Do you know Greek? If you know Greek, this is like dumb because then you're just right.</p><p><strong>LAURA</strong></p><p>I did take a quarter of it.</p><p><strong>AARON</strong></p><p>Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were like living in the temper shuttle, I would say. Yeah, that's true actually. That's a lot of nuance, connotation and context that definitely gets lost with translation. 
But once you take the jump of reading English translations of the texts, not... you may as well, but there's nothing super special. You're not getting any privileged knowledge from saying the word eudaimonia as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation in the context via English, I guess, via the mind of literally the translators who have like.</p><p><strong>LAURA</strong></p><p>Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle.</p><p><strong>AARON</strong></p><p>Oh God.</p><p><strong>LAURA</strong></p><p>I took a course specifically on Plato and Aristotle.</p><p><strong>AARON</strong></p><p>Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.</p><p><strong>LAURA</strong></p><p>But keep going. Fair. But she had us read his Physics before we read the Nicomachean Ethics.</p><p><strong>AARON</strong></p><p>Think he was wrong about all that.</p><p><strong>LAURA</strong></p><p>Stuff, but it made you understand what he meant by his teleology theory so much better in a way that I could not get if I was reading some modern thing.</p><p><strong>AARON</strong></p><p>I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that.</p><p><strong>LAURA</strong></p><p>If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.</p><p><strong>AARON</strong></p><p>Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? I don't even know.</p><p><strong>LAURA</strong></p><p>I don't know either.</p><p><strong>AARON</strong></p><p>Yeah.</p><p><strong>LAURA</strong></p><p>I actually need to read After Virtue.</p><p><strong>AARON</strong></p><p>I'd like to... that's ringing a bell, but who wrote that?</p><p><strong>LAURA</strong></p><p>Alasdair MacIntyre?</p><p><strong>AARON</strong></p><p>I know the term After Virtue. Or like, I've seen that a bunch, but like, sure.</p><p><strong>LAURA</strong></p><p>It was one of the recommended books at the keynote ending address at EAG London, which is kind of cool.</p><p><strong>AARON</strong></p><p>Oh, I wonder if I uploaded that to my other podcast feed. That's just the EA Global talks yesterday. So maybe it worked its way into my subconscious, perhaps. Yeah, cool.</p><p><strong>LAURA</strong></p><p>No, I think that teleology has actually made me a little bit more friendly towards conservatism, at least like small-c conservatism. Each human being kind of has this function in society that you can't understand yourself and your flourishing without reference to your political community. And you need that in order to... yeah, I guess, like, I don't know, I'm probably not explaining this well, but there's a certain way of life that, on average, helps people achieve, like, eudaimonia. And it's usually one in which you are embedded in social groups to a very deep extent and you see yourself as a member of that group. 
And I think that's kind of conservatism lite and there's a lot of ways in which that's kind of gross and we don't love it because you get conformity and oppression and stuff if it's taken way too extreme. But I think there's something very true there.</p><p><strong>AARON</strong></p><p>No, I think there's something very important and true there, which is kind of a serious challenge, I guess, to the things I'm generally sympathetic to, which is, like, rigorous analytical philosophy. You're basically just doing math. That's like a facetious point. Just thinking through ideas at the explicit level or whatever, which is that conservatives are actually just, like, happier. I think I might be misremembering some statistic, but it seems like the wrong ideas are actually just conducive to happiness, which is kind of an uncomfortable position to be in because I wish that wasn't the case. But it does seem like religion is conducive to happiness. Maybe that's, like, the main thing. It doesn't seem like political liberalism in the American sense is conducive to happiness. I do think it's like, in some sense, like, more like better or more true. But like, what's that even good for if it's, like, not conducive to human well being? I think there's more to be said.</p><p><strong>LAURA</strong></p><p>There, but yeah, I think there's a difference between liberalism in theory and liberalism in practice, where I'm basically just stealing de Tocqueville here, where he's like, if Americans took their individualistic theories seriously, America would not work. Because what makes America work is that people are just coming up with solutions as members of groups to solve community needs. And it's a very organic process of creating a free society that is not theory driven. And it would actually be bad if it were more theory driven because you would get a lot more atomization and people just not looking out for each other.</p><p><strong>AARON</strong></p><p>Yeah. And I think that's kind of the direction the modern world has gone. For better or... yeah, largely for better, I guess. Kind of contra my last point. I actually think in some circles at least, like, Western individualism is like, underrated. I don't know, being able to do what you want actually has a lot of value.</p><p><strong>LAURA</strong></p><p>I totally agree.</p><p><strong>AARON</strong></p><p>Yeah. Sort of lost my train of thought, but yeah. I don't know, man. Maybe you're just right.</p><p><strong>LAURA</strong></p><p>I think I've tried to strike a balance of this in terms of like, yeah, I don't have to be beholden to the community in which I grew up just because it was where I was from. And that would be a very extreme conservative position. But I've kind of like, okay, I have freedom of movement. I'm going to go where I want to be in a place where I find people that are like me and weird and want to talk about philosophy and random economic stuff all day. And that's a huge benefit of Western individualism because I was just a strange kid growing up and I just did not fit in. And I am very individualistic because of this. But it's like if I go and find other people who are like me and then form a little community around that, then it's like the best of both.</p><p><strong>AARON</strong></p><p>Yeah. Yes. I once again very boringly think this is this is all good and true, but it's like not in an obvious sense. I feel like this is kind of like what the Internet, the ideal Internet was supposed to, in air quote sense, be good for. 
And I think we both find ourselves in something of an abnormally beneficial, at least intellectual, community in the sense of EA and EA adjacent spaces, being able to get a lot of intellectual and social value out of finding other weirdos on the Internet. But I do think that's kind of abnormal. A lot of people haven't just been able to find fellow weirdos.</p><p><strong>LAURA</strong></p><p>I wonder... I was never a video game person, but young men like video games.</p><p><strong>AARON</strong></p><p>Maybe social community out of this. Yeah, I don't know. I actually kind of wonder whether above and beyond the actual playing of the video games. I feel like I'm such a nerd, but people with video game friends, I don't know, they don't have Video Game Global or something where they travel to London and play video games together, do they? Maybe they do.</p><p><strong>LAURA</strong></p><p>No idea.</p><p><strong>AARON</strong></p><p>Yeah.</p><p><strong>LAURA</strong></p><p>No, I think this is great. Twitter is like the best of the internet and this is like the hottest because everybody's just trashing it all the time. I'm like, well then just don't be in that space.</p><p><strong>AARON</strong></p><p>Yeah, for sure. This is also a correct take. You have a lot of correct takes except for not the Nicomachean Ethics stuff.</p><p><strong>LAURA</strong></p><p>Okay, well, I'll take it.</p><p><strong>AARON</strong></p><p>Okay. No, I totally agree. Yeah, it does... just, yeah, I actually... yeah, even at a, like, very personal... like, not very personal, but like it's like a day to day level. Even like, even though the app is optimizing for me to spend time on it. Actually, I haven't looked in a couple of weeks or something, but the last time I checked, I'm averaging an hour and a half a day on Twitter. And that's like my main app of distraction, being bored. Like what app I go to. There's honestly... there's nothing else, really, except for podcasts, but in terms of visually stimulating apps or whatever, Twitter is the single one that I go to. I don't think that that's actually more than I would want to use on reflection.</p><p><strong>LAURA</strong></p><p>Interesting.</p><p><strong>AARON</strong></p><p>I probably wouldn't want to do 4 hours a day, but like half an hour, I would probably say like, no more is better. At that point.</p><p><strong>LAURA</strong></p><p>I think I am probably on the side where I'm on there more than is optimal, because I found that I've stopped reading all of the things. Yeah, I probably need to start unfollowing a bunch of people, actually, because it used to be, at least when I was a late teenager, most of it was econ Twitter and I was just learning a lot of stuff and now it's me kind of scrolling because I'm addicted to scrolling.</p><p><strong>AARON</strong></p><p>Yeah, no, there's definitely some of that. It's like not for me as well. And I guess the more that I post, the more I'm not... it's like, hard. That's actually like a very localized thing. Especially if I have some posts. It's like generating notifications or whatever. I can't actually focus on anything else but like a little notification thing that's like where my dopamine is like hijacked or whatever. But I'm sufficiently bad at posting. This is, I guess not like the dominant mode or something like that.</p><p><strong>LAURA</strong></p><p>Yeah, it's like I don't post during work hours anymore because the productivity will be gone for the rest of the day.</p><p><strong>AARON</strong></p><p>Okay. Respect. Yeah. Okay. 
Nice.</p><p><strong>LAURA</strong></p><p>Or I try not to.</p><p><strong>AARON</strong></p><p>I'm sure you oh, yeah. How was your neoliberal election experience? Neoliberal twitter bracket. What's the official name of the contest thing?</p><p><strong>LAURA</strong></p><p>Bracket?</p><p><strong>AARON</strong></p><p>Yeah. The shill bracket.</p><p><strong>LAURA</strong></p><p>Yes. What was that like a lot of stress.</p><p><strong>AARON</strong></p><p>Oh, really?</p><p><strong>LAURA</strong></p><p>Yeah. I don't want to become a politician, basically.</p><p><strong>AARON</strong></p><p>Wait. Oh yeah. Wait, why not me? Okay, sorry. Not why not? People are entitled to their preferences. I'm not going to say that's like a bad preference, but I would think both think that maybe would be the kind of thing that you'll be interested in and then also that you would be a good politician. So I'm like mildly surprised or something.</p><p><strong>LAURA</strong></p><p>I think there's something about performance that stresses me out a lot.</p><p><strong>AARON</strong></p><p>Yeah.</p><p><strong>LAURA</strong></p><p>And optimizing for engagement.</p><p><strong>AARON</strong></p><p>Yeah.</p><p><strong>LAURA</strong></p><p>The kinds of goals that running for office or running to be elected in some dumb Internet thing optimizes for and I'm not sure it's like the type of person I want to be.</p><p><strong>AARON</strong></p><p>Yeah, no, I respect that and definitely see that.</p><p><strong>LAURA</strong></p><p>Besides, I'm just a follower. I'm so not entrepreneurial and I just want to be like the bureaucrat with the statistics numbers, helping people out and just in the background.</p><p><strong>AARON</strong></p><p>Okay. Yeah, fair enough. Definitely fair enough. It definitely takes a certain type of person. I could also not. Yeah. That's like the last thing I would ever want to do. There we go. We share that lack of ambition to be President of the United States.</p><p><strong>LAURA</strong></p><p>I would never get to work out my routine. Interrupted.</p><p><strong>AARON</strong></p><p>Yes. Oh my God, it would be terrible having to deal with crises and God knows where. I don't know, like the Russians invade. I can't go to oh, God, I don't know.</p><p><strong>LAURA</strong></p><p>Hopefully not.</p><p><strong>AARON</strong></p><p>Yeah. No, honestly, I feel bad for Biden because he's probably going to die in not that long and I feel like if you're president, you should get to chill for like a while after.</p><p><strong>LAURA</strong></p><p>I agree.</p><p><strong>AARON</strong></p><p>Yeah. It's his own fault, though he did try didn't he try running before? Like a bunch of times. Yeah, but I guess thank him for his service. Except not the animal welfare stuff in the AG department. I don't know how much of that is his fault.</p><p><strong>LAURA</strong></p><p>Yeah, that's another reason I couldn't go into politics. I'd have to suck up to the agricultural industry.</p><p><strong>AARON</strong></p><p>I mean, were you the person who posted about Cass Sunstein or was that somebody else?</p><p><strong>LAURA</strong></p><p>No.</p><p><strong>AARON</strong></p><p>Okay.</p><p><strong>LAURA</strong></p><p>With his animal welfare stuff being the most controversial aspect of his getting confirmed.</p><p><strong>AARON</strong></p><p>Yeah, it was just kind of weird because I think polls are mostly bullshit. But I'm surprised. I wouldn't be surprised to see AG industry spending on ads. 
I guess I am kind of surprised to see senators actually caring about that a ton because evidently I'm wrong about that.</p><p><strong>LAURA</strong></p><p>The amount to which they have gotten corporate welfare and just all of the politicians in their pocket is extraordinary because everybody on the political spectrum except for good libertarians just really loves the farmers for some reason.</p><p><strong>AARON</strong></p><p>Yeah. I feel like there's like a... I don't remember who coined this or maybe I'm just making it up, but all the professions or industries that appear in children's books. People love those. People love fishermen. They love farmers. Even though farmers are just like Monsanto and Perdue or whatever.</p><p><strong>LAURA</strong></p><p>Family farmer who's actually just a contractor for Perdue.</p><p><strong>AARON</strong></p><p>Oh, yes. I do hope we get to go to that farm sanctuary and kidnap the pig.</p><p><strong>LAURA</strong></p><p>Sorry.</p><p><strong>AARON</strong></p><p>This is like a big tangent. Yeah. If I was a billionaire, I would definitely have several pet pigs. Anyway, I feel like it wouldn't have to be a billionaire. Just like, more time and money, but anyway yeah. What other industries are, like, overly... I don't know, I guess teachers. That's like a can of worms.</p><p><strong>LAURA</strong></p><p>It's like education too. Biggest scam ever. And you can't become a teacher for high school or elementary school, middle school, without at least showing that you're working towards a master's degree in education.</p><p><strong>AARON</strong></p><p>It's wild.</p><p><strong>LAURA</strong></p><p>A ton of money to just learn their nonsensical, kind of fluffy, usually ideologically skewed education stuff.</p><p><strong>AARON</strong></p><p>I mean, I actually don't have a strong take on whether the content of education programs is bad. I just don't know anything about that. But it certainly doesn't seem necessary to pedagogy, absolutely, yeah.</p><p><strong>LAURA</strong></p><p>Become a professor without going to education school.</p><p><strong>AARON</strong></p><p>Right. In fact, I was able to become a math tutor without even finishing high school, in fact.</p><p><strong>LAURA</strong></p><p>True.</p><p><strong>AARON</strong></p><p>And I would like to think that I was competent... yeah. I did not have to get a master's in education. Okay, I think I might be winding down or, like, running out of podcast steam. Is there anything you would like to say to the people of the podcast?</p><p><strong>LAURA</strong></p><p>If you're still listening, you're kind of weird.</p><p><strong>AARON</strong></p><p>Yeah, join the club. It's a weird party. Yeah, for sure. I was going to say, do you want to show your links? But I feel like that's such a cringe. Okay. Actually, I'm not going to allow you to say your Twitter handle if you do that. I'm going to cut it out or anything like that.</p><p><strong>LAURA</strong></p><p>If anybody is listening to this, they probably already follow me.</p><p><strong>AARON</strong></p><p>Yeah. How many views do you think this is going to get? Conditional on it being posted, which is like, very probable.</p><p><strong>LAURA</strong></p><p>Forecasting is not my forte. Except for about Prop 12.</p><p><strong>AARON</strong></p><p>I have absolutely no idea. I would rather you be good at Prop 12 forecasting than this particular question. Yeah, I'll go with like, 14.</p><p><strong>LAURA</strong></p><p>Okay. 
I was going to say ten. What's your confidence interval?</p><p><strong>AARON</strong></p><p>Because we must provide confidence intervals zero to infinity.</p><p><strong>LAURA</strong></p><p>Zero. Well, I'm going to at least min max.</p><p><strong>AARON</strong></p><p>What percentage confidence interval? 80%.</p><p><strong>LAURA</strong></p><p>Okay. No, I'm going to do a 90% confidence interval that is between three and 20.</p><p><strong>AARON</strong></p><p>I feel like there's definitely a right tail. You never know. There's definitely a right tail. 5% chance. Yeah, three sounds like a good lower bound. Like 5% chance of at least, I don't know, 10,000. There's a 5% chance of no, 10,000 is a little high. Maybe like 2000 or something. But not like what was yours? 20.</p><p><strong>LAURA</strong></p><p>Pessimistic.</p><p><strong>AARON</strong></p><p>There's definitely a 5% chance of at least 21.</p><p><strong>LAURA</strong></p><p>Listen, okay, I'll revise.</p><p><strong>AARON</strong></p><p>I'll do like 2000.</p><p><strong>LAURA</strong></p><p>It's a good lower bound because it's close to zero and we can't go negative.</p><p><strong>AARON</strong></p><p>You never know.</p><p><strong>LAURA</strong></p><p>Is a log normal distribution. Right? So yeah, three to 200.</p><p><strong>AARON</strong></p><p>Okay, I'll go three to 2000. Okay. Thank you for being on the podcast.</p>]]></content:encoded></item><item><title><![CDATA[The answer to the sleeping beauty problem is 1/2]]></title><description><![CDATA[Too many perpetually-almost-done drafts. I&#8217;m entering my &#8216;just post stuff even if it kinda sucks&#8217; arc.]]></description><link>https://www.aaronbergman.net/p/the-answer-to-the-sleeping-beauty</link><guid isPermaLink="false">https://www.aaronbergman.net/p/the-answer-to-the-sleeping-beauty</guid><dc:creator><![CDATA[Aaron Bergman]]></dc:creator><pubDate>Sun, 04 Jun 2023 00:37:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ee1c14f-7b08-4d57-b5c1-df647897c464_1230x803.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Note</strong>: skip down to the &#8220;The halfers are right&#8221; section if you&#8217;re familiar with the problem and usual positions. Also see the subtitle.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/subscribe?"><span>Subscribe now</span></a></p><h1>Problem</h1><p>According to <a href="https://en.wikipedia.org/wiki/Sleeping_Beauty_problem">Wikipedia</a>, the <em>Sleeping Beauty problem</em> is a &#8220;puzzle in decision theory&#8221; that goes like this:</p><blockquote><p>Sleeping Beauty [an &#8220;ideally rational epistemic agent&#8221;] volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. 
A fair coin will be tossed to determine which experimental procedure to undertake:</p><ul><li><p><strong>If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.</strong></p></li><li><p><strong>If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.</strong></p></li></ul><p>In either case, she will be awakened on Wednesday without interview and the experiment ends.</p><p>Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: <strong>"What is your credence now for the proposition that the coin landed heads?"</strong></p></blockquote><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!NLYT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ee1c14f-7b08-4d57-b5c1-df647897c464_1230x803.jpeg" width="1230" height="803" alt="Diagram shows that when a coin flip leads to heads, Sleeping Beauty will be awakened on Monday. If tails, she must wake up on Monday and Tuesday."><figcaption class="image-caption">Check out the <a href="https://www.scientificamerican.com/article/why-the-sleeping-beauty-problem-is-keeping-mathematicians-awake/">Scientific American article</a> this came from</figcaption></figure></div><h1>(Purported) solutions</h1><p>There are basically two competing claims about Sleeping Beauty&#8217;s subjective probability (recall, as an &#8216;ideally rational agent&#8217;) in the given scenario. Once again, I&#8217;ll plagiarize some pseudonymous Wikipedia editors:</p><blockquote><h2>Thirder position</h2><p>The thirder position argues that the probability of heads is 1/3. Adam Elga argued for this position originally as follows: </p><p>Suppose Sleeping Beauty is told and she comes to fully believe that the coin landed tails. By even a highly restricted <a href="https://en.wikipedia.org/wiki/Principle_of_indifference">principle of indifference</a>, given that the coin lands tails, her credence that it is Monday should equal her credence that it is Tuesday, since being in one situation would be subjectively indistinguishable from the other. 
In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;P(\\text{Tails and Tues}) = P(\\text{Tails and Mon})&quot;,&quot;id&quot;:&quot;XIKQTKXNVT&quot;}" data-component-name="LatexBlockToDOM"></div><p>Suppose now that Sleeping Beauty is told upon awakening and comes to fully believe that it is Monday. Guided by the objective chance of heads landing being equal to the chance of tails landing, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus</p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;P(\\text{Tails and Tues}) = P(\\text{Tails and Mon})=P(\\text{Heads and Mon}) &quot;,&quot;id&quot;:&quot;FDPPIHLSBE&quot;}" data-component-name="LatexBlockToDOM"></div><p>Since these three outcomes are exhaustive and exclusive for one trial (and thus their probabilities must add to 1), the probability of each is then 1/3 by the previous two steps in the argument: </p><div class="latex-rendered" data-attrs="{&quot;persistentExpression&quot;:&quot;P(\\text{Tails and Tues}) = P(\\text{Tails and Mon})=P(\\text{Heads and Mon}) =1/3&quot;,&quot;id&quot;:&quot;HXEUJWXRCX&quot;}" data-component-name="LatexBlockToDOM"></div><h2>Halfer position</h2><p><a href="https://en.wikipedia.org/wiki/David_Kellogg_Lewis">David Lewis</a> responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2&#8230;</p><p>&#8220;Sleeping Beauty receives no new non-self-locating information throughout the experiment because she is told the details of the experiment. Since her credence before the experiment is <em>P(Heads) = 1/2</em>, she ought to continue to have a credence of <em>P(Heads) = 1/2</em> since she gains no new relevant evidence when she wakes up during the experiment. This directly contradicts one of the thirder's premises, since it means <em>P(Tails | Monday) = 1/3</em> and <em>P(Heads | Monday) = 2/3</em>.&#8221;</p></blockquote><h2>Maybe we should care?</h2><p>Apparently this might somehow matter for philosophical <a href="https://en.wikipedia.org/wiki/Anthropic_principle#Character_of_anthropic_reasoning">anthropics</a>, which in turn <a href="https://forum.effectivealtruism.org/topics/anthropics">might somehow matter</a> for the fate of humanity.</p><p>Again, Wikipedia: &#8220;credence about what precedes awakenings is a core question in connection with the <a href="https://en.wikipedia.org/wiki/Anthropic_principle">anthropic principle</a>.&#8221;</p><div><hr></div><h1>The halfers are right</h1><p>Argument:</p><ol><li><p>By assumption, P(Heads) = P(Tails) = 1/2, so Sleeping Beauty would answer &#8220;1/2&#8221; before being put to sleep for the first time.</p></li><li><p>Sleeping Beauty gains no new information upon waking.</p></li><li><p>Therefore, she should not change her answer upon waking.</p></li></ol><p>Point 2<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> is where the dispute lies and so is worth defending more fully.</p><h2>Claim: waking is not evidence that the coin landed tails</h2><p>Let us examine the probabilities at each stage of events in this setup. Initially, the probability <em>p</em> that the coin lands heads is 1/2 before it is flipped. At this point, Sleeping Beauty would also say p equals one-half. For this to change, some new information must be introduced.</p>
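<p>To spell out the bookkeeping behind point 2 (a sketch; the event label <em>W</em> is mine, not part of the original argument): let <em>W</em> be the event &#8220;Sleeping Beauty is awakened and interviewed at least once during the experiment.&#8221; The protocol guarantees P(W | Heads) = P(W | Tails) = 1, so P(Heads | W) = P(W | Heads) &#215; P(Heads) / P(W) = (1 &#215; 1/2) / 1 = 1/2. On this accounting, the bare fact of finding herself awake cannot move her credence anywhere.</p>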
<p>Before she goes to sleep for the first time, has anything occurred between when the coin landed (out of her sight) and her falling asleep? The answer is no; everything remains symmetrical with a probability of one-half. Since all subsequent events are deterministic, Sleeping Beauty can devise an algorithmic plan based on what happens next.</p><p>However, for the probability to shift from one-half to any other value, like one-third (as thirders claim) or even zero, Sleeping Beauty would have to acknowledge that upon waking up &#8211; despite knowing she cannot differentiate between worlds &#8211; she will instantly believe that heads now has a lower likelihood than previously thought.</p><h4>An intuition pump</h4><p>To further illustrate where thirders err in their reasoning, consider an alternative scenario: instead of being woken up twice if tails, imagine it happening a billion times or even countably infinite times. In such cases, p should equal zero or an infinitesimally small number according to the same reasoning that implies p=1/3.</p><p>Now place yourself in Sleeping Beauty's shoes as you awaken; do you really feel 99.999% sure that the coin landed tails? I certainly don&#8217;t, in a sense that seems intuitively much clearer than introspecting on the 1/2 vs 1/3 case.</p><p>Indeed, it seems implausible given that there remains a 50% chance that heads appeared and required waking on just one occasion &#8211; which, of course, might have occurred just a moment ago.</p><h4>So where does the thirder intuition come from?</h4><p>It&#8217;s hard to say, but I think a case in which p=1/3 really <em>does</em> hold up gestures towards the answer:</p><p>Suppose you&#8217;re one of the experimenters, and divide the days of Monday and Tuesday into half-hour chunks. You then randomly select one, and find out that during this block Sleeping Beauty happened to be woken by your colleague. 
</p><p>Since this time you <em>could</em> have observed otherwise (unlike Sleeping Beauty in the thought experiment discussed), you&#8217;d be correct to conclude that <em>P(Tails)=2*P(Heads)</em>, which implies in this case P(Heads)=1/3.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!EhLG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd92bd311-ee27-4e32-b6b2-179a869fff9e_972x2172.png" width="421" alt=""><figcaption class="image-caption">Thanks, chatGPT plugin &#128077;</figcaption></figure></div>
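<p>For anyone who wants to see the two numbers fall out of a simulation, here is a minimal Monte Carlo sketch (mine, not part of the original argument; the half-hour blocks and the assumption that each interview fills exactly one block are just for concreteness). The coin lands heads in about half of the <em>experiments</em>, heads accounts for about a third of the <em>awakenings</em>, and an outside observer who happens to catch an interview in a randomly chosen block should indeed say the chance of heads is about 1/3.</p><pre><code>import random

random.seed(0)
N = 100_000            # simulated runs of the experiment
BLOCKS_PER_DAY = 48    # half-hour blocks per day (assumption for the observer scenario)

heads_flips = 0        # experiments in which the coin landed heads
heads_awakenings = 0   # awakenings that happen in heads-experiments
total_awakenings = 0
hits = 0               # times the observer's random block contains an interview
hits_heads = 0         # ...and the coin was heads

for _ in range(N):
    heads = random.random() &lt; 0.5
    heads_flips += heads

    # Beauty is interviewed once if heads (Monday only), twice if tails (Monday and Tuesday).
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_awakenings += awakenings

    # Observer scenario: pick one half-hour block uniformly over Monday and Tuesday and
    # check whether an interview is under way (assume it fills the first block of its day).
    day = random.randrange(2)               # 0 = Monday, 1 = Tuesday
    block = random.randrange(BLOCKS_PER_DAY)
    interview_now = block == 0 and (day == 0 or not heads)
    if interview_now:
        hits += 1
        hits_heads += heads

print("Per experiment, fraction heads:         ", heads_flips / N)                      # ~1/2
print("Per awakening, fraction under heads:    ", heads_awakenings / total_awakenings)  # ~1/3
print("Observer who caught an interview, heads:", hits_heads / hits)                    # ~1/3
</code></pre><p>The simulation does not settle the dispute; it just makes the reference classes explicit. &#8220;1/2&#8221; answers a question about coin flips, &#8220;1/3&#8221; answers a question about awakenings (or about an outside observer&#8217;s random sample), and on the halfer reading only the former is the question Sleeping Beauty is actually being asked.</p>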
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Thanks, chatGPT plugin &#128077;</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.aaronbergman.net/p/the-answer-to-the-sleeping-beauty/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.aaronbergman.net/p/the-answer-to-the-sleeping-beauty/comments"><span>Leave a comment</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Really, this subtitle is just a more forthright reformulation of &#8220;Sleeping Beauty gains no new information upon waking.&#8221;</p><p></p></div></div>]]></content:encoded></item></channel></rss>