Aaron's Blog
Pigeon Hour

#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more

Note: skip to minute 4 if you’re already familiar with The EA Archive or would just rather not listen to my spiel

Summary (by Claude.ai)

  • This informal podcast covers a wide-ranging conversation between two speakers in the effective altruism (EA) community. They have a similar background, coming to EA from interests in philosophy, rationality, and reducing suffering. The main topic explored is reducing s-risks, or risks of extreme suffering in the future.

  • Winston works for the Center for Reducing Suffering (CRS), focused on spreading concern for suffering, prioritizing interventions, and specifically reducing s-risks. He outlines CRS's focus on research and writing to build a moral philosophy foundation for reducing suffering. Aaron is skeptical s-risk reduction is tractable currently, seeing the research as abstract without a clear theory of change.

  • They discuss how CRS and a similar group CLR are trying to influence AI alignment and digital sentience to reduce potential future s-risks. But Aaron worries about identifying and affecting the "digital neural correlates of suffering." Winston responds these efforts aim to have a positive impact even if unlikely to succeed, and there are potential lock-in scenarios that could be influenced.

  • Aaron explains his hesitancy to donate based on tractability concerns. He outlines his EA independent research, which includes an archive project around nuclear war. More broadly, the two find they largely agree ethically, including on suffering-focused ethics and "lexical negative utilitarianism within total utilitarianism."

  • Some disagreements arise around the nature of consciousness, with Aaron arguing rejecting qualia implies nihilism while Winston disagrees. They also diverge on moral realism, with Aaron defending it and Winston leaning anti-realist.

  • As they wrap up the wide-ranging conversation, they joke about convincing each other and make predictions on podcast listens. They thank each other for the thought-provoking discussion, aligned in ethics but with some disagreements on consciousness and metaethics. The conversation provides an insider perspective on efforts to reduce s-risks through research and outreach.

Transcript

Note: created for free by Assembly AI; very imperfect

AARON

Hi, this is Aaron, and before the main part of the podcast, I'm going to read out an EA Forum post I put out about a week ago, outlining a project I've been working on called The EA Archive. If you're already familiar with the post that I'm talking about, or would just rather skip ahead to the main part of the podcast, please go to four minutes in. The EA Archive is a project to preserve resources related to effective altruism in case of a sub-existential catastrophe such as nuclear war. Its more specific downstream motivating aim is to increase the likelihood that a movement akin to EA (i.e., one that may go by a different name and be essentially discontinuous with the current movement, but share the broad goal of using evidence and reason to do good) survives, re-emerges, and/or flourishes without having to reinvent the wheel, so to speak. It is a work in progress, and some of the subfolders at the referenced Google Drive, which I link, are already slightly out of date. The theory of change is simple, if not very cheerful, to describe: if copies of this information exist in many places around the world, on devices owned by many different people, it is more likely that at least one copy will remain accessible after, say, a war that kills most of the world's population. Then I include a screenshot of basically the Google Drive folder, and as shown in the screenshot, there are three folders. The smallest one, "main content," contains HTML, PDF, and other static text-based files; it is by far the most important to download. If, for whatever reason, space isn't an issue and you'd like to download the larger folders too, that would be great. I will post a quick take, which is like a short EA Forum post, when there's been a major enough revision to warrant me asking people to download a new version. How you can help: one, download, and I give some links to download either the two-gigabyte version or all three folders, which works out to 51 GB. This project depends on people like you downloading and storing the archive on a computer or flash drive that you personally have physical access to, especially if you live in any of the following areas: one, Southeast Asia and the Pacific, especially New Zealand; two, South and Central Africa; three, Northern Europe, especially Iceland; four, Latin America from Mexico City south, especially Ecuador, Colombia, and Argentina; and finally, five, any very rural area anywhere. If you live in any of these areas, I would love to buy you a flash drive to make this less annoying and/or enable you to store copies in multiple locations, so please get in touch via the Google form (which I link), DM, or any other method. Two: suggest, submit, and provide feedback. Currently, the limiting factor on the archive's contents is my ability and willingness to identify relevant resources and then scrape or download them, i.e., not the cost or feasibility of storage. If you notice something ought to be in there that isn't, please use this Google form (again, which I link) to do any of the following: one, let me know what it is broadly, which is good; two, send me a list of URLs containing the info, which is better; three, send me a Google Drive link with the files you'd like added, which is best; and four, provide any general feedback or suggestions. I may have to be somewhat judicious about large video and audio files, but virtually any relevant and appropriate PDF or other text content should be fine.
And finally, the last way you can help, which would be great, is to share it. Send this post (again, which I'm linking in the podcast description) to friends, especially other EAs who do not regularly use or read the EA Forum. So without further ado, the actual main part of the podcast.

AARON

So you're at the Center for Reducing Suffering, is that right?

WINSTON

That is correct, yeah.

AARON

Okay, I got it right. But there's like one of two options.

WINSTON

Yeah, there are two s-risk orgs basically, and they sound really similar. The Center on Long-Term Risk is the other one.

AARON

Let's skip that. I feel like anybody who's actually listening to this is going to have heard of s-risks. If not, you can just go to the EA Forum or whatever and type in "s-risk" and you'll get like a page or whatever. Do you think it's okay to skip to high-level stuff?

WINSTON

Yeah, I think that sounds good. I do think a lot of people hear the same kind of abstract stuff over and over and so yeah, it'd be good to get deeper into it.

AARON

Okay, convince me. I think we come from a pretty similar ethical background or normative ethical standpoint. I don't know if you consider yourself like a full-on negative utilitarian. I don't quite... well, actually, yeah, I guess I should ask you: do you, or is it more just like a general suffering-focused perspective?

WINSTON

Yeah, I also don't consider myself a full negative utilitarian. I think I used to be more so, but yeah, I'm still overall more suffering-focused than probably, like, the average...

AARON

EA, or... yeah. Yeah, it's literally like my spiel, like I always say. Also, as I was talking on Twitter a couple of days ago, I was thinking: I don't actually know of any human being who's actually a negative utilitarian, like a full-on one who thinks that literally the only thing that matters is suffering, or that positive experiences or whatever don't count for anything whatsoever. Yes. So convince me that there's a theory of change for reducing s-risks, at least given the current state of the world, I guess, you know what I mean? It seems to me there's really high-level abstract research going on. And honestly, I haven't actually looked into what CRS does, so maybe I'm strawmanning or something. I remember I applied for something at the Center on Long-Term Risk a while ago, and it seemed like all their research was really cool, really important, but not the kind of thing where there's a theory of change if you think that transformative AI is coming in the next decade; like, maybe in the next century, but not the next decade. So do you think there's a theory of change for any of this?

WINSTON

Yeah, I do think it's hard and often abstract, but I certainly think there are some very real, concrete plans. So part of it depends on, like you mentioned, whether you think transformative AI is coming soon. Part of it is how much you think there's going to be some lock-in scenario soon where most of our impact comes from influencing that, and AI is the big example there. And that is something that CLR, the Center on Long-Term Risk, is doing more of internally. So there might be some work... I mean, there is some work on looking at which training environments lead to increased risk of conflict between AIs, or maybe which types of alignment work are more likely to backfire in certain ways. Like, you might get a near-miss scenario where, if you get really close to alignment but not quite all the way, that actually increases the risk of lots of suffering compared to just not having alignment at all. Yeah, and you'd maybe have to talk to CLR more about that, because I also don't know as much about what they're doing internally, but I can talk about CRS. So then there's a different strategy, which is broad interventions, so less narrowly focused on a specific lock-in scenario. And the idea there is you can look at risk factors. Maybe you say it's just really hard to predict how an s-risk will happen; there could be lots of different s-risk scenarios, and any particular set of details is maybe unlikely to be predicted correctly. But you could look at general features of the world that you can affect, which can reduce s-risk across many different ways the future could play out. And so, yeah, this idea of risk factors, I guess that's used in medicine: a poor diet is not a poor health outcome in itself, but it's a risk factor for lots of other things, like depression and heart disease and all these things. For s-risks, it might be easier to focus on risk factors. And then one basic example is that maybe increasing society's concern for suffering is a way that you can reduce s-risk even if you don't know any of the details about how s-risks will play out. Future people would then be in a better position: they'll be more motivated to reduce suffering, and they'll know more about the specifics of what to implement. Maybe this could also be related to the AI stuff as well. If you think there's going to be a big crunch time or something, where you can have more impact and maybe it's also more clear how to have impact, it's better if more of those people are motivated to reduce suffering. So that could be a way to sort of punt things to the future a little bit.

AARON

Yeah, I'm actually glad to hear that, especially about the particular AI stuff, like the more technical things, like which training environments are likely to lead to, I guess, s-risk-prone misalignment or something like that. Because that's, like, a little bit... I don't know if I've actually written it anywhere, so I'm not allowed to call it a hobby horse, but this is something I've been thinking about, and it's like suffering-focused alignment research, or at least s-risk-aware alignment research. And it's not something that I really hear very much in the discourse, which is just my podcast feed; the discourse, that's what the discourse is to me, of course. Yeah.

WINSTON

I do think it's like neglected. It's often kind of like forgotten about a little bit.

AARON

Yeah. Is that something that CRS... also, the audio cut out for me for a few seconds a little while ago, so I might have missed something. But is that something that you think CLR is doing more of, or is that something that CRS has also been researching?

WINSTON

Yeah, CLR does more on AI-specific stuff. CRS generally does more broad stuff, like value spreading, moral philosophy, like improving political...

AARON

Are.

WINSTON

Both care about both, I think to some extent.

AARON

What is CRS like? Maybe I should probably have checked into this. I'm going to Google it; I have my split screen open. But what is CRS up to these days, the Center for Reducing Suffering?

WINSTON

Yeah, no, somewhat. I'm interested in all these things. I know some of the other AI stuff as well, but yeah, at CRS there's a lot of just, like, writing: doing research and writing books and things like this. So it's mostly just a research organization, and there's also outreach, and sometimes I give talks on s-risks, things like this.

AARON

Yeah.

WINSTON

It's very broad. There's basically spreading suffering-focused ethics, doing cause prioritization on how to best reduce suffering, and then specifically looking at ways to reduce s-risks; those are like the three main pillars, I guess.

AARON

Cool. Yeah. I'm checking out the books page right now. I see Avoiding the Worst, the suffering-focused books... three by Magnus Vinding and one by... Actually, I think I kind of failed: I tried to make an audiobook for Avoiding the Worst a while ago. I think it was the only audio version for a while, but it wasn't very good. And then eventually they came in and figured out how to get actual audio.

WINSTON

Yeah, that was great to have that for a few months or something.

AARON

Honestly, it would have been quicker.

WINSTON

Thanks for doing that.

AARON

I think so. Like, oh well, next time I'll try to line my resources up better. Anyway, other people got to listen to...

WINSTON

It too, so I think it seemed pretty good.

AARON

Okay, cool. Yeah. So maybe what do you personally do at CRS? Or I guess, how else have you been involved in EA more generally?

WINSTON

Yeah, I kind of do a bunch of different stuff. At CRS, a lot of it has been just operations and things like this, and hiring and some managing, but yeah, also outreach and some research. It's been very broad. And I'm also, separately from CRS, interested in animal ethics and wild animal suffering and these types of things.

AARON

Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page.

WINSTON

Yeah. Is your disagreement mostly tractability, then? Maybe we should get into the disagreement.

AARON

Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. I haven't donated very much to anywhere, for money reasons, but insofar as I have, I have not donated to CLR or CRS, because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there are a lot of extremely smart people at both of these orgs or whatever, and clearly they've thought about this, and maybe the answer is that it's very general and the outcome is just so big in magnitude that anything kind...

WINSTON

Of that is part of it, I think. Yeah, part of it is like an expected value thing, and also it's just very neglected, so you want some people working on this, I think, at least, even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there are people at CRS and CLR talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places; you want to have some concrete recommendations, and obviously people have to also be willing to listen to you, but I think there is some work being done on that. And research is partially just a community-building thing as well: it's a credible signal that you're smart and have thought about this, and so it gives people reason to listen to you, and maybe that mostly pays off later on in the future.

AARON

Yeah, that all sounds reasonable. And I guess one thing is... there are definitely things... I mean, first of all, I haven't really stayed up to date on what's going on, so I've done zero research for this podcast episode, for example. Very responsible. And insofar as I know things about these orgs, it's just based on what's on their website at some given time. So insofar as there's outreach going on, not behind the scenes exactly, but just not in a super public way, or I guess you could call that behind the scenes, I just don't have reason to know about that. And I guess, yeah, I'm pretty comfortable, and I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody, with just saying that a very small chance of a very large magnitude just, like, checks out. You can just do expected value reasoning, and that's basically a correct way of thinking about ethics. But, and I don't know how much you know specifically or how much you want to reveal, if there was a particular alignment agenda that you, in a broad sense, like the suffering-focused research community, thought was particularly promising and tractable relative to other generic alignment recommendations, and you were doing research on that and trying to push it into the alignment mainstream, which is not very mainstream, with the hope that it then jumps into the AI mainstream, even if that's kind of a long chain of events, I think I would be a lot more enthusiastic about that type of agenda, because it feels like there's a particular story you're telling where it cashes out in the end. You know what I mean?

WINSTON

Yeah, I'm not the expert on this stuff, but I do think... I think there are some things about influencing alignment and powerful AI, for sure. Maybe not like a full-on "this is our alignment proposal and it also handles s-risks." But there are some things we could ask of AI labs that are already building, like, AGI; we could say, can you also implement these sorts of safeguards so that if you fail alignment, you fail sort of gracefully and don't cause lots of suffering.

AARON

Right?

WINSTON

Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, you can also get the issue that malevolent actors take control of the AI, and then what they want also causes lots of suffering, which that type of alignment wouldn't prevent. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of like, it'll just figure out our values and do the right thing. So, yeah, there are some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.

AARON

No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on; that sounds like it's at least partially true. Okay. I was talking to somebody yesterday, and I mentioned doing this interview, and basically they said to ask you about the degree to which there's some sort of effort to basically keep s-risks out of the EA mainstream. Do you want to talk about that, comment on it? And we can also think later about whether we want to keep it in or not.

WINSTON

You mean like from non-s-risk...

AARON

EAs? Yeah. And then I think they used the word... this person used the word "conspiracy." I have no idea how facetious that was, whether it was a decentralized conspiracy or, like, a legit conspiracy, you know what I mean? So is there an anti-s-risk conspiracy, yes or no?

WINSTON

The deep state controlling everything.

AARON

The deep EA state.

WINSTON

Yeah, actually, I'm not sure to what extent there's, like I tend to have a less cynical view on it, I.

AARON

Guess.

WINSTON

And I think maybe EAs prioritize s-risks less than they otherwise should, but potentially due to just, like, biases and maybe, like, founder effects of the movement. It's not nice to think about extreme suffering all the time, and you could mention some potential biases, but yeah, it's hard to say. I can't say anyone has personally, actively excluded me because of the s-risk thing explicitly or something like that, but maybe something is going on behind the scenes. But yeah, I guess I tend to think it's not so bad.

AARON

Okay.

WINSTON

And I think also there's been a big push in the suffering-focused and s-risk communities to find common ground and find cooperative compromises and gains from trade. And I think this has probably been just good for everyone, and good for other EAs' perception of s-risk reducers as well.

AARON

Yeah, that's something I want to highlight. I feel like I've been casting a sort of oblique negative light on the suffering-focused community. But yeah, the gains-from-trade thing and cooperation is something that I did not expect to find as much of diving in, and it actually makes total sense, right? It's the kind of thing where once you read about it, it's like, oh yeah, of course: once you incorporate how people are actually going to treat the movement, it makes sense to talk a lot about gains from trade. Gains from trade... isn't cooperation a better term to use? But I feel like most social movements, I guess even subparts of EA that I've encountered, just haven't fully modeled themselves as part of an altruistic community to the extent that the suffering-focused community has. That's something I've been, I guess, impressed with, and I also just think it's object-level good.

WINSTON

Yeah, I think there are a lot of benefits. And it's not just reputation and gains from trade: if you have moral uncertainty, for example, then that's just another reason to not go all in on what you currently think is best.

AARON

Yeah, for sure.

WINSTON

Do you know about acausal stuff? That's another thing some s-risk people are kind of into.

AARON

I'm into it, I think I kind of buy it. I don't know how it relates to s-risks specifically, though. So how does it...?

WINSTON

Well, there's one idea, it's called evidential cooperation in large worlds. It used to be called multiverse-wide superrationality; it's a lot of syllables. But the idea is: acausal trade is typically like you're simulating some other trade partner, right, or predicting them probabilistically. With this, the idea is that if the universe is really big or potentially infinite, there are maybe near-copies of you, or, this depends on your decision theory of course, but there might be other agents whose decision-making is correlated with yours. So you doing something just gives you evidence that they're also going to do something similar. So if you decide to cooperate and be nice to other value systems, that's evidence that other people with other value systems will be nice to you, and so you can potentially get some acausal gains from trade as well. Obviously somewhat speculative. And this also maybe runs into the issue that if someone has different values from you, they might not be similar enough to be correlated, so your decision-making isn't correlated enough to be able to do these compromises. But yeah, that's another thing you could get into, and maybe I should have explained the acausal stuff more first, but it's not that important.

AARON

I mean, we can talk about that. Should we talk about acausal trade?

WINSTON

We don't have to, but I just thought maybe it's confusing if I just threw that in there, but I think it's also fine.

AARON

Okay, maybe do you want to give, like, a 20-second, shortish version?

WINSTON

Okay, imagine you're in a prisoner's dilemma with a copy of yourself. You probably shouldn't defect, because they'll defect back on you. So, I mean, that's the kind of evidential reasoning; that's kind of the intuition for acausal interactions. You can't just model it like, oh, as long as I defect, no matter what they do, that's the better option; that would be the causal way to look at things. You also have to consider that your decision-making might be correlated with your cooperation partner's, and so that can affect your decisions. Obviously it can get more complicated than that, but that's the basic idea.
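Note: here is a minimal sketch of the intuition Winston describes above, not anything from the episode itself. The payoff numbers, the correlation parameter, and the function names are made up for illustration; it just contrasts "causal" dominance reasoning with evidential reasoning when your twin is assumed to be likely to mirror your choice.

```python
# Illustrative one-shot prisoner's dilemma payoffs (to "you"); numbers are made up.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}
MOVES = ("cooperate", "defect")

def causal_choice(p_other_cooperates: float) -> str:
    """Causal reasoning: treat the twin's move as fixed and independent of yours."""
    def ev(mine: str) -> float:
        return (p_other_cooperates * PAYOFF[(mine, "cooperate")]
                + (1 - p_other_cooperates) * PAYOFF[(mine, "defect")])
    return max(MOVES, key=ev)  # defecting dominates, so this returns "defect" for any probability

def evidential_choice(p_mirror: float) -> str:
    """Evidential reasoning: your move is evidence about a correlated (near-)copy's move.
    p_mirror is the assumed probability that the copy plays the same move you do."""
    def ev(mine: str) -> float:
        other = "defect" if mine == "cooperate" else "cooperate"
        return p_mirror * PAYOFF[(mine, mine)] + (1 - p_mirror) * PAYOFF[(mine, other)]
    return max(MOVES, key=ev)

print(causal_choice(0.5))       # -> "defect"
print(evidential_choice(0.95))  # -> "cooperate", once the assumed correlation is high enough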

AARON

Yeah, I've heard the term, but I learned about it substantively from Joe Carlsmith's 80K podcast episode not that long ago. So I guess I'll mention that as where to get up to speed; insofar as I'm up to speed, that's how to get up to speed.

WINSTON

Yeah, he has like a long post on this. And it's interesting; he talks about how you can kind of affect the past, in one way of thinking about it, if you're correlated with... because these correlated agents can also be in different time periods. Obviously this is all just more speculative. It's just interesting, but I don't want to make it sound like this is the main s-risk thing or anything like that.

AARON

Yeah. Okay. So, I don't know, do you want to branch out a little bit? How long has it even been? I don't know what else there is. I feel like I don't even know exactly what questions to ask, which is totally my fault, you know what I mean? So, are there any cutting-edge s-risk directions or whatever that I should know about?

WINSTON

Probably, but I don't know. Yeah, also happy to branch out, but I guess there's a lot one could say. So, other objections might be... this whole s-risk focus partially relies on having a long-term focus, and obviously that's been talked about a lot in EA, and then on caring about reducing suffering, or having a focus on reducing suffering, so you could also talk about why one might have that view. I guess I'll just say there's also one more kind of premise that I think goes unnoticed more, which is a focus on worst-case outcomes. So instead of working on s-risks, if you were a suffering-focused longtermist, you could focus on just eradicating suffering entirely. The Hedonistic Imperative from David Pearce is an example of this, where he wants to use genetic engineering to make it so that people are just happy all the time; they have different levels of happiness, and there's a lot more detail on that. But that's a different focus than trying to prevent the very worst forms of suffering, which is what an s-risk is.

AARON

Yeah, I don't know. I feel like EA in general, this is like a big, high-level point, EA in general seems like there's a lot of focus on weak-ass criticisms like, oh, maybe future people don't matter. Like, shut up. Come on, man. Yes, they do. I'm not exactly doing that justice. And then there are esoteric, weird points that don't get noticed or whatever. So are there any esoteric, weird cruxes that are, like, the reason? I don't know. One thing is, I guess: how much do you think the answer to, or the nature of, consciousness matters to the degree to which s-risks are even a possibility? I guess they're always a conceptual possibility, but a physical possibility?

WINSTON

Yeah. I think if artificial sentience is possible or plausible, then that raises the stakes a lot and you can potentially see a lot more suffering in the world. But I think even without that, you can still have enough suffering for something to be considered an s-risk, and there's a non-negligible likelihood of that happening. So I wouldn't say working on s-risks, like, hinges on this, but at least the type of s-risk you work on and the way that you do it may depend on it. More and more people are talking about digital sentience, and maybe getting involved and pushing that discussion in a good direction could be a promising thing to do for s-risks.

AARON

Yeah. Do you have takes on digital sentience?

WINSTON

Well, I think I'm quite confused about consciousness still, but...

AARON

Good. Anybody who's not, I think, doesn't understand the problem.

WINSTON

It seems like a tough one, but I think that overall it's plausible. Lots of views of consciousness allow it, and it also just would be so important if it happened. So there's another sort of expected value thing, because I think you can have many more digital beings than biological beings: they're more energy- and space-efficient, and they could expand into space more easily. And they could be made to... it seems unlikely to me that evolution selected for the very worst forms of suffering you could create, so digital sentience could be made to experience much more intense forms of suffering. So I think for these reasons it's kind of just worth focusing on, because I think it's plausible enough. And there might just be a precautionary principle where you act as if they're sentient to avoid causing lots of harm, to avoid the things that have happened in the past with animals, I guess, and currently, where people don't care or don't think they can suffer. And I tend to think that the downside risk from accidentally saying that they're sentient when they're not is lower than the reverse: I think you can just get much more suffering from them actually being sentient and suffering while we just don't care or know, than the opportunity cost of accidentally giving them moral consideration when we shouldn't have. So I tend to err on the side of, like, we should be careful and act as if they're sentient.
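Note: one minimal way to write down the asymmetry Winston is gesturing at, added for illustration rather than taken from the episode. The symbols are assumptions of this sketch: p is the probability the systems are sentient, H the harm from ignoring beings that do suffer, and C the cost of extending consideration to systems that don't.

```latex
% p = probability of sentience, H = harm if actually-suffering systems are ignored,
% C = cost of extending moral consideration to non-sentient systems (all illustrative).
\[
  \underbrace{p\,H}_{\text{expected harm of treating them as non-sentient}}
  \;>\;
  \underbrace{(1-p)\,C}_{\text{expected cost of treating them as sentient}}
  \quad\Longleftrightarrow\quad
  \frac{p}{1-p} \;>\; \frac{C}{H}.
\]
```

If H is much larger than C, as Winston argues, the inequality holds even for a fairly small p, which is the precautionary conclusion.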

AARON

Yeah, I think my... not objection, like, I literally agree with everything you just said, I'm pretty sure, and definitely agree with the broad strokes. My concern is that we have no idea what the digital neural correlates of suffering... as far as I know, we have no idea what the digital neural correlates of suffering would look like or are. And so it seems especially intractable. If you just take two computer programs, I feel like the naive thing, where it's like, oh, if you're not giving an ML model reward, then that's the case in which the thing might be suffering, just doesn't check out under inspection, you know what I mean? I feel like we have no grasp whatsoever on what digital sub-processes would correspond to suffering. I don't know if you agree with that.

WINSTON

I agree, I have no grasp of it; at least, maybe someone does. Yeah, I'm not sure I can say that much. It seems hard. I also have the feeling it's a lot more than that... I think you can at least get evidence about this type of thing, still. First off, you could look at how evolution seems to have selected for suffering: it was maybe to motivate moving away from stimuli that are bad for genetic fitness, and to assist in learning or something like this. So you can try to look at analogous situations with artificial sentience and see where suffering just might be useful. And yeah, maybe you could also look at some similarities between artificial brains, in some sense, and human brains when they're suffering. But potentially, I think likely, artificial sentience would just look so different that you couldn't really do that easily.

AARON

Yeah, I feel like all these things that I've been bringing up... maybe I'm just being irrational or whatever, but they sort of seem to stack on top of one another or something. And so, I don't know, I have a maybe unjustified intuitive skepticism, not of the importance, but of the tractability, as I've said a bunch of times. And maybe the answer is just, like, it's a big number at the end, or you're multiplying all this by a really big number, and I guess I kind of buy that too. I don't even know what to...

WINSTON

I also worry about Pascal's Mugging, and I do think it's fair to worry about tractability when there's a bunch of things adding up like this. But I also think that s-risks are disjunctive, so there are lots of different ways s-risks could happen. Like I said earlier, we're kind of talking about specific stories of how an s-risk could play out, but the details of predicting the future are just hard. So I think you can still say s-risks are likely and influencing s-risks is possible, even if you think any specific s-risk we can talk about is kind of unlikely; mostly it might be unknown unknowns. And I also think the other big thing I should say is the lock-in stuff that I mentioned earlier too. So influencing AI might actually not be that intractable, and space colonization might be another lock-in in some sense: once we've colonized space to a large degree, it would be hard to coordinate because of the huge distances between different civilizations, and so getting things right before that point seems important. And there are a couple of other lock-ins or attractor states you could imagine as well that you could try to influence.

AARON

Yeah. Okay, cool. Do you want to branch out a little bit? Maybe we can come back to s-risks, or... maybe. We'll see. Okay.

WINSTON

Are you convinced then?

AARON

Am I convinced? About what?

WINSTON

Exactly. Are you donating to CRS now?

AARON

I guess I actually don't know what your funding situation is, so that's one thing I would want to look at. Actually, I probably will do this, so I would want to, and hopefully will, look at the more specific differences between CLR and CRS. Also, my current best guess is Rethink Priorities in terms of the best utils-per-dollar charity, and a lot of this comes from the fact that I just posted a Manifold market that asked, under my values, where should I donate? And they're at 36% or something. Okay, I would encourage people to do this; I feel like I should not be the only one doing it. It doesn't even matter much if I do it, because I don't have, at least at the moment, a very large amount of money to donate, whereas some people, at least in relative terms, do.

WINSTON

Yeah, sorry, I was also being tongue in cheek.

AARON

But no, it's a good question, because it's easy to just nod along and say, like, oh yeah, I agree with everything you just said, but at the end of the day not actually change my behavior. You know what I mean? So the answer is, like, I'm really not sure.

WINSTON

Yeah, there can be like an opposite thing, where often you can have kind of good reasons, but it's just hard to say them explicitly in the moment and stuff. So I think forcing you to commit is not totally reasonable.

AARON

Don't worry, I don't think you have the ability. I think there are people in the EA-sphere who, when they say any proposition, think they really need to follow through in order to uphold their credibility, or else they're going to be considered a liar or whatever, and nothing they ever say will be considered legitimate. And I think that's an important consideration. But also, if I say some bullshit on a podcast and then don't confirm it, I don't think you have the ability to make me commit to anything, in fact, via this computer connection, or via this WiFi connection.

WINSTON

Yeah, that sounds...

AARON

Mostly joking, I guess partially joking, but saying it in a joking way or something.

WINSTON

I know what you mean. I think that's a good attitude to have.

AARON

Yeah. What's your story? What's your deal? I don't know... I guess intellectually, how did you get into EA stuff? What's your life story?

WINSTON

I guess I got into it through a few different routes simultaneously. It's kind of hard to go back and look at things objectively and know how I ended up here. But I was into animal ethics for a long time, and I was just also into philosophy and kind of into rationalism, and those sort of pushed in a similar direction. And I took a philosophy and ethics class in college and heard Peter Singer's shallow pond analogy there, and that was very convincing to me at the time, so it always kind of stuck with me, but it didn't change my actions that much until later. And yeah, I guess all these things sort of added up to working on prioritizing factory farming and wild animal suffering later. And again, not doing tons about this, but just kind of becoming convinced and thinking about it a lot and thinking about what I should do. And then s-risks came after that, I guess, after I got more into EA and heard about longtermism, and so I added that component. And that's the rough overview.

AARON

Okay. We have a very similar story, although my intro to ethics class was pretty shitty, but I had a similar situation going on. So what else do you think we disagree about, if anything, besides the very niche topics we've talked about?

WINSTON

Yeah, I don't know. That's a good question. I guess I'm also curious what you do. Typically, what is your priority? You say Rethink, and... I don't know, do you still go to Georgetown?

AARON

I guess this is an extremely legitimate question. So, no, I don't go to Georgetown anymore or something; I graduated about a year ago.

WINSTON

Okay.

AARON

And then I got a grant from the Long-Term Future Fund to do "independent research," and I always say that with air quotes because there has been some of that, but I've also done a bunch of miscellaneous projects, some of which have been supportive of other EA projects, where maybe the best descriptive phrase isn't "independent research." So I helped with an outreach project called Non-Trivial, did some data analysis for CEA, and that's been going on for a year or whatever. So I'm trying to complete some projects. Actually, just yesterday I posted on the EA Forum about the EA Archive; I guess I'll give a shout-out to that, or I'll encourage people to look at that post, and I'll probably put it in the show description. Basically, I collected a bunch of... so, little tangent, but about a year ago Putin invaded Ukraine, and people were freaking out about nuclear war, and so I did some research and basically became, not convinced, but thought there was a pretty decent chance that in the case of a realistic nuclear war scenario, a lot of information on the Internet would just disappear, because it's physically stored in two to four data centers in NATO countries, and those places would probably be targets, et cetera. So basically I collected a bunch of EA and EA-related information and put it in a Google Drive folder, and I'm just asking people, especially people who don't live in places like Washington, DC, where I live (I think Iceland is a great place, New Zealand, there are a couple of other places, like in Colombia), to download it, like they can be the designated... That's my most recent thing, I guess. I have no life plan right now; I am applying to jobs. In terms of intellectually, I guess, definitely suffering-focused. Okay, so I have a kind of pretentious phrase that I use, which is that I would say I am a suffering-leaning total utilitarian, in that I think total utilitarianism actually doesn't imply some of the things that other people think it implies. In particular, I think that total utilitarianism doesn't imply offsetability: so you can think that there's sufficiently bad suffering, even under total utilitarianism, such that there's no amount of well-being you can create that would justify the creation of that bad suffering. In terms of prioritization, I think I definitely buy into longtermism, maybe not all the connotations that people give it, but just the formal description of it: the long-term future matters a lot, probably overwhelmingly, as the dominant source of moral value. I think I'm definitely more animal-welfare-pilled than other longtermists or whatever. Yeah, I think I'm shrimp-welfare-pilled; I think that's, like, my second... that's also one of the charities on my Manifold market. So that's my five-minute spiel.

WINSTON

Nice. Yeah. I think we align on a lot of this stuff. The total utilitarian thing is interesting, because I think these are called lexical views sometimes. Is this what you're talking about, where you're a classical utilitarian, and then at a certain point of suffering, it just can't be outweighed?

AARON

Yes. And then, above and beyond that, I think I have a very niche view, which is that, in particular, lexicality does not conflict with total utilitarianism. I think the general understanding is that it does, and I want to claim that it doesn't, and I have these philosophical reasons. This is actually the first thing that I worked on this year or whatever; it was part of a longer post that I wrote with some other people on the EA Forum, but I've been meaning to clean up my part and emphasize it more or something like that. Do you have takes on this?

WINSTON

Yeah, I'm curious at least what do you mean by it doesn't conflict? Like just more happy people would still be good under this? Is that what you mean?

AARON

Let me ask you: is it your understanding that total utilitarianism implies that any instantiation of suffering can be justified by some amount of well-being? Is that your understanding?

WINSTON

Well, I think that's the typical way people think about it, but yeah, I guess I don't think, technically... yeah, I do think you can have this lexical view, and it doesn't even have to be...

AARON

Maybe we just agree then.

WINSTON

Yeah, well, I guess one thing I would say is in expectation, you might still not ever want more beings to come into existence because there's some chance they have this lexically bad suffering. And you're saying that can't be outweighed, right?

AARON

Yeah, that's like an applied consideration, which is actually important, but not exactly what I was thinking of. So wait, maybe my claim actually isn't as niche or isn't as uncommon because it sounds like you might agree with it.

WINSTON

Actually, no, I think it is uncommon, but also I agree, unfortunately.

AARON

Okay, sweet. Sweet. Okay, cool. We can convert everybody.

WINSTON

There's one person at CRS... sorry, I keep cutting you off. My WiFi has kind of a delay.

AARON

Keep going.

WINSTON

But someone at CRS named Teo, his last name is hard to say, has a bunch of posts related to suffering-focused ethics, and he has some talking about population ethics. And he examines some views like this, where you can have lexical total utilitarianism, basically, so you can have a suffering-focused version of that. And you could also have one where you have lexical goods, where no amount of minor goods can add up to be more important than this really high good. I guess there's a lot of interesting stuff from that. They all seem to lead to having to bite some bullets.

AARON

Yeah, I actually haven't thought about that as much, but it sounds like a direct, or not direct, but a one-degree-separated implication of thinking the same thing on the negative side or whatever. And I guess part of my motivation for at least developing this view initially is that I feel like the arguments for total utilitarianism are just strong. And I feel like, at least for some understanding of it, for some general category of total utilitarianism, I think they're correct, and I think they're true. And then I also think sometimes people use the strong arguments to conclude, oh, total utilitarianism is true, and then they take that phrase and draw conclusions that aren't in fact justified, or something like that. But I'm speaking in pretty broad terms now; I guess it's hard to specify.

WINSTON

Yeah. I also don't know the details, but there are some impossibility theorems that have been worked out in population ethics that show you have to accept one of some counterintuitive conclusions. But they rely on... yeah, they rely on axioms that you could disagree with, and I think they typically don't consider lexical views.

AARON

Yeah, hopefully I'm going to talk to Daniel Faland, who I think we've actually chatted, and... I think he definitely thinks I'm wrong about this, but I think he understands the view. So hopefully I'll get a more critical perspective, so we can debate that or whatever. Damn, you're not providing any interesting content; we're just both right about everything.

WINSTON

Well, I might misunderstand, but from how you described it, it sounded kind of just definitional, like how you define total utilitarianism. Because obviously you can have this view where some happiness is better, but extreme suffering can't be outweighed.

AARON

But I guess, yeah, it definitely is kind of... there's a semantic portion to all of this, man. I think one sort of annoying argument that I make, and I believe is true, is that the claim that total utilitarianism implies offsetability is just not justified anywhere. It's just, like, assumed. But actually, I haven't seen, or maybe it exists, but I haven't found, any sort of paper, any sort of logical or mathematical or philosophical demonstration of offsetability or whatever. And I don't think it's semantically implied; I don't think it's tautological or whatever. Once you say the words "total utilitarianism," it's not implied by the semantics or something.

WINSTON

Sorry, go ahead.

AARON

No, go ahead.

WINSTON

I was just going to say, have you looked into objections to lexical views? I think a lot of people just think the problems with lexical views are also just big and so that's why they don't accept them. So maybe the sequence argument is a bigger one.

AARON

What's that? I don't know the term, but I might have been familiar with it.

WINSTON

Yeah, it's also called some other things sometimes. But you could take your lexical extreme suffering that can't be outweighed, and then you could take a very tiny amount less suffering, less intense suffering, for much longer, and then ask which one's worse. And most people would say that the still-torture-level suffering that's just slightly under wherever your lexical threshold is, but happening for much longer, is worse. And then you can just repeat that step all the way down, so something even slightly less bad than that, for even longer, is also worse. If each step is transitive, then you get the conclusion that this lexical suffering can be outweighed by tiny amounts of suffering, if you have a really big amount.
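Note: a minimal formal sketch of the sequence argument as Winston states it, added for illustration; the intensity levels and durations are assumed symbols, not anything from the episode.

```latex
% Write (s, t) for suffering of intensity s lasting duration t, and A \succ B for "A is worse than B".
% Step premise: each slightly-less-intense but much-longer episode is judged worse,
%   (s_{i+1}, t_{i+1}) \succ (s_i, t_i)  for i = 0, ..., n-1,
% where s_0 is above the lexical threshold and s_n is mild suffering.
% Transitivity then chains the steps together:
\[
  (s_n, t_n) \succ (s_{n-1}, t_{n-1}) \succ \cdots \succ (s_1, t_1) \succ (s_0, t_0)
  \;\Longrightarrow\;
  (s_n, t_n) \succ (s_0, t_0),
\]
% i.e., a long enough run of mild suffering is ranked worse than lexically bad suffering,
% so a lexical view has to reject either one of the pairwise comparisons or transitivity.
```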

AARON

This sounds like at least a strike against lexical views, or at least an intuitive strike or something. One cop-out general point is that I think there's a lot of implicit modeling of ethical value as corresponding to the number line; more specifically, every state of the world corresponds to a real number, and that real number can be scaled up and down by some real-number factor or whatever. So we just say, like, oh, the state of the world right now is some number x in terms of utils, and every other conceivable state of the world corresponds to some other real number. And I think this is what makes the step argument tempting: because you think you can just, and maybe it's true or whatever, but if you have this number-line view, then it pretty directly implies that you can just move left or right by some reasonably well-defined amount on the morality axis. And I just feel like there's a lot of unexamined formal work that needs to be done to justify that, and also that should be done by me to counter it, right? So I can't say that I have a formal disproof of this, or really solid arguments against it. It just feels like a sort of implicit mental model that isn't formally justified anywhere, if that makes sense.
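Note: a small supporting observation for the point Aaron is making here, stated as a standard textbook fact about lexicographic preferences rather than anything argued in the episode: a lexical (lexicographic) ordering can be complete and transitive while having no real-valued representation, so the "every world is a point on the number line" model already rules lexical views out by assumption.

```latex
% Rank worlds by pairs (x, y), with x = amount of lexically bad suffering and y = ordinary net
% welfare (both illustrative), and "\succ" here meaning "better than":
\[
  (x_1, y_1) \succ (x_2, y_2)
  \;\iff\;
  x_1 < x_2 \;\;\text{or}\;\; \bigl(x_1 = x_2 \ \text{and}\ y_1 > y_2\bigr).
\]
% This ordering is complete and transitive, yet no function u into the real numbers satisfies
%   (x_1, y_1) \succ (x_2, y_2) \iff u(x_1, y_1) > u(x_2, y_2)
% when x ranges over a continuum: each value of x would need its own disjoint interval of reals,
% and there are more such values than the rationals can separate.
```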

WINSTON

Um, I think sadly we just agree too much, because that sounds kind of right to me.

AARON

God damn it. Okay.

WINSTON

I guess Magnus Vinding from CRS has also written about lexical views and kind of other versions of them, but yeah, it sounds like you've just got it all figured out. You're just right about everything.

AARON

What about consciousness? What's your view?

WINSTON

I mean, like I said, I guess I'm just maybe too confused. I'm reading Consciousness Explained by Dan Dennett right now; I guess he has sort of an illusionist view that I'm trying to wrap my head around, among different views. I think I intuitively have this basic sense that the hard problem of consciousness seems really mysterious, and that we should be physicalists; you don't want dualism or something. But yeah, trying to work all that out doesn't seem to go well, so I give substantial weight to things like, well, some forms of panpsychism, for example, which is something I've given more weight to over time, and something I never would have thought was anywhere near plausible before. But I'm just not the person to ask about this.

AARON

Okay, I thought you might have... like you said, you started by getting into philosophy, kind of. Is that what you studied in college?

WINSTON

No, I studied computer science.

AARON

Okay.

WINSTON

I just really wanted to take this philosophy course.

AARON

Okay, got you.

WINSTON

Yeah, I've been into philosophy. I should have clarified: I just meant on my own time. I've been into philosophy, but mostly it's been moral philosophy, and maybe other things like personal identity, metaphysics, ontology, a lot of which I also don't know a ton about. Philosophy of mind is another really interesting thing to me, but it's not something I have a take on at this point.

AARON

Okay. This is like the same thing. I mean, I technically got a minor, but mostly I've just been into it on my own also. Same, I guess, in terms of interest.

WINSTON

And uncertainty about consciousness also, for sure, yeah.

AARON

Honestly, I don't have a good understanding of all the terms or whatever. I feel like, really, there are a couple of questions. One is: are qualia real? And I think the answer is yes, but I don't think it's like 100%; I think it's like 90%. And if not, then nihilism is true. What else? And then there's also just the question of, okay, if qualia are real, what is the correspondence between physical configurations of particles and qualia? And that's just, I don't know, hard. Right?

WINSTON

Yeah. I do disagree that nihilism follows from qualia not being real, I guess.

AARON

Really?

WINSTON

Well, yeah, I think I would be more inclined to take it the other way and say, like, oh, I guess it turns out qualia weren't the thing that I care about; it's just whatever this thing is that I've been calling suffering.

AARON

Okay, finally a disagreement. Finally. Okay. Yeah, I've heard a lot of, not a lot, like maybe two, which is a lot of, illusionists basically gesture at this view: if there's no genuine subjective conscious experience as we intuitively understand it, then actually something else matters. And I think that's cope. Actually, no: the arguments for hedonism, or some view of hedonic value being important, and really the only thing that fundamentally matters, at least in some sense, are very strong. In fact, they're so strong, and they're true, such that if hedonic value just isn't a thing, no, there's no such thing as functional suffering or functional pain; that's not a thing that can exist. If qualia don't exist, then it's just like, whatever, we're all just trees.

WINSTON

Well, I think that might be right in some sense, but if we're making the assumption that qualia are not real, then what's the most plausible world where that's true? I know that I still have the experience of what I call suffering.

AARON

I would disagree with that, for what it's worth.

WINSTON

Like, you're just saying in this example, no one suffers ever.

AARON

Or you are mistaken about having that experience.

WINSTON

Right? Well, in the world where qualia... yeah, I could be, but I guess maybe it depends what we mean here. And then you might also have a wager argument, this is kind of a separate, more meta point, but no matter how certain you are that suffering is not real, you should act as if it's real, because just in case it is, then it really matters.

AARON

Oh, yeah, I kind of buy that, actually.

WINSTON

And I think someone like Brian Tomasik has this kind of illusionist view, and he obviously cares about suffering a lot.

AARON

That's something I've been really confused about, actually. Because I just don't... I think he's just smarter than me, so in one sense I kind of want to defer, but I don't think he's so much smarter than me that I have to defer, or that I can't wonder what's going on, or something like that.

WINSTON

You're not allowed to question Brian Tomasik.

AARON

I respect the guy so much, but it just doesn't make sense to me. There's like a really fundamental conflict there.

WINSTON

I don't know. I also tend to think probably they're just kind of right. Like illusionists don't seem to just be nihilists all the time. They seem like they just think we're confused about what we're talking about, but we're still talking about something that matters. He might just say, I just care about all these processes, like being averse to stimuli and screaming and all this stuff. And I also agree that it's not at all how it feels. Like, that's not the thing that I care about. But I still think if I'm totally wrong, I still clearly care about something that seems really bad to me. I guess I get where you're coming from, though.

AARON

Well, I guess one thing is our personal values, or not personal, but... we care about values and stuff like that. In terms of metaethics, are you a moral realist?

WINSTON

I tend to be more anti realist.

AARON

Okay. Another disagreement, finally. Okay, cool.

WINSTON

Yeah. I'm not totally sure, but yeah.

AARON

Okay. I feel like this has been debated so much, there's like, no new ground to cover.

WINSTON

We're probably not going to solve it here, unfortunately. It is interesting. I guess I could just say what the reasons are. Anti-realism just seems maybe more parsimonious: you don't need to posit moral realism to explain all our behavior, though I guess that could be disputed. And then it also just explains why a lot of our moral intuitions sometimes seem to be just kind of arbitrary or inconsistent, inconsistent with each other and with other moral intuitions that we share. I don't know, how would you figure this out? How would you figure out where the exact lexical threshold is on your view, for example? It does seem like it makes more sense to just say, well, that's just kind of how I feel.

AARON

Oh, no, I don't think that's tractable; I don't think figuring out what that is is a tractable question at all. I do think that there are statements that are just observer-independent, like moral claims that are true independent of any human's beliefs or anything like that, or any beliefs at all.

WINSTON

But you seem to be thinking like you can get evidence about it if you believe there is that threshold.

AARON

Like lexicality? I feel like that's a very specific... I haven't actually thought very much about the intersection between moral realism and lexicality. In fact, that's not at all a central example of the kind of thing... I do entertain pretty seriously the notion that there are some moral claims that have truth values and some that don't, and I feel like lexicality is one that actually might not, or something. Or there might be a broad structure of moral-realist claims, and then sub- or more nuanced particulars that just don't have a well-defined answer, above and beyond people's beliefs.

WINSTON

Interesting. Yeah.

AARON

I don't actually think it matters very much. I... I don't know, I mean, it's super interesting, but especially...

WINSTON

If people agree.

AARON

I think it matters insofar as it interacts with normative ethics, which I think it does. Actually, sorry, I sort of misspoke: I think it definitely can, and I think it does, interact with normative ethics. But once you control for that and you discuss the normative ethics part, above and beyond that, it...

WINSTON

Doesn't matter, I guess, right? Yeah. What matters is... well, this just depends on your moral view, so it all gets kind of messy. Yeah, I think that's true. But I do think it probably interacts in lots of ways. You might expect more moral convergence over time if there's moral realism, and that would make me a bit more optimistic about the future, though still not that optimistic.

AARON

Yeah, that's a good point. I haven't really thought much about that.

WINSTON

And maybe moral uncertainty is more... it's more clear what's going on. It's really hard to find a way to do moral uncertainty in a well-defined manner, and it would be more like just regular uncertainty, I think, otherwise. Well, you might still run into lots of issues, but yeah, potentially that would change things. And, I don't know, I think there are others I could probably come up with, but I haven't thought about it that much.

AARON

Okay, so are there any topics you want to hit?

WINSTON

I guess I was mostly interested in talking about s-risks, so we did that. Yeah, I don't know.

AARON

Okay.

WINSTON

There are lots of philosophy topics I'm somewhat interested in, but I just feel like... I've heard Robin Hanson say people should just stop having so many opinions. So when I feel myself talking about something I don't know, that I'm not an expert on, I'm like, yeah, I probably shouldn't.

AARON

This sounds like a smart take. And then I think about it and I'm like, wait, no, you're totally allowed to have opinions. I don't have a latent list of opinions on stuff; I have a latent world model and a latent ethics that I can apply to just about any particular scenario, right? Maybe it's too confusing to apply on air or something, but if somebody says, oh, what do you think about this new law to ban deodorant? I'm like, I don't know, sounds bad. Even though that opinion didn't exist before, I just thought about it, you know what I mean? But I have a generic ideology.

WINSTON

No, I think that's generally fair, but we might also just be picking really hard questions that require things my model hasn't figured out or something.

AARON

Okay. So in that case, I'm going to demand that you... not demand, but encourage you to give, like, a 90% confidence interval on the number of views or listens or downloads this episode gets.

WINSTON

Yeah, I listened to your last episode, actually, and this was... is this a recurring thing?

AARON

It has been recurring. It's recurring until somebody convinces me to stop.

WINSTON

No, I like it. I feel like it's cool; maybe in like a year you can graph everyone's guesses over time. Also...

AARON

Yes.

WINSTON

Okay, so I don't know. What was the confidence interval?

AARON

What did you... oh, wait, wait, hold on. Let me see if I can pull up Spotify real quick, so I can get better and better at predicting.

WINSTON

Should keep it consistent.

AARON

Yeah, although I guess I'll update depending on how well past episodes do. Okay, wait: analytics, between both episodes that are up, 59 views on Spotify, or plays on Spotify. So maybe, I don't know, 80 total, or like 100 total over other platforms.

WINSTON

Am I guessing total?

AARON

Let's go with total for this episode.

WINSTON

And sorry, the confidence interval was like, 95 or something or what did you say?

AARON

Yeah, sure. I was thinking 90%, but you can choose if you want.

WINSTON

No big difference. 90. I've got to get it right. I would say. Yeah, I don't know. Let me think. Oh, yeah, also, when is the cut off point?

AARON

Because this could just go until the end of time, and then it's not a falsifiable thing.

WINSTON

Yeah, I guess like eight to 1500.

AARON

Okay, that makes sense. I want to say a little bit more than eight, but probably not 20. I don't know, like 14 to... yeah, 1500 sounds right. I'll go with 14 to 1500.

WINSTON

Okay.

AARON

All right. We agree too much. All right.

WINSTON

Yeah, that's good.

AARON

Okay. I'm glad that I found somebody who has all of my opinions. Well, yeah, me too. It's been lovely.

WINSTON

Thanks for doing this.
