
Drunk Pigeon Hour!

You earned it

Intro

Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.

To absolutely nobody’s surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024).

To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you.

And although our active promotion on Twitter has come to an end, it is not too late to give!

I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:

Transcript

(Note: very imperfect - sorry!)

Monologue

Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons.

The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that.

But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.

And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.

Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host, Max, was stuck in New Jersey and so, tragically, was unable to participate.

Yeah so here it is!

Conversation

AARON

Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.

MATT

Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.

LAURA

I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get, like, DeSantis ads, American Petroleum Institute ads, Hillsdale College ads.

MATT

Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product.

So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.

AARON

I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. I will try to minimize how much revenue they can make from me.

MATT

So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media.

Like maybe I'll try like Tumblr as a, as an alternative.

And I would get a lot of ads for like plus-sized women's flannels.

So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.

AARON

Sorry, keep going. I can see every ad I've ever.

MATT

Come across, actually, in your giant CSV of Twitter data.

AARON

Just because I'm a nerd. I, like, download... well, there's actually a couple of things. I just download my Twitter data once in a while. I actually do have a little web app that I might try to improve at some point, which is like, you drop it in and it gives you a CSV, like a spreadsheet, of your tweets. But it doesn't do anything with any of the other data that they put in there.
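(For the curious, here is a minimal sketch of what that kind of conversion could look like, assuming the Twitter archive format where data/tweets.js wraps a JSON array in a "window.YTD.tweets.part0 = " prefix; the file paths and field handling below are illustrative, not Aaron's actual app.)

```python
import csv
import json

# Hypothetical paths, just for illustration.
SRC = "data/tweets.js"
DST = "tweets.csv"

with open(SRC, encoding="utf-8") as f:
    raw = f.read()

# Assumption: the archive's tweets.js is JavaScript, not pure JSON; it starts
# with something like "window.YTD.tweets.part0 = [...]", so strip everything
# before the first "[" (and any trailing semicolon) before parsing.
payload = raw[raw.index("["):].rstrip().rstrip(";")
tweets = json.loads(payload)

with open(DST, "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "created_at", "text", "favorites", "retweets"])
    for item in tweets:
        t = item.get("tweet", item)  # entries are usually wrapped in a "tweet" key
        writer.writerow([
            t.get("id_str"),
            t.get("created_at"),
            t.get("full_text", ""),
            t.get("favorite_count"),
            t.get("retweet_count"),
        ])
```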

MATT

I feel like it's going to be hard to get meaningful information out of this giant CSV in a short amount of time.

AARON

It's a giant JSON, actually.

MATT

Are you just going to drop it all into Clong and tell it to parse it for you, or tell it to give you insights into your ads?

AARON

Wait, hold on. This is such a.

MATT

Wait. Do people call it “C-Long” or “Clong”?

AARON

Why would it be long?

MATT

Well, because it's like Claude Long.

LAURA

I've never heard this phrase.

MATT

This is, like, Anthropic's chatbot with a long context window. So, like, you can put... Aaron will be like, oh, can I paste in the entire group chat history?

AARON

Oh yeah, I got Clong. Apparently that wasn't acceptable, so...

MATT

So it can summarize it for me and tell me what's happened since I was last here. And everyone is like, Aaron, don't give our data to Anthropic, it's already suss.

LAURA

Enough with the impressions. How do you feel about the Internet privacy stuff? Are you instinctively weirded out by them farming out your personal information, or is it just like, it gives me good ads or whatever, I don't care?

MATT

I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine of not having a really strong, well formed opinion here. But I feel a little sketched out when I'm like all my data is being sold to everyone and I don't share. There is this vibe on Twitter that the EU cookies prompts are like destroying the Internet. This is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.

AARON

Sketched out by data... I think I'm broadly fine with companies having my information and selling it for ad targeting, specifically. I do trust Google a lot to not be weird about it, even if being weird is technically legal. And by "be weird about it", what do I mean? Like, I don't even know what I mean exactly. If one of their random employees... I don't know, if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or if it started leaking into the physical world more. But it seems like, just for online ads, I'm pretty cool with everything.

LAURA

Have you ever gone into the ad personalization and tried to see what demographics they peg you as?

AARON

Oh yeah. We can pull up mine right now.

LAURA

It's so much fun doing that. It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird.

AARON

That's weird.

MATT

Did you test this when you were in and not in relationships to see if they got it right?

LAURA

No, I think it's like they accumulate data over time. I don't know. But then it's like they say that you work in a mid-sized finance company. Fair enough.

MATT

That's sort of close.

LAURA

Yeah.

AARON

Sorry. Keep on podcasting.

LAURA

Okay.

MATT

Do they include political affiliation in the data you can see?

AARON

Okay.

MATT

I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.

LAURA

The algorithm is, like, advertising DeSantis.

AARON

Yeah.

MATT

I guess it must think that there's some probability that you're going to vote in a republican primary.

LAURA

I live in DC. Why on earth would I even vote, period.

MATT

Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.

AARON

I feel like I've seen, like, a.

MATT

I think it's probably hopeless to, like, find your demographic information from Twitter. But, like.

AARON

Age 13 to 54. Yeah, they got it right. Good job. They're, like, 99.9% confident. Wait, that's a pretty general range.

MATT

What's this list above?

AARON

Oh, yeah. This is such a nerd snipe for me. It's just, like, seeing... y'all, I don't watch any... I don't regularly watch any sort of TV series. And it's like best guesses of... I assume that's what it is. It thinks you watch Dune, and I haven't heard of a lot of these.

MATT

Wait, you watch cocaine there?

AARON

The Big Bang Theory? No, I definitely have watched The Big Bang Theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random Korean script.

MATT

Or whatever. When I got Covid real bad... not real bad, but I was very sick and in bed in 2022. Yeah, The Big Bang Theory was, like, what I would watch.

AARON

These are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me.

MATT

Oh, wait, it's like, true or false for each of these?

AARON

No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.

MATT

Wait, can you control-F "true" here? Because almost all of these... wait, sorry. Is that Argentine politics?

AARON

No, it's just this.

MATT

Oh, wait, so it thinks you have no interest?

AARON

No, this is disabled, so I haven't. And for some reason, this isn't the list. Maybe it was, like, keywords instead of topics or something, where it was the.

MATT

Got it.

AARON

Yes. This is interesting. It thinks I'm interested in apple stock, and, I don't know, a lot of these are just random.

MATT

Wait, so argentine politics was something it thought you were interested in? Yeah. Right.

AARON

Can.

MATT

Do you follow Maya on Twitter?

AARON

Who's Maya?

MATT

Like, monetarist Maya? Like, neoliberal shill, two years ago.

AARON

I mean, maybe. Wait, hold on. Maybe I'm just like.

MATT

Yeah, hardcore libertarianism.

LAURA

Yeah. No, so far so good with him. I feel like.

AARON

Maia, is it this person? Oh, I am.

MATT

Yeah.

AARON

Okay.

MATT

Yeah, she was, like, a neoliberal shill two years ago.

AARON

Sorry, this is, like, such an Aaron snipe. I got my gender right. Maybe. I don't know if I told you that. Yeah. English. Nice.

MATT

Wait, is that dogecoin?

AARON

I assume there's, like, an explicit thing, which is like, we're going to err way on the side of false positives instead of false negatives, which is like. I mean, I don't know. I'm not that interested in AB club, which.

MATT

You're well known for throwing staplers at your subordinate.

AARON

Yeah.

LAURA

Wait, who did you guys support in 2020 primary?

MATT

You were a Pete stan.

LAURA

I was a Pete stan. Yes, by that point, definitely hardcore. But I totally get. In 2016, I actually was a Bernie fan, which was like, I don't know how much I was really into this, or just, like, everybody around me was into it. So I was trying to convince myself that he was better than Hillary, but I don't know, that fell apart pretty quickly once he started losing. And, yeah, I didn't really know a whole lot about politics. And then, like, six months later, I became, like, a Reddit libertarian.

AARON

We think we've talked about your ideological evolution.

MATT

Have you ever done the thing of plotting it out on the political compass? I feel like that's a really interesting.

LAURA

Exercise that doesn't capture the online stuff. I was into Ben Shapiro.

MATT

Really? Oh, my God. That's such a funny lore fact.

AARON

I don't think I've ever listened to Ben Shapiro besides, like, random clips on Twitter that I like scroll?

MATT

I mean, he talks very fast. I will give him that.

LAURA

And he's funny. And I think it's like the fast talking plus being funny is like, you can get away with a lot of stuff and people just end up like, oh, sure, I'm not really listening to this because it's on in the background.

AARON

Yeah.

MATT

In defense of the Bernie thing: so, I will say I did not support Bernie in 2016, but there was this moment right around when he announced where I was very intrigued. And there's something about his backstory that's very inspiring. This is a guy who has been just extraordinarily consistent in his politics for close to 50 years, was saying lots of really good stuff about gay rights when he was, like, Burlington mayor way back in the day, was giving speeches on the floor of the House that sound very similar to the things he's saying today, which reflects, you could say, maybe a very myopic, closed-minded thing, but also an ideological consistency that's admirable, and I think is often pointing at problems that are real. And so I think there is this thing that's, to me, very understandable about why he was a very inspiring candidate. But when it came down to nitty-gritty details, and also to his decisions about who to hire as subordinates and stuff, very quickly you look at the Bernie campaign alumni and the nuances of his views and stuff, and you're like, okay, wait, this is maybe an inspiring story, but does it actually hold up?

AARON

Probably not.

LAURA

Yeah, that is interesting. It's like Bernie went woke in 2020, kind of fell apart, in my opinion.

AARON

I stopped following or not following on social media, just like following him in general, I guess. 2016 also, I was 16. You were not 16. You were.

MATT

Yeah, I was in college at that time, so I was about 20.

AARON

So that was... you can't blame me. Anything that I do under the age of 18 is, like, just erased when I turn 18.

LAURA

Okay, 2028 draft. Who do we want to be the Democratic nominee?

AARON

Oh, Jesse from Pigeon Hour. I honestly think he should run. Hello, Jesse. If you're listening to this... we're going to make you listen to this. Sorry. Besides that, I don't know.

MATT

I don't have, like, an obvious front runner in mind.

AARON

Wait, 2028? We might be dead by 2028. Sorry, we don't talk about AI.

MATT

Yeah.

AARON

No, but honestly, that is beyond the range of plannability, I think. I don't actually think all humans are going to be dead by 2028. But that is a long way away. All I want in life... not all I want, but this is actually what I want out of a political leader: somebody who is good on AI and also doesn't tell the Justice Department to sue California or whatever about their gestation crates. Or maybe it's, like, New Jersey or something, about the gestation crates.

MATT

Oh, yeah. Prop 12.

AARON

Yeah. Those are my two criteria.

MATT

Cory Booker is going to be right on the latter.

AARON

Yeah.

MATT

I have no idea about his views on.

AARON

To some extent. Maybe this is actively changing as we speak, basically. But until recently it wasn't a salient political issue, and so it was pretty hard to tell. I don't know. I don't think Biden has a strong take on it. He's, like, a thousand years old.

LAURA

Watch what Mitch should have possibly decided. That's real if we don't do mean.

AARON

But, like, his executive order was way better than I would have imagined. And I, like, tweeted about it. You know, I don't think I could have predicted that necessarily.

MATT

I agree. I mean, I think the Biden administration has been very reasonable on AI safety issues and that generally is reflective. Yeah, I think that's reflective of the.

AARON

Tongue we know Joe Biden is listening to.

MATT

Okay.

AARON

Okay.

MATT

Okay. Topics that are on the list: like, this is a reward for the fundraiser. Do we want to talk about the fundraiser and do a retrospective on that?

AARON

Sure.

MATT

Because I feel like, I don't know. That ended up going at least like one sigma above.

AARON

How much? Wait, how much did we actually raise?

MATT

We raised like 22,500.

LAURA

Okay. Really pissed that you don't have to go to Ava.

AARON

I guess this person, I won't name them, but somebody who works at a prestigious organization was seriously considering donating a good amount of his donation budget specifically for the shrimp costume. And we chatted about it over Twitter DM, and I think he ended up not doing it, which I think was the right call, because for tax reasons it would have been like... he thought, oh yeah, actually, even though that's pretty funny, it's not worth losing, I don't know, maybe $1,000 out of $5,000 for tax reasons or whatever. Clearly this guy is actually thinking through his donations pretty well. But I don't know, it brought him to the brink of donating, I think, single-digit thousands of dollars. Exactly.

LAURA

Clearly an issue in the tax code.

AARON

Do you have any tax take? Oh, wait, sorry.

MATT

Yeah, I do think we should... I mean, to the extent you're allowed by your employer to, in public spaces...

AARON

All people at think tanks, they're supposed to go on podcasts and tweet. How could you not be allowed to do that kind of thing?

MATT

Sorry, keep going. But yeah, no, I mean, I think it's worth dwelling on it a little bit longer because I feel like, yeah, okay, so we didn't raise a billion dollars as you were interested in doing.

AARON

Yeah. Wait, can I make the case for, like... oh, wait. Yeah. Why being slightly unhinged may actually have been object-level good. Basically, I think, and we learned ex post this didn't actually end up happening, but I think almost all of the impact, or money, because they're basically one and the same in this context... sorry, most of the expected money would come in the form of some pretty large, probably billionaire, account just deciding, like, oh yeah, I'll just drop a couple of mil on this funny fundraiser or whatever, or maybe less. Honestly, listen, $20,000 is a lot of money. It's probably more money than I have personally ever donated. On the other hand, there's definitely some pretty EA-adjacent, or broadly rationalist- or AI-adjacent, accounts whose net worth is in at least the tens of millions of dollars, for whom $100,000 just would not actually affect their quality of life or whatever. And I think there was a nontrivial chance going in that somebody would just decide to give a bunch of money.

MATT

I don't know. My view is that even the kinds of multimillionaires and billionaires that hang out on Twitter are not going to ever have dropped that much on a random fundraiser. They're more rational.

AARON

Well, there was a proof of concept for rich people being insane, which is Balaji giving, like, a million dollars to James Medlock.

MATT

That's true.

AARON

That was pretty idiosyncratic. Sorry. So maybe that's not fair. On the other hand... on the other hand, I don't know, people do things for clout. And so, yeah, I would have quote-tweeted... if somebody was like, oh yeah, here's $100,000 guys, I would have quote-tweeted the shit out of them. They would have gotten as much exposure as possible. I don't know. I would guess if you have a lot of rich friends, they're also probably on Twitter, especially if it's broadly, like, tech money or whatever. And so there's that. There's also the fact that, I don't know, at least some subset of rich people think EA is basically, even if they don't identify as EA themselves, like, oh yeah, this is broadly legit and correct or whatever. And so it's not just like a random.

MATT

That's true. I do think the choice of the animal welfare fund made that harder. Right. I think if it's like bed nets, I think it's more likely that sort of random EA rich person would be like, yes, this is clearly good. And I think we chose something that I think we could all get behind.

AARON

Because we have, there was a lot of politicking around.

MATT

Yeah, we all have different estimates of the relative good of different cause areas and this was the one we could very clearly agree on, which I think is very reasonable and good. And I'm glad we raised money for the animal welfare fund, but I do think that reduces the chance of, yeah.

LAURA

I think it pushes the envelope towards the animal welfare fund being more acceptable as a mainstream EA org, just like GiveWell would be. And so by forcing that issue, maybe we have done more good for the.

AARON

Right, there's, like, that second-order effect. I do just think, even though, like, I think choosing this over AMF or whatever global health fund decreased the chance of a random person... not a random person, but probably decreased the total amount of expected money being given, I think that was just trumped by the fact that the animal welfare number I pull out of thin air, which is not necessarily out of thin air, but very uncertain, is like 1000x or whatever relative to the standard stuff. Quote: let it be known that there is a rabbit on the premises. Do they interact with other rodents?

MATT

Okay, so rabbits aren't rodents. We can put this on the pod. So rabbits are lagomorphs, which is.

AARON

Fuck is that?

MATT

It's a whole separate category of animals.

AARON

I just found out that elk are, like, a type of deer. This is another world-shattering insight.

MATT

No, but rabbits are evolutionarily not part of the same. I guess it's a family on the classification tree.

AARON

Nobody taught us that in 7th grade.

MATT

Yeah, so they're not part of the same family as rodents. They're their own thing. What freaks me out is that guinea pigs and rabbits seem like pretty similar, they have similar diet.

AARON

That's what I was thinking.

MATT

They have similar digestive systems, similar kind of like general needs, but they're actually like, guinea pigs are more closely related to rats than they are to rabbits. And it's like a convergent evolution thing that they ended up.

AARON

All mammals are the same. Honestly.

MATT

Yeah. So it's, like, super weird, but they're not rodents, to answer your question. Rabbits do... so these kinds of rabbits, like, all pet rabbits, are descended from European rabbits. They're not descended from American rabbits, because.

LAURA

American rabbits, like cottontails? Oh, those are different.

MATT

Yeah. So these guys are the kinds of rabbits that will live in warrens. Warrens. So, like, tunnel systems that they dig. Like Elizabeth Warren. Yeah. And so they'll live socially with other rabbits, and they'll dig warrens. And so they're used to living in social groups. They're used to having a space they need to keep clean. And so that's why they can be, like, litter box trained, is that they're used to having a warren where you don't just want to leave poop everywhere. Whereas American rabbits are more solitary. They live above ground, or, my understanding is, they sometimes will live in holes, but only occupying a hole that another animal has dug. They won't dig the hole themselves. And so then they are just not social. They're not easily litter box trained, that kind of stuff. So all the domestic rabbits are bred from European ones.

AARON

I was thinking, if you got a guinea pig, would they become friends? Okay.

MATT

So apparently they have generally similar dispositions and it can get along, but people don't recommend it because each of them can carry diseases that can hurt the other one. And so you actually don't want to do it. But it does seem very cute to have rabbit.

AARON

No, I mean, yeah. My last pet was a guinea pig, circa 20. Died like, a decade ago. I'm still not over it.

MATT

Would you consider another one?

AARON

Probably. Like, if I get a pet, it'll be like a dog or a pig. I really do want a pig. Like an actual pig.

MATT

Wait, like, not a guinea pig? Like a full size pig?

AARON

Yeah. I just tweeted about this. I think that they're really cool and we would be friends. I'm being slightly sarcastic, but I do think if I had a very large amount of money, then the two luxury purchases would be, like, a lot of massages and a caretaker and space and whatever else a pig needs. And so I could have a pet.

MATT

Like, Andy organized a not-EA-DC, but EA-DC-adjacent, trip to Rosie's Farm Sanctuary.

AARON

Oh, I remember this. Yeah.

MATT

And we got to pet pigs. And they were very sweet and seemed very cute and stuff. They're just, like... they feel dense. Not like stupid, but when you pet them, you're like, this animal is very large and heavy for its size. That was my biggest surprising takeaway from interacting with them. The hair is not soft either; no, it's pretty coarse. But they seem like sweeties; they are just very robust.

LAURA

Have you guys seen Babe?

AARON

Yes.

LAURA

That's like one of the top ten movies of all time.

AARON

You guys watch movies? I don't know. Maybe when I was like four. I don't like.

LAURA

Okay, so the actor who played Farmer Hoggett in this movie ended up becoming a vegan activist after he realized, after having to train all of the animals, that they were extremely intelligent. And obviously the movie is about not killing animals, and so that ended up going pretty well.

AARON

Yeah, that's interesting. Good brown.

MATT

Okay, sorry. Yeah, no, this is all on track. No, this is great. We are doing a drunk podcast rather than a sober podcast, I think, precisely because we are trying to give the people some sidetracks and stuff. Right. But I jokingly put on my list of topics, like, "we solve the two envelopes paradox once and for all."

AARON

No, but it's two boxing.

MATT

No. Two envelopes. So this is, I think, one of the fundamental challenges to the thing of, like, you multiply out the numbers and the numbers...

AARON

Yeah, I feel like I don't have, like, a cached take. So just, like, tell me the thing.

MATT

Okay.

AARON

I'll tell you the correct answer. Yeah.

MATT

Okay, great. We were leading into this. You were saying, like, animal charity is a 1000x gain, right?

AARON

Conditional. Yeah.

MATT

And I think it's hard to easily get to 1000 x, but it is totally possible to get to 50 x if you just sit down and multiply out numbers and you're like, probability of sentience and welfare range.

AARON

I totally stand by that as my actual point estimate. Maybe like a log mean or something. I'm actually not sure, but. Sorry, keep going.

MATT

Okay, so one line of argument raised against this is the two envelopes problem, and I'm worried I'm going to do a poor job explaining this. Laura, please feel free to jump in if I say something wrong. So two envelopes is like, it comes from the thing of, like, suppose you're given two envelopes and you're told that one envelope has twice as much money in it as the other.

AARON

Oh, you are going to switch back and forth forever.

MATT

Exactly. Every time, you're like: if I switch to the other envelope and it has half as much money as this one, then I lose 0.5, but if it has twice as much money as this one, then I gain 1. And so I can never decide on an envelope, because it always looks like it's positive EV to switch to the other. So that's where the name comes from.
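(A quick sketch of the naive switching argument, for anyone who wants the arithmetic spelled out; x here just stands for whatever amount is in the envelope you are currently holding.)

```python
# Naive two-envelopes reasoning: whichever envelope you hold contains x,
# and the other contains x/2 or 2x with equal probability.
x = 1.0
ev_switch = 0.5 * (x / 2) + 0.5 * (2 * x)
print(ev_switch)  # 1.25 -> switching always "looks" better, no matter which envelope you hold
```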

AARON

I like a part that you're like, you like goggles?

MATT

So let me do the brief summary, which is that, basically, depending on which underlying units you pick, whether you work in welfare range units that use one human as the baseline or one chicken as the baseline, you can end up with different outputs of the expected value calculation. Because it's basically: is it the big number of chickens, times some fraction of the human welfare range, that dominates? Or is it the small probability that chickens are basically not sentient, in which case a human's welfare range is huge in chicken units? And which of those dominates is determined by which unit you work in.

AARON

I also think, yeah, this problem is not conducive to alcohol, or alcohol is not conducive to this problem, or whatever. In the maximally abstract envelope thing, I have an intuition that something weird, kind of probably fake, is going on. I don't actually see what the issue is here. I don't believe you yet that there's an actual issue here. It's like, okay, just do the better one. I don't know.

MATT

Okay, wait, I'll get a piece of paper. Talk amongst yourselves, and I think I'll be able to show this is like.

LAURA

Me, as the stats person, just saying I don't care about the math at some point. Where it's like, look, I looked at an animal, and I'm like, okay, so we have evolutionarily pretty similar paths. It would be insane to think that it's not capable of feeling hedonic pain to pretty much the same extent as me. So I'm just going to ballpark it. And I don't actually care for the welfare ranges.

AARON

I feel like I've proven my pro-animal bona fides. I think it's "bona fides." But here, and I don't share that intuition, I still think that we can go into that megapig discourse. Wait, yeah, sort of. Wait, not exactly megapig discourse. Yeah, I remember. I think I got cyberbullied, even though they didn't cyberbully me, because I was informed of offline bullying via cyber about somebody's... sorry, this is going to sound absolutely incoherent, so we'll take this part out. Yeah. I was like, oh, I think it's like some metaphysical appeal to neuron counts. You specifically told me, like, oh yeah, Mr. So-and-so didn't think this checked out, or whatever. Do you know what I'm talking about?

LAURA

Yeah.

AARON

Okay. No, but maybe I put it in dumb or cringey or pretentious terms, but I do think I'm standing by my metaphysical neurons claim here. Not that I'm super confident in anything, but just that we're really radically unsure about the nature of sentience and qualia and consciousness. And probably it has something to do with neurons, at least; they're clearly related in a very boring sciency way. Yeah. It's not insane to me that the thing that produces, or is directly one-to-one associated with, a particular amount (for lack of a better term) of conscious experience is some sort of physical thing. The neuron jumps out as the unit that might make sense. And then there's, like, oh yeah, do we really think all the neurons that control the tongue, like the motor function of the tongue, do those really make you a quadrillion times more important than a seal or whatever? And then I go back to, okay, even though I haven't done any research on this, maybe it's just, like, the neurons directly involved in pretty low-level hedonic sensations. The most obvious ones would be literal opioid receptors. Maybe those are the ones that matter. This is, like... kind of. I feel like we've sort of lost the plot a little.

MATT

Okay, this is like weird drunk math.

AARON

But I think your handwriting is pretty good.

MATT

I think I have it. So suppose we work in human units. I have a hypothetical intervention that can help ten chickens or one human, and we assume that when I say help, it's like, help them the same amount. So if I work in human units, I say maybe there is a 50% chance that a chicken is 1/100th of a human and a 50% chance that a chicken and a human are equal. Obviously, this is a thought experiment. I'm not saying that these are my real-world probabilities, but suppose that these are my credences. So I do out the EV for the thing that helps ten chickens. I say that, okay, in half of the worlds, chickens are 1/100th of a human, so helping ten of them is worth 0.1, and 0.5 times 0.1 is 0.05. And then in the other half of the worlds, I say that a chicken and a human are equal. So then my intervention helps ten chickens, which is like helping ten humans, so the benefit in that set of worlds, with my 0.5 probability, is five. And so in the end, the chicken intervention wins, because it has, on net, an EV of 5.05 versus one for the human intervention, because the human intervention always helps one human. Then I switch it around and I say my base unit of welfare range, or moral weight, or whatever you want to say, is chicken units: one chicken's worth of moral weight. So in half of the worlds, a human is worth 100 chickens, and in the other half of the worlds, a human is worth one chicken. So I do out the EV for my intervention that helps the one human, now in chicken units. Half of the time, that human is worth 100 chickens, and so I get 0.5 times 100 times one, which is 50. And then in the other half of the worlds, the chicken and the human are equal, and so then it's 0.5 times one times one, because I'm helping one human, so that's 0.5. The EV is 50.5. And then I do out my EV for my chicken welfare thing: I always help ten chickens, and so it's ten as my units of good. So when I worked in human units, I said that the chickens won, because it was 5.05 human units versus one human unit for helping the human. When I did it in chicken units, it was 50.5 to help the humans versus ten to help the chickens. And so now I'm like, okay, my EV is changing just based on which units I work in. And I think this is the two envelopes problem as applied to animals. Brian Tomasik has a long post about this, but I think this is a statement, or an example, of the problem.
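(If you want to check the arithmetic, here is a minimal sketch of the toy calculation above, using the made-up 50/50 credences from the conversation; the chicken-to-human numbers are purely illustrative.)

```python
# Toy two-envelopes-style calculation from the conversation.
# Two hypotheses, 0.5 credence each:
#   "equal": 1 chicken = 1 human welfare unit
#   "1/100": 1 chicken = 0.01 human welfare units
p = 0.5

# Work in HUMAN units: help 10 chickens vs. help 1 human.
ev_chickens_in_human_units = p * (10 * 0.01) + p * (10 * 1)   # 0.05 + 5 = 5.05
ev_human_in_human_units = 1.0                                  # always helps 1 human

# Work in CHICKEN units: a human is worth 100 chickens or 1 chicken.
ev_human_in_chicken_units = p * (1 * 100) + p * (1 * 1)        # 50 + 0.5 = 50.5
ev_chickens_in_chicken_units = 10.0                            # always helps 10 chickens

print(ev_chickens_in_human_units, ev_human_in_human_units)     # 5.05 vs 1.0 -> chickens "win"
print(ev_human_in_chicken_units, ev_chickens_in_chicken_units) # 50.5 vs 10.0 -> humans "win"
```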

AARON

Cool.

LAURA

Can I just say something about the moral weight project? It's like, really just. We ended up coming up with numbers, which I think may have been a bit of a mistake in the end, because I think the real value of that was going through the literature and finding out the similarities and the traits between animals and humans, and then there are a surprising number of them that we have in common. And so at the end of the day, it's a judgment call. And I don't know what you do with it, because that is, like, a legit statistical problem with things that arises when you put numbers on stuff.

MATT

So I'm pretty sympathetic to what you're saying here, that, like, the core insight of the moral weight project is: when we look at features that could plausibly determine capacity to experience welfare, we find that a pig and a human have a ton in common. Obviously, pigs cannot write poetry, but they do show evidence of grief behavior when another pig dies, and they show evidence of vocalizing in response to pain, and all of these things. I think coming out of the moral weight project being like, wow, under some form of utilitarianism it's really hard to justify harms to pigs, or like, harms to pigs really morally matter, makes complete sense. I think the challenge here is when you get to something like black soldier flies or shrimp, where, when you actually look at the welfare range table, you see that the number of proxies that they likely or definitely have is remarkably low. The shrimp number is hinging on... it's not hinging on a ton. They share a few things. And because there aren't that many categories overall, that ends up being, in the median case, like, a moral weight of one-thirtieth of a human. And so I worry that sort of your articulation of the benefit starts to break down when you get to those animals. And we start to... I don't know what you do without numbers there. And I think those numbers are really susceptible to this kind of two envelopes thing.

AARON

I have a question.

MATT

Yeah, go.

AARON

Wait. This is supposed to be, like, 5.05 versus one?

MATT

Yeah.

AARON

And this is 50.5 versus ten? Yeah. It sounds like the same thing to me.

MATT

No, but they're inverted. In this case, the chickens won. So it's like, when I'm working in human units, right? Like, half the time, I help.

AARON

If you're working in human units, then the chicken intervention looks 5.05 times better. Yes. Wait, can I write this down over here?

MATT

Yeah. And maybe I'm not an expert on this problem. This is just like something that tortures me when I try and sleep at night, not like a thing that I've carefully studied. So maybe I'm stating this wrong, but, yeah. When I work in human units, the 50% probability in this sort of toy example that the chickens and the humans are equal means that the fact that my intervention can help more chickens makes the ev higher. And then when I work in the chicken units, the fact that human might be 100 times more sentient than the chicken or more capable of realizing welfare, to be technical, that means the human intervention just clearly wins.

AARON

Just to check that I have this right: the claim is that in human units, the chicken intervention looks 5.05 times better than the human intervention, but when you use chicken units, the human intervention looks 5.05 times better than the chicken intervention. Is that correct?

MATT

Yes, that's right.

AARON

Wait, hold on. Give me another minute.

MATT

This is why doing this drunk was a bad idea.

AARON

In human.

LAURA

No, I think that's actually right. And I don't know what to do about the flies and shrimp and stuff like this. This is, like, where I draw my line. Okay, so Lemonstone quote-tweeted me.

MATT

Tweeted me, oh, my God.

LAURA

I think he actually had a point, of: there's a type of EA that is, like, I'm going to set my budget constraint and then maximize within that, versus start with a blank slate and allow reason to take me wherever it goes. And I'm definitely in the former camp, of, like, my budget constraint is: I care about humans and a couple of types of animals, and I'm just drawing the line there. And I don't know what you do with the other types of things.

MATT

I am very skeptical of arguments that are like, we should end Medicare to spend it all on shrimp.

AARON

No one's suggesting that. No, there's like a lot of boring, prosaic reasons.

MATT

I guess what I'm saying is there's a sense in which, like, totally agreeing with you. But I think the challenge is that object level.

AARON

Yeah, you set us up. The political economy, I like totally by double it.

MATT

I think that there is. This is great. Aaron, I think you should have to take another shot for.

AARON

I'm sorry, this isn't fair. How many have you guys had? I don't even drink, so I feel like one drink is, like, infinity times more than normal, or do you normalize it? So it's a little hard to handle.

MATT

I think there has to be room for moral innovation, in my view. I think that your line of thinking, that we don't want to do radical things based on sort of out-there moral principles in the short term... right, we totally want to be very pragmatic and careful when our moral ideas put us really far outside of what's socially normal. But I don't think you get to where we are... I don't know, What We Owe the Future was, like, a book that maybe was not perfect, but I think it eloquently makes the point that the first person to be like, hey, slavery in the Americas is wrong (or I should say, really, the first person who was not themselves enslaved, because of course the people who were actually victims of this system were like, this is wrong from the start), the first people, like random white people in the North, being like, hey, this system is wrong, looked super weird. And the same is true for almost any moral innovation. And so I think saying my budget constraint is totally fixed seems wrong to me, because it leaves no room for being wrong about some of your fundamental morals.

LAURA

Yeah, okay. A couple of things here. I totally get that appeal 100%. At the same time, a lot of people have said this about things that now we look back at as being really bad, like the USSR. I think communism ends up looking pretty bad in retrospect, even though I think there are a lot of very good moral intuitions underpinning it.

AARON

Yeah, I don't know. It's like, mostly an empirical question in that case, about what government policies do to human preference satisfaction, which is like, pretty. Maybe I'm too econ. These seem like very different questions.

LAURA

It's like we let our reason go astray, I think.

MATT

Right, we, as in some humans.

AARON

No, I think... wait, at first glance, I think communism and things in that vicinity seem way more intuitively appealing than they deserve to be, basically. And the notion of, who is it, like Adam Smith? Something Smith? Yeah, like, free hand of the market or whatever. Invisible hand. Invisible free hand of the bunny ear of the market. I think maybe it just feels intuitive to me at this point because I've heard it a lot. But no, I totally disagree that people's natural intuition was that communism can't work. I think that just isn't true.

MATT

I'm not sure you guys are disagreeing with one another.

AARON

Yeah.

MATT

Like, I think, Laura, if I can attempt to restate your point, is that to at least a subset of the people in the USSR at the time of the russian revolution, communism plausibly looked like the same kind of moral innovation as lots of stuff we looked back on as being really good, like the abolition of slavery or like, women's rights or any of those other things. And so you need heuristics that will defend against these false moral innovations.

AARON

Wait, no, you guys are both wrong. Wait, hold on. No, the issue there isn't that we, I guess humans, I don't know exactly who's responsible for what, but the issue isn't that people disregarded some sort of heuristic that would have guarded against communism. The issue was that, after a couple of years of communism or whatever, we had lots of empirical... or, it's not even necessarily, I mean, in this case it is empirical evidence, but we had lots of good evidence to think, oh no, looks like that doesn't actually help people, and then they didn't take action on that. That's the problem. If we were sitting here in 1910 or whatever, I think it's totally possible I would be convinced communism is, in fact, the right thing to do. But the thing that would be wrong is if, okay, five years later, you have kids starving or people starving or whatever, and maybe you can find intellectuals who claim, and seem reasonably correct, that they can explain how this is downstream of your policies, then doubling down is the issue, not the ex ante hypothesis that communism is good. I don't even know if that made any sense, I think.

LAURA

But we're in the ex ante position right now.

AARON

Yeah, totally. Maybe we'll find out some sort of, whether it's empirical or philosophical or something like maybe in five years or two years or whatever, there'll be some new insight that sheds light on how morally valuable shrimp are. And we should take that into account.

LAURA

I don't know. Because it's really easy to get good feedback when your fellow humans are starving to death, versus... how are you supposed to judge "no, we've made an improvement"?

AARON

Yeah, I do think. Okay. Yes. That's like a substantial difference. Consciousness is, like, extremely hard. Nobody knows what the hell is going on. It kind of drives me insane.

MATT

Whomst among us has not been driven insane by the hard problem of consciousness?

AARON

Yeah. For real. I don't know. I don't know what to say. It's like, you kind of got to make your best guess at some point.

MATT

Okay, wait, so maybe tacking back to how to solve it, did you successfully do math on this piece of paper?

AARON

Mostly? No, mostly I was wordcelling.

MATT

I like the verb form there.

AARON

Yeah. No, I mean, like, I don't have a fully thought-out thing. I think in part this might be because of the alcohol. I'm pretty sure that what's going on here is just that there actually is an asymmetry between chicken units and human units, which is that we have a much better idea... the real uncertainty here is how valuable a chicken is. There's probably somebody in the world who doubts this, but I think the common sense thing, and the thing that everybody assumes, is that we basically have a handle on how valuable a human is, because we're all humans, and there's a lot of good reasons to think we have a decent idea of how valuable another human life is. And if we don't, it's going to be a lot worse for other species. And so, just taking that as a given, the human units are the correct unit, because the thing with a unit is that you take it as given or whatever. The real uncertainty here isn't the relationship between chickens and humans. The real question is how valuable a chicken is. And so the human units are just, like, the correct ones to use.

LAURA

Yeah, there's something there, which is that the right theory is kind of driving a lot of the problem in the two envelopes stuff. Because if you just chose one theory, then the units wouldn't really matter, whichever one. With the equality theory, it's like, you've resolved all the intertheoretic uncertainty, and so wouldn't that get rid of.

AARON

I don't know if you know, if there's, like. I'm not exactly sure what you mean by theory.

LAURA

Like, are they equal (the equality theory), versus the 1/100 theory? And we're assuming that each of them has 0.5 probability. So if we resolved that, like, we decide on the 1/100 theory, then the problem goes away.

AARON

Yeah, I mean, that's true, but you might not be able to.

MATT

Yeah, I think it doesn't reflect our current state or, like.

AARON

No, just like taking as given the numbers, like, invented, which I think is fine for the illustration of the problem. Maybe a better example is what's, like, another thing, chicken versus a rabbit. I don't know. Or like rabbits. I don't know.

MATT

Chicken versus shrimp. I think it's like a real one. Because if you're the animal welfare fund, you are practically making that decision.

AARON

Yeah. I think that becomes harder. But it's not, like, fundamentally different. And it's like the question of, like, okay, which actually makes sense, makes more sense to use as a unit. And maybe you actually can come up with two, if you can just come up with two different species for which, on the merits, they're equally valid as a unit and there's no issue anymore. It really is 50 50 in the end.

MATT

Yeah. I don't know. I see the point you're making. With humans, we know in some sense we have much more information about how capable of realizing welfare a human is. But I guess I treat this as, like, man, I don't know. It's like why all of my confidence intervals are just, like, massive on all these things is I'm just very confused by these problems and how much that.

AARON

Seems like I'm confused by this one. Sorry, I'm, like, half joking. It is like maybe. I don't know, maybe I'll be less confident. Alcohol or so.

MATT

Yeah, I don't know. I think it's maybe much more concerning to me the idea that working in a different unit changes your conclusion radically.

AARON

Than it is to you.

LAURA

Sometimes. I don't know if this is, like, too much of a stoner take or something like that.

AARON

Bring it on.

LAURA

I kind of doubt working with numbers at all.

MATT

Okay. Fit me well.

LAURA

It's just like when he's.

AARON

Stop doing that.

LAURA

I don't know what to do, because expected value theory. Okay, so one of the things that, when we hired a professional philosopher to talk about uncertainty.

MATT

Pause for a sec. Howie is very sweetly washing his ears, which is very cute in the background. He's like, yeah, I see how he licks his paws and squeezes his ear.

AARON

Is it unethical for me to videotape?

MATT

No, you're more than welcome to videotape it, but I don't know, he might be done.

AARON

Yeah, that was out.

MATT

Laura, I'm very sorry. No, yeah, you were saying you hired the professional philosopher.

LAURA

Yeah. And one of the first days, she's like, okay, well, is it the same type of uncertainty if we, say, have a one in ten chance of saving the life of a person we know for sure is conscious, versus we have a certain chance of saving the life of an animal that has, like, a one in ten probability of being sentient? These seem like different types.

AARON

I mean, maybe in some sense they're like different types. Sure. But what are the implications? It's not obviously the same.

LAURA

It kind of calls into question whether we can use the same mathematical approach for analyzing each of these.

AARON

I think my main take is, like, you got a better idea? That was like, a generic.

LAURA

No, I don't.

AARON

Yeah. It's like, okay, yeah, these numbers are, like, probably. It seems like the least bad option if you're going by intuition. I don't know. I think all things considered, sometimes using numbers is good because our brains aren't built to handle getting moral questions correct.

MATT

Yeah, I mean, I think that there is a very strong piece of evidence for what you're saying, Aaron, which is.

AARON

There's a whole paper on this. It's called "The Unreasonable Effectiveness of Mathematics in the Natural Sciences."

MATT

Or this is. This is interesting. I was going to make sort of an easier or simpler argument, which is just like, I think the global health ea pitch of, like, we tend to get charity radically wrong.

AARON

Often.

MATT

Charities very plausibly do differ by 100 x or 1000 x in cost effectiveness. And most of the time, most people don't take that into account and end up helping people close to them or help an issue that's salient to them or help whatever they've heard about most and leave opportunities for doing what I think is very difficult to argue as not being radically more effective opportunities on the table as a result. Now, I led into this saying that I have this very profound uncertainty when it comes to human versus animal trade offs. So I'm not saying that, yes, we just should shut up and multiply. But I do think that is sort of like the intuition for why I think the stoner take is very hard for me to endorse is that we know in other cases, actually bringing numbers to the problem leads to saving many more lives of real people who have all of the same hopes and dreams and fears and feelings and experiences as the people who would have been saved in alternate options.

LAURA

Isn't it just that, still underlying this, is that we're sure all humans are equal? And that's, like, the theory that we have endorsed.

AARON

Wait, what?

MATT

Or like on welfare ranges, the differences among different humans are sufficiently small in terms of capacity to realize welfare. That plausibly they are.

AARON

Yeah, I don't think anyone believes that. Does anyone believe that? Wait, do some people think that everybody's hedonic range is the same?

LAURA

Randomly select a person who lives in Kenya. You would think that they have the same welfare range, a priori, as somebody.

MATT

Who lives in DC. Like, the fundamental statistics describing their welfare range are the same.

AARON

Yeah, I think that's probably correct. It's also at an individual level, I think it's probably quite varied between humans.

LAURA

So I don't think we can say that we can have the same assumption about animals. And that's where it kind of breaks down, is we don't know the right theory to apply it.

AARON

Well, yeah, it's a hard question. Sorry, I'm being like kind of sarcastic.

LAURA

I think you have to have the theory right. And you can't easily average over theories with numbers.

MATT

Yeah, no, I mean, I think you're right. I think this is the challenge of the two envelopes problem, exactly this kind of thing. I'm, like, four chapters into Moral Uncertainty, the book.

AARON

By Will.

MATT

Yeah. MacAskill, Ord, and Bykvist. I'm probably getting that name wrong. But they have a third co-author who is not as much of, like, an.

AARON

Yeah, I don't know. I don't have any super eloquent take except that to justify the use of math right now. Although I actually think I could. Yeah, I think mostly it's like, insofar as there's any disagreement, it's like we're both pointing at the issue, pointing at a question, and saying, look at that problem. It's, like, really hard. And then I'm saying like, yeah, I know. Shit. You should probably just do your best to answer it. Sorry, maybe I'm just not actually adding any insight here or whatever, but I agree with you that a lot of these problems are very difficult, actually. Sorry, maybe this is, like, a little bit of a nonsense. Whatever. Getting back to the hard problem of consciousness, I really do think it feels like a cruel joke that we have to implicitly, we have to make decisions about potentially gigantic numbers of digital lives or, like, digital sentience or, you know, whatever you want to call it, without having any goddamn idea, like, what the fuck is up with consciousness. And, I don't know, it doesn't seem fair. Okay.

MATT

Yeah, wait, okay, so, fundraiser. This is great. We've done all of these branching-off things. So we talked about how much we raised, which was, like, an amount that I was quite happy with, though maybe that's, like, selfish because I didn't have to wear a shrimp costume. And we talked about cause prio. We haven't talked about the whole fake OpenAI thing.

AARON

Fake OpenAI.

MATT

Wait. Like the entire.

AARON

Oh, well, shout out to... I really... God damn it, Qualy. I hope you turn into a human at some point. Because, let it be known, Qualy made a whole-ass Google Doc to plan out the whole thing and was, like, the driving... yeah, I think it's fair to say Qualy was the driving force.

MATT

Yeah, totally. Like, absolutely had the concept, did the Google Doc. I think everybody played their parts really well, and I think that was very fun.

AARON

Yeah, you did. Good job, everybody.

MATT

But, yeah, that was fun. It was very unexpected. Also, I enjoyed that. I was still seeing tweets and replies that were like, wait, this was a bit. I didn't get this after the end of it, which maybe suggests. But if you look at the graph I think I sent in, maybe we.

AARON

Should pull up my. We can analyze my Twitter data and find out which things got how many views have.

MATT

Like, you have your text here. I think the graph of donations by date is, like, I sent in the text chat between.

AARON

Maybe I can pull it like, media.

MATT

Like you and me and Max and Laura. And it's very clear that that correlated with a. I think it's probably pretty close to the end.

AARON

Maybe I just missed this. Oh, Laura, thank you for making.

MATT

Yeah, the cards were amazing cards.

AARON

They're beautiful.

MATT

Oh, wait, okay, maybe it's not. I thought I said. Anyway, yeah, we got, like, a couple grand at the start, and then definitely at least five grand, maybe like, ten grand, somewhere in the five to ten range.

AARON

Can we get a good CSV going? Do you have access to... you don't have to do this right now.

MATT

Wait, yeah, let me grab that.

AARON

I want to get, like, aerospace-engineering-grade CPUs going to analyze the causal interactions here, based on, I don't know, a few kilobytes of data. It's a baby laptop.

MATT

Yeah, this is what the charts looked like. So it's basically like, there was some increase in the first... we raised, like, a couple of grand in the first couple of days. Then, yeah, we raised close to ten grand over the course of the Qualy thing, and then it was basically flat for a week, and then we raised another ten grand right at the end.

AARON

That's cool. Good job, guys.

MATT

And I was very surprised by this.

AARON

Maybe I didn't really internalize that or something. Maybe I was sort of checked out at that point. Sorry.

MATT

I guess. No, you were on vacation because when you were coming back from vacation, it's when you did, like, the fake Sama.

AARON

Yeah, that was on the plane.

LAURA

Okay, yeah, I remember this. My mom got there the next day. I'm like, I'm checking out, not doing anything.

AARON

Yeah, whatever. I'll get RStudio revving later. Actually, I'm gradually turning it into my worst enemy or something like that.

MATT

Wait, how so?

AARON

I just use Python because it's actually faster and I don't have to know anything. Also, wait, this is like a rant. This is sort of a totally off-topic take, but something I was thinking about. No, actually, I feel like a big question is, like, oh, are LLMs going to make it easy for people to do bad things? They make it easier for me to do... maybe not terrible things, but things that are, like, I don't know, of dubious legality, mostly in the realm of copyright violation or pirating, which is never really enforced, as far as I can tell. But, no, I just couldn't have done a lot of things in the past, and now I can, so that's my anecdote.

MATT

Okay, I have a whole python.

AARON

You can give me a list of YouTube URLs. I guess Google must do, like, a pretty good job of policing public websites, like YouTube-to-MP3 sites, because none of them really work very well or very fast. But you can just do that in Python in, like, five minutes. But I couldn't do that before, so.
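
(The kind of five-minute script Aaron is gesturing at, as a rough sketch using the yt-dlp library. The URL below is just a placeholder, the MP3 conversion requires ffmpeg to be installed, and whether downloading any given video is allowed depends on its terms and licensing.)

    from yt_dlp import YoutubeDL

    # Placeholder list; swap in whatever URLs you actually have.
    urls = [
        "https://www.youtube.com/watch?v=EXAMPLE_ID",
    ]

    options = {
        "format": "bestaudio/best",          # grab the best audio-only stream
        "outtmpl": "%(title)s.%(ext)s",      # name files after the video title
        # Requires ffmpeg on the system to convert the audio stream to MP3.
        "postprocessors": [
            {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}
        ],
    }

    with YoutubeDL(options) as ydl:
        ydl.download(urls)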

MATT

I feel like, to me, it's obvious that LLMs make it easier for people to do bad stuff, exactly as you said, because they in general make it easier for people to do stuff, and they have some protections on this, but those protections are going to be imperfect. I think the much more interesting question, in some sense, is: is this a step change relative to the fact that Google makes it way easier for you to do stuff, including bad stuff, and the printing press made it way easier for you to do stuff?

AARON

I wouldn't even call it a printing press.

MATT

Like, I think, including bad stuff. So it's like, right, every invention that generally increases people's capability to do stuff and share information also has these bad effects. And I think the hard question is, are LLMs... wait, did I just x...

AARON

No, I don't think, wait, did I just like, hold on. I'm pretty sure it's like still wait, how do I have four things?

LAURA

What is the benefit of LLMs versus.

AARON

You can ask it something and it tells you the answer.

LAURA

I know, but Google does this too.

AARON

I mean, I don't know if I have, like, a super... I don't think I have any insightful take. Just, in some sense, maybe these are all not the same, but maybe they're all of similar magnitude. But object level: now we live in a world with viruses, CRISPR. Honestly, I think to the EA movement's credit, an indefinite pause, stopping AI, is just not something that I support. It's not something most people support, it's not the official EA position, and I think for good reason. But yeah, going back to whatever it was, like 1416 or whatever, who knows: if somebody said, somebody invented the printing press, and somebody else was like, yeah, we should... well, I think there are some pretty big disanalogies, just because of, I guess, biotech in particular, just how destructive existing technologies are now. But if somebody had said back then, yeah, let's wait six months and see if we can think of any reason not to release the printing press, I don't think that would have been a terrible thing to do. I don't know, people... I feel like I'm saying something that's going to get coded as pretty extreme. But, like, ex ante, it's hard ex ante. People love thinking ex post. Nobody... like, I don't know. I don't actually think that was relevant to anything. Maybe I'm just shit-faced right now.

MATT

On one shot of vodka.

AARON

$15 just to have one shot.

MATT

I'll have a little.

AARON

Yeah. I think is honestly, wait. Yeah, this is actually interesting. Every time I drink I hope that it'll be the time that I discover that I like drinking and it doesn't happen, and I think that this is just because my brain is weird. I don't hate it. I don't feel, like, bad. I don't know. I've used other drugs, which I like. Alcohol just doesn't do it for me. Yeah, screw you, alcohol.

MATT

Yes. And you're now $15.99 cheaper. Or, $15.99 poorer.

AARON

Yeah, I mean, this will last me a lifetime.

MATT

You can use it for, like, cleaning your sink.

AARON

Wait, this has got to be the randomest take of all time. But, yeah, actually, like, isopropyl alcohol: top-tier disinfectant. Because you don't have to do anything with it. You leave it there, it evaporates on its own.

MATT

Honestly. Yeah.

AARON

I mean, you don't want to be in an enclosed place or whatever. Sorry. Okay, forget it. This is, like...

MATT

No, I mean, it seems like a good take to me.

AARON

That's all.

MATT

Yeah, this is like a very non sequitur.

AARON

But what are your guys' favorite cleaning supplies?

MATT

Okay, this is kind of bad. Okay, this is not that bad. But I'm, like, a big fan of Clorox wipes.

AARON

Scandalous.

MATT

I feel like this gets looked down on a little bit because it's like, in theory, I should be using a spray cleaner and sponge more.

AARON

If you're like, art porn, what theories do you guys.

MATT

If you're very sustainable, very, like... you shouldn't just be buying your plastic bucket of Clorox-infused wet wipes; you're killing the planet.

AARON

What I thought you were going to say is, like, oh, this is, like, germaphobe-coded.

MATT

No, I think this is fine. I don't wipe down my groceries with Clorox wipes. This is like, oh, if I need to do my deep clean of the kitchen, what am I going to reach for? I feel like my roommate in college was very much... oh, I was going to say I used to be this person. No, I'm saying he was, like, an anti-wet-wipe-for-sustainability-reasons person. He was like, oh, you should use a rag and a spray cleaner and wash the rag after, and then you will have not used vast quantities of resources to clean your kitchen.

AARON

At one point, I tweeted that I bought... actually, I don't do this anymore because it's no longer practical, but I used to regularly buy, like, 36-packs of bottled water for, like, $5 or whatever. And people... actually, I think it was, like, close to a scissor statement, honestly. Because, object level, you know what I mean, right? It's not bad. For anything. I'm sorry, it just checks out. But people who are normally pretty technocratic or whatever were kind of, like, I don't know, getting heated about it.

MATT

I think this is an amazing scissor statement.

AARON

Yeah.

MATT

Because I do.

AARON

I used to be like, if I were to take my twelve year old self, I would have been incredibly offended, enraged.

MATT

And to be fair, I think in my ideal policy world, there would be a carbon tax that slightly increases the price of that bottled water. Because actually it is kind of wasteful. There is something... something bad has happened there, and you should internalize those costs.

AARON

Yeah, I think in this particular case, I think, like, thin plastic is just, like, not... Yeah, I don't think it would raise the price by a very large amount, I guess.

MATT

I think this is probably right that even a relatively high carbon tax would not radically change the price.

LAURA

It's not just carbon, though. I think because there is land use implicated in this.

AARON

No, there's not.

LAURA

Yeah, you're filling up more landfills.

AARON

Yeah, I'm just doing like hearsay right now. Heresy.

MATT

Hearsay. Hearsay is going to be... whatever. Well, wait, no, heresy is if you're arguing against standardly accepted doctrine. Hearsay is, like... well, it's both, then. You're just saying shit.

AARON

I'm doing both right now. Which is that actually landfills are usually like on the outskirts of town. It's like, fine.

LAURA

They're on the outskirts of town until the town sprawls, and then the elementary school is right on top of one.

AARON

Yeah, no, I agree in principle. I don't have a conceptual reason why you're wrong. I just think, basically... honestly, the actual heuristic operating here is that I basically outsource what I should pay attention to, to other people. And since I've never seen a LessWrong post or a Gwern post about how actually landfills are filling up, it's like, fine, probably.

LAURA

No, this is me being devil's advocate. I really don't care that much about personal waste.

MATT

Yeah, I mean, I think plausibly there is, right? So I think, object level, the things that matter when we think about plastic: there is a carbon impact; there is a production impact, like, you need to think about what pollution happened when the oil was drilled and stuff; and then there is, like, a disposal impact, if you successfully get that bottle into a trash can, for what it's worth.

AARON

My bottles are going into the goddamn trash can.

MATT

Ideally a recycling bin. No... apparently recycling... I mean, recycling is.

AARON

Well, I mean, my sense is like apparently recycling. Yeah, I recycle metal. I think I do paper out of convenience.

MATT

If you successfully get that bottle into a waste disposal system that is properly disposing of it, rather than, like, throwing it on a trash heap, then I think... my guess is that the willingness to pay, or if you really crunch the numbers really hard, it would, once again, not be a huge cost, the landfill cost. On the flip side, if you throw it in a river, that's very bad. My guess is that it would be right for everyone on Twitter to flame you for buying bottles and throwing them in a river, if you did that.

AARON

What is the impact on wild animal welfare in equilibrium? No, just kidding. This is something... Yeah, don't worry, guys. No, I was actually the Leave No Trace coordinator for my Boy Scout troop. It's actually kind of ironic, because I think it's probably, like, a dumb ideology or.

LAURA

Whatever, it's a public good for the other people around you to not have a bunch of garbage around on that trail.

AARON

Yeah. I do think... I went to an overnight training for this. They're very hardcore, but basically conceptually incoherent people. I guess people aren't conceptually incoherent; their concepts are incoherent. People who think it's really important that you don't have an impact, that you walk through mud instead of expanding the trail or whatever. This is not even worth the time right now. Let's figure out how many digital shrimp we need by the heat death of the universe.

MATT

Yeah, I mean, I will say it's probably worth mentioning here, right, that in practice, your carbon and land use footprint is actually really small relative to the average. Yes, you buy a bunch of bottled water, but you live in a dense, walkable community and you rarely drive and all of these things. So in practice, all the people who are roasting you on Twitter for buying.

AARON

All the water... what they should roast me for is, like, my diet. That is actually plausibly the worst thing that I do from a climate standpoint. Yeah, I think this is probably... I mean, I've given my take on this. Listen, if you're one of the seven people listening to this, you probably know what it is.

MATT

Yeah, it is true that two of your regular listeners are here on the podcast with you, which reduces the audience.

AARON

Yeah, I think my sister... the episode with my sister is going to get some... it's going to get some people.

MATT

Oh, yeah, me too.

LAURA

I want to hear a normie also.

AARON

I don't know. I get along great, honestly. Should I call her up? But her friend is like, she's the perfect intermediary my sister and I. Yeah, I guess we don't talk about. I don't know, she's like much more, like happy go lucky. Like less, I don't know, like nerdy person or whatever. But yeah, like our friend. You know what? I'm friends with Annie, too. Annie is like a good intermediary.

MATT

Can I just say, I think my favorite Pigeon Hour moment... okay, I have a couple favorite Pigeon Hour moments. One is when you literally said, like, "hedonic utilitarianism and stuff." That's, like, in the transcript of the episode with Max. And it's just, like, the most perfect distillation of Pigeon Hour, that line. But then also when you just left in, in an episode with you and Sarah, the part where you're discussing whether she could sleep on your couch if she visits DC.

AARON

How weird.

MATT

And I think at some point you're like, we're going to edit this out.

AARON

I always do that and then don't do it. Whatever.

MATT

Talking about travel logistics.

AARON

It's like not a big deal.

MATT

You can tell that last bit of vodka really got me.

AARON

But, yeah, I feel like this isn't especially, like, insane. I don't know. Well, no, mine was, like... I took it out eventually, but I left in Nathan's fifteen-minute, or, sorry, five-minute bathroom break. I actually felt kind of bad, because it was honestly as normal and as minimally embarrassing as it could have been. Like, oh, yeah, I'm going to use the bathroom now. And then there was, like, five minutes of silence. Yeah.

MATT

And I have been amused by the fact that this is like, Sarah has started her own podcast now. You have inspired others with your hedonic utilitarianism, honestly, and your travel.

AARON

I really shouldn't be. As people can tell, I'm not the most eloquent person. Alcohol doesn't help, but I'm never the most eloquent person. But you know what it's like I'm creating awareness of people with.

MATT

Well, I mean, what I was going to say is, in some sense, it is this very unique take on the genre to leave in the bathroom break and the discussion of travel arrangements. Right. I'm laughing now, but I sort of genuinely got a kick out of those things. And in some sense, you're subverting the norms. This is actually art.

AARON

Yes, it is. Yeah. Honestly, I feel like I mentioned this before, but my whole ethos here (I feel very self-important discussing my ethos as an artist, as a creator, as they say) is basically doing the opposite of what I tried to do the first time, when I started the podcast, which is when I interviewed Rob Wiblin and spent hundreds... honestly, it was a good episode. It was a two-hour episode, and I probably spent something in the hundreds of hours on it. I think I guesstimated maybe, like, 250 hours or something in total for a two-hour episode. It was cool, right? I think the output was awesome, but that was a lot of effort.

MATT

Wait, okay, so how do you get to 250 hours?

AARON

So I did a bunch of research.

MATT

That's like six weeks of work.

AARON

So the stages... I pulled 250 out of my, like... but I do remember the hundreds thing; it's probably at least 100 hours. But good question. No, I think most of it was, like, going and reading all of Rob's old blog posts and every time he was interviewed on another podcast, and taking notes and coming up with questions based on that stuff. And then.

MATT

What even, like then you presumably recorded and then you edit. Well, did you edit or did.

AARON

No, they edited. And then there was a whole back-and-forth beforehand, putting together the questions, basically sketching an outline, like a real talk about.

MATT

Sure. I've wondered about this for 80k, like, how much the guests prep the specific questions they are asked.

AARON

I mean, I don't think it's a secret. Maybe, I don't know, if some anonymous 80k Gmail account or email account sends me a message saying, better take that shit out, this is top secret, then I will. But no, I don't think it's secret to say that at least the questions are decided ahead of time. Well, not decided unilaterally by one party. I think the main thing is just that they're not trying to... the show is not conducive to what you would do if you wanted to see if a politician was lying or whatever. It's supposed to be eliciting people's views. They do this selection (I'm being a stan because, I don't know, I don't want to lie, but I really do think it's, like, a good show or whatever), they do this selection at the level of choosing good people to come on. They want to hear the expected stuff. Honestly, I do think maybe, a little bit on the margin, there should be a little bit more pushback or whatever during the interview. But no, I think in general, and maybe the more important thing is, making it so that people can take out stuff after the fact. I don't know, this makes total sense. I don't see how people don't do that. You know what I mean? Why would you not want to make everybody chill, instead of trying to catch them saying something they don't want to say on a live feed or whatever?

MATT

I mean, I think it all depends on the context, right? Because clearly there are venues where you're trying to... okay, I'm going to make an analogy with a job interview, right? You wouldn't want a candidate to be able to edit the transcript of the job interview after the fact to make themselves look good, because the goal of the job interview is to understand the candidate's strengths and weaknesses. And that requires sort of pushing them and putting them on the spot a little bit, in ways that they may, after the fact, be like, I wish I hadn't said this thing. Because your goal is to elicit an understanding of how they will do in a workplace where they're going to be asked to do various challenging things that they can't go back and edit afterwards. So too with a politician, where you're trying to decide: do I want this person to be president? You don't want them to get to present their best face in all.

AARON

No, yeah, I totally agree. It seems like most podcast episodes don't have those features or whatever, where there's good reason to get an unbiased view, where it's important to convey full information about what actually went down during the actual live recording. Compare it to writing: nobody thinks that authors have a duty to the public such that if they wrote a sentence and then took it out, it should go in a footnote or something. You know what I mean? So it seems very intuitive to me that... And I think, honestly, substantively the most important thing is that people are just more relaxed, more willing to say things that are closer to the line, because they can think about it later and say, oh, yeah, maybe we can take this out, instead of just avoiding whole sections of topics entirely. I... it... I might have to cut this out. Yeah.

MATT

No, it's like getting late. We have recorded.

AARON

This is actually kind of sad. That's only 11:00 p.m. I feel like we should be, like, raging right now.

LAURA

But except for raging on behalf of the machine, likewise.

AARON

Actually, not even on behalf of the.

MATT

Machine. Honestly, you're not a Mayor Pete stan.

AARON

Wait, what does that have to do with.

MATT

Oh, the classic meme is Mayor Pete standing in front of a whiteboard. And the whiteboard has been edited to say, what if we rage on behalf of the machine? It's like a commentary on Mayor Pete's campaign.

AARON

My mom definitely... I do not deserve one. She definitely clicked send. Anyway, it's not even that important. It's not like, say, tell me if you're alive. It's like a general...

MATT

Are we going to leave that bit in the podcast?

AARON

Yeah, I'm actually hoping to interview her about tennising me. Just kidding about her. Like I probably shouldn't say that in case it never actually happens.

MATT

Do we want to do a wrap here and do we have any final things we want to say that we.

AARON

Can, like we can always do?

LAURA

Do we want to put confidence intervals on the number of people who are going to listen?

MATT

Yes.

AARON

Should I pull up the data? Episode data or whatever, about how many people... Oh, no, because... sorry. Wait, I do have access to that, since moving to Substack from Spotify in general.

MATT

People always underestimate how big the variance is in everything. So I think I need to put some probability on this going, like, weirdly viral. Like, some subreddit dedicated to sexy men's voices discovers Aaron. Yes, correct.

AARON

In fact, they're already doing it right, as we see.

MATT

And so then this gets like reposted a thousand times.

AARON

Wait, it doesn't give me... Oh, wait, no, sorry. Podcast. So Best of Pigeon Hour, that episode has 65 downloads.

LAURA

Nice.

AARON

Wait, what's that? Wait, so some of the old ones... one of the ones that I was on, like, a while ago, like pre-Pigeon Hour, had, like, 161. I guess that's the peak, even though it's, like, mostly hosted elsewhere, I guess. Oh, Daniel Salons has one, I think. Laura, you were... wait, no, this must be wrong, because it says that you only have 29, but that must be since I moved it to Substack. So I don't think this is actually correct.

MATT

I can only say there was at least one because I listened to it. That's the only data I have daily.

AARON

What happened? Yeah, so the most downloads in a single day was 78 on January 24.

MATT

That's surprisingly many.

LAURA

Yeah.

AARON

What happened? Okay, wait, what was on January 24? Oh, that was Best of Pigeon Hour. It's not insane. Whatever.

LAURA

All right, my CI is, like, 15 to 595.

MATT

Okay. My 95% is going to be wider, for weird tail things. So I'm going to say, like, ten to 5,000. Wait, I don't think there's a 5% chance... or, I don't think there's a 2.5% chance that there are more than 5,000 listens. I'll go ten to 1,000.

AARON

I will go eleven to 990 so I can sound better. Wait, in fact, we're both right. Mine is more right.

MATT

Okay, I just want to note, Aaron, that it is a crime that you don't use the "fuck your life, bing bong" throwing-the-pigeon video as your intro.

AARON

What is anybody talking about right now?

MATT

Okay, give me a sec, give me a sec. I sent this to you at some point... right, here we are... perfect... and he throws a pigeon at them.

AARON

I wonder... hopefully the pigeon is doing well. Wait, I really hope... yeah, this is all working, et cetera.

MATT

Honestly, if this wasn't recording, I will be so happy.

AARON

No, that's what happened in my first episode with Nathan. Paul Nathan? Or UK Nathan... yes, forecaster Nathan, yes, prediction market Nathan. Yeah, no, I mean, I felt bad; it was totally my fault. I was like, yeah, that's why I pay $10.59 a month for fancy Google Meet and two terabytes. Cool. Can I subscribe?

MATT

Yes, you can press stop all.
