Aaron's Blog
Pigeon Hour
#5: Nathan Barnard (again!) on why general intelligence is basically fake

Summary from Clong:

  • The discussion centers on whether a unitary general intelligence or cognitive ability exists as a real and distinct thing.

  • Nathan argues against it, citing evidence from cognitive science of highly specialized, localized brain functions that can be damaged independently: losing linguistic ability, for example, does not harm spatial reasoning.

  • He also cites evidence from AI, like systems excelling at specific tasks without general competence, and tasks that are easy for AI but hard for humans (and vice versa). This suggests human cognition isn't defined by some unitary general ability.

  • Aaron is more open to the idea, appealing to an intuitive sense of a qualitative difference between human and animal cognition: the use of symbolic reasoning in new domains. But he acknowledges the concept is fuzzy.

  • They discuss whether language necessitates this general ability in humans, or is just associated. Nathan leans toward specialized language modules in the brain.

  • They debate whether strong future AI systems could learn complex motor skills just from textual descriptions, without analogous motor control data. Nathan is highly skeptical.

  • Aaron makes an analogy to the universe arising from simple physical laws. Nathan finds this irrelevant to the debate.

  • Overall, Nathan seems to push Aaron towards a more skeptical view of a unitary general cognitive ability as a scientifically coherent concept. But Aaron retains some sympathy for related intuitions about human vs animal cognition.

Transcript

Note: created for free by Assembly AI; very imperfect

NATHAN

It's going good. Finished the report.

AARON

Oh, congratulations.

NATHAN

Thank you. Let's see if anyone cares on the forum. So far, still, no one cares on the forum. I think they don't.

AARON

Yeah, because they shouldn't, but I know how that can be.

NATHAN

It's slowly improving.

AARON

Let me see if I can find.

NATHAN

Slowly getting more karma. Okay.

AARON

Oh, nice. 14. Oh, wait, I haven't upvoted it.

NATHAN

Oh, my goodness.

AARON

Wait, actually, hold on. Do I really want to strong upvote this? Yeah, I think I do. At least... I don't know how bad... wait, you know what? To preserve the forum's epistemics, I'm going to only normal upvote it for now, and then decide later.

NATHAN

That's good.

AARON

Okay. And later I might change it to a strong upvote. I'm sorry.

NATHAN

I know. Oh, it's gone up to 20 now. That's exciting.

AARON

Oh, not for long. It's going to go back down to 16.

NATHAN

Oh, no.

AARON

Because when I strong upvote, I guess my upvote counts for six.

NATHAN

Wow.

AARON

Yeah. So I say we don't talk about compute governance or bank regulation. I say we talk about literally, almost literally, anything else.

NATHAN

Yeah, I agree.

AARON

What was the hot take that you had? I forget.

NATHAN

Yeah.

AARON

Go ahead.

NATHAN

My hot take is... to be honest, this is actually quite a cold take. My cold take: I don't think general intelligence is real.

AARON

So are you talking about okay. Are we talking more about IQ stuff or, like, AI stuff?

NATHAN

AI stuff.

AARON

Okay, that's more... okay. I feel like I don't even know where to start. Well, I kind of do, but I feel like it's an underdefined point. In my brain it's not as fundamental as velocity or something. It's not super well defined.

NATHAN

Yeah.

AARON

So can you explicate what you mean by general intelligence, I guess, or what it is you don't think exists?

NATHAN

Yeah, so when I say general intelligence, what I mean is that there's this faculty, and some set of tasks which can't be done without it. The faculty can be turned up and down, and by turning it up and down you get better and worse at tasks. And maybe you can discover some new abilities for new tasks, but once you have the faculty, you're just able to learn a much, much broader range of tasks than some intelligence without this general intelligence faculty. I think this is the sort of thing people say humans have and current AI systems don't have, and probably squirrels don't have general intelligence either, at least in the way the term is used colloquially. Yeah, I think that's part one of the general intelligence hypothesis. And then there's part two of the hypothesis, which I think is even more controversial, which is that once you cross the threshold of general intelligence, this is intrinsically tied up with the pursuit of goals.

AARON

Yeah.

NATHAN

I sort of reject both of these. I reject the second hypothesis even conditional on the first being true.

AARON

Yeah. Okay. The first thing, I think I latched onto it because I was just, like, searching for disagreement, which I guess is.

NATHAN

Like, how my brain works.

AARON

But you said basically it's necessary for certain tasks. I don't think that's how the term is generally used. At the extreme, you can imagine hard-coding a program to do something really hard that a general intelligence could learn on its own, or something like that. It's like the old Chinese room thing, I guess.

NATHAN

Yeah.

AARON

Maybe that deals with qualia, so forget about that. But you could just write down the formula for GPT-4 or whatever, or...

NATHAN

Some arbitrary... some arbitrarily complex but computable task.

AARON

Yeah. Are you going to stand by that claim that general intelligence, as most people use it or whatever, is necessary for certain things?

NATHAN

I think as most people use it, yes. So in Superintelligence, for instance, there's pretty extensive discussion of tasks which AGI completes. I agree, as a technical point, yes: if there's a task which is computable, you could write down a program and compute it. But I think this is in fact not the actual thing at stake when talking about the hypothesis.

AARON

Okay, I basically agree. I guess I was being kind of pedantic or formal or something, but I think we're kind of on the same page. Yeah. Okay. Is there a qualitative difference between humans and squirrels, then? And if so, what is it, if not general intelligence?

NATHAN

I think basically no, there isn't a qualitative difference between humans and squirrels. The thing which comes closest to one is probably being able to understand things in a hierarchical structure. And plausibly episodic memory as well, though I don't think episodic memory is particularly critical.

AARON

Did you say excellent?

NATHAN

No, episodic.

AARON

Oh, okay. That's a formal term.

NATHAN

This is like a formal term.

AARON

Okay.

NATHAN

I think it's easiest when you contrast it with semantic memory. So you've got procedural memory, which is memory of, say, how to throw a ball. Then semantic memory is memory of specific facts, divorced of source... divorced of context isn't quite right, but sort of divorced of context. Like remembering that the capital of the United States is Washington, DC. Yeah, without remembering where you learnt it. Episodic memory is memory of where you learnt it, for instance, or just...

AARON

Like, normal autobiographical memory. Like the memory of what it was like to go on a walk or something.

NATHAN

Yeah. I think autobiographical memory is a subset of episodic memory. I'm not 100% sure of this.

AARON

Yeah. So, I mean.

NATHAN

I... yeah, go ahead. Sorry, I was checking whether I'm right about autobiographical memory.

AARON

Nice.

NATHAN

It looks like it's basically just the same as autobiographical memory. It looks like autobiographical memory is basically the same thing; it seems like in the cognitive science literature the concept is called episodic memory.

AARON

What was the first one again? I already forgot.

NATHAN

Oh, semantic memory.

AARON

What was the example? Okay, I remember the capital of the United States thing. What was the other example?

NATHAN

Procedural memory.

AARON

Yeah. Do you have an example for that?

NATHAN

Yeah, being able to play the piano. That's procedural memory.

AARON

I don't even know if I want... it seems like, just like everything else, you're more well read in the cognitive science and psych literature, I guess. But I wouldn't normally call that memory. Well, I guess I kind of would. I don't know what I would call it. It's not a central example of memory, but sure.

NATHAN

Within the cognitive science literature, this is one of the things it's called.

AARON

Then everything is memory. What's a cognitive capacity that's not memory?

NATHAN

Processing facial signals, for instance, is not procedural memory.

AARON

Even though you remember how to do it? Because it's genetically coded, it doesn't count?

NATHAN

No, but it's different, as in you can lose your procedural memory and still be able to, for instance...

AARON

You could lose your ability to notice faces, like, in principle, completely?

NATHAN

Oh, no. Okay. Your eyes take in a bunch of light and your brain processes it into various things, so that there's a bit which does movement, a bit which does depth perception, a bit which builds up objects out of more discrete parts. You'd still be able to do all these tasks even if you lost procedural memory.

AARON

Is that because they are a direct consequence of the physical structure of the neurons rather than the behavior of how they interact?

NATHAN

Okay.

AARON

No, over two. Okay.

NATHAN

Edge detection is different.

AARON

Well, then in that case, why couldn't you lose your ability to notice movement or edges or whatever?

NATHAN

So you can. Okay, I'm just referencing the cognitive science literature here. I'll try to be as precise as I can.

AARON

No, I feel like this is not that important.

NATHAN

But there's stuff which you wouldn't normally consider learning. You wouldn't normally consider learning how to sit. You would consider learning how to play the piano, and that becomes part of your procedural memory. You wouldn't normally consider learning how to move your tongue muscles, for instance.

AARON

I feel like you probably like in utero. You probably do. I'm not actually sure.

NATHAN

I think the tongue thing was a bad example. You don't have to learn how to see, for instance, or learn how to smell, or learn how to...

AARON

You don't learn how to take in photons or whatever. You do learn how to... well.

NATHAN

I'm using "learn" here in the colloquial sense, to try and get across the distinction between procedural memory tasks and non-procedural memory tasks. As a rule of thumb, tasks which you'd colloquially say you learned how to do, like playing tennis or playing the piano, are done by procedural memory, and then there are tasks which you'd colloquially say you didn't learn how to do, like seeing, or, what's another good one, regulating your heartbeat.

AARON

Okay, yeah. I feel like that's the most clear-cut example of a thing that your brain does... I guess your brain stem, but still, your brain or your nervous system does it, and you definitely don't learn it. It's as close to not learning as you can get. Okay, this is maybe a red herring, but I'm interested: do you think it's interesting to try to dissect the difference between learning piano and learning how to detect edges? I don't know what I'm talking about, but as an extreme layperson, it seems like these are kind of the same type of thing, even though they're radically different in terms of difficulty and contingency.

NATHAN

Yeah. I'm just going to check how much of procedural memory is in the hippocampus. I think it's not there; it's not specialized. So lots of memory is done in the...

AARON

Hippocampus. That's, like, the interesting part of the brain? That's, like, all I know about.

NATHAN

Oh, I think it's one of the interesting parts of the brain. There are lots of interesting parts of the brain, yeah. Procedural memory seems to be a lot in, like, the motor cortex, cerebellum, and basal ganglia. So you could fuck up your motor cortex in some... I'm almost 100% sure there'll be people who've had injuries to their motor cortex and lost the ability to play football, but with no effect on their ability to process movements, like, see movements.

AARON

Okay. Should we bring it back to general intelligence? Wait, so this is like the squirrel thing. Okay. You were explaining why.

NATHAN

One of the things which is thought to be sort of distinct about again, all my knowledge here comes from me reading cognitive science and neuroscience textbooks.

AARON

You do this for fun?

NATHAN

No, I do this because I think it's very cruxy for whether you'll die.

AARON

Okay. But it's not like... okay, I was using "fun" as a broad term. It's not like you majored in neuroscience.

NATHAN

Oh, sure. Yeah.

AARON

Okay. Congratulations on out-nerding me, by far. I don't say that lightly.

NATHAN

Um, what was I going to say? Oh, yes. There are some cognitive skills which humans seem to have which other animals just don't seem to have at all. One of them is episodic memory. Another seems to be this hierarchical way of putting language together, and potentially other tasks we solve built up in this hierarchical way. And then there are a few other abilities around cooperation and joint intentionality which seem unique to humans... unique to humans compared to, say, chimpanzees.

AARON

Yeah. So when I think about what I naively, and maybe legitimately, think of as general intelligence, it actually isn't any of the things you just listed. It's more like just symbolic representation or something like that.

NATHAN

Wait, I actually just need the loo. I'll be back.

AARON

No problem. Okay. Hello.

NATHAN

I am back. Great.

AARON

Okay.

NATHAN

Yes, this symbolic reasoning stuff. Yeah. So I think if I was making the case for general intelligence being real, I wouldn't use symbolic reasoning, but I would use language stuff. I'd use this hierarchical structure thing, which...

AARON

I would probably... so I think of at least most uses of language, the central examples, as a type of symbolic reasoning, because words mean things. They're, like, pointers to objects or something like that.

NATHAN

Yeah, I'm pretty confident that this isn't a good enough description of general intelligence. So, for instance, and I'm checking so I don't fuck this up, I'm not making this up: a lot of what connects words, as these arbitrary signs, to the things they point to happens mostly in an area of the brain called Wernicke's area. But very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension, who lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And conversely, people with Broca's aphasia, where Broca's area in the brain gets fucked up, will not be able to form fluent sentences and have problems with syntax, but they'll still have very good spatial reasoning. They could still, for instance, be good engineers and solve many problems which constitute engineering.

AARON

Yeah, I totally buy that. I don't think language is the central thing. There's a simplified model I could make, which is that it's an outgrowth of whatever general intelligence really is. But whatever the best spatial or graphical model is, I don't think language is cognition.

NATHAN

Yes, this is a really big debate in psycholinguistics: whether language is an outgrowth of other abilities the brain has, or whether there are very specialized language modules. It's a very live debate in psycholinguistics at the moment. Before I say which way I lean, I'm actually just going to explain this hierarchical structure thing I keep talking about. So one theory for how you can comprehend new sentences, the dominant theory in linguistics, is that you break them up into chunks, and you form these chunks together in a tree structure. So if you hear a totally novel sentence, like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is that it can be broken up into a tree structure, where different bits of the sentence, like "the mastiff," would be one bit, and then you have another bit, "flopped around," and then you have connectors joining them together.
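To make the tree idea concrete, here's a minimal sketch in Python of the kind of constituency structure Nathan is gesturing at. The bracketing is illustrative only, not the analysis a linguist would give:

```python
# A toy constituency tree for the novel-sentence example.
# Each node is (label, children...); a bare string is a leaf word.
sentence = ("S",
    ("NP", ("Det", "the"), ("N", "pit bull mastiff")),
    ("VP", ("V", "flopped"),
           ("AdvP", ("Adv", "around"), ("Adv", "deliciously"))))

def leaves(tree):
    """Read the words back out of the tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    words = []
    for child in tree[1:]:   # tree[0] is the node label
        words.extend(leaves(child))
    return words

print(" ".join(leaves(sentence)))
# the pit bull mastiff flopped around deliciously
```

The point of the hypothesis is that comprehension works by composing familiar chunks ("the mastiff," "flopped around") with a small set of combination rules, which is why a never-before-heard sentence is still interpretable.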

AARON

Okay.

NATHAN

So, "the mastiff rolling around." One theory is that one of the distinctive abilities humans have is this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but it's a view I broadly buy, because we do see harms to other areas of intelligence: you get much worse at Raven's Progressive Matrices, for instance, when you have an injury to Broca's area, but not worse at tests of spatial reasoning, for instance.

AARON

So is there, like, a main alternative to how humans understand language?

NATHAN

As far as the specifics of how we parse completely novel sentences, this is just, like, the academic consensus. Okay.

AARON

I mean, it sounds totally right? I don't know.

NATHAN

Yeah. But going back to how far language is an outgrowth of general intelligence versus there being much more specialized language modules: I lean towards the latter, though I still don't want to give too strong a personal opinion here, because I'm not a linguist.

AARON

This is a podcast. You're allowed to give takes. No one's going to say "this is, like, the academic consensus." We want takes.

NATHAN

We want takes. Well, the take that's gone into my head is: I think language is not an outgrowth of other abilities. I think the main justification for this is the pattern of lost abilities we see when there's damage to Broca's area and Wernicke's area.

AARON

Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is that I think "outgrowth" can mean a couple of different things. I definitely think it's plausible. I haven't read about this, I think I did at some point, but not in a while. But outgrowth could mean temporally or whatever. I'm kind of inclined to think it's not that straightforward: you could have coevolution, where language per se encourages both its own development and the development of some general underlying trait or something.

NATHAN

Yeah. Which seems likely.

AARON

Okay, cool. So why don't humans have general intelligence?

NATHAN

Right. Yeah. As I was sort of talking about previously.

AARON

Okay.

NATHAN

I'd like to go back to a high-level argument, which is that there appear to be much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. A famous example is face blindness, actually. You can even lose the ability to specifically recognize things which you're, like, an expert in.

AARON

Who does? Or, who loses this ability?

NATHAN

If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.

AARON

Okay.

NATHAN

And there's this general pattern: you can lose much more specific abilities than you'd expect. So, for instance, if you have damage to your ventromedial prefrontal cortex, you can state the reasoning for why you shouldn't compulsively gamble, but still compulsively gamble.

AARON

For instance. Okay, I understand this, not gambling per se, but executive function stuff, at a visceral level. Okay, keep going.

NATHAN

Yeah. Some other nice examples of this. I think memory is quite intuitive. So there's a very famous patient called Patient HM who had his hippocampus removed, and as a result lost the ability to form new declarative memories: memories of specific facts and things which happened in his life. He just couldn't remember any of these things, but he was still perfectly functional otherwise. At a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. Fundamentally, the general intelligence hypothesis implies that if you harm a piece of your brain, if you have some brain injury, you should generically get worse at all the tasks which use general intelligence. But instead, abilities people would include in general intelligence, like the ability to write, the ability to speak, maybe not speak, the ability to do math, can be lost individually.

AARON

It's just not as easy to analyze in a cog-sci paper as IQ or whatever. So there is something where, if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does, but non-spatial modification is just harder to study and analyze. I guess we can give people drugs, right? Set aside the psychometric stuff; suppose that general intelligence is mostly a thing and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of various drugs, I don't know, like laughing gas, probably weed, I don't know.

NATHAN

So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory can generically make you much better at lots of tasks...

AARON

Yeah.

NATHAN

Sorry, but this is just a specific ability. It's specifically your working memory which is improved if you take those drugs. I think there are a few things, like memory, attention, maybe something like decision-making, which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention and your working memory, but you had some brain injury which meant you'd lost the ability to parse syntax, you would not get better at parsing syntax. And you can also improve things separately: you can improve attention and improve working memory separately. It's not just this one dial which you can turn up.

AARON

There's good reason to expect that we can't turn it up, because evolution is already sort of maximizing given the relevant constraints, right? So you would need to look at injuries. Maybe there are studies where they try to add a cubic centimeter to someone's brain, but normally it's the opposite: you start from some high baseline and then see what faculties you lose. Just to clarify, I guess.

NATHAN

Yeah, sorry, I think I've lost the thread. You still think there probably is some general intelligence ability to turn up?

AARON

Honestly, I haven't thought about this nearly as much as you, and at some level I don't know what I think. If I could just write down all the different components, and there are like 74 of them, that what I think of as general intelligence consists of, does that make it... I guess in some sense, yeah, that does make it less of an ontologically legit thing or something. The motivating thing here is that with humans, we know humans range in IQ, and, setting aside a very tiny subset of people with severe brain injuries or developmental disorders, almost everybody has some sort of symbolic reasoning they can do to some degree. Whereas, maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?

NATHAN

Yeah, I know what you're gesturing at. So I think there are two interesting things here. One is, could a squirrel do this? I'm guessing a squirrel couldn't, but a dog probably can, and a chimpanzee definitely can.

AARON

Do what?

NATHAN

Chimpanzees can definitely learn to associate things in the world with arbitrary signs.

AARON

Yes, but maybe I'm just adding epicycles here, and correct me if I'm wrong, maybe I'm just wrong about this, but I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've been in. Right? So a dog will know that a certain sign means sit or whatever, but maybe that's not a good...

NATHAN

I think this is basically not true.

AARON

Okay.

NATHAN

And we sort of know this from teaching.

AARON

Teaching.

NATHAN

There's, famously, Koko the gorilla, and also a bonobo whose name I can't remember, who were taught sign language. And the thing they were consistently bad at was putting together sentences. They could learn quite large vocabularies, by large I mean in the low hundreds of words, which they could consistently use correctly.

AARON

What do you mean? In what sense? What is the bonobo using?

NATHAN

A very famous and quite controversial example: Koko the gorilla saw a swan outside and signed "water bird." That's a controversial example, but the controversial part is the syntax, putting "water" and "bird" together. It's not controversial that she could see a swan and call it a bird.

AARON

Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for g is just at the chimp level or something, or whatever animal is most like that. Sure, if a species really can generate, from a prefix and a suffix or whatever, a concept that they hadn't learned before.

NATHAN

Yeah, this is a controversial example of that; the composition is the controversial part. I suppose this brings back why I think this matters: will there be this threshold which AIs cross, such that their reasoning after it is qualitatively different to their reasoning previously? And this implies two things. One, a much faster increase in AI capabilities, and two, alignment techniques which worked on systems which didn't have g will no longer work on systems which do have g. That's why I think this actually matters. But if we're accepting that g sits at the level of chimpanzees, then chimpanzees just don't look that qualitatively different to other animals. Lots of other animals live in similarly complex social groups. Lots of other animals use tools. Elephants probably also qualify.

AARON

Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point in human evolution from the first prokaryotic cells, or eukaryotic, one of those two, or both, I guess. Where was my train of thought... yes, I know it's controversial, but let's just suppose the sign language thing with the water bird was legit and not a random one-off fluke. Then maybe this is just some weird vestigial evolutionary accident that isn't actually very beneficial for chimpanzees, which they just stumbled their way into, and it enabled evolution to bootstrap chimp genomes into human genomes. Because at some point the smartest... actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But yeah, it could just be not that helpful for chimps, and helpful for an extremely smart chimp that looks kind of different, or something like that.

NATHAN

Yeah. So I suppose the other thing going on here, and I don't want to keep banging on about this, but you can lose linguistic ability. This happens in stroke victims, for instance; it's not that rare. You just lose linguistic ability but still have all the other abilities we sort of think of as general intelligence, the ones which would be included under the general intelligence hypothesis.

AARON

I agree that's, like, evidence against it. I just don't think it's very strong evidence, partially because I think there is a real school of thought that says that language is fundamental. Like, language drives thought. Language is, like, primary to thought or something. And I don't buy that. If you did buy that, I think this would be, like, more damning evidence.

NATHAN

Yeah, I guess. Cool. Okay, so maybe it's worth moving on from this to another piece of evidence which I've been thinking about.

AARON

Yeah, go ahead.

NATHAN

So I suppose the other piece of evidence I've been thinking about is evidence from AI models. There are two consistent patterns we see. One, AI systems getting very good at specific tasks without getting good at other tasks. People consistently predicted that you won't be able to do X unless you have AGI, and they've consistently been wrong about this. And two, this pattern inversion where tasks which seem hard for humans, like multiplying two ten-digit numbers, are consistently easy for AI systems, and vice versa: tasks which are easy for humans, like loading the dishwasher, a classic example, are very hard. You don't yet have an AI system which can go into a random kitchen and load a dishwasher.

AARON

Yeah, I think this is probably the weakest point you've mentioned, because... that's my intuition, yeah. I mean, correct me if I'm wrong, but you kind of think that general intelligence, insofar as it's a useful or true concept, should be discrete or qualitative, I guess not qualitative, but discrete, a discrete ability that develops or something. It shouldn't be just a smooth continuum. You know what I mean?

NATHAN

Yeah, no, I think it definitely can be a smooth continuum, if, say, you get the first spark of general intelligence and you can just turn it up by throwing more compute and data at it, for instance. But the core thing I'm trying to get at is that general intelligence, the way it was used in Superintelligence, for instance, and the way it's used basically quite informally in the AI safety community, isn't used in this way in cognitive science, as far as I'm aware. This isn't a nice, cleanly defined concept I'm trying to argue against; I'm sort of trying to infer meaning here. But I think a core feature of it is that there are some tasks which, in actual intelligence systems, you only get together with other tasks, and as tasks get more complex, eventually there'll be a point where a task is so complex that you'll need this general intelligence faculty, if general intelligence actually exists. I still take your point that you can technically write a program for anything. The AI examples are evidence against both of these things. Chess was a thing which people thought would require general intelligence capacity, and it doesn't, and it didn't. And people would intuitively say that solving differential equations, for instance, would be a thing which requires this general intelligence capacity. It's very easy to get computers to do this, and very hard to get them to do many motor tasks, or language until recently, which we definitely don't think of as requiring general intelligence; we think of them as quite simple tasks. This is maybe less true for language, but definitely true of motor tasks. We definitely don't think of motor tasks as requiring general intelligence. Motor control is actually just an incredibly complex optimal control problem which you have to solve, and this is part of the reason why it's so difficult to get AI systems to have motor skills anywhere close to the level humans or other animals currently have.

AARON

Yeah, the chess thing, that was people talking in the 70s or something. I'm honestly not sure I would have predicted that chess would be something you could get via Deep Blue, whatever that AI architecture was. For one thing, and I'm sure other people disagree, I personally just don't think there's any given task that you need general intelligence for.

NATHAN

Like this is clearly true. This is clearly true.

AARON

One question. Okay, how do you think about GPT-4? Do you think GPT-4... what am I even asking here? Because you don't think it's a thing, right? So how stupid do you think I am for thinking that GPT-4 is basically more like a human than like a squirrel in the general intelligence game?

NATHAN

Yeah. So how GPT-4 fits into my current worldview around general intelligence is... I think what it tells me is that within human language there's lots and lots of structure, including lots of structure about the world. And from this structure about the world there are lots of reasoning tasks you can do. There are also lots of reasoning tasks you have absolutely no hope of doing, and it's very far away from motor control or visual recognition, or... what are some other core human tasks which are really difficult for AI systems? Sorry, visual stuff and motor control are my two go-tos.

AARON

Yeah. So, I mean, I kind of just think like if you oh, sorry, go ahead.

NATHAN

No, yeah, go.

AARON

I think the exact same architecture, just GPT-4 scaled up... the system isn't currently attached to a robotic arm or whatever, but say you emulated a human hand really well as a robotic arm, and then you gave a GPT-n a PDF describing the whole setup and a description of a task with the exact specifications, like a CAD file. I think, yeah, it would be able to, I don't know, juggle or whatever.

NATHAN

This is like an actual... so I think it has no chance of doing that. No chance it does that. Sorry, no, go ahead.

AARON

What do you think?

NATHAN

The thing which I thought you were going to say, and which I think I'm going to agree with, is that it seems like Transformers can generically learn any sequence and then do sequence modeling extremely well. There's some sense in which the transformer architecture is like general intelligence for sequence modeling: anything which can be presented as a sequence can be modeled well by Transformers, it just depends on what data they're trained on. That's the thing I thought you were going to say. But no, I think there's, like, no chance that it does this from reading the PDF... do I actually think this?

AARON

I want to bet. Not very much.

NATHAN

I think it depends on the specificity. If it was, okay, here's the setup: I need you to output a string of numbers which says what power to send to each of these 5,000 motors which control a human body, and I need you to walk up a rocky hill. Your only input is the numbers from the sensors on the robot, and the only output string you can give is what number of watts to send to each motor. I think there's, like, no chance GPT-n does this.
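To pin down what's actually being bet on, here's a hedged sketch of the interface Nathan describes, sensor readings in, watts per motor out, framed as next-token-style prediction. The motor and sensor counts are made up for illustration, and the policy is a stub standing in for the hypothetical GPT-n:

```python
import random

N_MOTORS = 5000   # Nathan's hypothetical figure
N_SENSORS = 200   # assumed for illustration; not specified in the conversation

def read_sensors():
    """Stub for the robot's sensor stream (hypothetical)."""
    return [random.gauss(0.0, 1.0) for _ in range(N_SENSORS)]

def policy(sensor_history):
    """Stand-in for the hypothetical GPT-n policy: given the whole
    history of sensor readings, emit one watt value per motor.
    The bet is whether a purely text-trained model could fill this
    function in competently; this stub just outputs noise."""
    return [random.uniform(0.0, 10.0) for _ in range(N_MOTORS)]

history = []
for tick in range(100):          # 100 control steps
    history.append(read_sensors())
    watts = policy(history)      # the only output channel allowed
    assert len(watts) == N_MOTORS
```

Framed this way, the disagreement is only about whether a text corpus pins down `policy` well enough, not about whether the control loop itself is well defined.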

AARON

I'm surprised to hear that. Why?

NATHAN

Yeah, because this is one of the grand challenges in robotics, and currently only very specialized systems have had success in doing this. I don't see how GPT-n could learn how to do this from the corpus of text which is on the Internet. This seems to imply an extremely strong hypothesis about emergent abilities, which are just incredibly complex. So much of the brains of animals, and so much of the brains of humans, is devoted to motor control. It's just one of the big challenges in robotics to do RL-based motor control.

AARON

Well, I feel like RL-based motor control is actually easier than what I'm proposing, because, I always forget the taxonomy of learning types or whatever, but in broadly ML-based motor control there's a period of trial and error. And I'm proposing that for a novel task, say, juggling three balls for ten minutes in an actual physical room via two robotic arm-hands, it will be able to do this, at least in principle and probably in reality, even if you scrubbed all the training text of information about juggling, but not the information about physics or whatever. Then the emergent ability would be that, on the very first try, an arbitrarily strong system, an arbitrarily strong GPT-4, juggles correctly.

NATHAN

And it's only trained on the corpus, like the current Internet?

AARON

Yeah.

NATHAN

I'd be very happy to take this bet.

AARON

Okay. We can finalize the details later. I'm kind of risk averse, but I still want to actually do it.

NATHAN

We don't have to do it very much money at all.

AARON

Yeah, okay.

NATHAN

Yeah, I know. It would just seem bizarre to me. Many very smart teams, including teams at DeepMind, are working on trying to get robots to do motor control to a degree similar to what humans do. They are not doing this using text corpuses. It's just very hard.

AARON

Maybe juggling is not a great... I stand by it, but maybe it's not a great example, because it's relatively difficult even for a human. I guess marbles or something.

NATHAN

Playing football is a kind of benchmark which is being used.

AARON

Although then the question is, what does that even mean? There's no discrete yes/no type of thing: did it play football?

NATHAN

No. But you could see how competently it plays.

AARON

Yeah. Intuitively, I feel like appearing to be a competent soccer, for us Americans, football player is actually harder than juggling three balls, because... I don't know.

NATHAN

I suppose, do keepy-uppies or something, I don't know. Do a Cruyff turn. Sorry, you Americans.

AARON

Okay. I feel like a good example would be tossing a ball from one hand to the other such that it reaches a height of, like, three feet above the hand, or something like that. I do know how to juggle, and this is step one: you practice juggling, but with one ball instead of three. And yeah, it seems actually not that far-fetched. If you describe via text how the robotic arms work, what type of inputs they take and how those correspond to physical movement, and you have the dimensions of the ball and give it g, gravity, like 9.8 meters per second squared... yeah, it does just seem to be a text prediction task: what output, when given to the robotic arms, makes this ball toss work, or something.
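For what it's worth, the physics of the one-ball toss really is compact. A sketch of the release velocity needed, assuming a point mass, no air resistance, and the 3 ft figure from above (the hard part of the bet is everything this leaves out, like actuating the hand):

```python
import math

g = 9.8          # m/s^2, as given in the conversation
h = 3 * 0.3048   # 3 ft converted to metres, ~0.91 m

# At the peak, all of the launch kinetic energy is potential energy:
#   (1/2) v^2 = g h   =>   v = sqrt(2 g h)
v = math.sqrt(2 * g * h)
t_peak = v / g   # time to reach the top of the arc

print(f"release speed ~{v:.2f} m/s, time to peak ~{t_peak:.2f} s")
# release speed ~4.23 m/s, time to peak ~0.43 s
```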

NATHAN

I agree that if there was lots of that data in its corpus... I can completely agree, if this text was in its corpus.

AARON

Yeah, but like a physics textbook doesn't count.

NATHAN

No, not anywhere close. Not anywhere close.

AARON

No, not one physics textbook. Empirically, I guess it depends... I could just be mistaken about how hard it is, how many parameters the input to toss a ball actually necessitates. I do stand by the claim that, given some large enough body of text describing how the world works, this would be an emergent capability.

NATHAN

Yes, I agree with this. I don't think this emerges unless this is in its training corpus, "this" being, for example, data on the specific watts which need to go into motors in response to image data, and maybe even tactile data, to do motor tasks. I don't think it has to be juggling in particular, but I think it has to be that kind of data. That, I think, could do it.

AARON

I feel like the emergent capabilities of GPT-3.5 and 4 are just pretty strong evidence that you don't need data that concrete. It doesn't have to be that analogous for a strong system to make use of it.

NATHAN

What examples do you have?

AARON

Um, the thing that popped into my mind is GPT... no, that's not a good example. I was going to say GPT-2 playing chess, but there's totally chess in the training data. I guess something I want to test, actually, I don't have access to the GPT-4 API, but something to test would be: make up a game that's as simple as you can reasonably come up with while still being pretty confident it doesn't exist on the Internet, I don't know, some variation of cards and tic-tac-toe or something, describe it in as much detail as you possibly can, and see if it seems to get it. You know what I mean? And I think the answer is, like, yes.

NATHAN

Yeah. So I think the disconnect here is that I view this motor control task as much more similar to "you have a sequence of A's, T's, C's, and G's; what shape does this protein fold into?" than to "I have this simple game, can you play it competently?" Roughly half the neurons of the human brain are devoted to motor control.

AARON

Okay, so maybe half of the neurons in GPT-11 will be devoted to motor control.

NATHAN

If you had this in its training corpus, then totally yes, and I completely believe this. If you had this in its training corpus... wait, no, sorry.

AARON

Actually, I don't stand by that proportion. But I think there are probably some proto motor control systems in GPT-4, in the sense of being able to play some absurdly simple version of Pong if you just give it, I don't know, pixel representations or something in a 32 x 32 square or something like that. That's sort of a proto-motor type of thing, tell me if you disagree. Maybe that's like 1% of the total informational content, but eventually 1% of a gigantic amount of informational content just does contain enough to do, like, human-level motor tasks.
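One way to cash out the 32 x 32 Pong intuition: render each frame as characters and let the model answer with a move token, so the whole game becomes text prediction. A toy sketch, not a description of any actual GPT-4 experiment:

```python
SIZE = 32

def render(ball, paddle_y, paddle_h=5):
    """Draw one 32x32 Pong frame as text an LLM could read."""
    grid = [["." for _ in range(SIZE)] for _ in range(SIZE)]
    bx, by = ball
    grid[by][bx] = "o"                       # the ball
    for y in range(paddle_y, min(paddle_y + paddle_h, SIZE)):
        grid[y][SIZE - 1] = "|"              # right-hand paddle
    return "\n".join("".join(row) for row in grid)

prompt = render(ball=(10, 12), paddle_y=14) + "\nMove (UP/DOWN/STAY): "
# A text-trained model would reply with one of the move tokens;
# whether it plays competently is exactly the disputed question.
```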

NATHAN

I don't think any of the information you need... that's probably not true, there are probably textbooks somewhere. Should I even think this? Are there even files of this anywhere? I don't even know if there are files for this anywhere. Okay, so what do we think is the crux here? Yeah.

AARON

Honestly?

NATHAN

Sorry.

AARON

No, I honestly don't know.

NATHAN

Yeah, so I wonder if the crux is, um, I think you need, like, specific data on... okay.

AARON

I think this is the crux.

NATHAN

I don't think that the abstract knowledge of the equations of motion is anywhere near enough to do motor control. I think it would need, in its corpus, text representations of examples of a robot doing complex motor control to learn complex motor control.

AARON

This is, like, extreme galaxy brain, but I want to say the universe itself is a proof of concept or whatever: you just give some system the laws of physics, or some system just is the laws of physics, and the output is humans juggling.

NATHAN

No, this is wrong. If you do RL sure.

AARON

Like, the universe itself... presumably the informational content of the universe's generating function is, I don't know, a kilobyte or less or whatever. Not very much.

NATHAN

I have no idea how to respond to this claim. I'm sorry.

AARON

Wait, I feel like this is not that original. Other people have used this type of analogy. But I don't know, I guess quantum stuff does complicate it a little bit.

NATHAN

Has this analogy made a prediction which has been proven correct?

AARON

I don't think so, but that type of thing is overrated.

NATHAN

Okay. I love testing predictions. It's my favorite thing to do.

AARON

So do I. I just think it's overrated. I don't know... empiricism, or, like, some people dismiss any type of claim that can't be settled as just devoid of meaningful content. I am anti this.

NATHAN

Yeah, I'm also anti this. Okay. But I suppose... I have lots of evidence from current and past AI systems, what's been hard for them to do, the degree to which they've generalized, how hard it's been to do certain tasks, and also all the evidence from cognitive science. And I compare this to "what is the generating function of the universe?" and I'm like, okay, I know which one I'm putting my stock in. I know which one I bet on.

AARON

I think I sort of lost you there. But the universe thing, I don't know. I guess I want to know what some actual physicist, like Sean Carroll, thinks about this. Who's Sean Carroll? He's a podcast guy. Maybe he's listening, but he's not. Hello! He's a physicist turned more into philosophy, knowledgeable about physics and philosophy. I guess there's definitely a strain of naive empiricism in physics which is just, like, "shut up and calculate" or whatever. Like, "that doesn't exist."

NATHAN

Yeah. I feel like I can't really evaluate this argument.

AARON

Yeah, it's not really fair. I guess I can't either. I don't know.

NATHAN

It feels like I don't know. I'm going to wait till it makes a prediction before I feel obligated to think about it.

AARON

I mean, one prediction, I don't know if this counts as a prediction, but, like, simulation theory, not the simulation argument, but insofar as you think that this universe could be simulated, it's like a virtual machine or whatever. It's running on the laws of physics.

NATHAN

I just don't get the connection between "GPT-n, without data on what velocities to run motors at, can learn how to do fine motor control" and this argument.

AARON

I guess I'm kind of willing to stand by it, in the sense that a relatively small amount of description, basically the equations of the laws of physics plus a starting state of the universe, not even the universe, just, say, the starting state of my chair or something... sure, okay. You know what I mean? There's a lot of prediction there.
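Aaron's "small description, lots of prediction" intuition has a standard toy illustration: a one-dimensional cellular automaton whose entire rule fits in one byte but which generates complicated structure from a simple starting state. Rule 110 here is my choice of example, not something from the conversation:

```python
RULE = 110  # one byte encodes the whole "laws of physics"

def step(cells):
    """Apply the rule to every cell, wrapping around at the edges."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

cells = [0] * 40
cells[20] = 1            # minimal starting state: a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

None of this settles whether a text-trained model can recover motor control, but it is why the "tiny generating function" framing isn't obviously empty.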

NATHAN

If you have a lot of time to fuck around and find out, then yeah, maybe. But now we're just... okay, so now we're just simulating evolution. The amount of computation it takes to simulate evolution is now what's required.

AARON

Like yeah.

NATHAN

I just don't think this has any bearing on the general intelligence hypothesis.

AARON

Yeah, you're probably right.

NATHAN

I agree that an RL system can learn how to do this. I agree that a transformer can do sequence prediction if it has the data. And what is fucking around and finding out, if not generating data?

AARON

Fair enough. Okay, how long has it been? Okay, it's been, like, an hour, I think. I kind of want to call it a stalemate. I don't even know, actually. Maybe you think that's too generous to me, because you clearly know much more about all this.

NATHAN

Knowing more things is not sufficient to be right. I think we should let the audience decide how much this has, in fact, changed your views.

AARON

No, you've definitely pushed me in the direction of general intelligence maybe just being a smooshed-together bunch of modules or something. Sure, yeah. Should I make a Manifold market on, broadly, who's right about general intelligence, Aaron or Nathan? I might just do it regardless... sure, go ahead.

NATHAN

I think my guess is I will lose this Manifold market.

AARON

I mean, there are going to be, like, four people who listen to this, and maybe one of them will bet. I'm not going to say who it is. I think he knows.

NATHAN

I kind of know who it is. Cool.

AARON

Okay. I hope you come back on Pigeon Hour because you know a lot of random shit. Not random, well selected shit. Sure.

NATHAN

I know some well-selected shit. Yeah, I'd love to come back on at some point. This was also a very enjoyable Pigeon Hour. Cool.

AARON

Thanks. See ya. Thanks, Aaron. Bye.
