An Evolutionary Theory of Moral Development
Might an expanding circle of moral inclusion be a side effect of the world's growing complexity and interconnectivity?
12,000 B.C.
Let’s say you’re a caveman living with a small tribe in 12,000 B.C. You have a personal relationship with everyone you interact with, and unless someone dies you can expect to have repeated interactions with them in the future. From a selfish point of view - both your personal wellbeing and the evolutionary success of your genes - there is no need for “morality” above and beyond the rational decision to cooperate with others so as to foster indefinitely fruitful relationships.
It’s the classic tit-for-tat solution to a repeated prisoner’s dilemma; everyone in the tribe is best off cooperating with others all the time because acting selfishly will provoke retaliation from one’s tribemates. Perhaps you’ve implicitly reified this into a proto-moral code of “being nice to everyone I know is the right thing to do,” but consciously calculating that it is wise to treat others well would work too.
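If you want the game-theoretic point spelled out, here is a minimal sketch of tit-for-tat in a repeated prisoner’s dilemma. The payoff values are the standard textbook ones; the strategy functions and round count are my own illustrative choices, nothing more.

```python
# Toy model: tit-for-tat vs. an always-defector in an iterated prisoner's dilemma.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The purely selfish strategy: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run the repeated game and return both players' total scores."""
    seen_by_a, seen_by_b = [], []  # what each player has observed the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Two tit-for-tat players settle into indefinite cooperation: (30, 30).
print(play(tit_for_tat, tit_for_tat))
# A defector wins the first round against tit-for-tat, then both grind out
# mutual defection: (14, 9) - worse for both than steady cooperation.
print(play(always_defect, tit_for_tat))
```

The point of the toy model is just that, with repeated interactions and memory, the cooperative strategy holds its own against the selfish one - no “morality” beyond self-interest required.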
But if your tribe does come across another, all bets are off. The stronger one will extract whatever it can from the weaker, whether hunting territory or stockpiles of food. And if your tribe happens to extend its intra-tribe spirit of mutual cooperation to a weaker one, it will be outcompeted by those that do not.
6,000 B.C.
Fast forward to 6,000 B.C. and you now live in a small village with a few other villages nearby. You personally know everyone in your own town, but occasionally walk to another to trade or come across an outsider at home. Now, from a selfish perspective, you can do better than treating everyone well. You can, say, steal a chicken from a town nearby and never return to face consequences (and let’s just stipulate that your victim won’t go to the trouble of seeking revenge).
But there’s a problem; if everyone starts maximizing their own wellbeing in this way, towns will stop trading with each other and everyone will be worse off. Clusters of towns that develop a norm of treating outsiders well will outcompete those that do not. Thanks to cultural evolution and memetics, we’ll likely see such a norm proliferate.
But if your town cluster comes across a group from a faraway land (perhaps indicated by a peculiar skin color), all bets are off. If your cluster is powerful enough, it will extract whatever it can from these foreigners. And if your cluster’s moral consideration happens to extend to foreigners, it will be outcompeted by clusters whose consideration does not.
1980 C.E.
Fast forward a bit more, and you are now living in an American suburb well into the rise of modern capitalism. While racism still thrives in your country, you are steeped in a progressive milieu that favors racial and gender equality. After all, the business you work for depends on complex webs of international supply chains and sells to anyone with the money to spare.
No longer are there any large chunks of the human population whom you can extract from with impunity, even from a selfish perspective. Doing so might disrupt cooperation in your supply chains or alienate your customers. So, naturally, your society gradually develops an ideology of meritocratic non-discrimination. As the world coalesces around a set of such norms, international cooperation gives rise to an economy capable of producing shockingly inexpensive goods and services, and the population - liberated from any Malthusian constraint - continues to balloon exponentially.
But not everyone is included in your moral circle. After a long day of amiable negotiations with Chinese and Nigerian suppliers, you usually sit down to a dinner of cow, pig, or chicken - never even pondering whether these animals deserve consideration.
2020 C.E.
Fast forward one last time, and it’s the present day. A few months ago, someone ate a bat in Wuhan, China, and now you don’t feel safe going to McDonald’s. In recent years, the internet has disrupted global power dynamics and destabilized politics across the globe, and Donald Trump (!) is now your president. The business you work for now has to contend not only with diverse partners and customers but also with a social media information ecosystem in which the “crowd” can make or break your brand.
In short, the world seems increasingly chaotic and unpredictable, with an unmanageable number of people, institutions, and systems capable of affecting your life. It doesn’t matter whether the world really is more complex than it was twenty years ago, because it sure feels that way. Meanwhile, a massive racial justice movement is sweeping cities across the globe, and eerily realistic veggie burgers and sausages recently hit the menus of Burger King and Dunkin’ Donuts.
A Theory
Over the last 14,020 years, your circle of moral inclusion has grown in tandem with the size and complexity of your network of positive-sum interactions. Functionally, morality is (at least partially) an evolved social technology for facilitating cooperation.
However, too much cooperation won’t help you or your society thrive; if the caveman of 12,000 B.C. had the moral intuitions of his 6,000 B.C. counterpart, he might have ended up starving to death after declining to steal a rival tribe’s food stocks. If the townsperson of 6,000 B.C. had the moral intuitions of his 1980 counterpart, he might have been unduly trusting of those peculiarly colored outsiders and wound up enslaved or killed in conquest.
It seems likely, then, that (biological) evolution would have endowed us with a psychological faculty for detecting how wide our circle of moral inclusion ought to be as a function of the size and complexity of our network of interaction. But just as our evolutionary drive for securing nutrition and social support leads us to obesity and anxious social media obsession in the modern world, this moral intuition-generating faculty, too, may become maladaptive in a hyper-complex modern world.
“Maladaptive,” however, merely means “not conducive to narrowly selfish wellbeing or evolutionary success,” not “bad.” In fact, unlike obesity and anxiety, I think it is plausible that the maladaptive moral intuitions generated in the modern world might be genuinely more accurate and better (if you’re a moral realist), or simply preferable (if you’re not), than the more historically common and “adaptive” ethical beliefs.
How it Works, Maybe
In particular, I’m thinking of a moral intuition-generation system that works like this (a toy code sketch follows the two steps below):
As you grow up, you detect the size of your society’s web of cooperative interaction.
You then set your “moral thermostat” large enough to include everyone whom you might cooperate with but not so large that you miss out on extracting something useful from some outsider group.
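To make those two steps concrete, here’s a toy sketch of the thermostat. Everything in it - the thresholds, the group labels, the function name - is invented for illustration and carries no empirical weight.

```python
# Purely illustrative "moral thermostat": perceived network size in, moral circle out.

def moral_circle(perceived_network_size):
    """Map the perceived size of your web of cooperative interaction to the set
    of groups granted moral consideration: wide enough to cover anyone you might
    cooperate with, and no wider."""
    circle = ["my tribe"]                       # the 12,000 B.C. baseline
    if perceived_network_size > 10**2:          # a cluster of villages
        circle.append("neighboring towns")
    if perceived_network_size > 10**6:          # national markets and trade routes
        circle.append("fellow citizens and trading partners")
    if perceived_network_size > 10**9:          # a globalized, hyper-connected world
        circle += ["all humans", "animals", "maybe more"]
    return circle

print(moral_circle(50))           # ['my tribe']
print(moral_circle(10**7))        # adds neighboring towns, citizens, trading partners
print(moral_circle(8 * 10**9))    # the thermostat turned up to 11
```

The rest of this post is about what happens when the input to a function like this grows faster, and gets noisier, than anything the function was “designed” for.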
An Analogy
An important analog, which scientists regard as at least plausible, is the “human life history strategies” theory, which posits that humans detect how secure and abundant their environment is and adjust their life strategies accordingly to maximize reproductive fitness. For example, a person who grows up amidst violence and poverty tends to have relatively more children to compensate for each child’s smaller likelihood of success, whereas that person’s identical twin from a comfortable, safe neighborhood will spend more resources on fewer children, confident that those resources will enable them to thrive.
It seems reasonable that this adaptation helped our genes proliferate when a “safe” environment meant 30% childhood mortality instead of 60%, but something changes in a world of affluence, modern medicine, and robust social support.
Check out these graphs of fertility rate vs GDP per capita and of fertility rate vs human development index:
For the sake of argument, let’s accept that the human life history strategies model is at least a contributing factor to the clear inverse relationship between wealth and fertility. What’s striking to me is how many countries fall below 2.1 children per woman, the replacement fertility rate. More recent data (see this 2018 BBC article) puts the proportion of countries below replacement at about half. In other words, without immigration, half of all countries would see their populations shrink.
Below-replacement fertility is so plainly contrary to the evolutionary success of a person’s genes that it seems only explainable as a maladaptive (from a genetic perspective) consequence of some system not designed for the modern world. People in wealthy countries are detecting that the world is super duper duper safe and setting their life strategy thermostat accordingly.
Back to the Point
I introduce this analogy because it exemplifies a case of us humans having a clever detection-and-adjustment system for regulating behavior, one that goes off the rails when applied to the modern world. When we detect that our environment is hyper-complex, with more plausibly cooperative relationships than our great-great-grandmother could have imagined, we set our “moral inclusion” thermostat to 11. What does that look like?
For one thing, it might look like an even more robust consideration of fellow humans. Have all the cynical takes about signalling you want; the recent Black Lives Matter/criminal justice reform movement in the U.S. and across the globe is driven at least partially by a genuine moral concern for racial justice - not just by people in the communities adversely affected but also by affluent, privileged whites.
For another, I would argue that environmentalism often reflects a direct moral concern for nature rather than a recognition of its instrumental benefit (or perhaps detriment) for humans or animals.
Finally, and most important to me, is the apparent (though perhaps illusory) rise in concern for animals, most especially farm animals such as chickens and pigs. While per capita meat consumption continues to rise, even in the U.S., there’s some evidence that this is driven by public misinformation and cognitive dissonance rather than affirmative, conscious disregard for animals. From the linked study (bullets shortened by me):
54% of US adults say they are “currently trying to consume fewer animal-based foods.”
49% of US adults support a ban on factory farming, 47% support a ban on slaughterhouses, and 33% support a ban on animal farming. The support for the latter two exceeded even researcher expectations.
58% of US adults think “most farmed animals are treated well,” despite estimates, over a decade of undercover investigations in the US, and USDA data all suggesting that over 99% of farmed animals live on factory farms. This suggests either that we have insufficient awareness of factory farming or that people have refused to accept the evidence…
75% of US adults say they usually buy animal products “from animals that are treated humanely,” despite estimates suggesting fewer than 1% of US farmed animals live on non-factory farms. This suggests a psychological refuge effect…
In short, it remains plausible that people have an increased psychological concern for animals, even if this fails to manifest in behavior.
Lastly, and most obscurely, there are a few niche ideas and social movements that at least seem to be rising in influence and/or popularity. The effective altruism movement has grown from nonexistent to modestly influential over the last decade or so. There is now a small research and advocacy community around wild animal suffering. Discussion of computer sentience seems to be taking place at every level of sophistication, from film to academic philosophy.
Implications
What should we expect if this theory is true? If the world is increasing in perceived complexity (which I think is probably true but certainly not obvious), then our collective willingness to attach moral value and consideration will rise indiscriminately. This has two broad implications:
New consideration for things that deserve it - likely with positive consequences.
New consideration for things that do not deserve it - likely with negative consequences.
The Good
To lay my cards on the table, I believe that things deserve moral consideration in proportion to their degree of sentience and ability to suffer. Regardless of your preferred ethical theory, though, it’s pretty likely that expanded consideration of, say, the global poor, people of other races, or animals that are almost certainly sentient (like cows and pigs) qualifies as a good thing.
This section is short, but don’t let that be mistaken for a lack of importance. I think the changes I’m describing are likely net-positive.
The Bad
But not everything deserves moral consideration; to take a trivial example, I’m almost certain that my water bottle can’t think or feel and thus doesn’t deserve anything. As I mentioned above, I think one of the most important examples of misplaced moral concern is in nature. Even though “appeal to nature” arguments are generally accepted as fallacious, it seems that conservation of natural environments is widely regarded as intrinsically good. To be sure, this might sometimes accidentally work to serve the interests of humans or animals, and it’s not always clear when respect for something indicates an intrinsic moral concern.
However, moral concern is, to some extent, a zero-sum game. The more who have a seat at the table, the less anyone’s vote can count. When this means that wealthy white men have to start competing with, well, everyone else for moral consideration, the final result is a much better world. But if water bottles start getting a seat at the table too, the final result is a bunch of money diverted from health or food security to water bottle welfare.
With nature, this kind of trade-off might involve brutally killing invasive species for the sake of some perceived ecological homeostasis, as when the U.S. government airdropped Tylenol-laced dead mice over Guam to poison invasive snakes. I can also see things like art, history, languages, and other cultural entities, as well as non-sentient computer programs, falling into the same bucket as nature now or in the near future.
Can a Maladapted Culture Survive?
Thus far, this post has suggested that the world as a whole may be getting (or at least seeming to get) more complex. Even if that’s true, different cultures and countries will experience different rates of change. In the long run, I am more concerned by the amoral, blind force of cultural evolution than by the direct unfortunate consequences of a widening moral circle.
Until now, it seems that groups of humans more inclined to treat others well have profited from the resulting networks of mutually beneficial cooperation. A society that gives moral consideration to those who cannot participate in such networks might be kinder and even closer to the best or truest version of morality, but it also might be analogous to a society with below-replacement fertility - fostering the flourishing of those already within it while eventually leading to its own demise.
There are a million ways this could happen. Perhaps more inclusive morality within a nation somehow reduces long-run economic growth by a fraction of a percent relative to other nations. Perhaps it somehow reduces relative fertility.
Then again, not all liabilities are fatal. As long as complexity remains positively correlated with other mechanisms of social strength, “maladaptive” morality need not sink the whole ship. Intuitively, technological capacity and raw wealth seem pretty likely to correlate with perceived world complexity while compensating for any weakness generated by a bloated moral circle.
Conclusion
I’m not overwhelmingly confident in this “humans set their moral thermostat as a function of detected world complexity” theory. But playing out the implications of the idea in writing has gotten me thinking about what other social systems become maladaptive once a certain threshold of some input is reached. There is already plenty of discussion of this phenomenon at the individual level, as with the adverse effects of abundant, hyper-palatable food or superstimuli like porn.
Fertility rate as a function of perceived safety is an example of one such social system. What might some others be, and what, if anything, can we do to preserve the benefits of modernity without sending countries or cultures into decline?
Acknowledgement
I got the main idea for this post while reading Robert Wright’s The Evolution of God. In the book, he proposes that monotheism functioned as a technology for aiding political and moral expansion; if two peoples need to start functioning as one nation, it is very useful to consolidate their separate gods into one. Recommended, and the audiobook is free on Libby.