“Mad behaviour: the psychologist Joseph Henrich on what makes us weird” By Sophie McBain [New Statesman]


The Harvard professor on how most claims about human nature are based on people from “Western, educated, industrialised, rich, democratic societies”.

Sophie McBain

New Statesman



In 2010 Henrich co-authored a landmark paper titled, “The weirdest people in the world?” It observed that almost every claim made about human psychology or behaviour is based on studying people who are “Weird”; that is, from Western, educated, industrialised, rich, democratic societies. These people are also weird – statistical outliers.  

Henrich’s research suggests our cultural environment, the norms and institutions we inherit, alters our psychology – and even our biology – in profound ways. Take learning to read. Becoming literate thickens your corpus callosum, which connects the brain’s right and left hemispheres and alters the parts of the brain responsible for processing speech and thinking about other minds. Literate people tend to be worse than others at recognising faces and are more likely to think analytically – breaking problems or scenes into component parts – rather than holistically.

Henrich contends that, compared with much of the world’s population, “Weird” people are more individualistic and self-obsessed, and more likely to defer gratification, to stick to impartial rules and to trust strangers. They are less likely to extend special favours to friends or family. They’re more likely to feel guilt (a sense of having failed to meet one’s own self-imposed standards) than shame (a sense of having let down one’s community).


Henrich’s book poses a challenge to psychology, a field grappling with the so-called “replication crisis” – the realisation that when psychologists repeat an experiment, they often get different results. Henrich believes the discipline suffers from a “theoretical crisis”. “There’s no overarching theory that tells you what kind of effects you should expect. And that causes psychologists to try a bunch of stuff, which breeds a lot of false positives.”

The Weirdest People in the World is a provocative book. Human rights activists, for example, might bristle at its suggestion that in certain countries, individual rights aren’t a good psychological “fit”. But Henrich wants to avoid normative conclusions. “Like any science, [the book] can be useful to achieve your goals, but people might have different goals. I can see it being used by people who want to figure out how to spread human rights. I could see it being used by those who don’t.”

“What is Human Behavior and Evolution Society (HBES) Doing About the WEIRD Problem?” By Chris von Rueden & Coren Apicella


October 3, 2020, Newsletter

By HBES Executive Council Members, Chris von Rueden & Coren Apicella


Evolution and Human Behavior (EHB) just released its September issue, which is devoted to highlighting ongoing research in the evolutionary social sciences that expands beyond WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations. This special issue, titled “Beyond WEIRD, a decade later: Population diversity in the evolutionary study of human behavior,” was edited by Coren Apicella, Ara Norenzayan, and Joseph Henrich and features articles on topics including evolutionary medicine, cooperation, leadership, morality, and developmental psychology.


Now, authors who submit to the journal are required to fully describe their samples. For instance, authors are now asked to specify the geographic location from which their sample was drawn, how their data were collected (online or in person), and any theoretically relevant characteristics pertinent to the research study, such as religious affiliation, race/ethnicity, and gender identity (inclusive of non-binary options). And importantly, authors must also specify the source of the sample in their Abstract. Manuscripts that do not adequately describe samples will be returned to authors for revision prior to consideration.


The September EHB issue, “Beyond WEIRD, A Decade Later: Population Diversity in the Evolutionary Study of Human Behavior,” offers some criticisms, but its contributors are also optimistic about the future of evolutionary social science. We agree that the methods and theory will only get better, and that is in part because of the disciplinary diversity of our community. In particular, the dialogue between anthropologists and psychologists has been, and we hope will continue to be, an engine at the heart of the creativity and productivity of HBES.

“You’re most likely WEIRD … and don’t even know it” By Douglas Todd


Opinion: WEIRD is a high-impact acronym invented by psychology professors at UBC, referring to people who are ‘Western,’ ‘Educated,’ ‘Industrialized,’ ‘Rich’ and ‘Democratic’

Douglas Todd



Everybody talks about diversity now. But when these profs examined contemporary social-science research they uncovered a huge blind spot to cultural differences, which has led to misleading conclusions about human psychology and, for that matter, human nature.

The colleagues published a ground-breaking paper in 2010 that showed more than 96 per cent of experiments in social psychology were based on subjects who are WEIRD. Compared to the vast majority of people on the planet, WEIRD people tend to be highly individualistic, control-oriented, nonconformist, analytical and trusting of strangers.

We are not the global norm. As Henrich says, “Textbooks that purport to be about ‘Psychology’ or ‘Social Psychology’ need to be retitled something like ‘The Cultural Psychology of Late 20th-Century Americans.’ ”


Henrich explains all this and much more in his new magnum opus, titled The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous. Despite its 680 pages, it’s quite readable.

Henrich’s book takes the UBC crew’s understanding of WEIRD traits to new levels of significance. Drawing on history, philosophy, religion and anthropology, it attempts to explain why there are differences between cultures, including why some are more prosperous. It’s reminiscent of the trans-disciplinary project Jared Diamond took on with Guns, Germs and Steel, which maintained that geography shaped Eurasian power.


“WEIRD people are bad friends,” Henrich writes in one catchy subtitle.

WEIRD people aren’t really willing to lie for a friend, he explains. In a cross-cultural experiment in disparate nations, participants were asked to imagine what they would do if they were a passenger in a car with a close friend who, while driving above the speed limit, hit a pedestrian.

More than 90 per cent of people in WEIRD countries such as Canada, Switzerland and the U.S. would not falsely testify that their friend was driving slower than he was. “By contrast, in Nepal, Venezuela, and South Korea most people said they’d willingly lie under oath to help a close friend.” Communal bonds matter more in places that are not WEIRD.


While clearly disposed to “celebrate diversity,” Henrich avoids the cliché that, because of our common humanity, “deep down everyone’s the same.” It’s only true to a small extent: if we’re cut with a sharp object, for instance, we all bleed.

But because of our collective histories and cultures, humans can turn out starkly different. So much so that Henrich makes it clear that ethnic and religious conventions can rewire the structure of our brains, even our genes.

It’s a real-world position: Humans become the peculiar and often amazingly different people they are due to myriad unrecognized cultural forces.”

“The Dark Side of Smart” By Diana Fleischman [Nautilus]


Diana Fleischman
Diana Fleischman is an evolutionary psychologist at the University of Portsmouth, writing and living while on sabbatical in Albuquerque, New Mexico.


Manipulative communication surrounds us. With misinformation and disinformation about the pandemic, “cheap” and “deep” fakes of elected officials, and targeted ads and emotionally exploitative social media algorithms, it can begin to feel like all communication is manipulation.

Well, as it turns out, this is the thesis of an influential paper by evolutionary biologists Richard Dawkins and John Krebs. The cynicism behind this statement can make many people uncomfortable. When we think about communicating, we tend to think about our own thoughts and feelings rather than how we might be influencing others. One major reason an evolutionary perspective on our own behavior can be so confronting is that it doesn’t take our word for why we do things. It looks at how what we do influences the two core currencies of life on earth, survival and reproduction.


When minds start to figure out other minds, a lot of cognitive power gets built up that can be used for other things. Consider one of the groundbreaking insights in evolution in the last few decades, the idea of the “extended phenotype.” Evolution isn’t just acting on an individual’s characteristics but on the way the individual interacts with the environment—including other minds. Evolution is selecting not just on the teeth and tail and claws of a beaver, but also on how well its dam keeps out water. Not just the bees’ wings and bodies but also the structure of their hive.


Hold on a minute, you might be saying to yourself—you evolutionary people are so cynical—didn’t we also get smart to cooperate? Perhaps, to some degree. But research suggests intelligence, especially the ability to think about other minds, has been driven more by competition than by cooperation. Evolutionary models, for example, have shown that competition promotes the ability to think about other minds more strongly than cooperation does. And studies have shown that areas of the brain related to thinking about other minds are activated more by competition than by cooperation.


Human intelligence is incredibly useful but it doesn’t safeguard you against having false beliefs, because that’s not what intelligence is for. Intelligence is associated with coming up with more convincing bullshit and with being a better liar, but not associated with a better ability to recognize one’s own bias. Unfortunately, intelligence has very little influence on your ability to rationally evaluate your own beliefs, or undermine what’s called “myside bias.”

The dark side of smart is that whenever we do good works, and cooperate, we draw from our manipulative past. The even darker side of smart is that competition doesn’t just select an ability to manipulate but also an adaptive ability to be unpredictable. And one of the best ways to be unpredictable is to not know yourself. So we have evolution to thank for shielding us from complete self-knowledge. As a result, most of our own minds are shrouded in darkness. Perhaps that’s for the best. We might not like what we’d see.”

Why Are We in the West So Weird? A Theory By Daniel C. Dennett [On The WEIRDest People in the World By Joseph Henrich (Farrar, Straus and Giroux, 2020)]


According to Joseph Henrich’s book, it was the advent of Protestantism, aided by the invention of the printing press, that brought about the spread of literacy and altered the workings of our brains.

By Daniel C. Dennett

Sept. 12, 2020

The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous
By Joseph Henrich


The world today has billions of inhabitants who have minds strikingly different from ours. Roughly, we weirdos are individualistic, think analytically, believe in free will, take personal responsibility, feel guilt when we misbehave and think nepotism is to be vigorously discouraged, if not outlawed. Right? They (the non-WEIRD majority) identify more strongly with family, tribe, clan and ethnic group, think more “holistically,” take responsibility for what their group does (and publicly punish those who besmirch the group’s honor), feel shame — not guilt — when they misbehave and think nepotism is a natural duty.


WEIRD folk are the more recent development, growing out of the innovation of agriculture about 10,000 years ago, the birth of states and organized religions about 3,000 years ago, then becoming “proto-WEIRD” over the last 1,500 years (thanks to the prohibition on marrying one’s cousin), culminating in the biologically sudden arrival of science, industry and the “modern” world during the last 500 years or so. WEIRD minds evolved by natural selection, but not by genetic selection; they evolved by the natural selection of cultural practices and other culturally transmitted items.

Henrich is an anthropologist at Harvard. He and his colleagues first described the WEIRD mind in a critique of all the work in human psychology (and the social sciences more generally) built on experimental subjects almost exclusively composed of undergraduates — or the children of academics and others who live near universities. The results obtained drawing on this conveniently available set of “normal” people were assumed by almost all researchers to be universal features of human nature, the human brain, the human emotional system. But when attempts were made to replicate the experiments with people in other countries, not just illiterate hunter-gatherers and subsistence farmers but the elites in Asian countries, for instance, it was shown in many cases that the subject pool of the original work had been hugely biased from the outset.

One of the first lessons that must be learned from this important book is that the WEIRD mind is real; all future investigation of “human nature” must be complicated by casting a wider net for subjects, and we must stop assuming that our ways are “universal.” Offhand, I cannot think of many researchers who haven’t tacitly adopted some dubious universalist assumptions. I certainly have. We will all have to change our perspective.


This is an extraordinarily ambitious book, along the lines of Jared Diamond’s “Guns, Germs and Steel,” which gets a brief and respectful mention, but going much farther, and bolstering the argument at every point with evidence gathered by Henrich’s “lab,” with dozens of collaborators, and wielding data points from world history, anthropology, economics, game theory, psychology and biology, all knit together with “statistical razzle-dazzle” when everyday statistics is unable to distinguish signal from noise. The endnotes and bibliography take up over 150 pages and include a fascinating range of discussions.


This book calls out for respectful but ruthless vetting on all counts, and what it doesn’t need, and shouldn’t provoke, is ideological condemnations or quotations of brilliant passages by revered authorities. Are historians, economists and anthropologists up to the task? It will be fascinating to see.”

“What’s Behind Humanity’s Love-Hate Relationship With Exercise?” By Marina Krakovsky [Sapiens]


Evolutionary history can help resolve the question of why so many people desire a physical break even when their bodies need movement.

By Marina Krakovsky



“What is it about human nature that pulls people to the chair or the couch when they’d be better off moving on their feet? The resolution to this paradox lies in evolutionary history, says David Raichlen, a professor of biological sciences at the University of Southern California.

Raichlen is one of several anthropologists studying how the evolutionary history of the human body shapes health today. In 2012, for example, he and his colleagues published findings from an experimental examination of the runner’s high, the experience of euphoria that some people report during aerobic exercise.

The experiments compared levels of particular feel-good chemicals—called endocannabinoids—in the brains of humans and two other species before and after treadmill exercise. Raichlen and his colleagues found significantly higher endocannabinoid levels in humans and dogs—but not ferrets—following this high-intensity activity. This finding is revealing because humans and dogs evolved to need endurance for hunting food and ferrets did not. The runner’s high could therefore be evolutionarily advantageous to some species, helping creatures run for longer distances to hunt for food despite the high energy costs of running.

In his quest to understand human health, Raichlen also does fieldwork with Tanzania’s Hadza people, a contemporary hunter-gatherer tribe. This community attracts scholars in part because the Hadza way of life resembles that of hunter-gatherers who lived prior to the development of agriculture in many societies some 10,000 years ago. The Hadza, Raichlen notes with affection, are “super-wonderful people,” and studying them could offer clues to what life was like for hunter-gatherers in the past.


Research on the Hadza certainly supports the idea that physical activity benefits health. For example, Hadza are more susceptible to deadly infections than people in industrialized societies because of differences in hygiene and medical care. Yet those Hadza who survive these dangers tend to live long and healthy lives because they are far less prone than people in industrialized societies to what public health experts call “lifestyle diseases,” such as obesity, heart disease, and Type 2 diabetes. In fact, research shows that increasing one’s physical activity reduces the risk of developing these chronic diseases.


Like the sweet tooth in an age of abundant calories, the gap between the physical activity our bodies need and the amount many people actually get is an evolutionary mismatch between human physiology and the present environment.


“The whole point of life is turning energy into kids—that’s evolution,” says Herman Pontzer, an evolutionary anthropologist at Duke University who frequently collaborates with Raichlen, including on the study of rest.

“Natural selection favors any strategy that makes you better at turning energy in your environment into offspring,” Pontzer says. Resting is part of such a strategy: In an energy-scarce environment, a strong drive to burn calories when you didn’t have to would have died out through natural selection.


Though not through conscious choice, sedentary Americans and physically active Hadza both follow this rule. “Our desire to rest is as strong as it’s ever been,” Raichlen says. This desire, he adds, often overcomes the choice to exercise. When you take away the need to move and make exercise a choice, as our current environment has done, he adds, “it takes a lot of motivation to do it.”


Unfortunately, people who live more sedentary lives can’t expect their bodies to adapt to that new mode any time soon. For one thing, in the time scale of human evolutionary history, “even a thousand years is the blink of an eye,” Pontzer says. “The other thing to understand,” he adds, “is that a lot of [lifestyle] diseases don’t kick in until after you’ve had your kids.”

“Social status helped and hindered by the same behaviors and traits worldwide” [University of Texas at Austin/Medical Xpress]


by University of Texas at Austin



“Humans live in a social world in which relative rank matters for nearly everything—your access to resources, your ability to attract mates, and even how long you live,” said UT Austin evolutionary psychologist David Buss, one of the study’s lead authors. “From an evolutionary perspective, reproductively relevant resources flow to those high in status and trickle slowly, if at all, to those lower on the social totem pole.”

The researchers compared people’s impressions of 240 factors—including acts, characteristics and events—to determine what increased and impaired a person’s esteem in the eyes of others. They found that certain qualities such as being honest, hard-working, kind, intelligent, having a wide range of knowledge, making sacrifices for others, and having a good sense of humor increased a person’s social value.

“From the Gypsies in Romania to the native islanders of Guam, people displaying intelligence, bravery and leadership rise in rank in the eyes of their peers,” said UT Austin psychology graduate student Patrick Durkee, who led the study with Buss. “But possessing qualities that inflict costs on others will cause your status to plummet, whether you live in Russia or Eritrea.”

Being known as a thief, as dirty or unclean, as mean or nasty, acquiring a sexually transmitted disease, and bringing shame on one’s family decreased a person’s social status or value. These status-harming actions can also lead to a person being ostracized from the group—”an action that would have meant near-certain death in ancestral environments,” the researchers said.

“Although this study was conducted prior to the current pandemic, it’s interesting that being a disease vector is universally detrimental to a person’s status,” Buss said. “Socially transmitted diseases are evolutionarily ancient challenges to human survival, so humans have psychological adaptations to avoid them. Lowering a person’s social status is an evolutionarily ancient method of social distancing from disease vectors.”


David M. Buss et al., “Human status criteria: Sex differences and similarities across 14 nations,” Journal of Personality and Social Psychology (2020). DOI: 10.1037/pspa0000206


“How Our Ancient Brains Are Coping in the Age of Digital Distraction” [Discover Magazine]


Our species invented the internet. Can we handle the consequences?

By Kenneth Miller

April 20, 2020 10:00 AM



In the public arena, online filters generate bubbles that reinforce our preconceptions and amplify our anger. Brandishing tweets like pitchforks, we’re swept into virtual mobs; some of us move on to violence IRL. Our digitally enhanced tribalism upends political norms and sways elections.


A growing body of research suggests that this conundrum arises from a feature etched into our DNA: our unparalleled hunger to know stuff. “This is an ancient drive that leads to all sorts of complexities in how we interact with the world around us,” says Adam Gazzaley, a neuroscientist at the University of California, San Francisco, and co-author of The Distracted Mind: Ancient Brains in a High-Tech World.


Yet this wonder’s origins were strikingly humble. About 7 million years ago, hominins — our branch of the primate family tree — began the long transition to walking upright. Bipedalism, or walking on two legs, freed our hands for making and manipulating tools. It also allowed us to walk longer distances, key to our spread beyond Africa’s forests and savannas. “If you look at nonhuman primates, it’s like they have another set of hands down there,” notes Dean Falk, a professor of anthropology at Florida State University and senior scholar at Santa Fe’s School for Advanced Research, who specializes in brain evolution. “When our feet became weight-bearing instruments, that kicked everything off — no pun intended.”

Not that the effects were immediate. More than 3 million years ago, the braincase of Australopithecus afarensis, likely the first fully bipedal hominin, was only slightly larger than a chimpanzee’s. But by the time Homo sapiens emerged at least 300,000 years ago, brain volume had tripled. Our brain-to-body ratio is six times that of other mammals, and the neurons in our cerebral cortex (the brain’s outer layer, responsible for cognition) are more densely packed than those of any other creature on Earth.

In recent years, scientists have identified about two dozen genetic changes that might have helped make our brains not only bigger but incomparably capable. “It’s not just one quantum leap,” says University of Wisconsin-Madison paleoanthropologist John Hawks. “A lot of adaptations are at play, from metabolic regulation to neuron formation to timing of development.” A stretch of gene-regulating DNA called HARE5, for example, differs slightly between chimps and humans; when a team at Duke University introduced both versions into mouse embryos, the ones that got the human type developed brains that were 12 percent larger. Meanwhile, mutations in a gene called NOTCH2 increase our production of neural stem cells and delay their maturation into cortical neurons, which may be part of the reason our brains keep growing far longer than those of other primates. The FOXP2 gene, crucial for verbal communication in many species, diverges by two base pairs in humans and our nearest living ape relatives. Our mutation may explain why we can talk and chimps can’t.

Our brains were also shaped by external forces, which increased the odds of smarter hominins passing on their genes. Experts debate which factors mattered most. Falk, for one, hypothesizes that the loss of grasping feet was crucial: When infants could no longer cling to their mothers, as nonhuman primates do, the need to soothe them from a distance led to the development of language, which revolutionized our neural organization. Other researchers believe that dietary shifts, such as eating meat or cooking food in general, enabled us to get by with a shorter digestive tract, which freed up more energy for a calorie-hogging brain. Still others credit our cerebral evolution to growing social complexity or intensifying environmental challenges.

What’s clear is that our neural hardware took shape under conditions radically different from those it must contend with today. For millennia, we had to be on the alert for dangerous predators, hostile clans, potential sources of food and shelter — and that was about it. As McGill University neuroscientist Daniel J. Levitin put it in his book The Organized Mind: “Our brains evolved to focus on one thing at a time.”

Our digital devices, by design, make that almost impossible.

Tech vs. Brain

The part of the brain that enables us to make elaborate plans and carry them through — the part, arguably, that makes us most human — is the prefrontal cortex. This region is only slightly larger in H. sapiens than in chimps or gorillas, but its connections with other brain regions are more extensive and intricate. Despite this advanced network, our planning ability is far stronger than our ability to remain focused on a given task.

One reason is that, like all animals, we evolved to switch attention instantly when we sense danger: the snapping twig that might signal an approaching predator, the shadow that could indicate an enemy behind a tree. Our goal-directed, or top-down, mental activities stand little chance against these bottom-up forces of novelty and saliency — stimuli that are unexpected, sudden or dramatic, or that evoke memories of important experiences.


When the animal finds a ripe mango in the jungle — or solves a problem in the lab — brain cells in what’s called the dopaminergic system light up, creating a sensation of pleasure. These cells also build durable connections with the brain circuits that helped earn the reward. By triggering positive feelings whenever these circuits are activated, the system promotes learning.

Humans, of course, forage for data more voraciously than any other animal. And, like most foragers, we follow instinctive strategies for optimizing our search. Behavioral ecologists who study animals seeking nourishment have developed various models to predict their likely course of action. One of these, the marginal value theorem (MVT), applies to foragers in areas where food is found in patches, with resource-poor areas in between. The MVT can predict, for example, when a squirrel will quit gathering acorns in one tree and move on to the next, based on a formula assessing the costs and benefits of staying put — the number of nuts acquired per minute versus the time required for travel, and so on. Gazzaley sees the digital landscape as a similar environment, in which the patches are sources of information — a website, a smartphone, an email program. He believes an MVT-like formula may govern our online foraging: Each data patch provides diminishing returns over time as we use up information available there, or as we start to worry that better data might be available elsewhere.
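The squirrel-and-acorns logic behind the MVT can be sketched numerically. The following is a minimal illustration, not anything from the article or from Gazzaley's work: the gain function and its parameters are hypothetical, standing in for any patch whose returns diminish over time. The forager maximizes its long-run intake rate, gain divided by time in the patch plus travel time, and the MVT's signature prediction falls out: the farther apart the patches, the longer the forager should stay in each one.

```python
import math

def gain(t, a=10.0, r=0.5):
    """Diminishing returns within a patch: total food gathered after t minutes.

    Hypothetical saturating curve a * (1 - e^(-r t)); early minutes in the
    patch yield food quickly, later minutes yield less and less.
    """
    return a * (1 - math.exp(-r * t))

def best_leave_time(travel, dt=0.01, t_max=60.0):
    """Scan stay times and return the one maximizing long-run intake rate.

    The rate being maximized is gain(t) / (t + travel): food per minute
    once the cost of traveling to the next patch is averaged in.
    """
    best_t, best_rate = dt, 0.0
    t = dt
    while t <= t_max:
        rate = gain(t) / (t + travel)
        if rate > best_rate:
            best_t, best_rate = t, rate
        t += dt
    return best_t, best_rate

# MVT prediction: longer travel between patches -> stay longer in each patch,
# and the overall achievable intake rate drops.
short_stay, short_rate = best_leave_time(travel=1.0)
long_stay, long_rate = best_leave_time(travel=5.0)
```

In Gazzaley's analogy, `travel` is the cost of switching to the next website or app; because that cost online is nearly zero, the same logic predicts very short stays in each "data patch," i.e., constant hopping.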

The call of the next data patch may keep us hopping from Facebook to Twitter to Google to YouTube; it can also interfere with the fulfillment of goals — meeting a work deadline, paying attention in class, connecting face-to-face with a loved one. It does this, Gazzaley says, in two basic ways. One is distraction, which he defines as “pieces of goal-irrelevant information that we either encounter in our external surroundings or generate internally within our own minds.” We try to ignore our phone’s pings and buzzes (or our fear of missing out on the data they signify), only to find our focus undermined by the effort.

The other goal-killer is interruption: We take a break from top-down activity to feed our information munchies. The common term for this is multitasking, which sounds as if we’re accomplishing several things at once — working on the quarterly report, answering client emails, staying on top of the politician’s gaffe count, taking a peek at that aardvark. In truth, it means we’re doing nothing well.


It also wreaks havoc on working memory, the function that allows us to hold a few key bits of data in our heads just long enough to apply them to a task. Multiple studies have shown that “media multitasking” (the scientific term for toggling between digital data sources) overloads this mental compartment, making us less focused and more prone to mistakes. In 2012, for instance, Canadian researchers found that multitasking on a laptop hindered classroom learning not only for the user but for students sitting nearby. Heavy media multitasking has been associated with diminished cognitive control, higher levels of impulsivity and reduced volume in the anterior cingulate cortex, a brain region linked with error detection and emotional regulation.

Us vs. Them

Emotional regulation is central to another of tech’s disruptive effects on our ancient brains: exacerbation of tribal tendencies. Our distant ancestors lived in small nomadic bands, the basic social unit for most of human history. “Groups that were competing for resources and space didn’t always do so peacefully,” says paleoanthropologist Hawks. “We’re a product of that process.”

These days, many analysts see tribalism asserting itself in the resurgence of nationalist movements worldwide and the sharp rise in political polarization in the U.S., with both trends playing out prominently online. A study published in the American Journal of Political Science in 2015 found that party affiliation had become a basic component of identity for Republicans and Democrats. Social media, which spurs us to publicly declare our passions and convictions, helps fuel what the authors call “the gradual encroachment of party preference into nonpolitical and hitherto personal domains.”

And we’re hardwired to excel at telling “us” from “them.” When we interact with in-group members, a release of dopamine gives us a rush of pleasure, while out-group members may trigger a negative response. Getting online “likes” only intensifies the experience.

Our retreat into tribal mode may also be a reaction to the data explosion that the web has ignited. In 2018, in the journal Perspectives on Psychological Science, psychologist Thomas T. Hills reviewed an array of earlier studies on the proliferation of information. He found that the upsurge in digitally mediated extremism and polarization may be a response to cognitive overload. Amid the onslaught, he suggested, we rely on ingrained biases to decide which data deserve our attention (see “Tribal Tech” sidebar). The result: herd thinking, echo chambers and conspiracy theories. “Finding information that’s consistent with what I already believe makes me a better member of my in-group,” Hills says. “I can go to my allies and say, ‘Look, here’s the evidence that we’re right!’ ”


For example, when Red Sox and Yankees fans watch their rival team fail to score, even against a third team, they show heightened activity in the ventral striatum, a brain region associated with reward response.

It’s surely no coincidence that during the 2016 presidential election, Russian hackers focused largely on convincing various groups of Americans that another group was out to get them. But foreign agents are hardly the top promoters of tribalism online. As anyone who’s spent time on social media knows, there’s plenty of homegrown schadenfreude on the web.


Faced with tech’s cognitive overload, humans determine what’s worthy of attention by relying on biases shaped by evolution, says Thomas T. Hills, a professor of psychology at England’s University of Warwick. Those tendencies may have helped our ancestors survive, but they’re not always in our best interests today, Hills says. He identifies four types of “cognitive selection” that fuel digital tribalism.

Selection for belief-consistent information. Also called confirmation bias, it inclines us to prefer data that align with what we already think. In prehistoric times, this might have led people to see a rainstorm as proof of a shaman’s power over the weather — an interpretation that strengthened social cohesion, even if it was wrong. Today, confirmation bias can lead to more consequential errors, such as seeing a cold snap as proof that climate change is a hoax.

Selection for negative information. This tendency, also known as negativity bias, primed our ancestors’ brains to prioritize alertness for predators over other, less threatening types of attention. Today, it can lead us to privilege bad news over good — for example, by taking a single horrific crime by an out-group member more seriously than data showing that the group as a whole is law-abiding.

Selection for predictive information. Pattern-recognition bias, as it’s often called, helps us discern order in chaos. Noticing that large prey animals tended to arrive in the savanna after the first summer rains would have given early humans an evolutionary advantage. Today, however, a predilection for patterns can lead us to detect conspiracies where none exist.

Selection for social information. This “herd bias” prompts us, in uncertain environments, to follow the crowd. Back in the day, “if everyone else in your tribe was running toward the river, they probably had a good reason,” says Hills. But if everyone in your Reddit community says a famous politician is running a child-sex ring from the basement of a pizzeria, well, it would be wise to visit a fact-checking website before making up your mind.”

“Covid-19, and Our Tribal Identities | When Our Moral Psychology Turns on Itself” by Hector Garcia [Psychology Today]

“Covid-19, and Our Tribal Identities

When Our Moral Psychology Turns on Itself

Hector A Garcia Psy.D.
Hector A. Garcia, Psy.D., is a professor of psychiatry at the University of Texas Health Science Center at San Antonio. [Alpha God: The Psychology of Religious Violence and Oppression; Sex, Power, and Partisanship: How Evolutionary Science Makes Sense of Our Political Divide]

Posted Apr 27, 2020



As examples, golf courses around the nation were allowed to remain open, along with beaches in Florida. Texas Republican Governor Greg Abbott decreed that religious services were allowable, despite the obvious risks (churches were already defying previous restrictions). With emboldening tweets from Donald Trump, anti-lockdown rallies have proliferated, touting the pandemic as a liberal media conspiracy and containment efforts as anti-freedom; one protestor’s sign read “Social Distancing is Communism!,” another, “Liberate America!” Notably, these are gatherings of potentially infectious people in close contact with one another, and perhaps unsurprisingly the virus has begun to kill protestors. While the common thread of churches, Trump, Texas governance, and even golf courses may be political conservatism, how do we make sense of this dangerously irrational behavior?

It is ironic that our sensible, science-based efforts to remove ourselves from the coronavirus’s onslaught push against ancient instincts designed to help us survive infectious disease. In other words, our pathogen-survival instincts may have outlived their utility, and today may actually be helping to spread COVID-19 rather than contain it. It may be surprising to learn that these very instincts drive our political behaviors. Given that our strategic responses to infectious disease are utterly dependent on political processes, understanding our evolved responses to germs is now literally a matter of life and death. To ensure our future, we may start with a look to our past.

Our hunter-gatherer ancestors knew nothing about what diseases really were. The microbiologic world was invisible to them. They had no vaccines. They had no antibiotics. But they were (as we are now) equipped with what is known in the evolutionary sciences as a behavioral immune system—a set of emotional responses such as disgust, fear, and hostility that helped them withdraw from potential pathogens. Given that humans were the biggest vectors of disease, that immune system included a prejudicial psychology against outsiders; strangers potentially carried pathogens for which the tribe had no immunity. We already know the devastating impact that contact between distant peoples has had on human populations—that is, before the advent of vaccines and an understanding of physical distancing. As one crushing example, diseases brought by the European invasion of the “New World” sent up to 90% of Native Americans to their graves.

Fear of germs, like so many other traits, falls along a natural distribution curve. Just as there were advantages to xenophobia in our ancestral environments, there were advantages to xenophilia (an attraction to outsiders), which afforded our ancestors greater access to new technologies and to mates outside their gene pool. One of the most robust findings in the science of our political psychology is that those who are more germ-phobic and xenophobic tend to be more politically conservative. It makes sense, then, that today those with greater xenophobia would be drawn to politics that are hawkish in foreign policy, take tough stances on border security, and posture against affirmative action (which helps people who may be seen as outsiders).

Interestingly, those on the conservative end (on average) tend to be not only more fearful of germs and outsiders but generally more fearful. Imaging studies even find that conservatives tend to have larger amygdalae, the brain structures that generate our fear responses. Yet those protesting on the streets, violating isolation orders, putting not only others but also themselves in mortal danger, tend to be overwhelmingly far right. The explanation for this stunning contradiction: tribalism.


Sometimes the signals are visual. Think of aboriginal tribes that wear similar regalia to signify belonging: similar headdresses, similar colors, or even tribal scarring or tattoos. Today’s MAGA hats, protest signs, political bumper stickers, and T-shirts tell other members of the group that you’re with them. At other times the signal is showing agreement, and an alarming degree of this kind of conformity happens below the level of conscious awareness. There is important research on this tendency worth pondering.


Indeed, that tendency to mentally suppress information that runs contrary to group consensus appears to be related to the fact that humans are exquisitely talented at detecting liars, cheaters, and even insincerity. The capacity to self-deceive may therefore have developed to conceal our true beliefs. If Wrangham is right, this ability to self-deceive may have kept our ancestors from getting murdered by their peers.


One revealing study examined how highly partisan liberals and conservatives respond to fabricated newspaper stories on welfare programs. One program was exorbitantly generous, the other inflexibly stingy. The researchers then queried which program subjects supported. Given what everyone already knows about our political stances, you might guess which side supported which policy. However, before subjects rated their support, they were told that House Democrats (or Republicans) strongly endorsed each of the two welfare policies, and that the rival party strongly rejected them. If conservatives believed their party supported the lavish welfare policy, they too supported it, and vice versa for liberals. This shows us that the impulse to go with the tribe can override our own strongly held principles. Tellingly, subjects reported that their own policy perspectives influenced them most and that the stances of lawmakers influenced them least, despite going with the flow in a way that so blatantly contradicted their own values. In other words, they were blind to their own tribalistic blinders. And therein lies the problem.


When protestors see each other in their tribal regalia, when they chant in unison, they feel an ancient, emotionally intuitive sense of belonging. And because they have identified an outside force (represented by the liberal media, science, etc.), their emotional ties to each other draw even tighter. Moreover, by cohering to the preposterous idea that the need for physical distancing is a liberal conspiracy, they reassure one another of their commitment to the tribe. But this puts everyone, including themselves, at grave risk. It’s antisocial. It’s dangerous. It’s asinine. But it is explainable. We are social animals operating in groups inclined to show commitment to one another through agreement. What we agree on can be like a virus itself. And critical thinking gets swallowed up by our ancient fears of rejection.”

“Why Social Distancing Feels So Strange” By George M. Leader [Sapiens]

“Why Social Distancing Feels So Strange

Humans are wired through millions of years of evolution to be social creatures. Faced with the COVID-19 virus, can we stay connected at a distance?

George M. Leader is a visiting assistant professor of anthropology at The College of New Jersey.


Why does intentionally avoiding physical interaction with other humans during our daily routine feel so strange? The answer may lie in millions of years of behavioral and cultural evolution.

Since our evolutionary split from chimpanzees around 7 million years ago, humans have become increasingly dependent on complex social cooperation to survive and thrive. People sometimes think of humans as fundamentally selfish or violent, but anthropological research shows that we have evolved to work cooperatively and live in supportive communities.

Some of the earliest evidence for the importance of cooperative behavior in our species comes from a surprising event: the evolution of walking on two legs. Among the earliest evidence of bipedalism in the hominid lineage is Sahelanthropus tchadensis, an upright ape-like primate from Chad dating to about 7 million years ago.

There are plenty of possible reasons why our ancestors began to stand upright: It might have helped them regulate their body temperature, decrease their exposure to natural radiation from the sun, or increase their range of sight to watch for predators, among other benefits. But one hypothesis proposed by American biological anthropologist C. Owen Lovejoy in 1981 suggests that our ancestors freed up their hands for food sharing, specifically so that a male could carry food back to a female raising their young. This type of social cooperation is much more difficult for quadruped knuckle-walkers like chimpanzees.


By about 1.9 million years ago, around the time of the appearance of Homo erectus, cooperative behavior may have greatly increased again. By this time, females were facing significant challenges giving birth: Their upright bodies had a hard time delivering big-brained babies. This physical burden might have prompted dramatic shifts in hominin social structures, with a bigger division of labor between males and females, and additional collaboration between childrearing females.

Along with this change in society seems to have come stronger social supports within these communities. Physical evidence for this can be found in the femur of an 800,000-year-old H. erectus from Java. The femur was badly broken—an injury that almost certainly means a quick death for someone trying to live alone. But, incredibly, this fracture healed. That means the injured hominin received an enormous amount of support from their social group. Our ancestors really took care of one another.


As a result of humanity’s evolution for social tendencies, we have a problem: loneliness. This feeling may act as a driver to pull people back together, much as thirst makes people drink and hunger makes people eat. But it has negative consequences too.

People who perceive themselves as being without social support, living in a world without beneficial social interaction, can become irritable and depressed. Lonely people—and animals—tend to adopt more selfish behaviors, putting their own needs first. The more a human thinks there is a lack of beneficial social interaction around them—in other words, the lonelier they feel—the more they adopt these behaviors.

The consequences of isolation and the ensuing selfish behaviors can be high. Persistent loneliness can reduce our capacity to look after ourselves and even harm our physical health. According to one 2018 study, loneliness in people is associated with a 26 percent increase in the chance of premature death.


But can we entirely override our long-programmed interactive cooperation and replace it with distant cooperation? Will virtual interaction be a suitable replacement in fulfilling the need for physical interaction? It remains to be seen.”

“Isabel Behncke: ‘Panic about contagion, about the infectious, is one of our most atavistic fears’” [La Tercera]

“Isabel Behncke: ‘Panic about contagion, about the infectious, is one of our most atavistic fears’


The Chilean primatologist, renowned for her research in the Congo on the social behavior of bonobos, argues that evolutionary biology can help us understand both the causes of the pandemic and the way we are reacting to it. Holder of a doctorate from Oxford and now a member of the Social Complexity Research Center at the UDD, Behncke proposes facing the crisis with “an ecologist’s eye.” It would help us think better, and moralize less, about the sacrifices we must choose in order to mitigate different sources of suffering.


The current discussion about zoonoses (diseases that pass from animals to humans), about whether the virus came from a bat or a pangolin, and about how there should be no wildlife markets like the one in Wuhan, connects to warnings that had long been raised about the consumption of biodiversity and the health of ecosystems. And if we keep destroying natural habitats, there are many more animals available for future zoonoses. This pandemic will certainly not be the last.


The gigantic bill we are going to pay now is the price of not understanding how those barriers work. Perhaps because we no longer felt part of the web of life we share with other beings. As Harari says in the title of his book, we saw ourselves going from animals to gods. We were already thinking about Mars; we were leaving this place behind. Somehow, we lost respect for our home. And it has been striking that a simple virus has brought us back to nature in just a few weeks. We spend trillions of dollars on defense systems, and a strand of RNA has us on our knees.


There is a profound irony there: the virus forces us to go against what we are in order to protect ourselves from it. In that sense, you could say this is a brilliant virus. I lived through the Ebola outbreak in the Congo, which was far deadlier but not as contagious, because of its mode of transmission. Covid-19, by killing few and not very quickly, takes excellent advantage of our social behavior. It is as if it said: “I know these animals are incapable of not interacting with one another for 14 days, they are built for that, so I will just lie low here, let them do what they always do, and pass from one human to another.” It is a great strategist, at the very least. Another thing evolutionary biology can help us understand is contagion through networks of interactions: not only biological contagion, but also the contagion of ideas and emotions. Like panic.


Would you say that the competition between reason and panic tests how sapiens we really are in these circumstances?

The dichotomy between emotion and reason has not served us well, because being sapiens also means having emotions; you cannot dissociate them. And while we must insist, emphatically, that we not give in to panic, because it closes us down cognitively and brings serious consequences, recognizing the role of fear in our history is useful for understanding what is happening to us. Fear exists because it has served a purpose. And panic about contagion, about the infectious, is one of our most atavistic fears. In part, we are alive because we have ancestors who once saw someone gravely ill, said “how horrible,” and kept their distance. So it is very understandable that the coronavirus terrifies us beyond any rational calculation. If we were truly so sapiens, we would have an Excel spreadsheet in our heads telling us that we are far more likely to die of cardiovascular disease. And we would be terrified of hamburgers. But because we carry atavistic fears, our fears are not well calibrated. We are more afraid of airplanes than of cars, which is statistically absurd. And we fear blood, spiders, and snakes far more than we fear a car. So feeling this panic about contagion is somewhat inevitable. But we have to be aware of it and regulate it, because giving it free rein is dangerous.


Experience, at least, shows that times of disaster bring out the best and the worst in human nature. The thing is, the dichotomy between cooperation and conflict is also somewhat misleading. Societies operate at many levels of organization (the individual, the family, the neighborhood, the firm, the nation, global society, etc.), and ecologists will tell you that, to observe the phenomena of nature, it is crucial to understand that cooperation and conflict occur at all of those levels at the same time. You yourself are an ecosystem (your body contains more bacteria than human cells) within which there are many conflicts. Now, what does tend to happen in the face of grave threats is that cooperation increases at the higher levels; the big factions band together. And in recent weeks some quite interesting examples of large-scale cooperation, of collective coordination, have emerged. When was the last time humanity rallied around a single purpose, with most humans aware of it? But conflicts of interest have also come into view. And political polarization, of course. I think it would help us greatly, in order to have a friendlier conversation, to observe what is happening with an ecologist’s eye, watching complex systems in action.


Jonathan Haidt, a social psychologist well worth following, believes we are now going to cooperate more, because disasters bring out the best in people, but he is also saying that these situations increase moral disgust. Like our atavistic fears, the emotion of disgust is part of our evolutionary repertoire. There is physical disgust at what we perceive as filth, such as feces, but we also have moral disgust, and that is what is surfacing in many of these fights. There are people who say “you are disgusting, you want to save the economy and you don’t care about human life,” or the reverse, “you think about the sick, but you don’t care how many people will be left without a livelihood; what an aberration.” That feeling of moral repulsion is deeply human.”

“Learning from Animals” by Antoine Doré & Jérôme Michalon | About: Dominique Guillo, Les Fondements oubliés de la culture. Une approche écologique, Seuil, 2019 [La Vie des Idées]

“Learning from Animals

About: Dominique Guillo, Les Fondements oubliés de la culture. Une approche écologique, Seuil


by Antoine Doré & Jérôme Michalon, 19 March

translated by Michael C. Behrent



Neither the social sciences nor the natural sciences are currently invested in studying the cultural relations between humans and animals. If we are to understand them, we must reconsider all our categories, and free ourselves once and for all from the nature-culture divide.

To use the relationship between humans and animals to rethink culture: this is the goal of Dominique Guillo’s book. A sociologist and research director at the CNRS, Guillo offers a structured and thorough synthesis of more than a decade of research. A specialist in the history and epistemology of social sciences as they relate to life sciences, Guillo maintains that the way in which these two disciplinary domains have approached culture suffers from an identity bias, which prevents them from conceiving of the existence of cultures constructed by and between different animal species.

The identity bias diagnosis

Guillo devotes the book’s first three chapters to establishing this epistemological diagnosis. He gets the ball rolling with the natural sciences (behavioral ecology, ethology, and neo-Darwinian biology), in a first chapter that proposes a highly pedagogical synthesis of research from the past forty years on animal sociability and culture. First, we encounter the neo-Darwinians’ unusual definition of the social (i.e., behavior that seeks to perpetuate the genes of individuals other than their producers); then, an ethological definition of culture understood as a set of traits transmitted by social learning, rather than by the genetic mechanisms of natural selection.


Guillo thus calls for a better connection between the social and the natural sciences, as they seem to suffer from the same problem: their inability to study culture except within animal groups belonging to the same species (whether human or non-human). They suffer from a tropism, or identity bias, apparent both in their research’s focus (intraspecific and intragroup relationships) and in its results (culture takes place solely between similar entities and accentuates their similarities to one another). Thus, according to Guillo, these “classic” approaches to culture proceed from (i.e., postulate) and produce (i.e., accentuate) shared identity. In a world in which understanding the interdependence of creatures as different as earthworms, whales, and molecules is becoming more and more crucial, identity bias constitutes a major epistemological obstacle.


This diagnosis of a forgetting of culture’s foundations, itself based on several omissions, is accompanied by an over-adherence to the epistemology of the behavioral sciences. The sole definition of culture used and discussed in this book is borrowed from that discipline, as are Guillo’s key concept (social learning) and the regular appeal to “parsimony.” Furthermore, it is the social sciences rather than the behavioral sciences that the author holds responsible for the impossibility of a synthesis in the study of interspecific cultures. Contrary to what they assert, the social sciences are the most inclined to validate the nature-culture dualism and the boundaries between disciplines, whether out of ideology or disciplinary loyalty. Conversely, sociobiology, behavioral ecology, and evolutionary psychology, by considering humans as one living being among others, dissolve these dualisms and disciplinary frontiers, and appear in Guillo’s account as progressive theories, while the social sciences are notable only for their conservatism. He notes, for example, that by restricting cultural phenomena to identity, the social sciences risk fueling the rise of “’identitarian’ political discourses” (p. 302).”

“Ernst Haeckel’s ‘Kant Problem’: metaphysics, science, and art” by Stefan Forrester [Biology & Philosophy, 2020]

“Ernst Haeckel’s ‘Kant Problem’: metaphysics, science, and art

Stefan Forrester

Biology & Philosophy volume 35, Article number: 27 (2020)

Published: 05 March 2020



Ernst Haeckel (1834–1919) has become famous, and perhaps infamous, for many reasons. Presently, he is probably most widely known for his paintings of plants and animals in his very popular book, Art Forms in Nature, originally collected and published in 1904. However, in addition to Haeckel’s art, he is also well-known for his advocacy of Darwinism and Social Darwinism, for first coining the term ‘ecology,’ for having his work utilized by Nazi pseudo-scientists (Dombrowski in Tech Commun Q 12:303–319, 2003), and for famously (perhaps fraudulently) producing drawings of animal and human embryos so as to confirm his biogenetic law (Gould in Nat Hist 109:44–45, 2000). Something Haeckel is not as well-known for today is the fact that he seemed to be both a strenuous critic of the metaphysical and moral philosophies of Immanuel Kant and yet also something of an adherent to Kant’s aesthetic views. In terms of metaphysics and morality, Haeckel sought to exorcise Kant’s ideas as much as possible from twentieth century views on science, humanity, and nature; however, in terms of aesthetic theory, Haeckel seemed to embrace a distinctly Kantian approach to art and artworks. This essay proposes to: (1) carefully examine Haeckel’s refutations of some of Kant’s central metaphysical concepts, (2) explore some of the, arguably Kantian, assumptions underlying Haeckel’s approach to aesthetics and his artistic practice, and (3) combine these two lines of inquiry into a portrait of Haeckel’s mind as one that is conflicted about the role Kantian philosophy, and more specifically Kantian noumena, should play in twentieth century science and art. This unresolved tension in Haeckel’s mind regarding Kant’s noumenal realm is what I propose to call his ‘Kant Problem’.


Haeckel’s refutations of Kantian metaphysics and morality

Ernst Haeckel had a complex relationship with the philosophy of Immanuel Kant. While Haeckel respected Kant’s thinking and his position as a highly important figure in the history of ideas, he also wanted very much to dispute several of Kant’s central philosophical claims. Haeckel wanted to do away with much of Kant’s epistemology and metaphysics, and most of his ethical theory as well. It is clear that Haeckel studied Kant during his beginning years as a professor at Jena in the early 1860s. Robert Richards, in his book The Tragic Sense of Life: Ernst Haeckel and the Struggle Over Evolutionary Thought, cites reliable evidence that Haeckel read Kant’s works with Kuno Fischer (1824–1907), the then rector of the University at Jena, and that Haeckel was also reading the works of Alexander von Humboldt (1769–1859), a renowned Kant and Schelling scholar who worked to suffuse the sciences with philosophical ideals, much as Haeckel himself would do later in his career (2008). The important difference is that Humboldt sought to conjoin modern science with Kantian-style metaphysical concepts, whereas Haeckel thought that Kant’s views were incompatible with the progress of scientific knowledge and with a scientific worldview.

In his own philosophical works some 35 years later, such as The Wonders of Life: A Popular Study in Philosophical Biology (1905) and The Riddle of the Universe at the Close of the Nineteenth Century (1901), Haeckel advocated vehemently for a kind of philosophical monism: a monism that set nature, i.e., scientifically analyzable nature, as the one and only component of existence, encompassing and expressing all the properties of the universe, both physical and mental. Haeckel says clearly in Riddle of the Universe, “We adhere firmly to the pure, unequivocal monism of Spinoza: Matter, or infinitely extended substance, and spirit (or energy), or sensitive and thinking substance, are the two fundamental attributes or principal properties of the all-embracing divine essence of the world, the universal substance” (pp. 33–34). The main idea of monism is generally that there is only one substance that has properties: nature. Moreover, for philosophers like Spinoza, this substance is also identical to God; they are one and the same thing, the substance in which all the properties of the universe inhere. Haeckel, interestingly, also refers to his version of monism as a thoroughgoing ‘practical materialism.’ The most controversial consequence of this view is that it eradicates the metaphysical possibility of the supernatural. If we are to conceive of God, souls, angels, the afterlife, etc., as essentially a different kind of substance than nature, then they are all rendered philosophically impossible by Haeckel’s monism: partly because God et al. are defined as non-natural, and nothing can exist apart from nature; and partly because non-natural entities are not subject to scientific analysis. On the other hand, the philosophical benefits of monism, Haeckel believed, were many. First, scientific monism finally rids the world of all forms of superstition and supernatural religious beliefs.
Haeckel thought that this result would be a great boon to humanity; he says quite bluntly in The Wonders of Life, “For my part, I hold that superstition [here he is discussing the belief in miracles] and unreason are the worst enemies of the human race, while science and reason are its greatest friends” (p. 56). Second, Haeckel saw monism as laying the philosophical groundwork for a fully scientized understanding of both the external world we explore with our senses and the internal world we explore with our minds, both of which are, simply, nature. Furthermore, Haeckel claims that all of nature is governed by rigid, universal laws, and that only science and the scientific method allow us to discover these laws. Finally, Haeckel contends that non-monist philosophical systems, like dualism, only serve to confuse and conflate the true nature of reality and lead us to make distinctions, e.g., between the body and the mind, where none actually exist.


Haeckel’s rejection of Kant’s metaphysical views comes from two directions: (1) Since the knowledge of noumena must be a priori, and since there is no way for science, which is based solely on knowledge from sensation, i.e., a posteriori knowledge, to prove the existence of a priori knowledge, we must reject noumena if we are to maintain a scientific worldview. (2) If we were to accept the existence of noumena, that would amount to a kind of dualism about the mind and external reality, which is tantamount to just another form of spiritual superstition; a superstition that is philosophically grounded instead of faith-based, but a superstition nonetheless. Haeckel’s argument for his first thrust against Kant is basically that what Kant understood as reason, or the pure a priori faculty of the mind, is in fact something that physiological studies of the brain in Haeckel’s era had explained in purely scientific terms. Namely, that the vast collection of neurons in the brain is the physical basis for consciousness, and that the uniquely human faculty for understanding what appear to be a priori truths and concepts actually has an a posteriori basis in terms of how the human brain evolved. If we understand the a posteriori history of the human brain’s development, Haeckel argues, we will then be able to dispense with the idea that our perceived faculty for a priori truths (i.e., reason) is anything more than a scientifically measurable, a posteriori, phenomenon:

Kant regarded this highest faculty of the human mind as innate, and made no inquiry into its development, its physiological mechanism, and its anatomic organ, the brain….it was impossible to have at that time a correct idea of its physiological function. What seems to us to-day to be an innate capacity, or an a priori quality, of our phronema, is really a phylogenetic result of a long series of brain-adaptations, formed by a posteriori sense-perceptions and experiences (1905, p. 69).

Haeckel argues for the second prong of his attack by stating simply that any appeal to a reality beyond what can be perceived by the senses amounts to superstition, regardless of whether it comes from a religion or a powerful philosophical thinker like Kant: “The sense world (mundus sensibilis) lies open to our senses and our intellect, and is empirically knowable within certain limits. But behind it [according to Kant] there is the spiritual world (mundus intelligibilis) of which we know, and can know, nothing; its existence (as the thing in itself) is, however, assured by our emotional needs. In this transcendental world dwells the power of mysticism” (1905, p. 68). In this quote I think we see Haeckel distilling his frustrations with Kant’s metaphysics quite sharply. Haeckel implies here that Kant’s arguments for the noumenal realm amount to a kind of emotional appeal: it is only because of our psychological need for a deeper level of reality beyond the phenomenal that we are tempted to believe in a ‘mystical’ transcendental world at all. Nevertheless, since this emotional need is very strong, it manifests itself as very powerful religious, spiritual, and mystical beliefs and practices, all of which I think Haeckel would classify as forms of superstition. Kant’s views leave the door open for a spiritual realm that is distinct from the phenomenal world that comes to us through the senses and is thereby impenetrable to the methods and modalities of science. Accepting this “mundus intelligibilis” as an integral part of reality is, I think for Haeckel, a basic philosophical mistake that is tantamount to embracing superstition.

Moving now to Haeckel’s criticisms of Kant’s moral theory, we find that those objections emerge directly from his criticisms of Kant’s metaphysics. Haeckel argues that once Kant left open the door to the “mundus intelligibilis” in his metaphysical theory, it was easier for him to import some traditional ethical assumptions through that door to function as the basis for his moral views, namely the notions of God, free will, and the immortality of the soul, i.e., Kant’s three archetypal ideas of reason. Thus the foundations of Kant’s moral theory, says Haeckel, rest on that same fundamental mistake of affirming the existence of the noumenal realm in addition to the phenomenal realm (the realm of science). Haeckel bemoaned the fact that most other philosophers and theologians in his day were still in Kant’s camp when it came to morals, stating, “They affirm, with Kant, that the moral world is quite independent of the physical, and is subject to very different laws; hence a man’s conscience, as the basis for his moral life, must also be quite independent of our scientific knowledge of the world, and must be based rather on his religious faith” (1901, p. 348). In this passage we begin to see a kind of crystallization of Haeckel’s fears about Kant’s decision to accept the noumena as real. As a result of these fears, Haeckel’s purely philosophical objections to the phenomena-noumena distinction were not altogether well-formed. He objected to the noumena mostly on the grounds that they conflicted with his preferred worldview of monism. Haeckel did not attack the noumena on logical grounds as being self-contradictory or incoherent, and thus he could not advocate for their elimination from metaphysics based on reasoning alone. But now we see Haeckel showing us the damaging results of allowing the noumenal level of reality into the world. Basically, all of what Haeckel saw as the destructive impact of religion and religious belief was facilitated by the noumena.
The most important areas of human experience (knowledge, morality, truth, and reality) all become different sorts of divine mysteries because of the noumena. Moreover, the scientific study of nature (the phenomena) becomes inherently secondary and limited compared to the conceptual understanding of the noumena. In other words, with the noumena allowed into our worldview, science can play no role in some important areas of human experience, like morality. Instead, science must remain silent, and clearly, Haeckel wishes to argue that this result is detrimental to humanity.

Lastly, while still addressing Kantian morality, Haeckel repeats his strategy of attacking Kant’s views both philosophically and scientifically. In The Wonders of Life Haeckel claims that modern science has understood the human brain to such a degree that Kant’s appeal to the unique human faculty of reason no longer holds any weight. By studying the brain, science has rendered what Kant thought was a noumenal entity (reason) into a phenomenal entity (the brain). Therefore, there is no longer any need for noumena. Likewise, in Riddle of the Universe, Haeckel asserts that various modern sciences have either explained or dispelled all of Kant’s noumenal ethical concepts. Haeckel says that modern anthropology has “…dissipated [the] pretty dream…” (1901, p. 349) that all humans have an identical set of ethical faculties because they are based on the universality of reason. The study of other cultures has told us clearly, Haeckel argues, that peoples and cultures differ widely on what constitutes a good ethical person, and what constitutes good ethical judgment. He also claims that “comparative and genetic psychology” has shown that there cannot be a soul and that modern physiology has proven the impossibility of free will (Haeckel 1901, p. 349). Although Haeckel does not fill in much scientific detail about these claims, he clearly sees them as decisive arguments against Kant’s moral theory. The final blow from modern science that Haeckel deals to Kantian morality is that its central tenet, namely Kant’s much-vaunted categorical imperative, has been replaced by the biological understanding of human beings as social creatures. Without going into too much detail, Kant thought that the categorical imperative could be proven using a “transcendental deduction of pure reason” (see especially Part I, Book I, Chapter I of the Critique of Practical Reason).
This deduction, being transcendental and not empirical, involves several noumenal ideas, such as the notion of the “good will”, “autonomy”, and “freedom of the will” to name a few. Hence, when Haeckel says, “[This]…shows that the feeling of [moral] duty does not rest on an illusory ‘categorical imperative,’ but on the solid ground of social instinct, as we find in the case of all social animals” (1901, p. 350), he is casting serious doubt on Kant’s use of noumenal ideas, going so far as to call them “illusory” in this context. So here, just as Haeckel earlier dispensed with Kant’s notion of the noumenal mind with neurology, he dispenses with Kant’s noumenal ethical notions with anthropology.”

“How Did Belief Evolve?” – Agustín Fuentes [Sapiens]

“How Did Belief Evolve?

An anthropologist traces the development of Homo sapiens’ most creative and destructive force, from the making of stone tools to the rise of religions.

Agustín Fuentes
is the chair of the anthropology department at the University of Notre Dame.


About 20 years ago, the residents of Padangtegal village in Bali, Indonesia, had a problem. The famous, monkey-filled forest surrounding the local Hindu temple complex had become stunted, and saplings failed to sprout and thrive. Since I was conducting fieldwork in the area, the head of the village council, Pak Acin, asked me and my team to investigate.

We discovered that locals and tourists visiting the temples had previously brought food wrapped in banana leaves, then tossed the used leaves on the ground. But when plastic-wrapped meals became popular, visitors threw the plastic onto the forest floor, where it choked the young trees.

I told Acin we would clean up the soil and suggested he enact a law prohibiting plastic around the temples. He laughed and told us a ban would be useless. The only thing that would change people’s behavior was belief. What we needed, he said, was a goddess of plastic.

Over the next year, our research team and Balinese collaborators didn’t exactly invent a Hindu deity. But we did harness Balinese beliefs and traditions about harmony between people and environments. We created new narratives about plastic, forests, monkeys, and temples. We developed ritualistic caretaking behaviors that forged new relationships between humans, monkeys, and forests.

As a result, the soils and undergrowth were rejuvenated, the trees grew stronger and taller, and the monkeys thrived. Most importantly, the local community reaped the economic and social benefits of a healthy, vigorous forest and temple complex.

Acin taught me that science and rules cannot ensure lasting change without belief—the most creative and destructive ability humans have ever evolved.


In my recent book, Why We Believe, I explore how we evolved this universally and uniquely human capacity, drawing on my 26 years of research into human and other primates’ evolution, biology, and daily lives. Our 2-million-year journey to complex religions, political philosophies, and technologies essentially follows a three-step path: from imagination to meaning-making to belief systems. To trace that path, we must go back to where it started: rocks.


By 500,000 years ago, Homo had mastered the skill of shaping stone, bone, hides, horns, and wood into dozens of tool types. Some of these tools were so symmetrical and aesthetically pleasing that scientists speculate toolmaking took on a ritual aspect that connected Homo artisans with their traditions and community. These ritualistic behaviors may have evolved, hundreds of thousands of years later, into the rituals we see in religions.

With their new gadgets, Homo chopped wood, dug deeper for tubers, collected new fruits and leaves, and put a wider variety of animals on the menu. These activities—expanding their diets, constructing new ecologies, and altering the implements in their environment—literally reshaped their bodies and minds.

In response to these diverse experiences, Homo grew increasingly dynamic neural pathways that allowed them to become even more responsive to their environment. During this time period, Homo’s brains reached their modern size.


The advent of cooking opened up a new landscape of foods and nutrient profiles. By boiling, barbecuing, grinding, or mashing meat and plants, Homo maximized access to proteins, fats, and minerals.

This gave them the nutrition and energy necessary for extended childhood brain development and increased neural connectivity. It allowed them to travel greater distances. It enabled them to evolve neurobiologies and social capacities that made it possible to move from imagining and making new tools to imagining and making new ways of being human.


Once groups attribute shared meaning to objects they can manipulate, it is an easy jump to give shared meaning to larger elements they cannot change: storms, floods, earthquakes, volcanoes, eclipses, and even death. We have evidence that by at least a few hundred thousand years ago, early humans were placing their dead in caves. Within the past 50,000 years, distinct examples of burial practices became more and more common.”

“Does the extended evolutionary synthesis entail extended explanatory power?” – J. Baedke, A. Fábregas-Tejeda & F. Vergara-Silva, Biology & Philosophy, vol. 35, 2020

“Does the extended evolutionary synthesis entail extended explanatory power?

Jan Baedke, Alejandro Fábregas-Tejeda & Francisco Vergara-Silva

Biology & Philosophy, volume 35, Article number: 20 (2020)

Published: 23 January 2020



Biologists and philosophers of science have recently called for an extension of evolutionary theory. This so-called ‘extended evolutionary synthesis’ (EES) seeks to integrate developmental processes, extra-genetic forms of inheritance, and niche construction into evolutionary theory in a central way. While there is often agreement in evolutionary biology over the existence of these phenomena, their explanatory relevance is questioned. Advocates of EES posit that their perspective offers better explanations than those provided by ‘standard evolutionary theory’ (SET). Still, why this would be the case is unclear. Usually, such claims assume that EES’s superior explanatory status arises from the pluralist structure of EES, its different problem agenda, and a growing body of evidence for the evolutionary relevance of developmental phenomena (including developmental bias, inclusive inheritance, and niche construction). However, what is usually neglected in this debate is a discussion of what the explanatory standards of EES actually are, and how they differ from prevailing standards in SET. In other words, what is considered to be a good explanation in EES versus SET? To answer this question, we present a theoretical framework that evaluates the explanatory power of different evolutionary explanations of the same phenomena. This account is able to identify criteria for why and when evolutionary explanations of EES are better than those of SET. Such evaluations will enable evolutionary biology to find potential grounds for theoretical integration.”