“The misinformation virus | Lies and distortions don’t just afflict the ignorant. The more you know, the more vulnerable you can be to infection” By Elitsa Dermendzhiyska [Aeon]

“The misinformation virus

Lies and distortions don’t just afflict the ignorant. The more you know, the more vulnerable you can be to infection

Elitsa Dermendzhiyska

is a science writer and social entrepreneur working at the intersection of technology, research and mental health. She is the editor of the mental health anthology What Doesn’t Kill You: 15 Stories of Survival (2020). She lives in London.

16 April 2021

https://aeon.co/essays/why-humans-find-it-so-hard-to-let-go-of-false-beliefs

(…)

What’s different today is the speed, scope and scale of misinformation, enabled by technology. Online media has given voice to previously marginalised groups, including peddlers of untruth, and has supercharged the tools of deception at their disposal. The transmission of falsehoods now spans a viral cycle in which AI, professional trolls and our own content-sharing activities help to proliferate and amplify misleading claims. These new developments have come on the heels of rising inequality, falling civic engagement and fraying social cohesion – trends that render us more susceptible to demagoguery. Just as alarming, a growing body of research over the past decade is casting doubt on our ability – even our willingness – to resist misinformation in the face of corrective evidence.

(…)

Yet no matter how clear the correction, typically more than half of subjects’ references to the original misinformation persist. What’s remarkable is that people appear to cling to the falsehood while knowing it to be false. This suggests that, even if successfully debunked, myths can still creep into our judgments and colour our decisions – an outcome referred to in the literature as ‘the continued influence effect’.

Why does this happen? According to Jason Reifler, professor of political science at the University of Exeter, we tend to take incoming information at face value, ‘because the existence of human society is predicated on the ability of people to interact and [on] expectations of good faith.’ Moreover, myths can take on subtle, crafty forms that feign legitimacy, making them hard to expose without careful analysis or fact checks. This means that those of us too dazed by the job of living to exert an extra mental effort can easily succumb to deception. And once a falsehood has slipped in and become encoded in memory – even weakly – it can prove remarkably sticky and resistant to correction.

(…)

Another reason why misinformation resists correction is repetition. Once something gets repeated often enough – sensational claims on social media; urban legends passed from one bored timewaster to another – it can trick us into taking it as true merely because of its familiarity. The illusory truth effect, as it’s known, suggests that the easier to process and more familiar something is, the more likely we are to believe it. Which is exactly what repeating a misleading claim does – getting it to go down smooth by strengthening the neural pathways linked to it.

(…)

In recent years, as misinformation has wormed its way into large swathes of society, scientists have been looking for the most effective methods to counter it. Recently, Lewandowsky spearheaded The Debunking Handbook 2020, an online collection of best practice by 22 of the most active researchers in the field. The contributors nominated more than 50 relevant findings and more than 30 practical recommendations, rating them on their importance and the strength of the available evidence. To successfully debunk a myth, the authors conclude, it helps to provide an alternative causal explanation to fill the mental gap that retracting the myth could leave. Counterarguments work too, as they point out the inconsistencies contained in the myth, allowing people to resolve the clash between the true and the false statement. Another strategy is to evoke suspicion about the source of the misinformation. For example, you might be more critical of government officials who reject human-caused global warming if you suspect vested business interests behind the denialist claims.

(…)

John Cook, a climate change communication researcher at George Mason University in Virginia, told me: ‘I could develop the perfect message that debunks the myth completely. And, even if I could get that message to the right person, what happens if they just go home and turn on Fox News and get five hours of misinformation thrown at them? That particular message will be wiped out.’

(…)

To fully grasp the pernicious nature of the misinformation virus, we need to reconsider the innocence of the host. It’s easy to see ourselves as victims of deception by malicious actors. It’s also tempting to think of being misinformed as something that happens to other people – some unnamed masses, easily swayed by demagoguery and scandal. ‘The problem is that people are sheep,’ one friend said to me. I’ve heard this sentiment echoed time and again by others, the implication always being that they and I were not like those other, misinformed people. No: we were educated, had been taught to think, immune to dupery. But, as it turns out, misinformation doesn’t prey only on the ignorant: sometimes, those who seem least vulnerable to the virus can prove its keenest hosts, and even handmaidens.

(…)

In the study, conducted in 2010 and published in Nature Climate Change in 2012, Kahan and his collaborators measured subjects’ science literacy and numeracy, and plotted those against the participants’ perceived risk of global warming. If the science comprehension thesis were right, then the more knowledgeable the subjects, the more they’d converge towards the scientific consensus. Surprisingly, however, the data revealed that those who scored high on hierarchy and individualism – the hallmark values of a conservative outlook – exhibited the opposite pattern: as their science literacy and numeracy increased, their concern about climate change actually declined. What explains this seeming paradox?

Kahan argues that rather than being a simple matter of intelligence or critical thinking, the question of global warming triggers deeply held personal beliefs. In a way, asking for people’s take on climate change is also to ask them who they are and what they value. For conservatives to accept the risk of global warming means to also accept the need for drastic cuts to carbon emissions – an idea utterly at odds with the hierarchical, individualistic values at the core of their identity, which, by rejecting climate change, they seek to protect. Kahan found similar polarisation over social issues that impinge on identity, such as gun control, nuclear energy and fracking, but not over more identity-neutral subjects such as GMO foods and artificial sweeteners. In cases where identity-protective motivations play a key role, people tend to seek and process information in biased ways that conform to their prior beliefs. They might pay attention only to sources they agree with and ignore divergent views. Or they might believe congruent claims without a moment’s thought, but spare no effort finding holes in incongruent statements: the brightest climate-change deniers were simply better than their peers at counter-arguing evidence they didn’t like.
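Kahan’s ‘motivated numeracy’ pattern is easier to see in a deliberately crude simulation. In the sketch below every number (the acceptance probabilities, the skill effect, the amount of evidence) is an invented assumption, not Kahan’s model: skill raises acceptance of identity-congruent evidence and, spent on counter-arguing, lowers acceptance of incongruent evidence, so the gap between groups widens as skill rises.

```python
import numpy as np

rng = np.random.default_rng(0)

def share_accepted(skill, congruent, n_items=50):
    # Each simulated agent sees n_items pieces of evidence for the consensus view.
    # Congruent agents use their skill to appraise the evidence and accept more of it;
    # incongruent agents spend the same skill counter-arguing and accept less.
    p_accept = 0.5 + 0.4 * skill if congruent else 0.5 - 0.4 * skill
    return (rng.random(n_items) < p_accept).mean()

for skill in (0.1, 0.5, 0.9):
    cong = np.mean([share_accepted(skill, True) for _ in range(500)])
    incong = np.mean([share_accepted(skill, False) for _ in range(500)])
    print(f"skill {skill:.1f}: congruent accept {cong:.2f}, "
          f"incongruent accept {incong:.2f}, gap {cong - incong:.2f}")
```

The gap grows with skill because ability amplifies whichever direction motivation points – the study’s seeming paradox in miniature.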

This hints at a vexing conclusion: that the most knowledgeable among us can be more, not less, susceptible to misinformation if it feeds into cherished beliefs and identities. And though most available research points to a conservative bias, liberals are by no means immune.

In a 2003 study, Geoffrey Cohen, then a professor of psychology at Yale, now at Stanford University, asked subjects to evaluate a government-funded job-training programme to help the poor. All subjects were liberal, so naturally the vast majority (76 per cent) favoured the policy. However, if subjects were told that Democrats didn’t support the programme, the results completely reversed: this time, 71 per cent opposed it. Cohen replicated this outcome in a series of influential studies, with both liberal and conservative participants. He showed that subjects would support policies that strongly contradicted their own political beliefs if they thought that others like them supported those policies. Despite the social influence, obvious to an outsider, participants remained blind to it, attributing their preferences to objective criteria and personal ideology. This would come as no surprise to social psychologists, who have long attested to the power of the group over the individual, yet most of us would doubtless flinch at the whiff of conformity and the suggestion that our thoughts and actions might not be entirely our own.

For Kahan, though, conformity to group beliefs makes sense. Since each individual has only negligible impact on collective decisions, it’s sensible to focus on optimising one’s social ties instead. Belonging to a community is, after all, a vital source of self-worth, not to mention health, even survival. Socially rejected or isolated people face heightened risks of many diseases as well as early death. Seen from this perspective, then, the impulse to fit our beliefs and behaviours to those of our social groups, even when they clash with our own, is, Kahan argues, ‘exceedingly rational’. Ironically, however, rational individual choices can have irrational collective consequences. As tribal attachments prevail, emotions trump evidence, and the ensuing disagreement chokes off action on important social issues.

(…)

I’ve wondered recently if, like school violence, misinformation is becoming part of the culture, if it persists because some of us actively partake in it, and some merely stand by and allow it to continue. If that’s the case, then perhaps we ought to worry less about fixing people’s false beliefs and focus more on shifting those social norms that make it OK to create, spread, share and tolerate misinformation. Paluck shows one way to do this in practice – highly visible individual action reaching critical mass; another way could entail tighter regulation of social media platforms. And our own actions matter, too. As the Scottish biologist D’Arcy Wentworth Thompson said in 1917, ‘everything is what it is because it got that way’. We are, each and every one of us, precariously perched between our complicity in the world as it is and our capacity to make it what it can be.”

“Study shows that people with a political bias are more susceptible to fake news” [O Globo]

“Study shows that people with a political bias are more susceptible to fake news

5 July 2020 • 07:00

https://blogs.oglobo.globo.com/sonar-a-escuta-das-redes/post/estudo-mostra-que-pessoas-com-vies-politico-sao-mais-suscetiveis-fake-news.html

Suzana Correa

A first-of-its-kind Brazilian study, to be published in the Journal of American Politics and based on an experiment conducted during the 2018 presidential elections, may put an end to the myth that the people who believe fake news are the proverbial ‘WhatsApp aunties’. The researchers show that those most inclined to trust a false rumor are the ones who already have a team in the political game. And voters’ desire to reach the conclusions suggested by their party affiliations reduces the effectiveness of fact-checking.

‘Schooling, intellectual level, sex and age bear no relation to how much people believe fake news. More than swaying the average voter, fake news reinforces the beliefs of those who already hold a political position and intensifies prejudices, opinions and values,’ says Felipe Nunes, one of the study’s authors, a professor at the Universidade Federal de Minas Gerais (UFMG) and director of the consultancy Quaest.

The experiment indicates that exposing voters to clarifications from the very victim of a false rumor does not work. Presenting a fact-check from a major news outlet is more effective. The study, titled ‘Motivated reasoning without partisanship? Fake news in the 2018 Brazilian elections’, showed that petistas and anti-petistas (supporters and opponents of the Workers’ Party, the PT) were the most susceptible to believing fake news about the PT, whether negative or positive.

The good news is that only about 30 per cent of the participants in the Brazilian experiment believed the fake news, a smaller share than the 43 per cent observed in studies in the United States and other countries. The researchers believe that party identification in Brazil, considered weak and unstable, may help limit the spread of fake news in the country.

‘That is the main effect fake news produces in an election. It is also a mechanism for reinforcing the cohesion and mobilization of one’s own group,’ says Nunes.

The study confirms the impact of the well-known ‘confirmation bias’, whereby we interpret or seek out information in order to confirm the beliefs and hypotheses we already hold. The phenomenon has been investigated by social psychology and political science since the 1970s, but it has come back into fashion with the rise of social media and fake news.

It is this bias that forms what scholars call partisan ‘bubbles’ or ‘echo chambers’ on social networks: by following and liking only content that confirms their political or moral preferences, users of networks such as Facebook or Twitter involuntarily ‘train’ the platform’s algorithms to show them ever more information that corroborates their beliefs.

In these environments, fake news is more readily accepted as true and shared, because it confirms the positive view of the user’s own party, or the negative view of the party the user hates, that has become dominant there.
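A minimal feedback-loop sketch of that ‘training’ dynamic, with entirely made-up numbers (the user’s position, the like threshold and the explore/exploit split are illustrative assumptions, not any platform’s actual ranking system): posts carry a political slant on a -1 to +1 axis, the user likes only agreeable posts, and the ranker relearns the user’s profile from the like history.

```python
import numpy as np

rng = np.random.default_rng(1)
user_slant = 0.8    # hypothetical user's position on a -1..+1 political axis
profile = 0.0       # the platform's running estimate of what the user wants
liked = []

for day in range(8):
    candidates = rng.uniform(-1, 1, 500)      # today's pool of posts
    ranked = candidates[np.argsort(np.abs(candidates - profile))]
    # mostly exploit the learned profile, with a little random exploration
    feed = np.concatenate([ranked[:15], rng.choice(candidates, 5)])
    liked.extend(feed[np.abs(feed - user_slant) < 0.3])  # likes go to agreeable posts
    if liked:
        profile = float(np.mean(liked))       # 'retrain' on the like history
    print(f"day {day}: mean feed slant {feed.mean():+.2f} (user sits at {user_slant:+.2f})")
```

Within a few iterations the feed’s mean slant drifts from neutral towards the user’s own position: the bubble is built jointly by the user’s clicks and the ranker’s objective.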

‘The experiment shows that the common-sense notion that the less educated are the most likely to be affected by fake news is a lie. What really matters is a person’s political position, and that is only the case because nowadays almost nobody uses information to update what they know, only to confirm what they already believe,’ Nunes concludes.”

“How Our Ancient Brains Are Coping in the Age of Digital Distraction” [Discover Magazine]

“How Our Ancient Brains Are Coping in the Age of Digital Distraction

Our species invented the internet. Can we handle the consequences?

By Kenneth Miller

April 20, 2020 10:00 AM

https://www.discovermagazine.com/mind/how-our-ancient-brains-are-coping-in-the-age-of-digital-distraction

(…)

In the public arena, online filters generate bubbles that reinforce our preconceptions and amplify our anger. Brandishing tweets like pitchforks, we’re swept into virtual mobs; some of us move on to violence IRL. Our digitally enhanced tribalism upends political norms and sways elections.

(…)

A growing body of research suggests that this conundrum arises from a feature etched into our DNA: our unparalleled hunger to know stuff. “This is an ancient drive that leads to all sorts of complexities in how we interact with the world around us,” says Adam Gazzaley, a neuroscientist at the University of California, San Francisco, and co-author of The Distracted Mind: Ancient Brains in a High-Tech World.

(…)

Yet this wonder’s origins were strikingly humble. About 7 million years ago, hominins — our branch of the primate family tree — began the long transition to walking upright. Bipedalism, or walking on two legs, freed our hands for making and manipulating tools. It also allowed us to walk longer distances, key to our spread beyond Africa’s forests and savannas. “If you look at nonhuman primates, it’s like they have another set of hands down there,” notes Dean Falk, a professor of anthropology at Florida State University and senior scholar at Santa Fe’s School for Advanced Research, who specializes in brain evolution. “When our feet became weight-bearing instruments, that kicked everything off — no pun intended.”

Not that the effects were immediate. More than 3 million years ago, the braincase of Australopithecus afarensis, likely the first fully bipedal hominin, was only slightly larger than a chimpanzee’s. But by the time Homo sapiens emerged at least 300,000 years ago, brain volume had tripled. Our brain-to-body ratio is six times that of other mammals, and the neurons in our cerebral cortex (the brain’s outer layer, responsible for cognition) are more densely packed than those of any other creature on Earth.

In recent years, scientists have identified about two dozen genetic changes that might have helped make our brains not only bigger but incomparably capable. “It’s not just one quantum leap,” says University of Wisconsin-Madison paleoanthropologist John Hawks. “A lot of adaptations are at play, from metabolic regulation to neuron formation to timing of development.” A stretch of gene-regulating DNA called HARE5, for example, differs slightly between chimps and humans; when a team at Duke University introduced both versions into mouse embryos, the ones that got the human type developed brains that were 12 percent larger. Meanwhile, human-specific relatives of a gene called NOTCH2 (the NOTCH2NL genes) increase our production of neural stem cells and delay their maturation into cortical neurons, which may be part of the reason our brains keep growing far longer than those of other primates. The FOXP2 gene, crucial for vocal communication in many species, differs by two amino acids between humans and our nearest living ape relatives. Our variant may help explain why we can talk and chimps can’t.

Our brains were also shaped by external forces, which increased the odds of smarter hominins passing on their genes. Experts debate which factors mattered most. Falk, for one, hypothesizes that the loss of grasping feet was crucial: When infants could no longer cling to their mothers, as nonhuman primates do, the need to soothe them from a distance led to the development of language, which revolutionized our neural organization. Other researchers believe that dietary shifts, such as eating meat or cooking food in general, enabled us to get by with a shorter digestive tract, which freed up more energy for a calorie-hogging brain. Still others credit our cerebral evolution to growing social complexity or intensifying environmental challenges.

What’s clear is that our neural hardware took shape under conditions radically different from those it must contend with today. For millennia, we had to be on the alert for dangerous predators, hostile clans, potential sources of food and shelter — and that was about it. As McGill University neuroscientist Daniel J. Levitin put it in his book The Organized Mind: “Our brains evolved to focus on one thing at a time.”

Our digital devices, by design, make that almost impossible.

Tech vs. Brain

The part of the brain that enables us to make elaborate plans and carry them through — the part, arguably, that makes us most human — is the prefrontal cortex. This region is only slightly larger in H. sapiens than in chimps or gorillas, but its connections with other brain regions are more extensive and intricate. Despite this advanced network, our planning ability is far stronger than our ability to remain focused on a given task.

One reason is that, like all animals, we evolved to switch attention instantly when we sense danger: the snapping twig that might signal an approaching predator, the shadow that could indicate an enemy behind a tree. Our goal-directed, or top-down, mental activities stand little chance against these bottom-up forces of novelty and saliency — stimuli that are unexpected, sudden or dramatic, or that evoke memories of important experiences.

(…)

When the animal finds a ripe mango in the jungle — or solves a problem in the lab — brain cells in what’s called the dopaminergic system light up, creating a sensation of pleasure. These cells also build durable connections with the brain circuits that helped earn the reward. By triggering positive feelings whenever these circuits are activated, the system promotes learning.

Humans, of course, forage for data more voraciously than any other animal. And, like most foragers, we follow instinctive strategies for optimizing our search. Behavioral ecologists who study animals seeking nourishment have developed various models to predict their likely course of action. One of these, the marginal value theorem (MVT), applies to foragers in areas where food is found in patches, with resource-poor areas in between. The MVT can predict, for example, when a squirrel will quit gathering acorns in one tree and move on to the next, based on a formula assessing the costs and benefits of staying put — the number of nuts acquired per minute versus the time required for travel, and so on. Gazzaley sees the digital landscape as a similar environment, in which the patches are sources of information — a website, a smartphone, an email program. He believes an MVT-like formula may govern our online foraging: Each data patch provides diminishing returns over time as we use up information available there, or as we start to worry that better data might be available elsewhere.
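A small numerical sketch of the MVT logic, under assumed values (the gain curve, its parameters and the travel time are all invented for illustration): with a diminishing-returns gain curve, the optimal moment to leave is when the instantaneous intake rate has fallen to the best achievable long-run average rate.

```python
import numpy as np

G, TAU = 100.0, 2.0   # hypothetical patch: asymptotic yield and depletion timescale
TRAVEL = 1.5          # assumed travel time to the next patch

def gain(t):
    # cumulative intake after t minutes in a patch that depletes as you feed
    return G * (1 - np.exp(-t / TAU))

def marginal(t):
    # instantaneous intake rate g'(t): high on arrival, falling as the patch empties
    return (G / TAU) * np.exp(-t / TAU)

# MVT: the optimal residence time maximises the long-run average rate
# gain(t) / (TRAVEL + t); at that optimum the marginal rate equals the average.
ts = np.linspace(0.01, 20, 20000)
avg_rate = gain(ts) / (TRAVEL + ts)
t_opt = ts[np.argmax(avg_rate)]
print(f"leave after {t_opt:.2f} min; marginal rate {marginal(t_opt):.1f} "
      f"~ average rate {avg_rate.max():.1f}")
```

Swap ‘minutes in a patch’ for ‘minutes on a website’ and the same trade-off governs when the expected value of the next tab overtakes the dwindling returns of the current one, which is Gazzaley’s suggestion.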

The call of the next data patch may keep us hopping from Facebook to Twitter to Google to YouTube; it can also interfere with the fulfillment of goals — meeting a work deadline, paying attention in class, connecting face-to-face with a loved one. It does this, Gazzaley says, in two basic ways. One is distraction, which he defines as “pieces of goal-irrelevant information that we either encounter in our external surroundings or generate internally within our own minds.” We try to ignore our phone’s pings and buzzes (or our fear of missing out on the data they signify), only to find our focus undermined by the effort.

The other goal-killer is interruption: We take a break from top-down activity to feed our information munchies. The common term for this is multitasking, which sounds as if we’re accomplishing several things at once — working on the quarterly report, answering client emails, staying on top of the politician’s gaffe count, taking a peek at that aardvark. In truth, it means we’re doing nothing well.

(…)

It also wreaks havoc on working memory, the function that allows us to hold a few key bits of data in our heads just long enough to apply them to a task. Multiple studies have shown that “media multitasking” (the scientific term for toggling between digital data sources) overloads this mental compartment, making us less focused and more prone to mistakes. In 2012, for instance, Canadian researchers found that multitasking on a laptop hindered classroom learning not only for the user but for students sitting nearby. Heavy media multitasking has been associated with diminished cognitive control, higher levels of impulsivity and reduced volume in the anterior cingulate cortex, a brain region linked with error detection and emotional regulation.

Us vs. Them

Emotional regulation is central to another of tech’s disruptive effects on our ancient brains: exacerbation of tribal tendencies. Our distant ancestors lived in small nomadic bands, the basic social unit for most of human history. “Groups that were competing for resources and space didn’t always do so peacefully,” says paleoanthropologist Hawks. “We’re a product of that process.”

These days, many analysts see tribalism asserting itself in the resurgence of nationalist movements worldwide and the sharp rise in political polarization in the U.S., with both trends playing out prominently online. A study published in the American Journal of Political Science in 2015 found that party affiliation had become a basic component of identity for Republicans and Democrats. Social media, which spurs us to publicly declare our passions and convictions, helps fuel what the authors call “the gradual encroachment of party preference into nonpolitical and hitherto personal domains.”

And we’re hardwired to excel at telling “us” from “them.” When we interact with in-group members, a release of dopamine gives us a rush of pleasure, while out-group members may trigger a negative response. Getting online “likes” only intensifies the experience.

Our retreat into tribal mode may also be a reaction to the data explosion that the web has ignited. In 2018, in the journal Perspectives on Psychological Science, psychologist Thomas T. Hills reviewed an array of earlier studies on the proliferation of information. He found that the upsurge in digitally mediated extremism and polarization may be a response to cognitive overload. Amid the onslaught, he suggested, we rely on ingrained biases to decide which data deserve our attention (see “Tribal Tech” sidebar). The result: herd thinking, echo chambers and conspiracy theories. “Finding information that’s consistent with what I already believe makes me a better member of my in-group,” Hills says. “I can go to my allies and say, ‘Look, here’s the evidence that we’re right!’ ”

(…)

For example, when Red Sox and Yankees fans watch their rival team fail to score, even against a third team, they show heightened activity in the ventral striatum, a brain region associated with reward response.

It’s surely no coincidence that during the 2016 presidential election, Russian hackers focused largely on convincing various groups of Americans that another group was out to get them. But foreign agents are hardly the top promoters of tribalism online. As anyone who’s spent time on social media knows, there’s plenty of homegrown schadenfreude on the web.

(…)

Faced with tech’s cognitive overload, humans determine what’s worthy of attention by relying on biases shaped by evolution, says Thomas T. Hills, a professor of psychology at England’s University of Warwick. Those tendencies may have helped our ancestors survive, but they’re not always in our best interests today, Hills says. He identifies four types of “cognitive selection” that fuel digital tribalism.

Selection for belief-consistent information. Also called confirmation bias, it inclines us to prefer data that align with what we already think. In prehistoric times, this might have led people to see a rainstorm as proof of a shaman’s power over the weather — an interpretation that strengthened social cohesion, even if it was wrong. Today, confirmation bias can lead to more consequential errors, such as seeing a cold snap as proof that climate change is a hoax.

Selection for negative information. This tendency, also known as negativity bias, primed our ancestors’ brains to prioritize alertness for predators over other, less threatening types of attention. Today, it can lead us to privilege bad news over good — for example, by taking a single horrific crime by an out-group member more seriously than data showing that the group as a whole is law-abiding.

Selection for predictive information. Pattern-recognition bias, as it’s often called, helps us discern order in chaos. Noticing that large prey animals tended to arrive in the savanna after the first summer rains would have given early humans an evolutionary advantage. Today, however, a predilection for patterns can lead us to detect conspiracies where none exist.

Selection for social information. This “herd bias” prompts us, in uncertain environments, to follow the crowd. Back in the day, “if everyone else in your tribe was running toward the river, they probably had a good reason,” says Hills. But if everyone in your Reddit community says a famous politician is running a child-sex ring from the basement of a pizzeria, well, it would be wise to visit a fact-checking website before making up your mind.”
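One way to make Hills’s four selection pressures concrete is a toy attention scorer that ranks items by weighted bias terms rather than by accuracy. The weights, attributes and items below are invented for illustration; they are not from Hills’s paper.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    belief_consistency: float  # -1 (contradicts me) .. +1 (agrees with me)
    negativity: float          # 0..1, how alarming the item is
    pattern_fit: float         # 0..1, how tidy a story it tells
    crowd_share: float         # 0..1, fraction of my group already sharing it

# one hypothetical weight per selection pressure
WEIGHTS = dict(belief=2.0, negative=1.5, pattern=1.0, herd=1.5)

def attention_score(item: Item) -> float:
    # rank items the way a biased attention system might, not by accuracy
    return (WEIGHTS["belief"] * item.belief_consistency
            + WEIGHTS["negative"] * item.negativity
            + WEIGHTS["pattern"] * item.pattern_fit
            + WEIGHTS["herd"] * item.crowd_share)

feed = [
    Item("Cold snap disproves warming", 0.9, 0.6, 0.8, 0.7),
    Item("Out-group member commits crime", 0.5, 1.0, 0.4, 0.8),
    Item("Boring base-rate statistics", -0.2, 0.1, 0.1, 0.05),
]
for item in sorted(feed, key=attention_score, reverse=True):
    print(f"{attention_score(item):5.2f}  {item.headline}")
```

The dull but informative item lands last not because it is false but because nothing in the scoring function rewards being true.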

“Hydroxychloroquine and the Political Polarization of Science” by Cailin O’Connor & James Owen Weatherall [Boston Review]

“Hydroxychloroquine and the Political Polarization of Science

How a drug became an object lesson in political tribalism.

CAILIN O’CONNOR, JAMES OWEN WEATHERALL

https://bostonreview.net/science-nature-politics/cailin-oconnor-james-owen-weatherall-hydroxychloroquine-and-political

(…)

In particular, people become misinformed because they tend to trust those they identify with, meaning they are more likely to listen to those who share their social and political identities. When public figures such as Donald Trump and Rush Limbaugh make claims about hydroxychloroquine, Republicans are more likely to be swayed, while Democrats are not. The two groups then start sharing different sorts of information about hydroxychloroquine, and stop trusting what they see from the other side.

People also like to conform with those in their social networks. It is often psychologically uncomfortable to disagree with our closest friends and family members. But different clusters or cliques can end up conforming to different claims. Some people fit in by rolling their eyes about hydroxychloroquine, while others fit in by praising Trump for supporting it.

These social factors can lead to belief factions: groups of people who share a number of polarized beliefs. As philosophers of science, we’ve used models to argue that when these factions form, there need not be any underlying logic to the beliefs that get lumped together. Beliefs about the safety of gun ownership, for example, can start to correlate with beliefs about whether there were weapons of mass destruction in Iraq. When this happens, beliefs can become signals of group membership—even for something as dangerous as an emerging pandemic. One person might show which tribe they belong to by sewing their own face mask. Another by throwing a barbecue, despite stay-at-home orders.

And yet another might signal group membership by posting a screed about hydroxychloroquine. There is nothing about hydroxychloroquine in particular that makes it a natural talking point for Republicans. It could just as easily have been remdesivir, or one of a half-dozen other potential miracle drugs, that was picked up by Fox News, and then by Trump. The process by which Trump settled on hydroxychloroquine was essentially random—and yet, once he began touting it, it became associated with political identity in just the way we have described. (That is not to say that Trump and his media defenders were not on the lookout for an easy out from a growing crisis. Political leaders around the world would love to see this all disappear, irrespective of ideology.)
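The authors’ published models are richer than this, but a toy version conveys the factioning mechanism they describe. In the sketch below everything (the trust probabilities, the copy rule, the step count) is an illustrative assumption, not the authors’ actual model: agents hold positions on two logically unrelated yes/no issues and mostly listen to speakers who already agree with them on the other issue.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
beliefs = rng.integers(0, 2, size=(N, 2))  # two logically unrelated yes/no issues

def profile_counts():
    # how many agents hold each of the four possible belief combinations
    return np.bincount(beliefs[:, 0] * 2 + beliefs[:, 1], minlength=4)

print("belief packages before:", profile_counts())
for _ in range(30000):
    i, j = rng.integers(N, size=2)         # listener and speaker
    k = rng.integers(2)                    # the issue under discussion
    agrees_elsewhere = beliefs[i, 1 - k] == beliefs[j, 1 - k]
    # trust is much higher when the speaker already agrees on the *other* issue
    if rng.random() < (0.9 if agrees_elsewhere else 0.1):
        beliefs[i, k] = beliefs[j, k]
print("belief packages after: ", profile_counts())
```

Typical runs end with most agents concentrated in one or two opposed belief packages, even though the two issues were statistically independent at the start: correlation between beliefs needs no logical connection, only identity-weighted trust.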

(…)

This bias toward extremes means that once opposing camps have formed, there is a lot of fodder for each side to appeal to as evidence of bias. Furthermore, with COVID-19, it is often the case that the different groups only trust one of the extremes. Extremity bias can thus amplify polarization, especially in an already factionalized environment.

The end result is that even without misinformation, or with relatively little of it, we can end up misinformed. And misinformed decision makers—from patients, to physicians, to public health experts and politicians—will not be able to act judiciously. In the present crisis, this is a matter of life and death.”

“Why Republicans Are Less Likely to View the Coronavirus as a Serious Threat” By Nigel Barber [The Human Beast/Psychology Today]

“Why Republicans Are Less Likely to View the Coronavirus as a Serious Threat

Survey results demonstrate that the two parties view the pandemic differently.

Nigel Barber Ph.D.
The Human Beast

https://www.psychologytoday.com/gb/blog/the-human-beast/202003/why-republicans-are-less-likely-view-the-coronavirus-serious-threat

Political conservatives perceive disease as more of a threat and are more fearful of dirt and contamination in a variety of contexts, from using public restrooms to eating unfamiliar foods (1). They have greater disgust sensitivity. This phenomenon is strikingly illustrated by the finding that conservatives are four times more likely than liberals to have a mudroom in their homes (2).

Conservatives manifest a high degree of submission to authority figures such as the head of state. They are deferential towards authoritarian leaders who tell them what they want to hear, according to research on Right-Wing Authoritarianism, a personality trait highly correlated with political conservatism (1). Perhaps the tendency to credit the views of authority figures in this instance is stronger than the fear of infection.

The fact that this is a new threat may also be significant because conservatives are more closed to new experiences as they adhere to long-established social conventions (1).

The coronavirus may also be interpreted differently by Republicans and Democrats because they belong to different demographic groups. Republicans tend to be older and more rural, groups that may be less receptive to information about novel threats. The coronavirus is also more likely to strike in cities because they are travel hubs and reservoirs of infection.

But it is hard to avoid two probable explanations. The first is that liberals and conservatives are exposed to differing information pools. This is often because their social media news feeds, particularly on popular sites such as Facebook or Twitter, serve them the sort of news they enjoy reading.”

“3 Reasons for the Rise of Fake News | Cailin O’Connor explains the shift in American politics” By Walter Veit [Science and Philosophy, Psychology Today]

“3 Reasons for the Rise of Fake News

Cailin O’Connor explains the shift in American politics.

By Walter Veit

https://medium.com/science-and-philosophy/3-reasons-for-the-rise-of-fake-news-f0095c652533

Walter Veit: You recently published The Misinformation Age together with your husband and fellow philosopher James Owen Weatherall. What motivated you to write this book?

Cailin O’Connor: Around the time of the Brexit vote and the 2016 election in the US, I was working on several projects in formal social epistemology — using models to represent scientific communities. Social epistemology puts a big emphasis on the importance of social connections to knowledge creation. At the same time, we were seeing some serious issues related to public misinformation through social media. Many responses to this misinformation seemed to focus on the role of individual psychology and reasoning in the spread of false belief. For instance, confirmation bias, where individuals trust evidence that supports an already-held belief, is obviously relevant. But we think that understanding social ties and behavior is even more important to understanding false belief. For that reason, we wanted to bring some of the most important lessons from social epistemology, and from models of scientific knowledge, to bear on these social problems.

Walter Veit: How do you explain that despite all the evidence, demonstrably false beliefs are able to spread and persist?

Cailin O’Connor: There are many reasons that false beliefs spread, often in spite of good evidence refuting them. One reason is that we all are used to trusting other humans as sources of information. This is, to some degree, a necessity. We certainly cannot go do the work ourselves to guarantee that all our beliefs are good ones. Even when we look to scientific journals for evidence supporting our beliefs, we are ultimately trusting others (the scientists who share their data). And sometimes even these good sources lead us astray. The social sharing of data is powerful, but always opens the possibility that falsity can spread. In addition, there are various social biases that can make us more or less likely to share false beliefs. For example, in our book, we talk about the role of conformity bias — when individuals want to conform their actions or beliefs to their peers — in sometimes preventing the spread of useful or accurate knowledge. Our heuristics for social trust, such as placing more trust in those who are more similar to ourselves, or who share our beliefs, can mislead.
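A rough sketch of the conformity point, loosely in the spirit of the network-epistemology models O’Connor mentions; the payoffs, update rule and parameters below are invented assumptions, not the book’s actual models. Agents choose between a familiar action with a known success rate and an objectively better but initially rare one, and each round either follow the pooled evidence or simply copy the majority.

```python
import numpy as np

rng = np.random.default_rng(3)

def run(conformity, n=20, rounds=50, p_a=0.5, p_b=0.6):
    # Action A has a known success rate p_a; action B is genuinely better (p_b),
    # but only a few agents try it at the start.
    plays_b = np.zeros(n, dtype=bool)
    plays_b[:3] = True
    successes = trials = 0
    est_b = 0.5                                # shared empirical estimate of B
    for _ in range(rounds):
        k = int(plays_b.sum())
        if k:                                  # B-players generate public evidence
            successes += rng.binomial(100 * k, p_b)
            trials += 100 * k
            est_b = successes / trials
        majority_plays_b = plays_b.mean() > 0.5
        for i in range(n):
            if rng.random() < conformity:
                plays_b[i] = majority_plays_b  # copy the crowd, ignore the data
            else:
                plays_b[i] = est_b > p_a       # follow the pooled evidence
    return plays_b.mean()

for c in (0.0, 0.5, 0.95):
    print(f"conformity {c:.2f} -> share ending on the better action: {run(c):.2f}")
```

With conformity near zero the good evidence wins quickly; with conformity high, the better action stays trapped in a minority even though the data favouring it are public, which is the sense in which conformity can block accurate knowledge.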

(…)

This interview originally appeared in Psychology Today [Apr 17, 2019]”

“The dark side of social movements: Social identity, non-conformity, and the lure of conspiracy theories” – Anni Sternisko, Aleksandra Cichocka & Jay J. Van Bavel [Current Opinion in Psychology]

“The dark side of social movements: Social identity, non-conformity, and the lure of conspiracy theories

Anni Sternisko
Aleksandra Cichocka
Jay J. Van Bavel

Current Opinion in Psychology

Available online 21 February 2020

https://www.sciencedirect.com/science/article/pii/S2352250X20300245?via%3Dihub

Highlights

• Conspiracy theories claim that a powerful group is secretly pursuing an evil goal.

• Conspiracy theories can foster anti-democratic social movements.

• Conspiracy theories attract people with both their content and qualities.

• Content and qualities appeal to people differently based on their motivations.

Social change does not always equal social progress: there is a dark side to social movements. We discuss conspiracy theory beliefs (beliefs that a powerful group of people is secretly working towards a malicious goal) as one contributor to destructive social movements. Research has linked conspiracy theory beliefs to anti-democratic attitudes, prejudice and non-normative political behavior. We propose a framework for understanding the motivational processes behind conspiracy theories and the social identities and collective action associated with them. We argue that conspiracy theories comprise at least two components, content and qualities, that appeal to people differently based on their motivations. Social identity motives draw people foremost to the contents of conspiracy theories, while uniqueness motives draw people to the qualities of conspiracy theories.

(…)

What motivates social movements that threaten social health, economic prosperity, and democratic principles? We argue that conspiracy theories (theories that a powerful group of people is secretly working towards a malevolent or unlawful goal [8**]) can be one reason. Though not all conspiracy theories are wrong, irrational, or harmful to society, many of them are in fact closely intertwined with some of today’s most powerful, destructive social movements.

(…)

Recent reviews [30,8**] distilled three main motivators behind conspiracy theory beliefs: conspiracy beliefs are higher when people want to (1) feel good about themselves and the groups they belong to [31,32, 21], (2) make sense of their environment [33–35], or (3) feel safe and in control [36–38].

(…)

Conspiracy theories can be understood as a genre of belief systems defined by certain qualities; each individual conspiracy theory is like a film with its own unique content. Content refers to the unique narrative elements of each conspiracy theory. While all conspiracy theories share the premise that a nefarious group is secretly working towards a malicious or unlawful goal, individual conspiracy theories vary in the specific group accused (e.g., the Illuminati; the government), the goal pursued (e.g., a New World Order, war) and the events explained (e.g., the 2008 financial crisis, the 9/11 terrorist attacks). This parallels the specific contents that draw people to particular films, such as a favorite actor.

(…)

… the belief in a flat Earth might emerge primarily from the psychological benefits of holding contrarian beliefs rather than from compelling physical arguments. This is consistent with findings that participants who believed in one conspiracy theory were also more likely to believe in others, even when they were contradictory [42, 43]. We illustrate our argument by discussing two motives behind conspiracy theory beliefs in more detail: social identity motives and uniqueness motives.

2.1. Content-drawn motives: Social identity motives

People are prone to form social identities in which group membership becomes part of the self. Social identities are connected with different motives including the need to hold positive beliefs about ingroups and negative beliefs about outgroups [44]. We argue that these motives draw people primarily to certain contents of conspiracy theories.

(…)

In these cases, conspiracy theory beliefs overlap greatly, psychologically, with other kinds of false beliefs and can be explained by related psychological models. For instance, in line with the identity-based model of political beliefs [46**], social identity motives increased participants’ likelihood of believing fake news that represented their own political party as moral [47]. Likewise, participants were more likely to believe conspiracy theories that aligned with their party’s political stances and vilified the opposing party [39–41,48,49,50]. Sometimes people may be drawn to conspiracy theories predominantly because their content allows them to legitimize and reinforce pre-existing beliefs and attitudes.

(…)

Indeed, research suggests that people who believe in their group’s superiority but are anxious about its recognition are drawn to conspiracy theories about outgroup members [21, see also 22,23*].

(…)

For instance, Republicans are more likely than Democrats to endorse QAnon – the far-right theory that a ‘deep state’ is conspiring against President Trump [53]. In contrast, Democrats are more likely than Republicans to believe that the 9/11 terrorist attacks were an inside job [54]. These differences might emerge from motivations to defend one’s ingroup from external threats and to represent outgroups as morally inferior. Together with evidence that conspiracy theories implicating outgroups can fuel prejudice, discrimination, and inter-group hostility [23,25–29], social identity motives might foster a vicious cycle in which conspiracy theories intensify inter-group conflict and inter-group conflict fosters conspiracy theories.”

“Partisanship predicts belief in fake news more strongly than conspiracy mentality, study finds” [PsyPost]

“Partisanship predicts belief in fake news more strongly than conspiracy mentality, study finds

By ERIC W. DOLAN February 6, 2020

https://www.psypost.org/2020/02/partisanship-predicts-belief-in-fake-news-more-strongly-than-conspiracy-mentality-study-finds-55464

(…)

Supporters of the current Hungarian prime minister, Viktor Orbán, were more likely to rate the pro-government fake news as coming from an independent source and were more likely to believe that the pro-government fake news was real. But Orbán supporters were less likely to view the anti-government fake news as real.

Opponents of Orbán’s government, on the other hand, were more likely to rate the anti-government fake news as coming from an independent source and were more likely to believe that the anti-government fake news was real, but were more skeptical of the pro-government fake news.

Conspiracy mentality, a measure of one’s propensity to endorse conspiracy theories, was only weakly linked to belief in anti-government fake news.

“Despite fake news and conspiracy theories often being mentioned interchangeably, our research revealed that they do not necessarily overlap. We focused on wish-fulfilling political fake news, which was unrelated to the general mentality to believe in conspiracy theories. Therefore, our research suggests that pipedream fake news is processed like any other information,” Faragó explained.

(…)

“The perception of the source is also important in the evaluation process: if the news is consistent with our beliefs, we are more likely to think that it was written by an independent journalist, but if it contradicts our viewpoint, we assume that it is biased and part of political propaganda,” Faragó added.

“When we read news, the satisfaction with the economic situation is also an important factor: if we are satisfied with the economy and the political management, we will trust pro-government news more, even if it is fake, and regard opposition news as political propaganda.”

“Therefore, we should read news that comes from our own side even more critically,” Faragó said.”

***

We only believe in news that we doctored ourselves: The connection between partisanship and political fake news.

Faragó, Laura; Kende, Anna; Krekó, Péter

https://psycnet.apa.org/record/2019-57441-001

Abstract

In this research we aimed to explore the importance of partisanship behind belief in wish-fulfilling political fake news. We tested the role of political orientation, partisanship, and conspiracy mentality in the acceptance of pro- and anti-government pipedream fake news. Using a representative survey (N = 1,000) and a student sample (N = 382) in Hungary, we found that partisanship predicted belief in political fake news more strongly than conspiracy mentality did, and that these connections were mediated by the perceived credibility of the source (independent journalism vs. political propaganda) and by economic sentiment. Our findings suggest that political bias can be more important than conspiracy mentality in predicting the acceptance of pipedream political fake news.
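The mediation claim in the abstract (‘connections were mediated by the perceived credibility of the source’) can be illustrated with simulated data; the coefficients and noise levels below are invented, and this is a bare-bones illustration of mediation in general, not the authors’ analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000  # echoes the survey's sample size, purely for flavour

# simulated data with the hypothesised structure:
# partisanship -> perceived source credibility -> belief in fake news
partisanship = rng.normal(size=n)
credibility = 0.7 * partisanship + rng.normal(scale=0.7, size=n)
belief = 0.6 * credibility + 0.1 * partisanship + rng.normal(scale=0.7, size=n)

def ols(y, *xs):
    # least-squares coefficients, intercept first
    X = np.column_stack([np.ones(n), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(belief, partisanship)[1]                # effect ignoring the mediator
direct = ols(belief, partisanship, credibility)[1]  # effect controlling for it
print(f"total effect {total:.2f} vs direct effect {direct:.2f}; "
      f"the gap is the share routed through perceived credibility")
```

Regressing belief on partisanship with and without the mediator shows the total effect shrinking toward the direct effect, which is what ‘mediated by perceived credibility’ means operationally.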