
Why facts don’t change our minds

by Евгений Волков -
Number of replies: 4

Why facts don’t change our minds

New discoveries about the human mind show the limitations of reason.

The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight.
Illustration by Gérard DuBois

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.

Cartoon: “Thanks again for coming — I usually find these office parties rather awkward.”

This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. 

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”

This article appears in the print edition of the February 27, 2017, issue, with the headline “That’s What You Think.”


In reply to Евгений Волков

The Enigma of Reason. Hugo Mercier, Dan Sperber

by Евгений Волков -

http://www.hup.harvard.edu/catalog.php?isbn=9780674368309&content=toc

The Enigma of Reason. Hugo Mercier, Dan Sperber

Reason, we are told, is what makes us human, the source of our knowledge and wisdom. If reason is so useful, why didn’t it also evolve in other animals? If reason is that reliable, why do we produce so much thoroughly reasoned nonsense? In their groundbreaking account of the evolution and workings of reason, Hugo Mercier and Dan Sperber set out to solve this double enigma. Reason, they argue with a compelling mix of real-life and experimental evidence, is not geared to solitary use, to arriving at better beliefs and decisions on our own. What reason does, rather, is help us justify our beliefs and actions to others, convince them through argumentation, and evaluate the justifications and arguments that others address to us.

In other words, reason helps humans better exploit their uniquely rich social environment. This interactionist interpretation explains why reason may have evolved and how it fits with other cognitive mechanisms. It makes sense of strengths and weaknesses that have long puzzled philosophers and psychologists—why reason is biased in favor of what we already believe, why it may lead to terrible ideas and yet is indispensable to spreading good ones.

Ambitious, provocative, and entertaining, The Enigma of Reason will spark debate among psychologists and philosophers, and make many reasonable people rethink their own thinking.


In reply to Евгений Волков

The Function of Reason / A Conversation With Dan Sperber

by Евгений Волков -
 


Contrary to the standard view of reason as a capacity that enhances the individual in his or her cognitive capacities—the standard image is of Rodin’s "Thinker," thinking on his own and discovering new ideas—what we say now is that the basic functions of reason are social. They have to do with the fact that we interact with each other’s bodies and with each other’s minds. And to interact with others’ minds is to be able to represent a representation that others have, and to have them represent our representations, and also to act on the representations of others and, in some cases, let others act on our own representations.

The kind of achievements that are often cited as the proof that reason is so superior, like scientific achievements, are not achievements of individual minds, not achievements of individual reason; they are collective achievements—typically a product of social interaction over generations. They are social, cultural products, where many minds had to interact in complex ways and progressively explore a lot of directions, which they hit on not because some were more reasonable than others, but because some were luckier than others in what they hit. And then they used their reason to defend what they hit on by luck. Reason is a remarkable cognitive capacity, as are so many cognitive capacities in humans and animals, but it’s not a superpower.

DAN SPERBER is a Paris-based social and cognitive scientist. He holds an emeritus research professorship at the French Centre National de la Recherche Scientifique (CNRS), Paris, and he is currently at Central European University, Budapest. He is the creator (with Deirdre Wilson) of "Relevance Theory," and coauthor (with Hugo Mercier) of The Enigma of Reason.

 

THE FUNCTION OF REASON

The general question I've been living with is how do we go about getting a better scientific grip on everything social? The social sciences have developed away from the natural sciences, even with some bit of hostility toward natural sciences, and that, I believe, is a source of poverty. If we want to have a more ambitious understanding of how social life functions, of the mechanisms involved, the challenge is to achieve continuity with neighboring natural sciences. The obvious neighbors to begin with are cognitive neuroscience, ecology, biology, and others.

I started as a social scientist. I started as an anthropologist doing fieldwork in a small group of people in the south of Ethiopia, asking myself fairly standard anthropological questions.

I was in this tribe in the south of Ethiopia, studying rituals—sacrifices and divinations. They had a fairly rich ritual life with lots of symbols and so on, and I would keep asking them, “What is the meaning of the symbols you’re using? What are the reasons for why you do this ritual the way you do?” And I never got a satisfactory answer, or so I thought. When asked about the meaning they said, “We do it because that’s what our fathers did, and our forefathers.” That was always the answer: “We do it because that’s the way we’ve always done it.” I was very frustrated by this and went looking for possibly better informants—an older member of the society, a “wise man,” or whatever—who would know more, but I never found them.   

One morning I woke up after having dreamt about all that fairly intensely, and in the dream I was telling myself, “You’re not paying attention. Listen to what they’re saying. Maybe what they’re saying is exactly right. Maybe these symbols don’t have meaning. Maybe their job is not to convey meaning. Maybe the reason why the people do these things is because of the force of precedent, because indeed they’ve done it all the time.”                                 

I’d been there quite a few months, but I was so agitated by this that I flew back to Paris and started working on a book, which came out some years later, in 1975, called Rethinking Symbolism. In it, I argued that cultural symbolism is not in the job of conveying meanings. If you want to convey meanings, there are much better means to do that—as we're doing now, for instance, speaking. There's a high investment that’s involved in ritual symbols, the interpretation of which is always uncertain or vague. There are people in some societies who tell you “this means that,” but their answers themselves are mysterious and call for further interpretation. That can’t be the reason why they do it. That can’t be the function of these symbols and rituals.                                 

My argument was that cultural symbolism has more of a cognitive function. What these cultural symbols do achieve is focus attention in certain directions. Rather than "mean" something, they evoke many things. They create a certain commonality of orientation, interests, and values among people without having any signification, properly speaking. That got me into cognitive science, which was very much at the beginning. The official beginning was in the late ‘50s, early ‘60s, but at that time there was little cognitive science, especially compared to what came out after, and practically none of it was about higher cognitive processes. Issues concerning the meaning of symbols were nowhere near the questions that people doing cognitive psychology at the time had in mind.                                 

I got involved in cognitive science fairly intensely. I’ve been involved also in the study of human communication, particularly language. Part of the reason is that I developed my ideas in France, where Levi-Strauss was such a big influence. I was never a Levi-Straussian, but he was still the most interesting anthropologist around. He was always insisting that language, linguistics, provided the model for the study of culture, for the study of social science. It was the heyday of semiology or semiotics, the notion that a conceptual framework would unify all these human sciences and possibly even go beyond that.                                 

I studied linguistics quite a bit, and I came across Chomsky when I was a student at Oxford in the mid-‘60s. For me, that broke down the Levi-Straussian view because, on the one hand, Chomsky’s work on language was much more impressive than Saussure or even Jakobson—the classical structuralist work. On the other hand, the kind of model that Chomsky was proposing, generative grammar or transformational grammar as it was called at the time, was quite specific to language. The notion that you find in Saussure that the model of language can be exported to talk about culture, about music, about art, about anything, didn’t make sense anymore in the case of language because the Chomskyan approach was clearly focused on the peculiarities of human language.  

The whole idea that if you wanted to have a unified understanding of human communication, human culture, human language, all you needed was a framework provided by structural linguistics, that was out. Then either you could decide to become a student of syntax, or, say, do ethnography in South Ethiopia, or else, if you had the wide ambition that I had, then you had to go back to the drawing board and rethink very basic issues.

Also, I got involved in linguistics. The doubts I had about using a simple semiotic model of cultural symbolism—which, in my work, extended to a study of metaphors and symbolism in language—caused me to interact with people in linguistics and philosophy of language who were interested in forms of comprehension that went beyond semantics, beyond just getting the meaning of words as they may be described in a formal system, to understand how words are being used in a given context. This field developed under the name of pragmatics, and was very much influenced by the English philosopher Paul Grice.                                 

I worked on this new approach to not just linguistic communication, but communication in general with an English friend and colleague Deirdre Wilson, who was a linguist who had studied at MIT with Chomsky and others. We developed a new approach to pragmatics, which was squarely grounded in cognitive science, in cognitive psychology.

A major challenge for human cognition is this: Humans have the ability to process a very wide range of information through their senses and through the conceptual framework they can bring to bear on monitoring their environment. Plus, they get all this information from communication with others. Plus, they have all this information in memory. Now what you have is a glut. You have too much information. This was true way before the Internet and its constant soliciting of your attention. It’s true for prehistoric man in a traditional environment. We have a capacity to monitor many more things than we can process in an intensive manner, than we can attend to.

A crucial issue for cognitive efficiency in humans, then, is to decide which of all the information that is competing for your attentional resources, both from the environment and from your memory, should be prioritized. Which background information should be brought to bear on which new information in order to get the most efficient processing of information?

We developed what we called relevance theory, arguing that human cognition is geared towards the maximizing of relevance of the input that it processes. This, we argued, has a consequence for communication. When we communicate intentionally—I’m talking to you, requesting your attention—your attention spontaneously goes to what’s relevant to you, more relevant than the competition at this moment. When I try to get your attention through communication, I’m conveying that I assume that what I’m trying to communicate is worth your attention and is more relevant than anything else you could attend to at this very minute. And this, we argued, determines how you interpret in context what is meant by the words that are being used. So the right interpretation is the interpretation that will tend to confirm this expectation of relevance that every utterance raises about itself.                                 
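To make that prioritization problem concrete, here is a minimal sketch (my illustration, not anything from the interview) of the comparative definition used in relevance theory: an input is more relevant the greater its cognitive effects and the smaller the processing effort it demands. All the candidate inputs and numeric scores below are invented for the example.

```python
# Toy model of relevance-driven attention, assuming relevance theory's
# comparative definition: relevance rises with cognitive effects and
# falls with processing effort. Scores here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    description: str
    cognitive_effects: float   # how much the input would usefully revise beliefs
    processing_effort: float   # attention/memory cost of processing it

def relevance(item: Candidate) -> float:
    # A simple ratio stands in for "more effects, less effort => more relevant".
    return item.cognitive_effects / item.processing_effort

# Inputs competing for attention: from perception, memory, and communication.
candidates = [
    Candidate("background hum of the street", 0.1, 1.0),
    Candidate("a remembered appointment later today", 2.0, 1.0),
    Candidate("a friend starting to speak to you", 4.5, 1.5),
]

# Attention goes to whichever input maximizes expected relevance.
best = max(candidates, key=relevance)
print(best.description)   # -> "a friend starting to speak to you"
```

On this picture, an utterance tends to win the competition precisely because addressing someone carries a tacit promise that what is said will be worth the effort of processing it.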

On this basis, we developed a view on which it’s not that your linguistic utterances have a literal meaning, that when you use them you use them to convey this literal meaning, and that you can then depart from this literal meaning for a rhetorical purpose with metaphor, irony, or implicit content. No. Quite generally, whether you speak literally, or metaphorically, or ironically, or whatever, your words are not an encoding of your meaning; they are a piece of evidence from which your meaning has to be inferred. That meaning and these inferences are guided by this expectation of relevance, as I was mentioning before. That meaning can go from very specific—for instance, you ask me what the time is and I look at my watch and tell you it’s 6:15—to very vague meanings. And this can also be expressed by behavior, by gesture, and indeed by cultural symbols, where you convey that relevance will be achieved by orienting in a certain direction, by looking at certain things rather than others, by approaching them with a certain kind of expectation. There’s a continuum of cases between precise meanings that you can paraphrase and much vaguer effects, which boil down to a mix of focalization and evocation.

Anthropological fieldwork is a nice job, a nice métier; I liked doing it. I like the company of fellow anthropologists; they are people who have an extraordinary curiosity and are willing to talk about lots of subjects. They are willing to spend hours listening to people who work in some small group in the Amazon or in Polynesia, who have studied some weird local practices. I like this kind of curiosity.                                 

On the other hand, anthropologists each specialize in their own fieldwork. They’ve invested years in something that they cannot properly share. I might talk for two hours about my fieldwork; I spent years there. In a way, it’s quite solitary work. When you’re in the field you’re with people all the time, but it’s solitary in terms of sharing what you’ve learned.

I was working in a small group of farmers and weavers in the south of Ethiopia—typical anthropological fieldwork up in the mountains. Anthropologists nowadays work in all kinds of societies, but the traditional fieldwork of anthropologists was in a small group with a very traditional culture, often with very simple technology, often without writing. These are all very interesting groups to study, and they all have a share of human experience that is rapidly vanishing. If only because of that, they are worth trying to document and to understand.

I was, however, more interested in theoretical issues. Most anthropologists have very little interest or even patience for theory. I was also more personally attracted to cooperative work—discussing with others, doing joint work. The work I have been talking about, on linguistic communication with Deirdre Wilson, involved working with somebody else. We discussed endlessly and that was great. You don’t get that when you do anthropological fieldwork. Then I got involved in experimental and cognitive psychology. Again, doing experimental work has a great quality in that you work with collaborators, you jointly do experiments, you get results as evidence, which may go in favor or against the kind of hypothesis you had.

A number of issues regarding what’s common to all systems of communication had occupied many people in the early and mid-20th century, from the linguistics of Saussure to the cybernetics of Wiener. All this was before the cognitive revolution. If any psychology was involved at all, it was shallow. We, on the other hand, came after the cognitive revolution. We could take advantage of this much richer understanding of human psychology and of the mechanisms involved.

Part of the origin of cognitive psychology, of course, was the same as that of programs for the development of intelligent machines: the discovery of the Turing-Church Thesis, and the idea that you could have precise mechanisms and machines that processed information. This led to a much richer way of asking questions that had been asked before. We could start thinking about communication, linguistic communication, or cultural communication, or rituals (if rituals are in the business of communication, which they are only to a certain degree) within a richer framework, asking more precise questions about the mental cognitive mechanisms that are at work. That was the scientific basis of our work.

I worked on and off in Ethiopia between ’69 and ’75. I did a regular stint of a field anthropologist. I spent a bit less than two years in the field, in the tribe altogether. But then I just got completely involved in more theoretical issues. I stayed with one foot in the anthropological community, but I was really interested in the more dynamic discussions about language, communication, the naturalistic approach, the evolutionary approach to culture and to cognition. All that was, to me, so much more intellectually exciting.                                 

For a long time I did follow very closely work in linguistics. I was very influenced by Chomsky. I’ve been, however, doing so many other things that I stopped following central issues in linguistics a long time ago. So, if you ask me today whether I agree or disagree with Chomsky’s current view of syntax, I don’t even have the competence to answer that, but I’m very impressed by what he has done. Of course, he’s not an anthropologist and he didn’t do fieldwork. He did study more than one language, but that was not his point. He’s been incredibly important. He’s changed the field completely. Even people who are extremely hostile to him in the field have been at least indirectly massively influenced by his work.                                 

Part of the intellectual excitement of Chomsky is that he was asking pretty fundamental questions. He related issues about certain constructions in the syntax of English to issues of what made human beings capable of acquiring language. He made the technical issues in linguistics relevant to general theoretical issues, and general theoretical issues relevant to the study of particular cases. It was intellectually extremely stimulating. For me, the most important intellectual encounter has been the one with Chomsky.                                 

Initially, I started with this interest in society and culture, in collective things. The classical view is that culture is, very simply, that which is transmitted in a population by non-genetic means: by communication, imitation, and all forms of interaction. In the human case, imitation is an important factor, but one whose importance has been overplayed. Humans imitate better than any other animal (except maybe parrots, but parrots have a narrow range of things that they imitate).

We humans are good imitators, but, more importantly, we’re great communicators. We transmit much more via communication than we do by imitation. Communication is the vector through which culture develops, is transmitted, builds, and evolves, more than anything else. The reason why I studied communication with Deirdre Wilson and did all this work on relevance theory is because I saw communication as a building block, as a crucial ingredient for understanding society and culture—which was also the idea of Levi-Strauss and others. But they thought they understood what communication was; communication was what Saussure’s structuralist model said it was. I thought that was wrong. We really had to rethink communication quite radically. But my goal in doing that was to understand society and culture, not to understand language per se, (though I'm interested in that, too).                                 

How do you move, first, from individual cognition to the interaction between typically two individuals who might be involved in communication? And then how do you scale up to what happens at the scale of populations, of human groups? In those days the social sciences were completely divorced from the cognitive sciences (which were not even called cognitive sciences). It is still true to a large extent, but much less than it was then. I thought that a bridge could be built between the social sciences and the emerging cognitive sciences. This would give us greater insights and greater tools for understanding the social, and to establish this continuity between the natural and the social sciences, which I thought was essential to improving the social sciences themselves and better understanding the world in which we live. In this work, a better understanding of communication played a fairly central role.                                 

How do you move from communication in the ordinary sense to cultural transmission? This is a challenging question. And, on this, my mind was going at the time—we’re talking about the ‘70s—in the same direction in which Richard Dawkins was going when he started talking about memetics. One day an English friend of mine brought me an issue of the New Scientist where there was a long essay by Richard Dawkins, which was, in fact, the last chapter of The Selfish Gene. The book hadn’t been published yet, so he was selling memetics before selling The Selfish Gene. I found it illuminating in many ways. I’d been arguing, in a much more modest and vague manner, for similar ideas. Nevertheless, I had some reservations about Dawkins’s approach.

What I found really exciting there was an idea I was also arguing for at the time. Namely, to explain the success of bits of culture, of practices, of rituals, of techniques, of ideologies, and so on, the question was not how do they benefit the population in which they evolve; the question was how do they benefit their own propagation? Dawkins was saying that much better than I could have done at the time. This was exactly right, I thought. You don’t need to explain the success of social, cultural practices by assuming that they owe this success to the benefit that they bring to the population in which they evolve. It’s only marginally that cultural practices benefit themselves by benefitting the population in which they evolve. Helping their carriers is one way in which bits of culture can benefit themselves. But there are lots of other ways.                                 

It was because I was involved in a fairly detailed study of how human communication works that I was struck by the fact that communication is not a replication system. When I communicate to you, you don’t get in your mind a copy of my meaning. You’ll transform it into something else. You extract from it what’s relevant to you. It involves both understanding and misunderstanding. But even if you’re understanding me perfectly, your goal will not be to have a copy of what was in my mind, it will be to extract from it some thoughts of yours which will have been usefully informed by mine, but which will be relevant to you.                                 

In Dawkins’s memetics, replication was a crucial element. The idea was that you could generalize the Darwinian model of selection to all kinds of replicators. Memes were cultural replicators competing with one another for space in our minds and in our social interactions, and therefore the objects of a process of selection. What seemed wrong to me was the idea that information in human transmission replicates. It doesn’t. You get this paradox of evolutionary approaches to culture, which takes its extreme form in Dawkins’s memetics. Dawkins has a kind of clarity of extremist views; I admire that.

Dawkins’s memetics is such a simple and clear idea, so what I think to be a problem with it is also more apparent. The same problem arises with most evolutionary approaches to culture. The problem, or the paradox, is that if you look at cultures, what you see is quite a bit of stability: The same words are being used more or less in the same sense for generations; the evolution of word meanings or word phonology is very slow; the same tales are being told to children one generation after the other; the same recipes are being cooked; the same laws are being followed, interpreted, and employed. So many aspects of culture seem to involve repetition again and again. How can things stay so stable? It has to be that they are being reproduced quite faithfully. You need high fidelity replication or reproduction to explain the stability of culture. Or so it seems.

Suppose that instead of looking at cultural phenomena generally you look at the micro mechanisms of transmission: communication, imitation, and so on. What happens when you teach something to somebody? What you see is that, yes, humans are good at imitation. They’re good at communication. They’re better than any other animal species we know, but this, however, hardly ever results in replication.

When you communicate orally, people don’t copy in their mind what you have told them, they extract something from it. If you see a friend who has a great recipe for apple pie and you “imitate it,” you don’t really copy it. You look at it and you extract from it a way to do it your own way. There's a loss of information at every step, which is quite significant. But even to talk about a “loss of information” assumes that the goal was to replicate. Once you understand that the goal is to extract something that’s useful to the learner, to the imitator, to the addressee of communication, then it’s not loss of information; it’s a constructive use. You construct with what others provide you something that you want. And so, in fact, you rarely replicate.                                 

So how can you have this macro stability of cultural things with this micro failure to replicate? There's got to be fidelity in copying, hasn't there? You look closely and, no! I only know two very clear cases of people who would copy faithfully. One is forging money, where the forger tries to copy the dollar bill exactly. And the other is a chorus line on 42nd Street. The rest of human interaction involves a lot of coordination, but very little copying, strictly speaking.                                 

At that point there’s got to be something wrong with the idea, which is still very widespread, that what makes culture possible is high-fidelity copying. And again, this is an idea that is at the center, in particular, of Dawkins’s memetics. I’ve been arguing for a long time not just for what I think is the plain observation that, in fact, high fidelity is not common at all—and a lot of things are culturally transmitted without being copied in a faithful manner—but also for a positive account of what’s happening.

Fidelity is not the only way to ensure stability. You can have stability in a population not just by faithfully copying, but also if the transformations that everybody produces at each step—again, each person is looking for what’s relevant to them—if these transformations converge, if you have what I can call a “cultural attractor” … let me give you an example. Think of the word “love.” Love is a very successful word in the English language used every day millions of times, billions of times. Each time the meaning is a bit different. The lover says, “I love you.” What does she mean exactly? Does she mean what you mean by it? Does she mean the same thing she meant yesterday? There are a whole variety of uses. You can copy the sound of the word “love,” but you cannot copy the meaning. Meaning, anyhow, is not something that you could observe and then copy. All you can do is infer—not observe—what the person means when she uses the word. If, however, our transformations converge towards attractors, towards ways of thinking that are of relevance to all of us, then you may get stability without fidelity. You get it because of a convergence of transformations rather than because of the absence of transformation (which is what fidelity would be). You can also model mathematically such converging transformations and their cultural effects.
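As a purely illustrative sketch of that last remark, here is one way such a model can look; the attractor value, the pull strength, and the noise level are all invented parameters, not anything proposed by Sperber. Every individual copy is badly distorted (low fidelity), yet the population stays centered on the attractor because each learner’s reconstruction is biased toward it.

```python
# Toy simulation of "stability without fidelity": noisy transmission plus
# a biased reconstruction that converges toward a cultural attractor.
# All parameters are hypothetical, chosen only to make the effect visible.
import random

ATTRACTOR = 0.7   # the culturally favored version of the item (hypothetical)
PULL = 0.5        # strength of the bias toward the attractor
NOISE = 0.3       # low fidelity: each copy is substantially distorted

def transmit(variant: float) -> float:
    """One transmission step: a noisy copy, then a reconstruction
    pulled partway toward the attractor."""
    noisy_copy = variant + random.gauss(0.0, NOISE)
    return noisy_copy + PULL * (ATTRACTOR - noisy_copy)

# Start from arbitrary variants: no consensus at all.
population = [random.random() for _ in range(1000)]

for generation in range(50):
    # Each individual learns the item from a randomly chosen model.
    population = [transmit(random.choice(population)) for _ in population]

mean = sum(population) / len(population)
print(round(mean, 2))   # hovers near ATTRACTOR, though nothing was copied faithfully
```

No single transmission is faithful, but the transformations converge, so the item is stable at the population scale—the kind of stability the example of the word “love” is meant to capture.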

I’m asking a big question, which is then divided into many sub-questions. The big question is, again, how do we get a naturalistic understanding of culture in society? For this we have to understand the micro mechanisms of communication. And for this we have to understand some basic aspects of cognition. And for this we have to understand something about the evolution of cognition in humans. The evolutionary psychology program has an important role in all that.                                 

Then we have to put all these things together (including demographic and ecological factors) and see to what extent we can understand, and possibly even model, population-scale dynamics where these mechanisms interact and help explain how the cultural items transform, emerge, and vanish. That’s a large part of the program in which I’ve been involved.                                 

I first mentioned that I like doing experiments because I like the cooperative work involved. It’s a fun activity. It’s intellectually very stimulating. And it does sometimes provide important evidence.

I started doing experiments a long time ago. When I was still mostly an anthropologist, I’d been invited by Clifford Geertz at the Institute for Advanced Study in Princeton. He hoped he would correct my mistaken ideas and turn me into a proper anthropologist in his style, but I was too stubborn and too attracted by more naturalistic approaches. But I enjoyed the time spent there.                                 

I met George Miller at Princeton University, and I found him a wonderful person. He asked me the same things you’re asking me—what was I working on, what questions I had in mind. That day I had been working with Deirdre Wilson on irony, and he asked me, “Are the ideas you have experimentally testable?” I started thinking, and I went back the next day and said, “We could do these experiments which, if our account is right, should produce these results, and if a more classical view of irony is right, should produce these other results.” We did the experiments and published them, and then this started a new cottage industry of experimental study of irony. This was my first experimental work in psychology. Starting with George Miller was not a bad start. It was fairly inspiring, so I continued. The experiments I did later were on reasoning, which I also found to be an exciting topic.

Few, if any, scientific ideas come bottom up. It’s never the case for me, and rarely the case for anybody, that you gather so much evidence and data that somehow an idea emerges. There’s a bottom-up aspect, but more important is the top-down, where you have hunches. You may call it intuition. I think it’s mostly luck, when you hit on a good idea. Other people, just as bright and smart as you, have the bad luck of hitting on a bad idea. They invest a lot in the bad idea and they don’t get anywhere. If you have been lucky enough to hit on a good idea then, indeed, you’ll find confirming evidence, good evidence that will start explaining lots of things. But initially I think we’re groping in the dark.

As I said, I started doing experimental work on relevance, on communication, irony, and so on. Then I became a mentor, for the Fyssen Foundation, of an Italian psychologist, Vittorio Girotto, who’d been working on reasoning and had been awarded a grant. I would argue with him that a lot of findings in the psychology of reasoning had to do with relevance. People seemed to be making logical mistakes in reasoning, but what they were really doing was transforming the input they were presented with in a way that would make it more relevant to them. Vittorio and I did a lot of experiments going in that direction.

This is how I got involved in experimental work on reasoning. Besides the fun of doing this work, part of the reason why it mattered to me was because if you want to explain social interaction and cultural contents, reason and reasoning play a very important role. I was talking before about converging transformations in cultural transmission. One way we converge is by reasoning together or by exchanging reasons and coming to see things in the same way. It doesn’t always work, but it’s still a strong factor of convergence.

I'd been thinking that standard approaches to reason and reasoning were mistaken because they were seeing it as an individual adaptation, as a way to enhance your own individual cognition. That didn’t make much sense in these exchanges of reasons. Using your reason to produce an argument to convince others and establish a convergence of goals or ideas made more sense. True, sometimes you can convince others just because you have authority and they trust you. But when they don’t, or they don’t on the topics on which you would like to convince them, does it mean that there’s nothing you can do? No. You can still overcome the limits of trust by producing arguments that they can evaluate on their own merits, and if they find that these arguments are good enough, then they will possibly be convinced by what you’re saying.                                 

Hugo Mercier came to work with me as a student. He decided he liked the idea of the social function of reasoning and proposed to do his PhD by developing this theme, which he did splendidly by going way beyond what I had envisaged, both in deepening the ideas and gathering so much good evidence for it. We were producing ideas and papers, which got picked up in the New York Times through the Edge meeting that you were alluding to, but also with a big misunderstanding.

The success of the “argumentative theory of reasoning” was in good part based on a misunderstanding. The misunderstanding being that we were taken to be saying, “Haha, you think reason is to get more clever, more intelligent, or discover the truth, but that’s not what it's for; it’s to persuade others who wouldn’t be persuaded otherwise. It’s a way to manipulate other people.” It was taken to be a cynical view of what reason and reasoning is about; it was taken to imply that people are naïve if they think reason is in the pursuit of truth. People who take this cynical view do not apply it to themselves and think that they reason objectively.                                 

This cynical view doesn’t make any evolutionary sense. Why would something evolve to manipulate others and then nothing evolve in others so as not to be manipulated? It doesn’t work like that. If one can benefit by causing harm to others, then some countermeasure is likely to evolve in those others, and you’ll have an arms race. So our argument was that reasoning evolved to produce arguments in order to convince others. This works, however, because reasoning also evolved to produce in each one of us a means to evaluate arguments, so as to gain from the ideas of others when they’re able to present good reasons for why we should accept them, and to reject them otherwise. From an evolutionary point of view, reasoning has got to be beneficial on both sides; otherwise people will stop listening to arguments, and then producing them would be useless.

Our work, then, had a success partly based on misunderstanding. In general my work has had some impact, but it was very much within the scholarly community. This was the first time I had an article on my work in the general press, in the New York Times, the Guardian, the Corriere della Sera. The argumentative theory, which Jon Haidt liked and which had both been successful and misunderstood, was only part of our overall project. The argumentative theory develops an answer to the question, what is the function of reasoning? But it says nothing about the mechanisms. Hugo and I had made some allusions about mechanisms. Now we started working on this second aspect and developing it more thoroughly. What was initially a theory of reasoning and argumentation has now become a theory of reason, not just reasoning, and not just of its function, but also of its mechanism.

There’s something very weird about standard approaches to reason. On the one hand, from Aristotle onwards, reason is seen as what makes humans superior to all other animals, and this has been repeated ad nauseam. Before Darwin, humans were not only disposed to think that they were superior to all animals, but the more differences you could show between humans and other animals, the better. We have this capacity to reason that other animals don’t have, and that sets us completely apart from them? Very good!                                 

After Darwin, the animality of humans became quite evident and apparently radical discontinuities became very puzzling. Do humans really have this kind of superpower, which doesn’t seem to fit anywhere among our natural traits, not even among our other cognitive capacities—from perception to unconscious inference to motor control—the kind of things we share with so many other animals? It’s a bit like we’re Superman or Spiderman, with this fantastic capacity that only we have. This cannot be right.

Dawkins once had a nice article about why animals don’t have wheels. You might think wheels would be a nice adaptation, so why didn’t they evolve? Well, they probably wouldn’t be such a good adaptation, because they would be useful on only very specific terrains. On most terrains they wouldn’t help you. But even so, there might be animals living on a terrain where wheels would be very handy, so why didn’t they evolve? It’s not that wheels are inconceivable; it’s that the design problems are very specific, and there are no in-between steps in the evolution of wheels that would each be adaptive. So how do you go from a non-wheeled animal to a wheeled animal? It can only be through a series of improbable steps, so improbable that it never happened on Earth.

In the case of reason being seen as this improbable superpower that exists only in one species—humans—you have moreover an extra problem, because reason, which is described as a way to enhance cognition in all domains, might be useful not just to humans, but to many other species. Investing massively in cognition the way humans have could be advantageous to other species. That’s one enigma proposed by this view of reason.                                 

The second enigma is well known to psychologists. Kahneman and Tversky, Peter Wason, and others have described reason as flawed, as making egregious mistakes all the time. So it’s a superpower, but a superpower that doesn’t work properly; this makes even less sense. You have this double paradox of a superpower that doesn’t fit into an evolutionary perspective in any clear way and that, moreover, doesn’t even deliver what it’s supposed to deliver.

Hugo and I set out to resolve the double enigma: first, by showing that human reason fits perfectly well among other cognitive capacities. Reasoning is only one form of inference among others. Inference, the capacity to use input information to derive further consequences that are not given in the input, is something that all animals do. They guide their action on that basis. Cognition in general is inferential. Perception is inferential: the way we use the activation of our retina to infer properties of the objects that caused this activation by reflecting light is inferential. The way we guide our body movements is inferential. We draw inferences all the time. Insects, slugs, birds, any organism that locomotes, that moves around, couldn’t do so without drawing inferences. Plants stay put, so they don’t need cognition. They stay in the same place; they don’t take the risk of moving. Moving provides new opportunities, but also huge risks. To benefit from the opportunities and avoid the risks, you need cognition, and you need to infer precisely what lies beyond your skin’s surface. Inference is ubiquitous in animal life.

In what way is reason different from other forms of inference? In the literature you get some people who don’t even see the difference, and who assume that animals reason too, that the capacity for logical inference must be present in lowly animals just as it is in us. Then you get others who assume that animal inference and human reason are completely separate capacities. We disagree with both approaches. We also disagree with an approach that is dominant today, the dual system approach, defended in particular by Daniel Kahneman and by others such as Stanovich and Evans, according to which the enigma of reason can be solved by assuming that two kinds of processes are involved in inference: one kind that we share with other animals, and another that is more specifically human.

According to the dual system approach, there is, first, a more basic System 1, which in humans is just what we might call intuition. It’s an automatic system that operates spontaneously. It uses heuristics that generally work but are not fully warranted from a logical or epistemological point of view. Other animals use similar heuristics. We rely on them most of the time because doing so takes less time, less energy, less investment. It’s a good way of approaching everyday tasks of inference, which range from not banging into furniture when you walk around to knowing how to talk to one another. When this doesn’t fully work, or when we meet a problem that we cannot solve in this way, we resort to reasoning, or System 2, which involves applying rules and proceeding in a more self-conscious manner, in a way more closely linked to a proper justification of an epistemological or logical nature.

If you assume that there is such a partition between two systems of inference—intuition and reasoning—then you can explain the apparent flaws of reason, the fact that in so many experiments people make these egregious mistakes, by assuming that they are guided by System 1, by the more intuitive kind of inference. Intuition is not geared toward handling atypical problems; it can be tricked in many different ways. On this view, all the failures in the literature on reasoning, all the cognitive illusions, are due to the fact that what you get is the output of System 1, of intuition.
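To make this division of labor concrete, here is a toy sketch in Python. It uses the bat-and-ball puzzle, a stock example from the heuristics-and-biases literature (Sperber doesn’t name it here), and the routing logic is a cartoon of the dual-system story rather than anyone’s published model:

```python
# Toy illustration of the dual-system picture: a cheap, always-on
# "System 1" heuristic answers first; a costlier, rule-based "System 2"
# runs only when we decide the problem is worth the investment.
# Puzzle: "A bat and a ball cost $1.10 in total. The bat costs $1.00
# more than the ball. How much does the ball cost?"

def system1(total: float, difference: float) -> float:
    """Fast, intuitive answer: anchor on the salient numbers."""
    return total - difference          # $0.10 -- the classic error

def system2(total: float, difference: float) -> float:
    """Slow, rule-based answer: solve ball + (ball + difference) = total."""
    return (total - difference) / 2    # $0.05 -- correct

def answer(total: float, difference: float, deliberate: bool) -> float:
    # System 1 always has an output ready; System 2 overrides it only
    # when the costly decision to deliberate has been made.
    return system2(total, difference) if deliberate else system1(total, difference)

print(round(answer(1.10, 1.00, deliberate=False), 2))  # 0.1  -- intuition, wrong
print(round(answer(1.10, 1.00, deliberate=True), 2))   # 0.05 -- reasoning, right
```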

The other system—reasoning proper—is a costlier and more painstakingly acquired system. We’re capable of deploying it when the investment is worth it, and that gives us this relatively superior power, which may not be as superior as classical philosophers like Aristotle assumed, but the possession of which is closely linked to the possession of language and to the possibility of entertaining higher-order thoughts. Dual system approaches tend to be rather sketchy. There are many versions, and they are being readjusted all the time. For instance, it is increasingly recognized that reasoning, the higher system, makes mistakes too. So it’s not that good; intuition is often even better than reasoning. The notion that you can explain the mistakes or successes of human inference through dual system theory isn’t that convincing. The actual workings of reason proper remain vastly mysterious.

We took an alternative route. We don’t think there are two systems. Intuition, in any case, is not based on one system. We don’t have a faculty of intuition; intuition is based on a great variety of cognitive mechanisms, some of which have a strong innate basis, while others have more to do with the acquisition of competences in the course of cognitive development. There are many autonomous systems involved.

Some of these mechanisms deliver intuitions not just about facts in the world (space, time, solid objects, living creatures) but also about representations, or even meta-representations. We have intuitions about, for instance, the meaning of words, the truth or falsity of ideas, what other people may think. We have intuitions not just about objects in the world, but about how they are represented in the minds of others and in our own minds, about abstract ideas, and so on.

This is still the domain of intuitions. Like other such systems, it’s highly specialized: it works on a very special kind of object, namely representations. Most things in the world are not representations; representations occur only inside, and in the vicinity of, animals that have cognitive systems. Most of the world is representation-free, so to speak. We have a variety of specialized mechanisms for developing intuitions about representations. Among the intuitions we have about representations are intuitions about reasons: reasons for beliefs or decisions.

Why are reasons of any relevance to us? In our own individual thinking, reasons don’t matter very much. We trust ourselves. In any case, you have to rely on your own cognition. You don’t need to look for a reason for what you intuitively believe; if you intuitively believe something, most of the time, that’s it. But if we want to communicate to others what we believe and they don’t have the same intuitions, we may still share intuitions about reasons for our beliefs, and then we may end up converging.

We also have other uses of reason that don’t have to do with argumentation or convincing others. Some reasons are of a more retrospective character: we use reason to justify ourselves. When we interact with one another, we depend on our good reputation, on the willingness of others to interact and cooperate with us in a variety of ways, and for that they have to think that the way we think and behave makes us reliable partners. The evidence they have is what we do, which can be interpreted in a variety of ways. What we can do is provide reasons for our actions and our thoughts, not to convince others to adopt the same thoughts or behave in the same way, but to show that we had good reasons and can be trusted to have similarly good reasons in the future.

Reason, then, is one intuitive mechanism among others; it produces intuitions about reasons. And reasons serve two main functions. One is the argumentative function, convincing others, which we discussed in our earlier work on the argumentative theory of reasoning; the other is the justificatory function, justifying ourselves.

What we are arguing, then, is that there is no division between intuition and reasoning. Reasoning is just a certain use of intuitions about reasons. Reason is just as intuitive as all the rest; it doesn’t stand in contrast with another kind of system. It’s one particular kind of intuition, which plays a very important role. It’s just as if you took another kind of intuition, say, about emotions or aesthetic emotion, and said, “Oh, that’s completely different from all the other intuitions in the world.” Yes, it has a particular role, as do all specialized intuitions, but it’s not a second system. It’s one mechanism of intuition among many others, which, in the case of intuitions about reasons, plays an important role in human interaction.

The enigma of reason, we argue, gets resolved in the following manner. To begin with, reason is no superpower. Human beings, like other animals, have lots of mechanisms of intuitive inference. We have, in particular, the ability to represent representations, to think about them, to have intuitions about them, but it’s still an intuitive capacity. It’s not a new type of capacity, only a new kind of object that we’re capable of having intuitions about. And having objects of thought that are specific to one species is not unique to humans. Animals with echolocation can exploit ultrasound to perceive their environment in detail and to navigate it, and we cannot. We, on the other hand, exploit reasons in our cognitive work. This is not a second system; it’s just an ordinary cognitive capacity among others, one with important implications for interaction, because that’s what drove its very evolution. It’s an ability to understand others, to justify ourselves in their eyes, to convince them of our ideas, and to accept and evaluate the justifications and arguments that others give and be convinced by them or not.

Contrary to the standard view of reason as a capacity that enhances the individual’s cognitive powers—the standard image is of Rodin’s “Thinker,” thinking on his own and discovering new ideas—what we are saying is that the basic functions of reason are social. They have to do with the fact that we interact not just with each other’s bodies but with each other’s minds. And to interact with others’ minds is to be able to represent the representations that others have, to have them represent our representations, to act on the representations of others and, in some cases, to let others act on our own representations.

We arrive at an integrated view of reason that doesn’t assign it the fantastic goal of unique access to knowledge at the individual level. We think reason evolved in humans and not in other species because of the specific ecological niche that humans inhabit: the sociality they themselves created. It’s a niche made of social relationships and culture. In that niche reason is adaptive, and that’s why it evolved.

The kinds of achievements that are often cited as proof that reason is so superior, like scientific achievements, are not achievements of individual minds, of individual reason; they are collective achievements—typically a product of social interaction over generations. They are social, cultural products, in which many minds had to interact in complex ways and progressively explore many directions, hitting on some not because certain people were more reasonable than others but because they were luckier in what they hit on. They then used their reason to defend what they had hit on by luck. Reason is a remarkable cognitive capacity, as are so many cognitive capacities in humans and other animals, but it’s not a superpower. It’s well integrated in the mind of one animal, and well adapted to the special niche in which that particular animal, the human, lives.

The dual system approach is an attempt to salvage something from the ruins of the psychology of reasoning as it had developed over the previous fifty years. That field had run into a number of difficulties: directions that had seemed obvious turned out to be blind alleys, dead ends. It ended up with problems and no solution. Sorting the evidence with the idea of two systems—System 1 and System 2—at least seemed a step in the right direction. But while it seemed to explain why a capacity like reason might malfunction, as experimental psychology has shown it does, it didn’t solve the other aspect of the enigma: where this unique superpower comes from. Instead of having reason in the wider sense as a superpower, we now have just System 2, and that’s still highly mysterious.

There are a number of nice gestures there, hand-waving in a plausible direction, but what we’re suggesting is at least more precise. Still hand-waving, maybe, but more precise hand-waving, leading to unexpected predictions that are experimentally testable and that make better sense both of the psychological evidence and of the everyday and historical evidence regarding the role of reason in human affairs, in interaction, in the development of science, in negotiation, in politics, and so on. Rather than treating it as a paradox that people can use reason to defend absurd ideas, as we see happen all the time, our account predicts exactly that.

There’s nothing particularly mysterious about reason as we describe it. The devil is in the details, of course, which we are not going to explore now. Right or wrong, ours is a novel approach to human reason. (That, actually, should make me suspect that we must be wrong: if you have a deeply novel approach, it’s probably a wrong idea.) Our approach really is at odds both with classical views of reason and reasoning and, indeed, with more recent developments like dual system theory.

The overall view I would defend is that we each have a great many mental devices that contribute to our cognition. There are many subsystems. Not two, but dozens or hundreds or thousands of little mechanisms that are highly specialized and interact in our brain. Nobody doubts that something like this is the case with visual perception. I want to argue that it’s also the case for the so-called central systems, for reasoning, for inference in general.


In reply to Евгений Волков

Иллюзия глубины объяснения / Philosophy and the Illusion of Explanatory Depth

by Евгений Волков -

Philosophy and the Illusion of Explanatory Depth


Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do.

That’s an excerpt from “Why Facts Don’t Change Our Minds” by Elizabeth Kolbert in The New Yorker. Kolbert looks at this and other research on cognitive biases as a way of understanding our political predicament. What allows the illusion of explanatory depth to persist, she says, is our reliance on other people:

In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins…

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about.

Sloman and Fernbach note that the fewer details a person is familiar with about a problem, the more strongly held that person’s opinion about what to do about it will be. “As a rule, strong feelings about issues do not emerge from deep understanding.”

But once people become aware of how complicated something is, they moderate their views about it and seem to be more open to reason.

In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.
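To make the design concrete, here is a minimal Python sketch of that rate-explain-rerate measure. The seven-point scale and all the numbers are hypothetical stand-ins of mine, not the study’s data; “intensity” is read as distance from the scale’s neutral midpoint:

```python
# Hypothetical ratings on a 1-7 scale (1 = strongly disagree,
# 7 = strongly agree, 4 = neutral), collected before and after
# participants tried to explain each policy's impacts in detail.
pre_explanation  = [7, 1, 6, 2, 7, 5]
post_explanation = [6, 2, 5, 3, 5, 5]

def mean_extremity(ratings, neutral=4):
    """Average distance from the neutral midpoint: how vehement views are."""
    return sum(abs(r - neutral) for r in ratings) / len(ratings)

print(mean_extremity(pre_explanation))   # 2.33... before explaining
print(mean_extremity(post_explanation))  # 1.33... after: intensity ratcheted down
```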

Confronting and working through the complicated details of an issue, Sloman and Fernbach say, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

Isn’t this exactly what philosophy instruction is? We break down views and ideas into their various parts, looking at what supports them and what they support and what the alternatives are, all with the effect of showing that things that seemed simple and obvious are rather complicated and puzzling.

Have studies been done on whether confronting complications in a distinctly philosophical context leads people to “ratchet down the intensity” of their views, removing an obstacle to deliberation and cooperation on social and political matters? If not, sounds like a good project.

I know we like to think that philosophy has these salutary effects, but more than just anecdotal evidence would be nice.

Philosophy: shattering the illusion of explanatory depth since at least 470 BC.

 

There are 8 comments

well this is also what physicians, engineers, car mechanics, etc. do in their very narrow technical realms, but as we have seen with philosophers this doesn’t translate into avoiding these kinds of biases in areas outside such matters of expertise/training. and as we have seen in all of these professions (via Kuhn and co.) there come to be additional professional (to some degree standardized) blinkers, since we don’t question all of our working assumptions in any sort of routine way. that was supposed to be part of the promise of later forms of cybernetics and the like, but to no real avail.
https://syntheticzero.net/2016/12/16/the-black-box-the-world-of-cybernetics/  

I coined the phrase “illusion of explanatory depth” in a joint paper with Frank Keil in 1998 (Wilson, Robert A., and Frank Keil, “The shadows and shallows of explanation,” Minds and Machines 8.1 (1998): 137–159, reprinted in our *Explanation and Cognition* (MIT Press, 2000)), and the experimental follow-ups were done in his lab from roughly 2000 to 2005. I haven’t seen the Sloman and Fernbach book yet, but I’m assuming that the NYT’s piece took a shortcut in attributing the coinage to them. (The flush toilet example is also one that we used in illustrating the idea.)


Justin,

What do you mean by “confronting complications in a distinctly philosophical context”?

Also, you asked, “Isn’t this exactly what philosophy instruction is?” In my experience, that is what some good philosophy instruction is. But it is also what a lot of good instruction is across the curriculum. 

John, by “confronting complications in a distinctly philosophical context” I had in mind guided philosophical discussion about a matter of public concern that illustrates how the matter is more complicated than it might at first appear. To take one well-known example: abortion. This is a topic on which people voice their positions with “intensity” while (if my students are representative) mistaking opening moves for last words.

I agree that getting students to confront hidden complications is not the exclusive domain of philosophy. But I do think that (1) philosophers, generally, are particularly gifted at it, (2) the complications are less expected by the students, and (3) the subject matter (esp. in ethics and political philosophy) tends to be one about which students come in already equipped with “intense” views.

(1) may be a professional bias, I admit. As for (2) and (3), my thought is that while students expect that there may be details, unknown to them, about a historical event, work of literature, physical phenomenon, etc., that are relevant to their understanding of these things, they tend not to expect that there may be details about matters in ethics, politics, and law—matters about which they hold strong opinions—that will alter those opinions. So there is a combination of disciplinary skill and propitious subject matter that leads me to think of philosophy as particularly well suited for introducing the relevant complications.

Thanks, Justin. I think that these questions could be settled empirically, but it would be very time- and resource-intensive and difficult to interest someone with the skills and resources needed to do it well.

Speaking again from only my own experience, the most effective discussion of abortion ethics I witnessed involved a medical doctor who specialized in women’s health. She drew distinctions, introduced complications, explained the reasons for her position, and acknowledged opposing views. She did not mention a violinist. I credit Peter Singer for the most effective discussion of the treatment of non-human animals I ever witnessed. For informed commentary apt to change/open minds on legal and political matters, on average I would expect philosophers to perform worse than lawyers and probably no better than political scientists, historians, or psychologists. 

I feel this is a very important issue, and it could hardly be more so. Most philosophers, like most people, seem to believe they know more than they do and rely on the not-fully-checked or not-fully-understood work of others. It’s often only when we have to explain our views that we discover our ignorance. This would be the benefit of arguing: it forces one to get one’s act together.

“Philosophy: shattering the illusion of explanatory depth since at least 470 BC.”

Great strapline! 

Maybe this is really minor, but could the final sentence benefit from ending in a question mark rather than a period? 

Deena Weisberg at Penn has data which indicates that philosophers are less likely than non-philosophers (including experts in other disciplines) to be seduced by irrelevant scientific information when evaluating the quality of an explanation. I’ve spoken with Deena about taking this project further. Having received her blessing, I’m starting work on a project tentatively labelled: “Are Philosophers Bullshit Detectors?”, which investigates whether people trained in philosophy are better at distinguishing good explanations from bad ones.

This isn’t precisely about the illusion of explanatory depth, but it’s related to the point about training in philosophy having salutary effects that can be measured empirically. It would also illustrate that philosophers are less susceptible to the illusion of explanatory depth, since they are less likely to get the false sense of improved understanding that results from irrelevant info.


In reply to Евгений Волков

Почему людей так трудно убедить фактами / Why it is so hard to convince people with facts

by Евгений Волков -

“Your evidence is no evidence”: Why it is so hard to convince people with facts

Published: Saturday, February 25, 2017, 10:45
Author: Марина Мойнихан (Marina Moynihan)

The New Yorker has published a discussion piece on how cognitive biases shape our worldview. Drawing on old and new studies (in one of which Americans were asked to find Ukraine on a map!), its author argues that habits which served ancient hunter-gatherers well play a cruel joke on people living in a world of “post-truth” and “alternative facts.” #Буквы has translated this text for you.

“Your evidence is no evidence”: Why it is so hard to convince people with facts

In 1975, researchers at Stanford University invited a group of students to take part in a study on suicide. They were handed pairs of suicide notes; in each pair, one note was real and the other had been composed by a random volunteer. Participants had to determine which of the notes was genuine.

Some of them handled the task brilliantly, giving 24 correct answers out of 25. Others failed hopelessly; ten correct answers was their ceiling. As is often the case with psychological studies, the whole experiment was staged. Half of the notes were indeed real (the researchers had obtained them from the Los Angeles County coroner’s office), but the test scores were fake. The participants who had supposedly guessed almost everything right had in fact done, on average, no better than those who had supposedly gotten things wrong.

In the second phase of the study the deception was revealed. Participants were told that the real goal of the experiment had been to study subjects’ reactions to a positive or negative test result (as would become clear later, this phase was staged too). Finally, the students were asked to guess how many notes they had actually sorted correctly and how many correct answers the other participants had given on average. And here something curious happened: members of the “high-scoring” group insisted that they really had done quite well on the task, scoring above average, even though they had just been told they had no grounds whatsoever for thinking so. Conversely, students in the “low-scoring” group believed their results were far worse than average; that conviction, of course, was just as groundless.


The researchers summed it up dryly: “Once formed, impressions are remarkably perseverant.”

A few years later a new group of students was recruited for a similar study. They were handed dossiers on two firefighters, Frank K. and George H. Frank’s biography noted, among other things, that he was the father of an infant daughter and was fond of scuba diving. George had a small son and played golf. The dossiers also included both men’s results on the “Risky-Conservative Choice Test.” In one version of the dossier, Frank was a successful professional whose test results showed that on the job he almost always chose the safest course of action. Other students received a dossier in which Frank likewise played it safe, but was a lousy firefighter whom his superiors had repeatedly put “on report.”

Once again, midway through the test, the students were told that they had been hoodwinked and given false information. They were then asked to describe a successful firefighter: what should his attitude toward risk be? Those who had received the first version of the dossier said that risk should be avoided. The others said that risk should be embraced.

As the researchers note, “even after their beliefs had been totally refuted, people failed to make appropriate revisions in those beliefs.” In this case the failure to adjust to the new facts was “particularly impressive,” since the initial data were nowhere near sufficient to support any generalized conclusion.

[Photo: Craig Anderson, one of the study’s authors, with his book on the effects of violent video games on adolescents]

The Stanford studies became famous. The researchers’ claim that people are incapable of thinking straight shocked the public of the 1970s. It shocks no one now: thousands of subsequent experiments have confirmed and refined it. Anyone who has followed the research (or at least occasionally leafed through an issue of Psychology Today) knows that any graduate with a clipboard can demonstrate how seemingly reasonable people can behave in utterly irrational ways. Today this paradox seems especially relevant. But why it happens is still a mystery.

In their new book, “The Enigma of Reason,” published by Harvard University Press, cognitive scientists Hugo Mercier and Dan Sperber try to answer this question. Mercier, who works at a research institute in Lyon, France, and Sperber (Central European University, Budapest) argue that reason is a trait that arose through evolution, like bipedalism or trichromatic vision. It emerged on the African savanna, and it has to be understood in that context.


Mercier and Sperber’s argument, put in more popular terms, runs roughly as follows: the greatest advantage humans have over other species is our capacity for cooperation. Cooperation is hard to establish and almost as hard to sustain; for any individual, freeloading remains the most attractive mode of existence. Reason did not emerge so that we could solve abstract logical puzzles or draw detached conclusions from data; it developed to help us cope with the problems of living and interacting in groups.

"Разум помогает адаптироваться к той гиперсоциальной нише, которую заняли люди как вид", - пишут ученые. Так что привычки нашего мозга, которые с "интеллектуалистской" точки зрения кажутся странными или откровенно глупыми, оказываются куда толковее, если рассматривать их с "интеракционистской" (основанной на взаимодействии) точки зрения.


Consider the cognitive bias known as “confirmation bias”: the human tendency to accept information that confirms one’s beliefs and to reject facts that contradict them. This bias is better documented than any other; enough experiments have been devoted to it to fill a textbook of its own. The most famous of them was also conducted at Stanford. For that experiment, researchers selected students who held opposing views on the death penalty. Half of the participants favored capital punishment and believed it reduced crime; the other half opposed it, believing it had no effect on the number of crimes.

[Cartoon: “So, I’ve heard out both sides’ arguments… time to figure out for myself where the truth lies.” / search result: “The first link that confirms your point of view” / “Jackpot!”]

The students were asked to read two studies. One supported the view that the death penalty lowers the crime rate; the other presented facts casting doubt on that theory. As you have already guessed, both studies were fake; they were shown to the students only so that they would have some weighty-looking statistics to go on. Those who had initially supported the death penalty found the data confirming their view convincing and dismissed the contradicting data as unreliable; in the other group the exact opposite happened. At the end of the experiment the students were asked about their views once more. Those who had started out in favor of capital punishment were now even more strongly in favor; those who had opposed it were now even more hostile to it.

If reason is there to help us form sound judgments, it is hard to imagine a more serious design flaw than confirmation bias. Imagine a mouse that thinks the way we do, Mercier and Sperber suggest. A mouse “that seeks confirmation that there are no cats around” would soon be a cat’s dinner. If this feature of our thinking leads us to throw out evidence of new (or underestimated) threats, it should have disappeared in the course of evolution. The fact that both humanity and this trait have survived suggests the trait has some adaptive function. And that function, according to Mercier and Sperber, has to do with our “hypersociality.”

Mercier and Sperber prefer the term “myside bias,” the tendency to confirm one’s own point of view. Humans, they point out, are not by nature inclined to believe just anything: hearing someone else’s arguments, we can often spot their weak points with ease. Yet we are routinely blind to our own mistakes.

[Photo: Hugo Mercier / Steven Ahlgren / NYT]

A recent experiment conducted by Mercier and his European colleagues demonstrated this paradox neatly. Participants were asked to solve several simple logic problems. They were then asked to explain their answers and given the chance to change them if they found a mistake. Most people stuck with their original answers; fewer than 15 percent made changes.

In the next phase, a participant was shown one of the same problems together with his own answer and another participant’s answer that differed from it, and was again offered the chance to change his response. Here the organizers played a trick: the answer presented as someone else’s was actually the participant’s own, and vice versa. About half of the participants saw through the deception. The other half suddenly became far more critical of “their” answers: roughly 60 percent rejected the response they had earlier been satisfied with.

For Mercier and Sperber, this discrepancy reveals the true purpose for which thinking arose: to keep a person from ending up the fall guy in the group. Our hunter-gatherer ancestors, living in small bands, were mainly concerned with their social standing, and with making sure they were not the ones risking their lives on the hunt while everyone else lounged in the cave. Sound judgment in the modern sense would have brought little advantage back then, whereas the ability to win arguments was very useful indeed.

Questions such as the character traits of an ideal firefighter or the effect of the death penalty on crime rates did not concern our ancestors in the slightest. Nor did they have to deal with doctored studies, Twitter, or fake news. So it is hardly surprising that reasoning so often fails us. As the authors write, “this is one of those cases in which natural selection was unable to keep pace with a changing environment.”

Steven Sloman (Brown University) and Philip Fernbach (University of Colorado) are cognitive scientists too, and they likewise believe that sociality is the key to understanding the functions (and dysfunctions) of human thinking. They open their book, “The Knowledge Illusion: Why We Never Think Alone,” with a description of… the toilet.

[Photo: Sloman and Fernbach]

Anyone living in the civilized world is familiar with the workings of a toilet: typically, a ceramic bowl filled with water. When you press the lever or the button, the water is sucked into a pipe and from there drains into the sewer. But how does that actually happen?

In a study conducted at Yale University, graduate students were asked to rate their understanding of how everyday things work, including toilets, zippers, and door locks. They were then asked to write a detailed, step-by-step description of how such a device operates and to rate their own understanding again. Evidently the exercise revealed to the participants their own ignorance, because on the second pass the ratings dropped. (Toilets, it turns out, are more complicated than they seem at first glance.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” practically everywhere. People tend to overstate what they know, and other people reinforce that belief. In the case of the toilet, someone designed it so that it would be easy to use. People have relied on one another’s knowledge and skills ever since the days when we hunted together as cave dwellers (which, it seems, was a key stage in our evolution). And we collaborate so skillfully, the scientists argue, that we can hardly tell where our own understanding ends and someone else’s begins.

"Одним из условий разделения умственного труда является отсутствие четкой границы между знаниями и верованиями разных членов группы", - пишут они.

This absence of a border (or, if you like, of order) is the key to what we call progress. In inventing new tools, and with them new ways of living, people simultaneously created new “realms of ignorance.” If, say, everyone had considered it necessary to master the principles of metalworking before picking up a knife, the Bronze Age would not have amounted to much. When it comes to new technologies, partial ignorance can be useful.

But not in politics, Sloman and Fernbach add. It is one thing to push the flush button without knowing how it works, and quite another to support an executive order banning entry into the United States without understanding what it actually entails. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia’s annexation of Crimea. Respondents were asked how they thought the United States should respond, and were also asked to point to Ukraine on a world map. As a rule, the worse a respondent’s geography, the more he leaned toward military intervention. (On the whole, the geography portion proved so far beyond the respondents that the average error was 1,800 miles, roughly the distance from Kyiv to Madrid.)
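That distance comparison is easy to sanity-check. A quick great-circle calculation in Python, with approximate city coordinates supplied by me (they are not in the article), does put Kyiv-Madrid in the neighborhood of 1,800 miles:

```python
# Back-of-the-envelope check of the Kyiv-Madrid comparison, using the
# haversine great-circle formula and rough city coordinates.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in statute miles."""
    earth_radius_miles = 3959
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

# Kyiv (50.45 N, 30.52 E) to Madrid (40.42 N, 3.70 W)
print(round(haversine_miles(50.45, 30.52, 40.42, -3.70)))  # ~1780 miles
```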


Many other surveys have produced no less alarming results. “As a rule, strong feelings about issues do not emerge from deep understanding,” the scientists write. And our dependence on other people’s points of view only makes the problem worse.

If, for example, you believe that the Affordable Care Act is groundless and I rely on your point of view, then my opinion is groundless too. Then a certain Tom agrees with me as well, and now our point of view has three supporters. And each of us feels considerably more smug than before.

And if every person dismisses “insufficiently convincing” information that contradicts his opinion, you get… the Trump administration.

“This is how a community that relies on knowledge becomes dangerous,” Sloman and Fernbach write. They ran their own version of the toilet experiment, replacing household objects with questions of public policy. In a study conducted in 2012, they asked respondents questions such as: Should there be a single-payer health-care system? Should teachers be paid on the basis of merit? Participants were asked to rate how strongly they agreed or disagreed with each proposal. They were then asked to explain, in as much detail as they could, the effects of implementing each initiative. At this point most respondents ran into trouble. Asked to rate their agreement or disagreement again, they lowered their scores, showing that their views were no longer so unshakable.


For Sloman and Fernbach, this result is a small ray of light in a dark kingdom. If we, our friends, or the pundits on CNN spent more time studying the consequences of such initiatives instead of lecturing one another, we would recognize our own cluelessness and temper the radicalism of our views.

Science can be seen as a system that corrects for the errors to which people are naturally prone. In the laboratory there is no room for bias: a study can be repeated in another lab whose staff have no motive to confirm the earlier result without grounds. That is probably why the system has proved so successful. At any given moment a field of knowledge may be in the grip of disorder, but in the end methodology comes to the rescue. Science moves forward even when we ourselves are stuck in place.

In their book “Denying to the Grave: Why We Ignore the Facts That Will Save Us,” psychiatrist Jack Gorman and his daughter, the public-health specialist Sara Gorman, examine the gap between what science tells us and what we believe. They worry about beliefs that are not only false but potentially deadly: the belief that vaccines are harmful, for instance. What is harmful, of course, is refusing vaccination; vaccines exist precisely to protect our health. “Immunization is one of the triumphs of modern medicine,” the authors write. But no matter how many scientific studies demonstrate that vaccines are safe and that there is no link between them and autism, vaccine opponents remain unmoved. (Among them one can count Donald Trump, who has claimed that he and his wife agreed to have their son Barron vaccinated, but not on the schedule recommended by pediatricians.)


The Gormans argue that ways of thinking that now look self-destructive once had an adaptive function. They, too, devote many pages to confirmation bias, which in their view also has a physiological component. They cite research showing that people experience genuine pleasure, a rush of dopamine, when they encounter information that supports their point of view. “Remaining faithful to our beliefs even when we are wrong is a pleasant feeling,” they write.

Jack and Sara Gorman want to do more than catalogue the errors in our thinking; they want to fix them. There must be some way, they believe, to convince people that vaccines do not harm children and that carrying a gun does not protect against danger. But here they run into the very problems they have enumerated: people simply ignore the reliable information offered to them. One could try appealing to emotions rather than to reason, but that contradicts the aims of those who advocate the scientific approach. Toward the end of the book they write: “We still have to contend with the tendencies in society that give rise to anti-scientific beliefs.”

[Photo: Kellyanne Conway]

These three books were written before November’s presidential election, yet they anticipated the arrival of “alternative facts” and of Kellyanne Conway (the Trump adviser who introduced the term into circulation). These days it often feels as though a psychology experiment is being run on the entire country, its author none other than Steve Bannon. Perhaps there is a rational path to solving this problem, but on this question the textbooks are not very reassuring.
