
Scientific evidence on how rational discussion affects people's opinions (in English)


by Евгений Волков -
Replies: 0


https://www.contributoria.com/issue/2014-05/5319c4add63a707e780000cd

http://mindhacks.com/2014/05/06/using-rational-argument-to-change-minds/


Are we, the human species, unreasonable? Do rational arguments have any power to sway us, or is it all intuition, hidden motivations, and various other forms of prejudice?

The question has been hanging over me because of my profession. I work as a cognitive psychologist, researching and teaching how people think. My job is based on rational inquiry, yet the picture of human rationality painted by our profession can seem pretty bleak. Every week I hear about a new piece of research which shows up some quirk of our minds, like the finding that people given a heavy clipboard judge public issues as more important than people given a light one. Or that more attractive people are judged as more trustworthy, and the arguments they give as more intelligent.

Commentators and popularisers of this work have been quick to pick up on these findings. Dan Ariely has a book calling us "Predictably Irrational", and the introduction tells us "we are pawns in a game whose forces we largely fail to comprehend. We usually think of ourselves [with] ultimate control over the decisions we make [but] this perception has more to do with our desires...than reality". Cordelia Fine's book "[A Mind of its Own](http://www.cordeliafine.com/amindofitsown.html)" has the subtitle "how your brain distorts and deceives", whilst David McRaney doesn't pull any punches with the title of his "You Are Not So Smart".

The wider context is the recent progress in the sciences that places our species in the biological context of other animals, a project that most psychologists have signed up to, to some degree. A reflection of this is all the experiments which attempt to give a mechanistic - that is, natural - account of the mind, an account which downplays idiosyncrasy, subjectivity and nondeterminism. The philosopher John Gray was reflecting on this trend in research, as well as giving vent to his own enthusiastic pessimism, when he wrote:

"We think our actions express our decisions. But in nearly all of our life, willing decides nothing.

"We cannot wake up or fall asleep, remember or forget our dreams, summon or banish our thoughts, by deciding to do so. When we greet someone on the street we just act, and there is no actor standing behind what we do. Our acts are end points in long sequences of unconscious responses. They arise from a structure of habits and skills that is almost infinitely complicated. Most of our life is enacted without conscious awareness."

The science, and those who promote it, seem to be saying that we're unreasonable creatures. That's a problem, given that many of our social institutions (such as democracy) are based on the assumption that rational persuasion can occur. If I believed the story told in these books I would be forced to choose between my profession as a cognitive scientist and my political commitment as a citizen and democrat.

Fortunately, as a cognitive scientist, I don't have to believe what I'm told about human nature - I can look into it myself. So I set out to get to the bottom of the evidence on how we respond to rational arguments. Does rationality lose out every time to irrational motivations? Or is there any hope for those of us who want to persuade because we have good arguments, not because we are handsome, or popular, or offer heavy clipboards?

Persuasion

One of the most famous examples of the way our minds twist arguments is an experiment performed by Charles Lord, Lee Ross and Mark Lepper way back in 1979. These American social psychologists recruited participants who had views for or against the death penalty. They then presented them with reports of studies which seemed to support or oppose the death penalty. Here's a pro-death penalty example:

Kroner and Phillips (1977) compared murder rates for the year before and the year after adoption of capital punishment in 14 states.

In 11 of the 14 states, murder rates were lower after adoption of the death penalty.

This research supports the deterrent effect of the death penalty.

Lord and colleagues found that people didn't change their minds in the direction of the arguments presented to them - far from it. Rather, people who had pro-death penalty views found flaws and biases in the anti-death penalty studies, and vice versa. The participants in the experiment ended up with more extreme views than they started with - the pro- people becoming more pro and the anti- becoming more anti. This "biased assimilation effect", whereby we only believe evidence that fits with what we already believe, is no historical artefact. Adam Corner and colleagues from the University of Cardiff showed in 2012 that this bias holds for a very contemporary topic - climate change. People who were more skeptical about climate change rated editorials supporting the reality and importance of climate change as less persuasive and reliable than did people who were less skeptical.

At first glance, evidence like this looks like a triumph for the "we're all irrational" team. And don't be tempted to dismiss this as evidence that the people in the experiment are bad thinkers or somehow not qualified to think about the topic. Another recent study showed that the more scientific education a climate skeptic had, the stronger their skepticism was likely to be.

But I want to persuade you that this is evidence of the power of reason, not unreason. Psychologists perform their interventions on participants who are far from a blank slate - they are all adults, usually university educated (our great weakness is performing most psychology experiments on psychology students), and all have probably spent years developing their opinions about the world. It is not really surprising that their views can't be dislodged with a few choice anecdotes. Who'd want opinions if they could be shifted by the slightest counter-argument? That's not rationality.

To really look at the power of reason, we need to look at the effect of strong rather than weak arguments. Unfortunately, as two leading researchers wrote in a 1998 review, "relatively little is known about what makes an argument persuasive".

Two decades earlier, one of the authors of that review, Richard Petty, had been involved in a piece of research which showed an important qualification you need to take account of if you want to measure how persuasive good arguments can be. Along with John Cacioppo, Petty ran an experiment looking at how involvement in an issue affected the power of arguments to persuade. The experimenters tried to persuade undergraduates at the University of Missouri that university regulations should be changed so that all students would have to pass an additional comprehensive exam before being allowed to graduate. Previous work had revealed that such a change was "strongly counterattitudinal for most college students". That's psychology code for "they hated the idea". Cacioppo and Petty varied the kinds of arguments they used on their volunteers. Half received strong arguments in favour of the change, and half weak arguments - arguments that had obvious flaws or simple counter-arguments. A second factor was also manipulated - how involved people felt in the issue. Half the volunteers were told that this change was under consideration for the University of Missouri. In other words, that it would affect them, possibly requiring them to pass an extra exam or flunk their degree. The other half were told that the change was being considered at North Carolina State University (approximately 1000 miles away).

The results show that when people had low involvement in the issue, neither the strong nor the weak arguments were persuasive. People's minds were made up, and no argument shifted them. But in the high involvement condition both the strong and the weak arguments had a significant effect. Weak arguments entrenched people's positions - they shifted their attitudes to be more against the comprehensive exam. Strong arguments, however, had the effect you might expect from reasonable people - they shifted attitudes to be less against the exam idea (it still wasn't very popular, but it was less unpopular).

This research, and the research that followed on from it, showed that strong arguments can be persuasive, but only when people are motivated to deliberate on the issue. Recently, a team led by Joseph Paxton of Harvard University showed that, in the domain of moral arguments, strong arguments were only persuasive if people were given some deliberation time before being forced to answer. Like crime, it seems, reasoning requires both motive and opportunity - but if both are there, even crude psychology experiments can show that strong arguments persuade.

Truth wins

The strongest evidence on the power of argument comes from domains where there is a right answer. For public issues like the death penalty, or moral arguments, it will never be clear what the right answer is. Because of this, one person's strong argument won't be the same as another's. In logic or mathematics, however, a correct answer can be defined precisely, and so can a strong argument.

For a long time, psychologists have used a logic task called the Wason Selection Task as a lens on our powers of reasoning. The task works like this: imagine there are cards which always have a letter on one side and a number on the other. You are shown, flat on the table, four cards. Their up-facing sides show "E", "G", "7", "6" and you are told that you need to test this rule: "All cards with a vowel on one side have an even number on the other side". Which cards do you need to turn over to test if this rule is true?

In experiments using this task, over 80% of people test the rule by picking the cards showing "E" and "6" - and they are wrong. The result is often held up as an example of the weakness of our powers of logic, showing how unsuited our minds are to formal reasoning.

The correct answer is that you need to turn over the "E" and the "7" cards. If the "E" card doesn't have an even number on the other side, the rule is false - a vowel did not lead to an even number. Similarly, if the "7" card has a vowel on the other side, the rule has also been shown to be false - a vowel led to a non-even number. Turning over the "6" card doesn't tell you anything, since the rule doesn't say anything about what even-numbered cards ought to have on the other side (i.e. it doesn't say that non-vowels can't lead to even numbers too).
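To make the hypothesis-testing logic concrete, here is a minimal sketch in Python (the candidate hidden faces and helper names are my own illustrative assumptions, not part of the original task materials). It marks a card as worth turning over only if some possible hidden face could falsify the rule:

```python
# Minimal sketch of the selection-task logic (illustrative assumptions only).

VOWELS = set("AEIOU")

def is_vowel(face):
    return face in VOWELS

def is_even_number(face):
    return face.isdigit() and int(face) % 2 == 0

def violates_rule(letter, number):
    # The rule "every card with a vowel has an even number on the other side"
    # is falsified only by a vowel paired with an odd number.
    return is_vowel(letter) and not is_even_number(number)

def worth_turning(visible, possible_hidden):
    # A card is informative only if some hidden face could reveal a violation.
    for hidden in possible_hidden:
        letter, number = (visible, hidden) if visible.isalpha() else (hidden, visible)
        if violates_rule(letter, number):
            return True
    return False

possible_letters = ["A", "B", "E", "G"]   # assumed hidden sides of number cards
possible_numbers = ["3", "4", "6", "7"]   # assumed hidden sides of letter cards

for face in ["E", "G", "7", "6"]:
    hidden = possible_numbers if face.isalpha() else possible_letters
    print(face, "turn over" if worth_turning(face, hidden) else "leave")
```

Run as written, the sketch flags only "E" and "7" - the two cards whose hidden sides could show the rule to be false.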

But what is often held up as a testimony to our irrationality can also be a laboratory for examining our rationality. Whilst the selection task is normally completed by individuals, you can also ask small groups to try to solve it. When you do this, two remarkable things happen. Firstly, the success rate jumps massively, so that most groups solve the task correctly (75% or more, compared to a success rate of less than 10% for individuals). Secondly, we can observe the process of discussion that generates the correct solutions, enabling us to discern something powerful and encouraging about group reasoning.

Transcripts of groups reasoning about the selection task show that in the process of discussion groups manage to construct arguments in favour of the correct answer - i.e. the answer that is in line with the logic of hypothesis testing. Other work on group reasoning, this time using mathematical problems, has shown that often it is enough for a single member of the group to realise the correct answer for the group to submit it as their final decision. This "truth wins" scenario is in total contrast to what psychologists will normally tell you about group functioning. In most domains, from creativity to tug-of-war, a phenomenon called "social loafing" holds, whereby the performance of the group is less than the sum of the expected contributions of individuals acting alone.
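As a rough back-of-the-envelope illustration of what a "truth wins" rule implies (my own assumption-laden sketch, not a calculation from the original research): if each member independently solves the task with probability p, and the group adopts the correct answer whenever at least one member finds it, the expected group success rate is 1 - (1 - p)^n for a group of n.

```python
# Back-of-the-envelope "truth wins" baseline (assumes members solve
# independently, each with the same probability p - an idealisation).
def truth_wins_rate(p, n):
    # Group succeeds if at least one of n members finds the correct answer.
    return 1 - (1 - p) ** n

for n in (3, 4, 5):
    print(f"group of {n}: {truth_wins_rate(0.10, n):.0%}")
```

With the 10% individual rate quoted above, this baseline predicts roughly 27-41% success for groups of three to five members, so the 75%-plus rates reported for the selection task are at least consistent with the transcript finding that discussion itself constructs correct arguments rather than merely relaying one member's pre-existing insight.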

This encouraging story about the power of reason needs to be put in the context of the research on persuasion. The groups in these experiments have a common goal, and - we must assume - trust each other and are committed to the task. Furthermore, the solutions can be demonstrated to be correct. In these circumstances rational argument is productive.

Prove me wrong

Another result that comes from analysing transcripts of these kinds of experiments is that people are only persuaded when they can be shown that the answer they are currently advocating is wrong. Insight into how to do this comes from experiments on the so-called "Illusion of Explanatory Depth". The illusion concerns our beliefs about how well we understand complex systems - ranging from the forces driving global terrorism to how a flush toilet works. The original research which framed the phenomenon asked people to self-rate their understanding of how things work. Examples for this experiment were taken from the classic children's book "The Way Things Work". The volunteers were asked to rate how well they understood things like "How a speedometer works", "How a helicopter flies" or "How a cylinder lock opens with a key". After they gave these ratings, the participants were asked to write out a full explanation of how the items worked. They then answered test questions about their understanding, and finally rated their original understanding again. After trying to provide explanations, participants' ratings of understanding dropped. After the test questions they dropped even further, revealing that most people have a far less confident understanding of these things than they initially believe.

There is a lesson here for all of us about overconfidence. The authors of the study, Leonid Rozenblit and Frank Keil from Yale University, ascribe the effect to the ease with which we interact with these systems, allowing us to directly appreciate their effects (e.g. we make the car go faster, and the speedometer shows the new speed). We, they argue, then mistake this sampling of the environment for our own knowledge. Without the working system in front of us, we're actually pretty ignorant of its internal operation.

But for me the interesting lesson is that the study participants came to realise they were wrong in their original assessments. Although full of confidence initially, they moved to re-rating their understanding as dramatically lower - they were, in other words, persuaded to change their minds about something (in this case, about how much they knew). How did this happen?

Follow-up work published last year confirms that asking people to provide mechanistic explanations can play a vital role in persuading them they are mistaken. Philip Fernbach, of the University of Colorado, and colleagues asked participants in an experiment to provide opinions on policies which are generally contentious in the US - things like healthcare, social security and tax. So, for example, they indicated their support for policies such as transitioning to a single-payer health care system. Whether they were for or against a policy, the average participant was a long way from neutral. Half were then asked to give reasons why they felt the way they did, and the other half were asked to give an explanation of how the policy would have its effects. Both groups then re-rated their position for or against the policy and these "after" scores were compared with the "before" scores. The "reasons" group didn't shift their views at all, remaining just as entrenched in their positions, for or against, as when they started the experiment. The "explanations" group did change - on average becoming more moderate in their positions. The authors conclude that the illusion of explanatory depth supports political extremism, and that when we are asked to provide explanations for how we think the world works, some of that illusion evaporates, undermining our previous certainty.

This research goes some way to explaining why causal reasons have been found to be more persuasive than statistical ones (in this case, arguing that you cannot catch AIDS from touching someone with AIDS because transmission occurs via HIV in bodily fluids, compared to arguing that you cannot catch AIDS from touching someone with AIDS because no one ever has).

Argumentation

This raises the general topic of how we react to arguments. More recent research has shown that even children as young as three prefer an argument that uses reasons to a circular argument.

So it seems that, despite all the biases we're subject to, we are sensitive to reason - we discriminate better arguments from worse ones, often recognise the truth when it can be demonstrated, and adjust the strength of our beliefs when we discover we can't justify them as fully as we thought. Other work has shown that the skill of recognising and developing arguments can be taught.

A movement called deliberative polling uses group discussion as a way of measuring people's opinions (rather than the "stop them in the street and get a knee-jerk reaction" strategy). Typically, this approach gathers less extreme views - for example, people's opinions on the value of prisons as a way of treating crime become more moderate, less in favour - as well as leaving participants better informed, more willing to compromise and more aware of nuances in the issue debated.

The power of reason

These successes of group reason are in stark contrast to the known weaknesses of individual reasoning, which is beset with a susceptibility to logical fallacies (as we saw in the Wason selection task), and biases such as confirmation bias.

So striking is the success of reason when deployed in the service of argument that two cognitive scientists, Dan Sperber and Hugo Mercier, have even proposed that this is what reason evolved to do - to convince other people in arguments, a legacy of our nature as animals that live in social groups. This explains the success of groups on problems that confound individuals, and also explains why we are so good at thinking up reasons why we're right, even when we're wrong. If the purpose of reason is to persuade others that we're right, rather than to find the truth directly, then this is just what you'd expect.

This theory connects with that of another important theorist of rationality, Jonathan Haidt. In his book The Righteous Mind, Haidt argues that intuitions come before reasons in arguments about moral issues, and that our social natures mean that it is next to impossible to persuade someone under conditions of group competition (such as the current conditions of US politics).

Haidt isn't saying that we can't persuade other people in arguments about moral issues, just that reason and argument are less important than group membership and intuition.

If you're interested in irrationality in persuasion then the very first place to start is the book "Influence" by social psychologist Robert Cialdini. This classic work looks at six major factors which can help persuade other people. For example, one major factor is "reciprocity", whereby we feel compelled to give something back when people have given something to us (for example, when a car salesperson has agreed to cut the price by 10%, maybe we feel we should raise the amount we're willing to pay in return). There's no need to labour the opportunities for the unscrupulous to take advantage of this kind of habit of mind.

None of Cialdini's important persuasion factors is rational argument, so at first glance it looks as if Cialdini's manual of persuasion comes firmly from the "we're irrational" side. But a second look might give us pause. Much of the evidence for the persuasive power of these factors assumes a situation where you have an at least half-way rational argument to begin with. And a closer look at the factors Cialdini highlights shows that some of them are things we would expect to be possessed by someone whose thinking was generally rational. For example, one of Cialdini's principles is the need for people to appear consistent, so that if people first say they support protecting the environment, for example, they are then far more likely to agree to donate money to a green charity. But although our desire to be consistent can tie us in knots, for a reasoning person it is far preferable to the alternative, which is to revel in inconsistency and to feel no compulsion to avoid contradiction. Other persuasion factors highlighted by Cialdini are things which, you could argue, naturally accrue to someone who is more rational: they are more likeable, have more authority, and are more likely to gather social proof (lots of people will agree with them). Maybe relying on these factors to judge whether you should be persuaded can lead to irrational mistakes, but in the long term they might help distinguish more rational from less rational arguments.

Paul Bloom is a proponent of the power of reasoning in moral persuasion, arguing that we have direct evidence of the power of reasoning in cases where morality has changed - over time, people have been persuaded to accept gay marriage, for example, or to reject slavery. Reasoning may not be as fast as intuition, as Haidt claims, but it can play a role in where those intuitions come from.

Bloom cites an idea Peter Singer describes in his book "The Expanding Circle". This is that when you decide to make a moral argument - i.e. an argument about what is right or wrong - you must to some extent step outside of yourself and adopt an impartial perspective. If you want to persuade others that you should have a larger share of the food, you need to advance a rule that the other people can agree to. "I should get more because I'm me" won't persuade anyone, but "I should get more because I did more work, and people who did more work should get more" might. But once you employ an impartial perspective to persuade, you lend force to a general rule, which may take on a life of its own. Maybe tomorrow you slack off, so your own rule will work against you. In order to persuade, you struck a bargain with the group's shared understanding of what's reasonable. Once you've done this, Singer argues, you breathe life into the internal logic of argument. The "impartial perspective" develops its own dynamic, driving reason forward quite apart from the external influences of emotion, prejudice and environment. Not only can the arguments you advance come back to bite you, but they might even lead you to conclusions you didn't expect when you first formulated them.

Conclusion

So where does this leave us?

Are we a rational animal, or, as Robert Heinlein said, merely a rationalising one? Sure, there's no shortage of evidence that our intuitions, emotions, prejudices and motivations can push reason around. Good luck to you if you want to use only argument to persuade - unless you've got people who already like you or trust you (ideally both), you're going to have a hard time. But amidst the storm and shouting of psychological factors, reason has a quiet power. People do change each other's minds, and if you can demonstrate the truth of your point of view, or help someone come to realise the shortcomings of theirs, maybe you can shift them along. But beware Singer's warning - logic has its own dynamic. If you open yourself up to sincerely engage in argument, then it is as likely that your interlocutor will persuade you as the other way around. After all, none of us has sole claim on what it means to be rational.
