The thinking error at the root of science denial



May 8, 2018 11.45am BST

Could seeing things in black-and-white terms influence people’s views on scientific questions? Lightspring/Shutterstock.com


http://theconversation.com/the-thinking-error-at-the-root-of-science-denial-96099

Currently, there are three important issues on which there is scientific consensus but controversy among laypeople: climate change, biological evolution and childhood vaccination. On all three issues, members of the Trump administration, including the president, have lined up against the conclusions of research.

This widespread rejection of scientific findings presents a perplexing puzzle to those of us who value an evidence-based approach to knowledge and policy.

Yet many science deniers do cite empirical evidence. The problem is that they do so in invalid, misleading ways. Psychological research illuminates these ways.

No shades of gray

As a psychotherapist, I see a striking parallel between a type of thinking involved in many mental health disturbances and the reasoning behind science denial. As I explain in my book “Psychotherapeutic Diagrams,” dichotomous thinking, also called black-and-white and all-or-none thinking, is a factor in depression, anxiety, aggression and, especially, borderline personality disorder.

In this type of cognition, a spectrum of possibilities is divided into two parts, with a blurring of distinctions within those categories. Shades of gray are missed; everything is considered either black or white. Dichotomous thinking is not always or inevitably wrong, but it is a poor tool for understanding complicated realities because these usually involve spectrums of possibilities, not binaries.

Spectrums are sometimes split in very asymmetric ways, with one-half of the binary much larger than the other. For example, perfectionists categorize their work as either perfect or unsatisfactory; good and very good outcomes are lumped together with poor ones in the unsatisfactory category. In borderline personality disorder, relationship partners are perceived as either all good or all bad, so one hurtful behavior catapults the partner from the good to the bad category. It’s like a pass/fail grading system in which 100 percent correct earns a P and everything else gets an F.

In my observations, I see science deniers engage in dichotomous thinking about truth claims. In evaluating the evidence for a hypothesis or theory, they divide the spectrum of possibilities into two unequal parts: perfect certainty and inconclusive controversy. Any bit of data that does not support a theory is misunderstood to mean that the formulation is fundamentally in doubt, regardless of the amount of supportive evidence.

Similarly, deniers perceive the spectrum of scientific agreement as divided into two unequal parts: perfect consensus and no consensus at all. Any departure from 100 percent agreement is categorized as a lack of agreement, which is misinterpreted as indicating fundamental controversy in the field.

There is no ‘proof’ in science

In my view, science deniers misapply the concept of “proof.”

Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process. Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.

I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.

For example, climate change skeptics jump from the realization that we do not completely understand all climate-related variables to the inference that we have no reliable knowledge at all. Similarly, they give equal weight to the 97 percent of climate scientists who believe in human-caused global warming and the 3 percent who do not, even though many of the latter receive support from the fossil fuel industry.

This same type of thinking can be seen among creationists. They seem to misinterpret any limitation or flux in evolutionary theory to mean that the validity of this body of research is fundamentally in doubt. For example, the biologist James Shapiro (no relation) discovered a cellular mechanism of genomic change that Darwin did not know about. Shapiro views his research as adding to evolutionary theory, not upending it. Nonetheless, his discovery and others like it, refracted through the lens of dichotomous thinking, result in articles with titles like, “Scientists Confirm: Darwinism Is Broken” by Paul Nelson and David Klinghoffer of the Discovery Institute, which promotes the theory of “intelligent design.” Shapiro insists that his research provides no support for intelligent design, but proponents of this pseudoscience repeatedly cite his work as if it does.

For his part, Trump engages in dichotomous thinking about the possibility of a link between childhood vaccinations and autism. Despite exhaustive research and the consensus of all major medical organizations that no link exists, Trump has repeatedly asserted that such a link does exist, and he advocates changing the standard vaccination protocol to protect against this nonexistent danger.

There is a vast gulf between perfect knowledge and total ignorance, and we live most of our lives in this gulf. Informed decision-making in the real world can never be perfectly informed, but responding to the inevitable uncertainties by ignoring the best available evidence is no substitute for the imperfect approach to knowledge called science.
