Confirmation bias is the tendency to search for or interpret information in a way that confirms one's beliefs or hypotheses. Some results also show that people set higher standards of evidence for hypotheses that go against their current expectations, an effect sometimes referred to as "disconfirmation bias". However, I would like to know if there is a specific name (and literature) for the tendency among scientists to resist disconfirmation of their theories.
I understand confirmation bias as including this. The Wikipedia page you linked has a section on "persistence of discredited beliefs" that supports my reading:
Confirmation biases can be used to explain why some beliefs persist when the initial evidence for them is removed. This belief perseverance effect has been shown by a series of experiments using what is called the "debriefing paradigm": participants read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.
A common finding is that at least some of the initial belief remains even after a full debrief. In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.
In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test. This fictional data was arranged to show either a negative or positive association: some participants were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague. Even if these two case studies were true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the participants found them subjectively persuasive. When the case studies were shown to be fictional, participants' belief in a link diminished, but around half of the original effect remained. Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.
44. Ross, Lee; Anderson, Craig A. (1982), "Shortcomings in the attribution process: On the origins and maintenance of erroneous social assessments", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University Press, pp. 129-152, ISBN 978-0-521-28414-1, OCLC 7578020
45. Nickerson, Raymond S. (1998), "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175-220, doi:10.1037/1089-2680.2.2.175, ISSN 1089-2680, p. 187
46. Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974, p. 99
47. Ross, Lee; Lepper, Mark R.; Hubbard, Michael (1975), "Perseverance in self-perception and social perception: Biased attributional processes in the debriefing paradigm", Journal of Personality and Social Psychology (American Psychological Association) 32 (5): 880-892, doi:10.1037/0022-3514.32.5.880, ISSN 0022-3514, PMID 1185517 via Kunda 1999, p. 99
48. Anderson, Craig A.; Lepper, Mark R.; Ross, Lee (1980), "Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information", Journal of Personality and Social Psychology (American Psychological Association) 39 (6): 1037-1049, doi:10.1037/h0077720, ISSN 0022-3514
This isn't quite what you are looking for, but it's close enough that it might help you find additional information.
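The debriefing-paradigm measurement described in the quoted passage can be pictured as a simple before/after calculation: compare the belief shift produced by the fake feedback with the shift that survives the debriefing. A minimal sketch, using purely hypothetical rating numbers (not data from any actual study):

```python
# Toy illustration of the debriefing-paradigm measurement.
# All numbers below are hypothetical, chosen only to mirror the
# "around half of the original effect remained" finding.

def perseverance_fraction(baseline, post_feedback, post_debrief):
    """Fraction of the feedback-induced belief shift that survives debriefing."""
    initial_shift = post_feedback - baseline
    residual_shift = post_debrief - baseline
    return residual_shift / initial_shift

# Hypothetical self-rating of task ability on a 0-10 scale:
# before feedback, after (fake) positive feedback, and after a full debrief.
fraction = perseverance_fraction(baseline=5.0, post_feedback=8.0, post_debrief=6.5)
print(fraction)  # 0.5, i.e. half of the original effect remains
```

A fraction of 0 would mean the debriefing fully erased the induced belief; a fraction near 0.5 corresponds to the partial perseverance reported above.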
Munro (2010) found evidence that people tend to discount the possibility of studying a topic scientifically when presented with scientific evidence that goes against their current beliefs. In other words, when shown a result that contradicted their beliefs, people were more likely to conclude that rigorous scientific experimentation on the topic was not possible. He called this the scientific impotence excuse. In the experiment, subjects read an (artificial) abstract claiming either that there was a link between homosexuality and mental illness or that there was no link. A control group saw the same abstracts, except that the link involved a made-up group ("Zavs") instead of homosexuality. After reading the abstract, subjects were asked whether it was possible to investigate such links with scientific methods. If the abstract's results contradicted a subject's prior beliefs (as measured by a questionnaire at the start of the experiment), the subject was more likely to say that the question could not be investigated scientifically.
This study was done with college students, who are not trained scientists, so it's unclear whether the result would generalize to scientists or not.
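The key contrast in Munro's design is a simple group comparison: average "this cannot be studied scientifically" agreement among subjects whose prior beliefs were contradicted versus those whose beliefs were not. A minimal sketch with invented ratings (the numbers and the 1-7 scale here are illustrative assumptions, not Munro's data):

```python
# Hypothetical illustration of the comparison in Munro's design.
# Each record: whether the abstract contradicted the subject's prior belief,
# and agreement (1-7) that the question CANNOT be studied scientifically.
from statistics import mean

responses = [
    {"contradicts_prior": True,  "impotence_rating": 5},
    {"contradicts_prior": True,  "impotence_rating": 6},
    {"contradicts_prior": False, "impotence_rating": 2},
    {"contradicts_prior": False, "impotence_rating": 3},
]

def mean_rating(data, contradicts):
    """Mean 'scientific impotence' rating for one condition."""
    return mean(r["impotence_rating"] for r in data
                if r["contradicts_prior"] == contradicts)

# The reported pattern: higher "can't be studied" agreement when the
# result contradicts prior beliefs.
print(mean_rating(responses, True), mean_rating(responses, False))
```

The effect Munro reports corresponds to the first mean being reliably higher than the second.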
I think this is more related to the observer-expectancy effect, where researchers who are looking for a certain result may inadvertently manipulate the experiment or interpret the results so that they match their expectations. That's what you are looking for, I suppose.