Webinar with Amy Orben

In collaboration with Open Science Basel, CONP presented a webinar with Amy Orben on Thursday, November 26, 2020, entitled “Approaches to Scientific Error”, on errors in science and the ability of the scientific method to detect and correct them. The video of Orben’s presentation and her PowerPoint slides are available for later viewing.

Amy Orben is a College Research Fellow at Emmanuel College, University of Cambridge, and a Research Fellow at the MRC Cognition and Brain Sciences Unit, University of Cambridge. She completed an MA in Natural Sciences at the University of Cambridge before joining the University of Oxford to obtain her DPhil in Experimental Psychology. Amy’s research uses large-scale data to examine how digital technologies affect adolescent psychological well-being and mental health. She uses innovative and rigorous statistical methodology to shed new light on pressing questions debated in policy, parenting and mental health. She campaigns for better communication of trends in data and the wider adoption of Open Science. She is also co-founder and Chair of the ReproducibiliTea Journal Club and helps produce its Podcast.

ReproducibiliTea describes itself as “a grassroots journal club initiative that helps researchers create local Open Science journal clubs at their universities to discuss diverse issues, papers and ideas about improving science, reproducibility and the Open Science movement.”

Orben began her presentation by citing an Economist article, “Trouble at the Lab”. Using a figure taken from this article, she explained “how a small proportion of false positives could prove to be very misleading” and that a high percentage of research hypotheses are actually false, a rate that could be higher or lower depending on the research field. In his article, “Why Most Published Research Findings Are False”, John P.A. Ioannidis concluded that most published findings are false, yet other researchers build their own studies on them.
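As a rough illustration of why this happens, the base-rate arithmetic can be sketched as follows (the numbers below are chosen for illustration only and are not Orben’s or the article’s exact figures): when only a small share of tested hypotheses are true, false positives can rival true positives even at conventional error rates.

```python
# Illustrative base-rate arithmetic (assumed example values, not figures
# from the webinar or the Economist article).
n_hypotheses = 1000
prior_true = 0.10   # share of tested hypotheses that are actually true
power = 0.80        # chance a true effect is detected
alpha = 0.05        # chance a false hypothesis still yields a "positive"

true_positives = n_hypotheses * prior_true * power          # 80
false_positives = n_hypotheses * (1 - prior_true) * alpha   # 45

share_false = false_positives / (true_positives + false_positives)
print(f"{share_false:.0%} of 'significant' findings would be false")  # ~36%
```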

Orben pointed out that the problem is not that there are errors in the scientific literature, but rather the assumption that errors will be automatically self-corrected (through retesting, for example). In practice, self-correction does not occur on its own, and many factors work against it: publication bias, fraud, and underpowered studies. Orben quoted a 2017 tweet by James Heathers: “‘Science is self-correcting’ – sure, *when we correct it*, not because of Magical Progress™.” Orben said, “We need to make a conscious effort to correct our literature, our work and others’ work … [E]rrors are probably everywhere, and we shouldn’t just assume that they will be found and that they’ll be rectified, so we need to have an active role.”

Orben proceeded to show “what you can do for your own work and for other people’s work.” She outlined the issues and process of correcting one’s own research and of correcting the work of others, both of which require active steps.

Orben admitted that correcting errors in one’s own work is hard, especially for young researchers. A published paper could be based on three years of work, and when rechecking the data reveals a coding error, it is hard to admit the mistake and retract the paper. Orben noted that one study showed that “retractions due to honest error actually do not result in reputational damage for junior researchers.” Another study showed that “reputation seems to be based on how we pursue knowledge, certainty and how we respond to replication evidence rather than whether our initial results are true.” She shared a corrigendum that she had published on her own work; her reputation did not suffer because of it. Even Frances Arnold, a Nobel Prize winner, retracted a paper and received very positive reactions from colleagues. Orben also pointed out that “open science will make spotted errors increasingly likely,” such as coding errors, especially since most researchers are self-taught programmers. She herself uploads all her data and analysis, keeps a frozen version, and uses a second version that she updates over time as errors and improvements come to light.

On the other hand, calling out errors in other people’s work has been more controversial and at times sharply criticized. Orben mentioned the work of Nick Brown and James Heathers, who are very vocal in calling out questionable data and errors in others’ work, and a column written by Susan T. Fiske (past president of the Association for Psychological Science) that coined the phrase “methodological terrorism” to describe such behaviour. Orben quoted Simine Vazire’s Slate magazine article, which notes that those who point out errors are “accused of damaging their field, or worse.” Yet criticism is the “bedrock of scientific method,” Orben said. Criticizing a paper is not criticizing its author, and all need to ensure that fear of interpersonal conflict does not dissuade people from engaging in debate.

Orben highlighted the need for error detectors as well as for a change in the research environment. It takes time, more training, departmental policies and funder recognition to encourage researchers to spot and discuss errors, both in their own work and in others’.

Orben also differentiated between scientific error and fraud. “There has been work saying that about 1% of scientists say that they have falsified data at some point in their life, which is actually 1 in 100 – it’s very high.” Orben spotlighted the work of Elisabeth Bik, who has shown a particular talent for visually spotting duplications in Western blots. In a review of about 20,000 papers, Bik found that one out of 25 had problematic images (images copied and pasted, partial images duplicated or resized, etc.) and that the prevalence of problematic images was on the rise.

There are also cases of scientists who commit outright data fraud. Orben presented the case of Anil Potti, a cancer researcher whose results were widely used. Keith Baggerly and Kevin Coombes, biostatisticians, spent 1,500 hours checking Potti’s work; they were unable to replicate the results and found both honest errors and evidence of fraud. Their findings were met with disbelief and only gained attention when it was learned that Potti had falsified his CV.

Scientific error can also come from general sloppiness in research and analysis. Orben outlined the case of Brian Wansink, a prominent marketing professor known for food research, and how he encouraged his students to p-hack and engage in other sloppy research practices. Tim van der Zee, Jordan Anaya and Nick Brown reviewed his papers and found statistical inconsistencies, duplicated text across papers (self-plagiarism), and other problems.

Orben shared several tools that make error detection easier: Statcheck, online software that checks reported p-values, a kind of “spell check for statistics”; GRIM (granularity-related inconsistency of means), developed by Heathers, Anaya and Brown to check whether reported means are possible given the sample size; GRIMMER, which applies the same logic to standard deviations; and SPRITE, which uses iterative techniques to reconstruct plausible raw data from reported summary statistics.
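To make the idea behind these consistency checks concrete, here is a minimal sketch of a GRIM-style test in Python. It illustrates the general logic (a mean of integer-valued data must correspond to some integer sum), not the authors’ reference implementation; the function name and the rounding handling are assumptions for this sketch.

```python
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style check: could a mean reported to `decimals` places
    arise from n integer-valued responses?"""
    target = round(mean, decimals)
    implied_sum = mean * n
    # The reported mean was itself rounded, so test the nearby integer sums.
    for total in (int(implied_sum) - 1, int(implied_sum), int(implied_sum) + 1):
        if total >= 0 and abs(round(total / n, decimals) - target) < 1e-9:
            return True
    return False

# A mean of 5.19 from 28 integer responses is impossible: no integer sum
# divided by 28 rounds to 5.19 (145/28 = 5.18, 146/28 = 5.21).
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True (sum = 145)
```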

Orben concluded by reiterating that even in a perfect world, science would be “riddled with correct and incorrect scientific results”, and that “we need to encourage an environment” where science can self-correct rather than taking self-correction for granted. “All too often, error detection is seen as personal criticism if you’re doing it to somebody else or it’s seen as a big problem if you’re doing it to your own work.” Fortunately, “there are a rising number of tools that can be used for error detection.” But even more importantly, “we do need a better culture about talking about errors because it is such a crucial part of science. But we are often made to feel like we need to be infallible. We’re not allowed to make errors and other people are not allowed to make errors. … I think that’s just very far away from the truth.”

During the Q&A, Orben mentioned the lack of professional recognition and grant funding for error detection. Bik, for example, does error detection full time, putting it ahead of her own scientific career. There needs to be more funding for professional code checking (most scientists are not trained programmers) and for data checking during peer review. Orben pointed out the need for everyone to participate in creating an environment that encourages error detection, both in one’s own work and in pointing out errors in others’ work, especially for younger researchers.

In particular, Orben believes that using an open science workflow would contribute greatly to error detection and to fostering an environment where error detection is the norm. Putting her code online has led Orben to check and correct it multiple times. Even if the code is not shared, one may receive an email later with questions about the data. She does not think of herself as a sloppy person, but she notes that researchers are doing difficult work, transcribing a lot of data and storing it for long periods of time, so all need to find ways to safeguard themselves. “Making open by default holds us to higher standards,” she affirmed.

Prepared by:
Mary Chin