Monday, May 11, 2020

Red Team Challenge

by Nicholas A. Coles, Leo Tiokhin, Ruben Arslan, Patrick Forscher, Anne Scheel, & Daniël Lakens


All else equal, scientists should place more trust in studies and theories that have been more critically evaluated. The more a scientific product has been exposed to processes designed to detect flaws, the more researchers can trust it (Lakens, 2019; Mayo, 2018). Yet there are barriers to adopting critical approaches in science. Researchers are susceptible to biases such as confirmation bias, the “better than average” effect, and groupthink. Researchers may gain a competitive advantage for jobs, funding, and promotions by sacrificing rigor in order to produce larger quantities of research (Heesen, 2018; Higginson & Munafò, 2016) or to win priority races (Tiokhin & Derex, 2019). And even if researchers were transparent enough to allow others to critically examine their materials, code, and ideas, there is little incentive for others, including peer reviewers, to do so. Together, these factors may hinder science’s ability to detect errors and self-correct (Vazire, 2019).

Today we announce an initiative that we hope can incentivize critical feedback and error detection in science: the Red Team Challenge. Daniël Lakens and Leo Tiokhin are offering a total of $3,000 for five individuals to provide critical feedback on the materials, code, and ideas in the forthcoming preprint titled “Are facial feedback effects solely driven by demand characteristics? An experimental investigation”. The preprint examines the role of demand characteristics in research on the controversial facial feedback hypothesis: the idea that an individual’s facial expressions can influence their emotions. Coles and colleagues will submit this project for publication in parallel with the Red Team Challenge. We hope the challenge will serve as a useful case study of the role Red Teams might play in science.

We are looking for five individuals to join “The Red Team”. Unlike traditional peer review, this Red Team will receive financial incentives to identify problems. Each Red Team member will receive a $200 stipend to find problems, including (but not limited to) errors in the experimental design, materials, code, analyses, logic, and writing. In addition to these stipends, we will donate $100 to a GiveWell top-ranked charity (maximum total donations: $2,000) for every new “critical problem” detected by a Red Team member. Defining a “critical problem” is subjective, but a neutral arbiter, Ruben Arslan, will make these decisions transparently. At the end of the challenge, we will release: (1) the names of the Red Team members (if they wish to be identified), (2) a summary of the Red Team’s feedback, (3) how much each Red Team member raised for charity, and (4) the authors’ responses to the Red Team’s feedback.

If you are interested in joining the Red Team, you have until May 14th to sign up here. At this link, you will be asked for your name, email address, and a brief description of your expertise. If more than five people wish to join the Red Team, we will categorize people ad hoc based on expertise (e.g., theory, methods, reproducibility) and randomly select individuals from each category. On May 15th, we will notify people whether they have been chosen to join the Red Team.
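For readers curious how such a selection might look in practice, here is a minimal sketch in Python. It is purely illustrative and not a procedure we have committed to: the function name, the category labels, and the round-robin draw across categories are assumptions made only for this example.

import random

def select_red_team(signups, n_members=5, seed=None):
    # signups: dict mapping an expertise category (e.g., "theory",
    # "methods", "reproducibility") to a list of candidate names.
    rng = random.Random(seed)
    # Shuffle the candidates within each category.
    pools = {category: rng.sample(names, len(names))
             for category, names in signups.items()}
    selected = []
    # Cycle through the categories, drawing one candidate at a time,
    # until five members are chosen or no candidates remain.
    while len(selected) < n_members and any(pools.values()):
        for category in pools:
            if pools[category] and len(selected) < n_members:
                selected.append(pools[category].pop())
    return selected

# Hypothetical sign-ups: seven candidates across three categories.
signups = {
    "theory": ["A", "B", "C"],
    "methods": ["D", "E"],
    "reproducibility": ["F", "G"],
}
print(select_red_team(signups, seed=2020))

Any comparable rule would do; the point is only that, when more than five people sign up, selection within expertise categories is random rather than hand-picked.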

For us, this is a fun project for several reasons. Some of us are simply interested in the feasibility of Red Team challenges in science (Lakens, 2020). Others want feedback about how to make such challenges more scientifically useful and to develop best practices. And some of us (mostly Nick) are curious to see what good and bad might come from throwing one’s own project into the crosshairs of financially incentivized research skeptics. Regardless of our diverse motivations, we’re united by a common interest: improving science by recognizing and rewarding criticism (Vazire, 2019).


References
Heesen, R. (2018). Why the reward structure of science makes reproducibility problems inevitable. The Journal of Philosophy, 115(12), 661-674.
Higginson, A. D., & Munafò, M. R. (2016). Current incentives for scientists lead to underpowered studies with erroneous conclusions. PLoS Biology, 14(11), e2000995.
Lakens, D. (2019). The value of preregistration for psychological science: A conceptual analysis. Japanese Psychological Review.
Lakens, D. (2020). Pandemic researchers — recruit your own best critics. Nature, 581, 121.
Mayo, D. G. (2018). Statistical inference as severe testing. Cambridge: Cambridge University Press.
Tiokhin, L., & Derex, M. (2019). Competition for novelty reduces information sampling in a research game: A registered report. Royal Society Open Science, 6(5), 180934.
Vazire, S. (2019). A toast to the error detectors. Nature, 577, 9.
