Tuesday, July 28, 2020

The Red Team Challenge (Part 4): The Wildcard Reviewer

This is a guest blog by Tiago Lubiana, Ph.D. Candidate in Bioinformatics, University of São Paulo.

Read also Part 1, Part 2, and Part 3 of The Red Team Challenge

Two remarkable moments in a researcher's life are publishing your first first-author article and being asked by a journal editor, for the first time, to review a paper.

Well, at least I imagine so. I haven't experienced either yet. Still, for some reason, the authors of the Red Team Challenge accepted me as a (paid) reviewer for their audacious project.

I believe I am one of the few scientists to receive money for a peer review before doing any unpaid peer reviews. I'm also perhaps one of the few to review a paper before having any first-author papers. Quite likely, I am the first to do both at the same time.

I am, nevertheless, a science aficionado. I've breathed science for the past 9 years, working in 10 different laboratories before joining the Computational Systems Biology Lab at the University of São Paulo, where I am pursuing my PhD. I like this whole business of understanding more about the world, reading, doing experiments, and sharing findings with people. That is my thing (to be fair, that is likely our thing).

I also had my crises with the scientific world. A lot of findings in the literature are contradictory. And many others are simply wrong. And they stay wrong, right? It is incredible, but people usually do not update articles even when corrections are clearly needed. The all-powerful, waxy stamp of "peer-reviewed" is given to a monolithic text-and-figure-and-table PDF, and this PDF is then frozen forever in the hall of fame. And it costs a crazy amount of money to lock this frozen PDF behind paywalls.

I have always been very thorough in my evaluation of any work. With time, I discovered that, for some reason, people don't like to have their work criticized (who would have imagined, huh?). That can be attenuated with a lot of training in how to communicate. But even then, people frown upon even constructive criticism. If it is criticism of something that is already published, it is even worse. So I got quite excited when I saw this call for people to have carte blanche to criticize a piece of work as much as possible.

I got to know the Red Team Challenge via a WhatsApp message from Olavo Amaral, who leads the Brazilian Reproducibility Initiative. Well, it looked cool, it paid a fairly decent amount of money, and the application was simple (it did not require a letter of recommendation or anything like that). So I thought: “Why not? I do not know a thing about psychology, but I can maybe spot a few uncorrected multiple comparisons here and there, and I can definitely look at the code.”

I got lucky that the Blue Team (Coles et al.) had a place for random skills (the so-called wildcard category) in their system for selecting reviewers. About a week after applying, I received an email stating that I had been chosen as a reviewer. “Great! What do I do now?”

I was obviously a bit scared of making a big blunder, or at least of performing way below expectations. But one thing reassured me: I was hired. This was not an invitation based on expectations or on a pre-existing relationship with the people hiring me. They were actually paying me, and my tasks for the job were crystal clear.

If I somehow failed to provide a great review, it would not affect my professional life whatsoever (I guess). I had only the responsibility to do a good job that any person has after signing a contract.

I am not a psychologist by training, so I knew beforehand that the details of the work would be beyond my reach. However, after reading the manuscript, I felt even worse: the manuscript was excellent. Or at least, as far as I could tell as an outsider, a lot of care had been taken in planning it, down to important experimental details.

It is not uncommon for me to cringe at a few dangling uncorrected p-values here and there, even when reading something slightly outside my expertise. Or to find some evidence of optional stopping. Or to pinpoint statistical tests for which you cannot really tell what the null hypothesis is or what is actually being tested.
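To make the first of those concrete: a minimal sketch, in R (my choice of language; the numbers are invented and have nothing to do with the manuscript), of what correcting a batch of p-values for multiple comparisons looks like:

    # Four raw p-values from hypothetical tests; all values are invented.
    raw_p <- c(0.004, 0.03, 0.04, 0.20)

    # Adjust for multiple comparisons with the Benjamini-Hochberg procedure,
    # which controls the false discovery rate; base R's p.adjust does this.
    adjusted_p <- p.adjust(raw_p, method = "BH")
    print(adjusted_p)  # 0.016 0.053 0.053 0.200

Note how the two borderline tests no longer clear the conventional 0.05 line once adjusted.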

None of that happened here. However, everyone involved knew that I was not a psychologist. I was plucked from the class of miscellaneous reviewers. From the start, I knew that I could contribute the most by reviewing the code.

I am a computational biologist, and our peers in the computer sciences usually look down on our code. Software engineers, for example, harshly criticized the code of a high-profile epidemiological model. To be fair, I would say that a lack of computational reproducibility is pervasive throughout science, and not restricted to one discipline or another.

Luckily, I have always been interested in best practices. I might not always follow them (“do what I say, not what I do”), mainly because of environmental constraints. Journals don't require clean code, for example. And I've never heard of “proofreading” for the scripts that come alongside papers.

It was a pleasant surprise to see that the code for the paper was good, better than most of the code I've seen in biology-related scripts. It was filled with comments. The required packages were laid down at the beginning of the script. The environment was cleared in the first line, so as to avoid dangling global variables.
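Just to illustrate that layout (this is not the authors' code; R is my assumption, suggested by the practices above, and the toy analysis is mine):

    # Sketch of the script layout described above, not the actual scripts.
    rm(list = ls())  # first line: clear the environment, no dangling globals

    # Required packages laid down at the beginning of the script
    library(stats)   # base statistics, attached explicitly for clarity

    # Each step commented; here, a toy model on a built-in dataset
    fit <- lm(mpg ~ wt, data = mtcars)  # fuel economy as a function of weight
    summary(fit)                        # inspect coefficients and fit quality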

These are all good practices. If journals asked reviewers to check code (which they usually do not), it would come out virtually unscathed.

But I was being paid to review it, so I had to find something. There are things that can improve one's code and make it much easier to check and validate. You can avoid commenting too much by using clear variable names, and you do not have to lay down the packages used if the code is containerized (with Docker, for example). A bit of refactoring could also be done here and there, extracting functions for steps that were repeated across the code; a hypothetical before-and-after follows. That, honestly, was mostly what my review focused on.
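Here is that before-and-after. Everything in it, from the data to the repeated standardization step, is invented for illustration and not taken from the reviewed scripts:

    # Before: the same transformation written out repeatedly, each copy
    # needing its own comment to explain the intent:
    #   anxiety_z   <- (d$anxiety - mean(d$anxiety)) / sd(d$anxiety)
    #   happiness_z <- (d$happiness - mean(d$happiness)) / sd(d$happiness)

    # After: the repeated step extracted into one clearly named function,
    # so the call sites explain themselves without extra comments.
    standardize <- function(scores) {
      (scores - mean(scores)) / sd(scores)  # center and scale to z-scores
    }

    d <- data.frame(anxiety = c(2, 4, 5, 3), happiness = c(7, 6, 8, 9))
    anxiety_z   <- standardize(d$anxiety)
    happiness_z <- standardize(d$happiness)

The behavior is identical; the reader just has one fewer pattern to re-verify at every occurrence.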

Although these things are relatively minor, they do make a difference. It is a bit like the difference in prose between a non-writer and an experienced writer. The raw content might be the same, but the effectiveness of the communication can vary a lot. And reading code can already be challenging, so it is always good to make it easier for the reader (and the reviewer, by extension).

Anyway, I sent 11 issue reports (below the mean of ~20, but precisely the median of 11 reports per reviewer), and Ruben Arslan, the neutral arbiter, considered one of them to be a major issue. Later, Daniël and Nicholas mentioned that the reviews were helpful, so I am led to believe that I somehow contributed to future improvements of this work. Science wins, I guess.

One interesting aspect of being hired by the authors is that I did not feel compelled to state whether I thought the work was relevant or novel. The work is obviously important to the authors who hired me. The current peer-review system mixes the evaluation of thoroughness and of novelty under the same banner. That might be suboptimal in some cases. A good reviewer of statistics or code, for example, might not feel they can tell how much a “contribution is significant or only incremental,” as currently required. If that were a requirement for the Red Team Challenge, I would not have been able to be part of the Red Team.

This mixing of functions may be preventing us from getting more efficient reviews. We know that gross mistakes pass peer review. I, for one, would sooner trust a regularly updated preprint that has received thorough, open, commissioned peer review. I am sure we can come up with better ways of giving “this-is-good-science” stamps and of improving the effectiveness of peer review.

To sum up, it felt very good to be in a system with the right incentives. Amidst this whole pandemic and the chaos everywhere, I ended up being part of something really wonderful. Nicholas, Daniël, and all the others involved in the Red Team Challenge are providing prime evidence that an alternative system is viable. Maybe one day we will have reviewer-for-hire marketplaces and more adequate review incentives. When that day comes, I will be there, be it hiring or being hired.
