The 20% Statistician

A blog on statistics, methods, philosophy of science, and open science. Understanding 20% of statistics will improve 80% of your inferences.

Wednesday, September 4, 2024

Why I don’t expect to be convinced by evidence that scientific reform is improving science (and why that is not a problem)

For roughly a decade now, there has been sufficient momentum in science to not just complain about things scientists do wrong, but to actually do something about it. When social psychologists declared a replication crisis in the 1960s and 1970s, nothing much changed (Lakens, 2023). They also complained about bad methodology, flexibility in the data analysis, and a lack of generalizability and applicability, but no concrete actions to improve things emerged from this crisis.

 

After the 2010 crisis in psychology, scientists did make changes to how they work. Some of these changes were principled, others less so. For example, badges were introduced for certain open science practices, and researchers implementing these practices would get a badge presented alongside their article. This was not a principled change, but a nudge to change behavior. There were also more principled changes. For example, if researchers say they make error-controlled claims at a 5% alpha level, then they should actually make error-controlled claims at a 5% alpha level, and they should not engage in research practices that untransparently inflate the Type 1 error rate. The introduction of a practice such as preregistration had the goal of preventing untransparent inflation of Type 1 error rates, by making any possible inflation transparent. This is a principled change because it increases the coherence of research practices.

 

As these changes in practice became more widely adopted, a large group of researchers was confronted with requirements such as having to justify their sample size, indicate whether they deserved an open science badge, or make explicit that a claim was exploratory (i.e., not error controlled). As more people were confronted with these changes, the absolute number of people critical of them increased. A very reasonable question to ask as a scientist is ‘Why?’, and so people asked: ‘Why should I do this new thing?’.

 

There are two ways to respond to the question why scientific practices need to change. The first justification is ‘because science will improve’. This is an empirical justification. The world is currently in a certain observable state, and if we change things about our world, it will be in a different, but better, observable state. The second justification is ‘because it logically follows’. This is, not surprisingly, a logical argument. There is a certain way of working that is internally inconsistent, and there is a way of working that is consistent.

 

An empirical justification requires evidence. A logical justification requires agreement with a principle. If we want to justify preregistration empirically, we need to provide evidence that it improved science. If you want to disagree with the claim that preregistration is a good idea, you need to disagree with the evidence. If we want to justify preregistration logically, we need people to agree with the principle that researchers should be able to transparently evaluate how coherently their peers are acting (e.g., that peers are not saying they are making an error-controlled claim when in actuality they did not control their error rate).

 

Why evidence for better science is practically impossible.

Although it is always difficult to provide strong evidence for a claim, some things are more difficult to study than others. Providing evidence that a change in practice improves science is so difficult that it might be practically impossible. Paul Meehl, one of the first meta-scientists, developed the idea of cliometric metatheory: the empirical investigation of which theories are doing better than others. He proposed to follow different theories for something like 50 years, and see which ones lead to greater scientific progress. If we want to provide evidence that a change in practice improves science, we need something similar. The time scale alone makes the empirical study of what makes science ‘better’ difficult.

But we also need to collect evidence for a causal claim, which requires excluding confounders. A good start would be to randomly assign half of the scientists to preregister all their research for the next fifty years, and instruct the other half not to. This is the second difficulty: it is practically impossible to go beyond observational data, and observational data will always have confounds. But even if we could manipulate something, the assumption that the control condition is not affected by the manipulation is too likely to be violated. The people who preregister will – if they preregister well – have no flexibility in the data analysis, and their alpha levels are controlled. But the people in the control condition know about preregistration as well. After p-hacking their way to a p = 0.03 in Study 1, p = 0.02 in Study 2, and p = 0.06 (marginally significant) in Study 3, they will look at their studies and wonder whether reviewers will take this set of studies seriously. Probably not. So they develop new techniques to publish evidence for what they want to be true – for example, by performing large studies with unreliable measures and a tiny sprinkle of confounds, which consistently yield low p values.

So, after following scientists for 50 years, we end up with evidence that is not particularly difficult to poke holes in. We would have invested a huge amount of effort for what we should know from the outset will yield very little gain.

 

As we wrote in our recent paper “The benefits of preregistration and Registered Reports” (Lakens et al., 2024):

 

It is difficult to provide empirical support for the hypothesis that preregistration and Registered Reports will lead to studies of higher quality. To test such a hypothesis, scientists should be randomly assigned to a control condition where studies are not preregistered, a condition where researchers are instructed to preregister all their research, and a condition where researchers have to publish all their work as a Registered Report. We would then follow the success of theories examined in each of these three conditions in an approach Meehl (2004) calls cliometric metatheory by empirically examining which theories become ensconced, or sufficiently established that most scientists consider the theory as no longer in doubt. Because such a study is not feasible, causal claims about the effects of preregistration and Registered Reports on the quality of research are practically out of reach.

 

At this time, I do not believe there will ever be sufficiently conclusive empirical evidence for causal claims that a change in scientific practice makes science better. You might argue that my bar for evidence is too high – that conclusive empirical evidence in science is rarely possible, but that we can provide evidence from observational studies, perhaps by attempting to control for the most important confounds and measuring decent proxies of ‘better science’ on a shorter time scale. I think this work can be valuable, it might convince some people, and it might even lead to a sufficient evidence base to warrant policy change by some organizations. After all, policies need to be set anyway, and most policies in science are based on weak evidence, at best.

 

A little bit of logic is worth more than two centuries of cliometric metatheory.

 

Psychologists are empirically inclined creatures, and to their detriment, they often trust empirical data more than logical arguments. We published the nine studies on precognition by Daryl Bem because they followed standard empirical methods and yielded significant p values, even when one of the reviewers pointed out that the paper should be rejected because it logically violated the laws of physics. Psychologists too often assign more weight to a p value than to logical consistency.

And yet, a little bit of logic will often yield much greater returns, with much less effort. A logical justification of preregistration does not require empirical evidence. It just needs to point out that it is logically coherent to preregister. Logical propositions have premises and a conclusion: If X, then Y.

In meta-science, logical arguments are of the form ‘if we have the goal to generate knowledge following a certain philosophy of science, then we need to follow certain methodological procedures’. For example, if you think it is a fun idea to take Feyerabend seriously and believe that science progresses in a system that cannot be captured by any rules, then anything goes. Now let’s try a premise that is not as stupid as the one proposed by Feyerabend, and entertain the idea that some ways of doing science are better than others. For example, you might believe that scientists generate knowledge by making statistical claims (e.g., ‘we reject the presence of a correlation larger than r = 0.1’) that are not too often wrong. If this aligns with your philosophy of science, you might think the following proposition is valid: ‘If a scientist wants to generate knowledge by making statistical claims that are not too often wrong, then they need to control their statistical error rates’. This puts us in Mayo’s error-statistical philosophy. We can change the previous proposition, which was formulated at the level of the individual scientist, if we believe that science is not an individual process but a social one. A proposition more in line with a social epistemological perspective would be: “If the scientific community wants to generate knowledge by making statistical claims that are not too often wrong, then it needs to have procedures in place to evaluate which claims were made by statistically controlling error rates”.

 

This in itself is not a sufficient argument for preregistration, because there are many procedures that we could rely on. For example, we can trust scientists. If they do not say anything about flexibly analyzing their data, we can trust that they did not flexibly analyze their data. You can also believe that science should not be based on trust. Instead, you might believe that scientists should be able to scrutinize claims by peers, and that they should not have to take their word for it: Nullius in Verba. If so, then science should be transparent. You do not need to agree with this, of course, just as you did not have to agree with the premise that the goal of science is to generate claims that are not too often wrong. If we include this premise, we get the following proposition: “If the scientific community wants to generate knowledge by making statistical claims that are not too often wrong, and if scientists should be able to scrutinize claims by peers, then they need to have procedures in place for peers to transparently evaluate which claims were made by statistically controlling error rates”.

Now we have a logical argument for preregistration as one change in the way scientists work, because it makes their work more coherent. Preregistration is not the only possible change that makes science coherent. For example, we could also test all hypotheses in the presence of the entire scientific community, for example by live-streaming and recording all research that is being done. This would also be a coherent change to how scientists work, but a much more cumbersome one. The hope is that preregistration, when implemented well, is a more efficient change to make science more coherent.

 

Should logic or evidence be the basis of change in science?

 

Which of the two justifications for changes in scientific practice is more desirable? A benefit of evidence is that it can convince all rational individuals, as long as it is strong enough. But evidence can be challenged, especially when it is weak. This is an important feature of science, but when disagreements about the evidence base cannot be resolved, it quickly leads to ‘even the experts do not agree about what the data show’. A benefit of logic is likewise that it should convince rational individuals, as long as they agree with the premise. But not everyone will agree with the premise. Again, this is an important feature of science. It might be a personal preference, but I actually like disagreements about the premises of what the goals of science are. Where disagreements about evidence are temporarily acceptable but in the long run undesirable, disagreements about the goals of science are good for diversity in science. Or at least that is a premise I accept.

 

As I see it, the goal should not be to convince people to implement certain changes to scientific practice per se, but to get scientists to behave in a coherent manner, and to implement changes to their practice if this makes their practice more coherent. Whether practices are coherent or not is unrelated to whether you believe practices are good or desirable. Those value judgments are part of your decision to accept or reject a premise. You might think it is undesirable that scientists make claims, as this will introduce all sorts of undesirable consequences, such as confirmation bias. Then, you would choose a different philosophy of science. That is fine, as long as you then implement research practices that logically follow from your premises. Empirical research can guide you towards or away from accepting certain premises. For example, meta-scientists might describe facts that make you believe scientists are extremely trustworthy, and transparency is not needed. Meta-scientists might also point out ways in which research practices are not coherent with certain premises. For example, if we believe transparency is important, but most researchers selectively publish results, then we have identified an incoherency that we might need to educate people about, or we need to develop ways for researchers to resolve this incoherency (such as developing preprint servers that allow researchers to share all results with peers). And for some changes to science, such as the introduction of Open Science Badges, there might not be any logical justification (or if one exists, I have not seen it). For those changes, empirical justifications are the only possibility.

 

Conclusion

 

As changes to scientific practice become more institutionalized, it is only fair that researchers ask why these changes are needed. There are two possible justifications: One based on empirical evidence, and one on logically coherent procedures that follow from a premise. Psychologists might intuitively believe that empirical evidence is the better justification for a practice. I personally doubt it. I think logical arguments will often provide a stronger foundation, especially when scientific evidence is practically difficult to collect.

Tuesday, July 23, 2024

New paper: The benefits of preregistration and Registered Reports.

With my PhD students Cristian Mesquida and Sajedeh Rasti, and former lab visitor Max Ditroilo, we published a new paper on preregistration and Registered Reports. We aim to provide a state-of-the-art overview of the idea behind, and the meta-science on, preregistration and Registered Reports. https://www.tandfonline.com/doi/full/10.1080/2833373X.2024.2376046

We explain the link between preregistration and severe testing, and how systematic bias might reduce the severity of tests. Preregistration is a tool to allow others to evaluate the severity of tests.

We provide and defend a narrower use-case of preregistration. In essence, we argue you can only preregister level 6 and level 5 studies from the table in the Peer Community In guide for authors: https://rr.peercommunityin.org/help/guide_for_authors



We deviate from the current consensus, but in the conviction that our use of the term preregistration is more principled, and will become the default in the future (just as the Preregistration+ badge would be seen as the only valid preregistration today). As our understanding changes, so do our definitions.


We summarize 18 surveys on research practices that reduce the severity of tests. You might have seen previous versions of this figure – this is the final published version, in case you want to re-use or cite it. More details on the studies in this figure are available from https://osf.io/sxg7q.



We carefully point out: “It is important to point out that the percentages presented here do not directly translate into the percentage of researchers who are engaging in these practices.” We wish we knew, but we just don’t know. 

We discuss cost-benefit analyses of preregistration, and conclude there are too many unknowns to determine if preregistration is beneficial. We also say it does not really matter, because the main reason to preregister is based on a normative argument.

We say: “researchers who test hypotheses from a methodological falsificationist approach to science should preregister their studies if they want a science that has intersubjectively established severely tested claims.” As always, we believe it is essential to be clear about your philosophy on scientific knowledge generation - not being clear about it can lead to a lot of discussion that will go nowhere (see Lakens, 2019).  

That means we also do not expect people who have different epistemological philosophies to preregister – nor is it a logical solution for exploratory research, or certain types of secondary data analysis. We feel it is important to point this out, because there are alternative approaches to argue a test is severe that are better suited for those studies: open lab notebooks, sensitivity analyses, robustness checks, independent replication. It is always important to use the right tool for the job - we do not want preregistration to be mindlessly overused. 

We discuss meta-scientific evidence that shows preregistration makes it possible to evaluate the severity of tests (and we cite some anecdotal examples). Of course, not all preregistrations are equally good yet – people need more training. 

We also engage with the most important criticisms of preregistration. Beyond the valid concern that the mere presence of a preregistration may be mindlessly used as a proxy for high quality, we identify conflicting viewpoints, several misunderstandings, and a general lack of empirical support for the criticisms that have been raised. I personally feel critics need to raise the bar if they want to be taken seriously. They should at the very least resolve the contradictory criticisms among each other. They should also collect empirical data to test their claims.

I strongly expect this fourth paper (following Nosek & Lakens, 2014, Lakens, 2019, and Lakens, 2024) to be my last contribution to this topic. I have said all I want to say, and contributed all I can with this final paper.

 

Friday, February 9, 2024

Why Effect Sizes Selected for Significance are Inflated

Estimates based on samples from the population will show variability. The larger the sample, the closer our estimates will be to the true population values. Sometimes we will observe larger estimates than the population value, and sometimes we will observe smaller values. As long as we have an unbiased collection of effect size estimates, combining effect size estimates through a meta-analysis can increase the accuracy of the estimate. Regrettably, the scientific literature is often biased. Specifically, it is common for statistically significant studies (e.g., studies with p values smaller than 0.05) to be published, while studies with p values larger than 0.05 remain unpublished (Ensinck & Lakens, 2023; Franco et al., 2014; Sterling, 1959). Instead of having access to all effect sizes, anyone reading the literature only has access to effects that passed a significance filter. This introduces systematic bias in our effect size estimates.

To explain how selection for significance introduces bias, it is useful to understand the concept of a truncated or censored distribution. If we want to measure the average height of people in the Netherlands, we would collect a representative sample of individuals, measure how tall they are, and compute the average score. If we collect sufficient data, the estimate will be close to the true value in the population. However, if we collect data from participants on a theme park ride where people need to be at least 150 centimeters tall to enter, the mean we compute is based on a truncated distribution where only individuals taller than 150 cm are included. Shorter individuals are missing. Imagine we have measured the height of two individuals on the theme park ride, and they are 164 and 184 cm tall. Their average height is (164+184)/2 = 174 cm. Outside the entrance of the theme park ride is one individual who is 144 cm tall. Had we measured this individual as well, our estimate of the average height would be (144+164+184)/3 = 164 cm. Removing low values from a distribution leads to overestimation of the true value; removing high values leads to underestimation.
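The arithmetic of the theme park example can be checked in a few lines of Python (a toy illustration of truncation, using the numbers from the example above):

```python
# Heights (in cm) of three individuals; the theme park ride only
# admits people who are at least 150 cm tall, truncating the sample.
heights = [144, 164, 184]
admitted = [h for h in heights if h >= 150]

full_mean = sum(heights) / len(heights)          # unbiased: 164.0 cm
truncated_mean = sum(admitted) / len(admitted)   # inflated: 174.0 cm

print(full_mean, truncated_mean)
```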

The scientific literature suffers from publication bias. Non-significant test results – based on whether a p value is smaller than 0.05 or not – are often less likely to be published. When an effect size estimate is 0, the p value is 1. The further effect sizes are removed from 0, the smaller the p value. All else being equal (e.g., studies have the same sample size, and measures have the same distribution and variability), if results are selected for statistical significance (e.g., p < .05), they are also selected for larger effect sizes. As small effect sizes will be observed with their corresponding probabilities, their absence will inflate effect size estimates. Every study in the scientific literature provides its own estimate of the true effect size, just as every individual provides their own estimate of the average height of people in a country. When these estimates are combined – as happens in meta-analyses – the meta-analytic effect size estimate will be biased (or systematically different from the true population value) whenever the distribution is truncated. To achieve unbiased estimates of population values when combining individual studies in meta-analyses, researchers need access to the complete distribution of values – that is, all studies that are performed, regardless of whether they yielded a p value above or below 0.05.

In the figure below we see a distribution centered at an effect size of Cohen’s d = 0.5 for a two-sided t-test with 50 observations in each independent condition. Given an alpha level of 0.05, only effect sizes larger than d = 0.4 will be statistically significant in this test (i.e., all observed effect sizes in the grey area). The threshold at which observed effect sizes become statistically significant is determined by the sample size and the alpha level (and is not influenced by the true effect size). The white area under the curve illustrates Type 2 errors – non-significant results that will be observed if the alternative hypothesis is true. If researchers only have access to the effect size estimates in the grey area – a truncated distribution where non-significant results are removed – a weighted average effect size from only these studies will be upwardly biased.
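The d = 0.4 threshold mentioned above can be derived from the critical t value. A short sketch in Python (using scipy, and the standard conversion d = t·√(1/n1 + 1/n2) for two independent groups):

```python
from math import sqrt

from scipy import stats

n = 50        # observations per independent condition
alpha = 0.05  # two-sided test
df = 2 * n - 2

# Critical t value, converted to a critical Cohen's d via
# d = t * sqrt(1/n1 + 1/n2) for two independent groups.
t_crit = stats.t.ppf(1 - alpha / 2, df)
d_crit = t_crit * sqrt(1 / n + 1 / n)
print(round(d_crit, 2))  # ~0.4: only larger observed effects reach significance
```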


We can see this in the two forest plots visualizing meta-analyses below. In the top meta-analysis all 5 studies are included, even though studies C and D yield non-significant results (as can be seen from the fact that the 95% CI overlaps with 0). The estimated effect size based on all 5 studies is d = 0.4. In the bottom meta-analysis the two non-significant studies are removed – as would happen when there is publication bias. Without these two studies, the estimated effect size in the meta-analysis, d = 0.5, is inflated. The extent to which meta-analyses are inflated depends on the true effect size and the sample size of the studies.
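The same inflation appears in a quick simulation. The sketch below (my own illustration, assuming equal-sized homogeneous studies, so the inverse-variance weights of a fixed-effect meta-analysis reduce to a simple mean) compares the average of all simulated studies with the average of only the significant ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, d_true, n_studies = 50, 0.5, 5000

# Simulate two-group studies with a true effect of d = 0.5.
x = rng.normal(d_true, 1, (n_studies, n))
y = rng.normal(0, 1, (n_studies, n))
t, p = stats.ttest_ind(x, y, axis=1)
d = t * np.sqrt(2 / n)  # observed Cohen's d per study

# With equal study sizes, the inverse-variance weighted meta-analytic
# average reduces to a simple mean of the effect size estimates.
all_estimate = d.mean()            # close to the true 0.5
sig_estimate = d[p < 0.05].mean()  # noticeably larger: inflated
print(round(all_estimate, 2), round(sig_estimate, 2))
```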

 


The inflation will be greater the larger the part of the distribution that is truncated, and the closer the true population effect size is to 0. In our example about the height of individuals, the inflation would be greater had we truncated the distribution by removing everyone shorter than 170 cm instead of 150 cm. If the true average height was 194 cm, removing the few people expected to be shorter than 150 cm (based on the assumption of normally distributed data) would inflate our estimate less than when the true average height was 150 cm, in which case we would remove 50% of individuals. In statistical tests where results are selected for significance at a 5% alpha level, more data will be removed if the true effect size is smaller, but also when the sample size is smaller. If the sample size is smaller, statistical power is lower, and more of the values in the distribution (those closest to 0) will be non-significant.
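Both factors can be seen in a simulation. A sketch (the true effect sizes and sample sizes are hypothetical values, chosen only for illustration) of how much significant-only averages overestimate the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def significant_only_mean(d_true, n, n_sim=20_000):
    """Average observed Cohen's d among significant studies only."""
    x = rng.normal(d_true, 1, (n_sim, n))
    y = rng.normal(0, 1, (n_sim, n))
    t, p = stats.ttest_ind(x, y, axis=1)
    d = t * np.sqrt(2 / n)
    return d[p < 0.05].mean()

# Inflation (estimate minus truth) shrinks as d_true or n grows.
inflation = {}
for d_true in (0.2, 0.5):
    for n in (25, 100):
        inflation[(d_true, n)] = significant_only_mean(d_true, n) - d_true
        print(f"d_true={d_true}, n={n}: inflation = {inflation[(d_true, n)]:.2f}")
```

With a small true effect and a small sample, only the far tail of the sampling distribution is significant, so the bias is largest there.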

Any single estimate of a population value will vary around the true population value. The effect size estimate from a single study can be smaller than the true effect size, even if studies have been selected for significance. For example, it is possible that the true effect size is 0.5, you have observed an effect size of 0.45, and only effect sizes smaller than 0.4 are truncated when selecting studies based on statistical significance (as in the figure above). At the same time, this single effect size estimate of 0.45 is inflated. What inflates the effect size is the long-run procedure used to generate the value. In the long run, effect size estimates based on a procedure where estimates are selected for significance will be upwardly biased. This means that a single observed effect size of d = 0.45 will be inflated if it is generated by a procedure where all non-significant effects are truncated, but it will be unbiased if it is generated from a distribution where all observed effect sizes are reported, regardless of whether they are significant or not. This also means that a single researcher cannot guarantee that the effect sizes they contribute to a literature will contribute to an unbiased effect size estimate: there needs to be a system in place where all researchers report all observed effect sizes to prevent bias. An alternative is to not rely on other researchers, and to collect sufficient data in a single study to obtain a highly accurate effect size estimate. Multi-lab replication studies are an example of such an approach, where dozens of researchers together collect a large number (up to thousands) of observations.

The most extreme consequence of the inflation of effect size estimates occurs when the true effect size in the population is 0, but due to selection of statistically significant results, only significant effects in the expected direction are published. Note that if all significant results are published (and not only effect sizes in the expected direction), 2.5% of Type 1 errors will be in the positive direction, 2.5% will be in the negative direction, and the average effect size would actually be 0. Thus, as long as the true effect size is exactly 0 and all Type 1 errors are published, the effect size estimate would be unbiased. In practice, scientists often do not simply publish all results, but only statistically significant results in the desired direction. An example of this is the literature on ego depletion, where hundreds of studies were published, most showing statistically significant effects, but unbiased large-scale replication studies revealed effect sizes of 0 (Hagger et al., 2015; Vohs et al., 2021).
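This directional selection is easy to simulate. A sketch assuming a true effect of exactly zero, comparing ‘publish all significant results’ with ‘publish only significant results in the positive direction’:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, n_sim = 50, 50_000

# True effect size is 0: every significant result is a Type 1 error.
x = rng.normal(0, 1, (n_sim, n))
y = rng.normal(0, 1, (n_sim, n))
t, p = stats.ttest_ind(x, y, axis=1)
d = t * np.sqrt(2 / n)

sig = p < 0.05
both_directions = d[sig].mean()          # ~0: positive and negative errors cancel
positive_only = d[sig & (d > 0)].mean()  # clearly positive: directional bias
print(round(both_directions, 2), round(positive_only, 2))
```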

What can be done about the problem of biased effect size estimates if we mainly have access to the studies that passed a significance filter? Statisticians have developed approaches to adjust biased effect size estimates by taking the truncated distribution into account (Taylor & Muller, 1996). This approach has recently been implemented in R (Anderson et al., 2017). Implementing this approach in practice is difficult, because we never know for sure whether an effect size estimate is biased, and if it is biased, how much. Furthermore, selection based on significance is only one form of bias: researchers who selectively report significant results may also engage in other problematic research practices that are not accounted for in the adjustment. Other researchers have referred to this problem as a Type M error (Gelman & Carlin, 2014; Gelman & Tuerlinckx, 2000) and have suggested that researchers always report the average inflation factor of effect sizes. I do not believe this approach is useful. The Type M error is not an error, but a bias in estimation, and it is more informative to compute the adjusted estimate based on a truncated distribution, as proposed by Taylor and Muller in 1996, than to compute the average inflation for a specific study design. If effects are on average inflated by a factor of 1.3 (the Type M error), that does not mean the observed effect size is inflated by this factor, whereas the truncated effect size estimator by Taylor and Muller provides researchers with an actual estimate based on their observed effect size. Type M errors might have a function in education, but they are not useful for scientists (I will publish a paper on Type S and M errors later this year, explaining in more detail why I think neither is a useful concept).
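To give a flavor of how such an adjustment works, here is a rough sketch in Python. This is not the Taylor and Muller estimator itself (which is based on the noncentral t distribution), nor the R implementation by Anderson and colleagues; it uses a normal approximation to the sampling distribution of d and assumes a one-directional significance filter, purely to illustrate the idea of conditioning the likelihood on significance:

```python
import numpy as np
from scipy import optimize, stats

def truncation_adjusted_d(d_obs, n, alpha=0.05):
    """Adjust an observed Cohen's d for a one-directional significance
    filter, via maximum likelihood under a normal approximation.

    Illustrative sketch only: assume d_hat ~ N(d_true, se), observed
    only when d_hat exceeds the critical d of a two-group t-test."""
    se = np.sqrt(2 / n)
    d_crit = stats.t.ppf(1 - alpha / 2, 2 * n - 2) * se

    def neg_log_lik(d_true):
        log_dens = stats.norm.logpdf(d_obs, d_true, se)  # density of the observation
        log_p_sig = stats.norm.logsf(d_crit, d_true, se) # log P(significant)
        return -(log_dens - log_p_sig)                   # truncated log-likelihood

    res = optimize.minimize_scalar(neg_log_lik, bounds=(-2, 2), method="bounded")
    return res.x

# An observed d = 0.6 that passed the filter (n = 50 per group) maps
# to a smaller adjusted estimate, around d = 0.5.
adjusted = truncation_adjusted_d(0.6, n=50)
print(round(adjusted, 2))
```

The adjustment is aggressive for observed effects just above the significance threshold, which reflects how little a barely-significant result tells us once we condition on it having been selected.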

Of course, the real solution to bias in effect size estimates due to significance filters that lead to truncated or censored distributions is to stop selectively reporting results. Designing highly informative studies that have high power both to reject the null hypothesis and to reject the smallest effect size of interest in an equivalence test is a good starting point. Publishing research as Registered Reports is even better. Eventually, if we do not solve this problem ourselves, it is likely that we will face external regulatory actions that force us to add all studies that have received ethical review board approval to a public registry, and to update the registration with the effect size estimate, as is done for clinical trials.


References:

Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. https://doi.org/10.1177/0956797617723724

Ensinck, E., & Lakens, D. (2023). An Inception Cohort Study Quantifying How Many Registered Studies are Published. PsyArXiv. https://doi.org/10.31234/osf.io/5hkjz

Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/SCIENCE.1255484

Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science, 9(6), 641–651.

Gelman, A., & Tuerlinckx, F. (2000). Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics, 15(3), 373–390. https://doi.org/10.1007/s001800000040

Hagger, M. S., Chatzisarantis, N. L., Alberts, H., Anggono, C. O., Batailler, C., Birt, A., & Zwienenberg, M. (2015). A multi-lab pre-registered replication of the ego-depletion effect. Perspectives on Psychological Science, 2.

Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—Or vice versa. Journal of the American Statistical Association, 54(285), 30–34. JSTOR. https://doi.org/10.2307/2282137

Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and sample size calculation due to estimating noncentrality. Communications in Statistics-Theory and Methods, 25(7), 1595–1610. https://doi.org/10.1080/03610929608831787

Vohs, K. D., Schmeichel, B. J., Lohmann, S., Gronau, Q. F., Finley, A. J., Ainsworth, S. E., Alquist, J. L., Baker, M. D., Brizi, A., Bunyi, A., Butschek, G. J., Campbell, C., Capaldi, J., Cau, C., Chambers, H., Chatzisarantis, N. L. D., Christensen, W. J., Clay, S. L., Curtis, J., … AlbarracĆ­n, D. (2021). A multisite preregistered paradigmatic test of the ego-depletion effect. Psychological Science, 32(10), 1566–1581. https://doi.org/10.1177/0956797621989733



Thursday, January 11, 2024

Surely God loves 51 km/h nearly as much as 49 km/h?

Next time you get a fine for speeding, I suggest you try the following line of defense. First, you explain to the judge that a speed limit of 50 km/h in densely populated areas is a convention. It could just as easily have been set at 51 km/h, or 49 km/h. There is no bright line at 50 km/h that prevents all accidents, because the number of deaths due to speeding is a continuous variable. Tell the judge that you believe it is better if drivers ignore speed limits, and instead drive in a thoughtful, open, and modest manner. Tell the judge that, surely, God loves 51 kilometers per hour nearly as much as 49 kilometers per hour. I strongly suspect the judge will roll their eyes and instruct you to pay the fine.



And yet, when it comes to statistics, these are exactly the arguments statisticians bring forward to criticize the current rules in science that regulate when scientists can make claims based on tests. They argue against dichotomous decisions, and in favor of being “thoughtful, open, and modest” (Wasserstein et al., 2019). Statisticians are like drivers. They deal with individual studies, just as drivers deal with their own car. Driving one kilometer per hour faster or slower feels like an arbitrary choice, just as making a claim based on p < 0.06 rather than p < 0.04 feels arbitrary to a statistician. And in the individual world of drivers and statisticians, there is no logical argument to treat driving 51 km/h at this time and on this street differently from driving 49 km/h.


Philosophers of science are like the government. They do not deal with individual studies; they deal with the scientific system, just as governments deal with the traffic management system. At this higher level it sometimes becomes necessary to set rules, and to enforce them. For example, as explained by the European Road Safety Observatory, the speed limit was originally largely determined by drivers themselves, and fixed at the 85th percentile of the speed driven on a road. If drivers drove at a high speed, the maximum speed would be high; if they drove at a low speed, the maximum speed would be lower. Such a system is sometimes also advocated by statisticians: let the community decide what is best, without strict top-down rules. Regrettably, it is often not responsible to let the community make its own rules. As the European Road Safety Observatory notes: “However, many behavioural observations, attention measurements, and the high number of traffic crashes caused by excessive speed have shown that one cannot always rely on the judgement of drivers to set a suitable speed limit.”


The reason drivers cannot determine how fast they should be allowed to drive is that, as a society, we want to prevent accidents. The reward structure for drivers is such that if they speed and do not get into an accident, they get to where they want to be more quickly, and if they speed and do get into an accident, they might kill a pedestrian or cyclist. If we ignore a bad conscience (combined with the inability of drivers to adequately estimate the probability that they will get into an accident), this reward structure would lead to unacceptable risks for pedestrians. If we left the criteria that allow scientists to make a claim up to scientists themselves, the reward structure would lead to unacceptable rates of false claims.


If someone told you they were speeding to be on time for a meeting, you would likely not scold them, or report them to the police. It is relatively accepted behavior, at least when someone does not exceed the speed limit by too much. According to the European Road Safety Observatory, “67% of Europeans admit to having speeded on rural roads over the previous 30 days”. And yet, reducing the average driving speed by just 1 km/h would save more than 2000 lives a year. The small violations that we find acceptable have real consequences that we are often not aware of when we violate the rules. Scientists similarly admit to practices that increase the probability of false claims, and no one will be fired for not correcting for multiple comparisons, even if in practice this leads to a higher Type 1 error rate than the 5% they say they use to make scientific claims.
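The inflation from uncorrected multiple comparisons is easy to quantify. For m independent tests that each use alpha = 0.05, the probability of at least one false positive is 1 − (1 − 0.05)^m. A minimal Python sketch (my illustration, not from the post):

```python
# Familywise error rate (FWER) for m independent tests at alpha = 0.05
# when no correction for multiple comparisons is applied.
alpha = 0.05
for m in (1, 3, 5, 10):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} uncorrected tests: P(at least one false positive) = {fwer:.3f}")
```

With ten uncorrected tests, the familywise error rate already exceeds 40%, roughly eight times the nominal 5% researchers claim to use.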


Enforcing rules can prevent accidents and errors. And therefore, the driver who tries to convince the judge that ‘surely God loves 51 km/h nearly as much as 49 km/h’ will have little success. The judge knows that enforcing the rules saves lives. In practice, drivers need to speed by more than 1 km/h to get a fine, due to corrections for measurement uncertainty. In The Netherlands, 3 km/h is subtracted from the speed measurement to guarantee that a driver was speeding, given imperfect measurement equipment. According to the European Road Safety Observatory, “The detection equipment is often set in such a way that there is a margin of tolerance with regard to the speed limit. The use of such margins of tolerance serves to filter out minor, accidental violations and to deal with the possible unreliability of the equipment. A disadvantage of this approach is, however, that it strengthens drivers' opinion that a minor offence is not so serious”. Similarly, in science, if we allow authors to not correct for multiple comparisons, or to make claims based on ‘marginally significant’ findings of p = 0.06, they might feel the consequences are not so serious.


But at the system level, a Type 1 error rate of 10% instead of 5% has a massive impact on the safety and efficiency of the scientific system. Whether an alpha of 5% achieves acceptable safety deserves to be studied empirically (just as acceptable driving speeds are determined empirically). Just as with driving speeds, we might find different alpha levels acceptable in different research lines. But that a maximum speed has to be established and enforced will remain important at the system level.


Some drivers will continue to complain about being fined for speeding, convinced as they are that they can determine how fast to drive at a specific time on a specific road. Some people will never like being told what to do. Some statisticians will continue to complain that they need to adhere to a 5% error rate when making scientific claims, because they strongly believe that they can determine, on a case by case basis, when they should make a claim and when not.


We allow drivers to voice their complaints, and if there are signals that traffic rules lead to problems, the rules might be adjusted. And of course, no government is perfect, so suboptimal decisions will sometimes be made. But we will never abandon traffic rules; at best we will change the driving speed that is enforced, or the parts of the road where driving speeds are enforced. Similarly, we allow statisticians to complain about the use of significance levels to make claims. But we will never abandon the use of enforced criteria that regulate when scientists can make claims; at best we will change alpha levels, or decrease the number of research questions that test claims in favor of descriptive research. When it comes to decisions about how to organize the traffic management system, we don’t ask drivers. Similarly, when it comes to decisions about how to organize scientific knowledge generation, we don’t ask statisticians. Scientific knowledge generation is studied by social epistemologists. Science, like driving, is a social system with a specific goal. It is of course beneficial if the government employees who create the traffic management system are also drivers, just as it is useful if social epistemologists understand statistics. But social epistemology is its own specialization.


Some scientists don’t like to think of science as a large ‘knowledge production system’. Maybe it makes them feel like a cog in a machine. I like to think of scientists as part of a system. Our jobs are very similar to those of garbage collectors: part of a large and essential system that exists because society needs it and is willing to pay for it, that aims to achieve its goal efficiently, and that has a strong social component. It therefore makes sense to me that science needs a set of rules to reduce errors in the system. Not all scientists will agree, just as not all civilians agree with the government. From a statistical perspective, there might be no difference between driving 49 or 51 km/h, but from a social epistemological perspective it is justifiable to fine a driver who drives 51 km/h inside city limits, and not fine one who drives 49 km/h.

Wednesday, August 23, 2023

Reflections on the viral thread by Dr Louise Raw spreading fake news about unethically performed radiation experiments on Punjabi women in the 1950s

On the 19th of August, Dr Louise Raw wrote a series of tweets that spread fake news about unethically performed radiation experiments on Punjabi women in the 1950s. The thread went viral, and I saw many academics I follow on Twitter uncritically retweet this fake news.