The inflation
will be greater the larger the part of the distribution that is truncated, and
the closer the true population effect size is to 0. In our example about the
height of individuals, the inflation would have been greater had we truncated
the distribution by removing everyone smaller than 170 cm instead of 150 cm. If
the true average height of individuals was 194 cm, removing the few people who
are expected to be smaller than 150 cm (based on the assumption of normally
distributed data) would inflate our estimate less than if the true average
height was 150 cm, in which case we would remove 50% of individuals. In
statistical tests where results are selected for significance at a 5% alpha
level, more data will be removed when the true effect size is smaller, but also
when the sample size is smaller. With a smaller sample size, statistical power
is lower, and more of the values in the distribution (those closest to 0) will
be non-significant.
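To make this concrete, here is a minimal simulation in Python of the height example. The specific values (a true mean height of 170 cm with a standard deviation of 10 cm) are assumptions for illustration:

```python
import random
import statistics

random.seed(1)

# Assumed population for illustration: mean height 170 cm, SD 10 cm.
true_mean, sd, n = 170, 10, 100_000
heights = [random.gauss(true_mean, sd) for _ in range(n)]

# Truncate below two different cut-offs and compare the resulting means.
mean_above_150 = statistics.mean(h for h in heights if h > 150)
mean_above_170 = statistics.mean(h for h in heights if h > 170)

print(round(mean_above_150, 1))  # barely inflated: few values fall below 150
print(round(mean_above_170, 1))  # strongly inflated: half the values removed
```

Truncating at 150 cm removes only the rare values more than two standard deviations below the mean, so the estimate barely moves; truncating at the true mean removes half the distribution and inflates the estimate substantially.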
Any single
estimate of a population value will vary around the true population value. The
effect size estimate from a single study can therefore be smaller than the true
effect size, even if studies have been selected for significance. For example,
it is possible that the true effect size is 0.5, you have observed an effect
size of 0.45, and only effect sizes smaller than 0.4 are truncated when
selecting studies based on statistical significance (as in the figure above).
At the same time, this single effect size estimate of 0.45 is inflated. What
inflates the effect size is the long-run procedure used to generate the value.
In the long run, effect size estimates based on a procedure where estimates are
selected for significance will be upwardly biased. This means that a single
observed effect size of d = 0.45 will be inflated if it is generated by a
procedure where all non-significant effects are truncated, but it will be
unbiased if it is generated by a procedure where all observed effect sizes are
reported, regardless of whether they are significant or not. This also means
that a single researcher cannot guarantee that the effect sizes they contribute
to a literature will add up to an unbiased effect size estimate: there needs to
be a system in place where all researchers report all observed effect sizes to
prevent bias. An alternative that does not rely on other researchers is to
collect sufficient data in a single study to obtain a highly accurate effect
size estimate. Multi-lab replication studies are an example of such an
approach, where dozens of researchers collect a large number (up to thousands)
of observations.
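The long-run nature of this bias can be illustrated with a small simulation. The parameter values are assumptions for illustration: each "study" estimates a true effect of d = 0.5 from 20 observations (SD = 1), tested two-sided against 0 at alpha = .05:

```python
import random
import statistics
import math

random.seed(2)

# Assumed design for illustration: true d = 0.5, n = 20 per study, SD = 1.
true_d, n, sims = 0.5, 20, 20_000
se = 1 / math.sqrt(n)

all_estimates, significant_estimates = [], []
for _ in range(sims):
    d_hat = statistics.mean(random.gauss(true_d, 1) for _ in range(n))
    all_estimates.append(d_hat)
    if abs(d_hat / se) > 1.96:          # two-sided significance filter
        significant_estimates.append(d_hat)

print(round(statistics.mean(all_estimates), 2))          # close to 0.5: unbiased
print(round(statistics.mean(significant_estimates), 2))  # above 0.5: inflated
```

Averaged over all studies, the estimate recovers the true effect; averaged over only the significant studies, it is systematically too large, even though many individual significant estimates (such as 0.45) fall below the true value.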
The most
extreme consequence of the inflation of effect size estimates occurs when the
true effect size in the population is 0, but due to selection of statistically
significant results, only significant effects in the expected direction are
published. Note that if all significant results are published (and not only
effect sizes in the expected direction), 2.5% of the Type 1 errors will be in
the positive direction and 2.5% will be in the negative direction, and the
average effect size would actually be 0. Thus, as long as the true effect size
is exactly 0 and all Type 1 errors are published, the effect size estimate
would be unbiased. In practice, scientists often do not simply publish all
results, but only statistically significant results in the desired direction.
An example of this is the literature on ego depletion, where hundreds of
studies were published, most showing statistically significant effects, but
unbiased large-scale replication studies revealed effect sizes of 0 (Hagger et
al., 2015; Vohs et al., 2021).
What can be
done about the problem of biased effect size estimates if we mainly have access
to the studies that passed a significance filter? Statisticians have developed
approaches to adjust biased effect size estimates by taking the truncated
distribution into account (Taylor & Muller, 1996). This approach has recently
been implemented in R (Anderson et al., 2017). Implementing this approach in
practice is difficult, because we never know for sure whether an effect size
estimate is biased, and if it is biased, how much bias there is. Furthermore,
selection based on significance is only one source of bias: researchers who
selectively publish significant results may also engage in additional
problematic research practices, such as selectively reporting analyses, which
are not accounted for in the adjustment. Other researchers have referred to
this problem as a Type M error (Gelman & Carlin, 2014; Gelman & Tuerlinckx,
2000) and have suggested that researchers always report the average inflation
factor of effect sizes. I do not believe this approach is useful. The Type M
error is not an error, but a bias in estimation, and it is more informative to
compute the adjusted estimate based on a truncated distribution, as proposed by
Taylor and Muller in 1996, than to compute the average inflation for a specific
study design. If effects are on average inflated by a factor of 1.3 (the Type M
error), this does not mean that the observed effect size is inflated by this
factor, whereas the truncated effect size estimator by Taylor and Muller
provides researchers with an actual estimate based on their observed effect
size. Type M errors might have a function in education, but they are not useful
for scientists (I will publish a paper on Type S and M errors later this year,
explaining in more detail why I think neither are useful concepts).
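For readers who want to see what such an average inflation factor refers to, here is a sketch of how a Type M (exaggeration) ratio could be simulated for one specific design. The values are assumptions for illustration (true d = 0.2, n = 50 per study, SD = 1, two-sided alpha = .05), and note that, as argued above, this factor describes the design in the long run, not any individual observed estimate:

```python
import random
import statistics
import math

random.seed(4)

# Assumed design for illustration: true d = 0.2, n = 50 per study, SD = 1.
true_d, n, sims = 0.2, 50, 20_000
se = 1 / math.sqrt(n)

significant = []
for _ in range(sims):
    d_hat = statistics.mean(random.gauss(true_d, 1) for _ in range(n))
    if abs(d_hat / se) > 1.96:
        significant.append(abs(d_hat))

# Type M ratio: average significant estimate relative to the true effect.
type_m = statistics.mean(significant) / true_d
print(round(type_m, 2))  # average inflation factor for this design
```

In this underpowered design the significant estimates are, on average, well over one and a half times the true effect, but the ratio says nothing about how inflated any single observed estimate is.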
Of course,
the real solution to bias in effect size estimates due to significance filters
that lead to truncated or censored distributions is to stop selectively
reporting results. Designing highly informative studies that have high power
both to reject the null hypothesis and to reject the smallest effect size of
interest in an equivalence test is a good starting point. Publishing research
as Registered Reports is even better. Eventually, if we do not solve this
problem ourselves, it is likely that we will face external regulatory actions
that force us to add all studies that have received ethical review board
approval to a public registry, and to update the registration with the effect
size estimate, as is done for clinical trials.
References:
Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. https://doi.org/10.1177/0956797617723724
Ensinck, E., & Lakens, D. (2023). An inception cohort study quantifying how many registered studies are published. PsyArXiv. https://doi.org/10.31234/osf.io/5hkjz
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/SCIENCE.1255484
Gelman, A., & Carlin, J. (2014). Beyond power calculations: Assessing Type S (sign) and Type M (magnitude) errors. Perspectives on Psychological Science, 9(6), 641–651.
Gelman, A., & Tuerlinckx, F. (2000). Type S error rates for classical and Bayesian single and multiple comparison procedures. Computational Statistics, 15(3), 373–390. https://doi.org/10.1007/s001800000040
Hagger, M. S., Chatzisarantis, N. L., Alberts, H., Anggono, C. O., Batailler, C., Birt, A., & Zwienenberg, M. (2015). A multi-lab pre-registered replication of the ego-depletion effect. Perspectives on Psychological Science, 2.
Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—Or vice versa. Journal of the American Statistical Association, 54(285), 30–34. JSTOR. https://doi.org/10.2307/2282137
Taylor, D. J., & Muller, K. E. (1996). Bias in linear model power and sample size calculation due to estimating noncentrality. Communications in Statistics-Theory and Methods, 25(7), 1595–1610. https://doi.org/10.1080/03610929608831787
Vohs, K. D., Schmeichel, B. J., Lohmann, S., Gronau, Q. F., Finley, A. J., Ainsworth, S. E., Alquist, J. L., Baker, M. D., Brizi, A., Bunyi, A., Butschek, G. J., Campbell, C., Capaldi, J., Cau, C., Chambers, H., Chatzisarantis, N. L. D., Christensen, W. J., Clay, S. L., Curtis, J., … Albarracín, D. (2021). A multisite preregistered paradigmatic test of the ego-depletion effect. Psychological Science, 32(10), 1566–1581. https://doi.org/10.1177/0956797621989733