Comments on The 20% Statistician: "Requiring high-powered studies from scientists with resource constraints" by Daniel Lakens

Comment posted 2019-08-14:
Power = people?
Thomas Schmidt, University of Kaiserslautern, Germany

There is yet another way to improve your power: use more trials from the participants you already have. Power actually depends on two things: the number of participants in the sample and the reliability of the measurements, and reliability depends directly on the number of trials. Several simulation studies show that both levels (people and trials) are roughly equally important in determining statistical power. Many areas of psychology successfully work with small groups of subjects but massive repetition of measurement; psychophysics is a good example. In my research, I almost invariably use eight participants and control power entirely through the number of sessions. In my experience, well-trained subjects perform so much more reliably than untrained ones that they can give you high data quality even with limited resources. There is also a convenient, citable name for this approach: Smith & Little (2018) call it the "small-N design".

Apart from statistical power, there is another time-honoured concept used in engineering: measurement precision. Precision can be defined simply by setting an upper limit on the standard error of the dependent variable; all you need is a rough idea of the standard deviation. In a recent paper (Biafora & Schmidt, 2019), we included the following passage to justify our sample sizes:

"In multi-factor repeated-measures designs, statistical power is difficult to predict because too many terms are unknown. Instead, we control measurement precision at the level of individual participants in single conditions. We calculate precision as s/√r (Eisenhart, 1969), where s is a single participant's standard deviation in a given cell of the design and r is the number of repeated measures per cell and subject.
With r = 120 and 240 in the priming and prime identification tasks, respectively, we expect a precision of about 5.5 ms in response times (assuming individual SDs around 60 ms), at most 4.6 percentage points in error rates, and at most 3.2 percentage points in prime identification accuracy (assuming the theoretical maximum SD of .5)."

Thomas Schmidt
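The precision bound in the quoted passage is just the standard error of a single participant's cell mean, so the reported numbers are easy to check. A minimal sketch in Python (the inputs s = 60 ms, r = 120 and 240, and the theoretical maximum proportion SD of 0.5 are taken from the quote above; the function name is my own):

```python
import math

def precision(s, r):
    """Standard error of one participant's mean in one design cell:
    s = that participant's SD, r = repeated measures per cell."""
    return s / math.sqrt(r)

# Priming task, r = 120 trials per cell:
print(round(precision(60, 120), 1))         # RT precision in ms -> 5.5
print(round(100 * precision(0.5, 120), 1))  # error rate, in percentage points -> 4.6

# Prime identification task, r = 240 trials per cell:
print(round(100 * precision(0.5, 240), 1))  # accuracy, in percentage points -> 3.2
```

Reading the formula the other way round gives a sample-size rule: to guarantee a target precision p, run at least r = (s/p)² trials per cell, e.g. (60/5)² = 144 trials for 5 ms precision on response times.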