Comments on The 20% Statistician: "After how many p-values between 0.025-0.05 should one start getting concerned about robustness?" by Daniel Lakens (http://www.blogger.com/profile/18143834258497875354)

Comment by Dr. R (https://replicationindex.wordpress.com/), 2015-05-27 00:27:

Hi Daniel,
Mickey's question and your answer suggest another way to examine bias. It is similar to TIVA:

https://replicationindex.wordpress.com/2014/12/30/the-test-of-insufficient-variance-tiva-a-new-tool-for-the-detection-of-questionable-research-practices/

Both tests are based on the insight that test statistics (whether they are presented as z-scores, p-values, post-hoc power, or other transformations) should vary considerably. Obtaining p-values that are too close to each other suggests that bias is present.

The difference between TIVA and the critical-region approach (Neyman-Pearson) is that TIVA does not require an a priori specification of the critical region. If Mickey always used .025 to .05, the approach would be fine. However, if the critical region is not fixed, the bias test itself is biased.

The problem with .025 to .05 is that it is very narrow. This greatly reduces the type-I error rate (even with k = 2, p = .01), but the type-II error rate is high, because p-hacking does not always produce p-values just below .05, as you noted on another blog.

Thus, the trick is to find a good balance between type-I and type-II error. For two studies, I suggest a range from 50% to 80% power, which corresponds to z-scores of 1.96 to 2.8 and p-values from .05 to .005.

The type-I error rate for this test with k = 2 is about 10%, which is acceptable for merely raising awareness of bias. This test has more power for k = 2 than TIVA, which makes it appealing for pairs of studies.

Comment by Anonymous, 2015-05-27 00:08:

Sanjay,
To get the probability that the data are biased given the observation of a pair of p-values between .025 and .05, we have to make some assumptions about the probability of this event occurring when bias is present. Does 50% seem reasonable to you? In this case, the probability that bias is present when the red flag is raised would be 50 out of 51, or 98%, a little bit less than 99 out of 100 (99%).

Maybe you want to be more conservative: with a 25% probability of bias producing the event, there are still 25 out of 26 events where bias produced the critical event (a 96% correct-positive rate).

Bayesians often trick us by using a medical analogy where the base rate of the event we are looking for is very low (brain cancer).

              Both p in .025-.05    At least one p not in .025-.05
    Bias              50                         50
    No bias            1                         99

Comment by Sanjay Srivastava (https://www.blogger.com/profile/03677223120010904540), 2015-05-26 23:42:

My interpretation of Mickey's question, slightly paraphrased, is: "Given that I have observed two p-values between .025 and .05, what is the probability of them coming from an unbiased report?"

On the other hand, the calculations in this blog post (such as the 6%) are asking: "Given an unbiased report, what is the probability of observing two p-values between .025 and .05?"

I'd just like to point out that these are not the same thing. It's a reversal of the conditional probabilities.
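The reversal Sanjay describes can be made concrete with the 2x2 table from Anonymous's comment. A minimal sketch, assuming (as that comment does) 100 biased and 100 unbiased reports, with bias producing the critical event 50% of the time and an unbiased report producing it 1% of the time:

```python
# Hypothetical counts from the 2x2 table in the comment above:
# 100 biased reports, 50 of which show both p-values in .025-.05;
# 100 unbiased reports, 1 of which does.
bias_hit, bias_miss = 50, 50
nobias_hit, nobias_miss = 1, 99

# P(both p in .025-.05 | unbiased): the direction the blog post computes.
p_data_given_unbiased = nobias_hit / (nobias_hit + nobias_miss)  # 0.01

# P(bias | both p in .025-.05): the direction Mickey's question asks about.
# Note this assumes equal base rates of biased and unbiased reports.
p_bias_given_data = bias_hit / (bias_hit + nobias_hit)  # 50/51, about 0.98

print(p_data_given_unbiased, p_bias_given_data)
```

Changing the assumed base rates changes the second number substantially (the point of the medical analogy), while the first number stays fixed, which is why the two conditionals must not be conflated.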