Comments on The 20% Statistician: "How a p-value between 0.04-0.05 equals a p-value between 0.16-0.17"

Anonymous (2015-03-27 13:08):
Makes sense. So perhaps a nuanced moral could be something like this:

"Across a wide range of possible and highly plausible values for power, .04-.05 probably does in fact constitute evidence. If power is below 25% or over 75%, start getting skeptical of those pesky p = .042s."

Of course, given the conditions that would lead to power below 25%, there would probably be loads of reasons to be skeptical. I'm guessing you're looking at a tiny N, which is its own problem on numerous fronts. This equates to a d of .4 and an N under 20 per condition. My first concern with an N under 20 per condition isn't necessarily how I should interpret p = .042.

The potentially more interesting extension is that p = .042 wouldn't be evidence in a huge study. That case will be rarer, I'm guessing. But worth keeping in mind.

Good chat.

Daniel Lakens (2015-03-27 12:39):
Definitely - I think both points are important. It often is evidence, but sometimes it's not. No need to be overly critical of p-values (as you see, I'm one of the few people consistently saying they are useful and taking the time to explain to people when they are).

Anonymous (2015-03-27 12:31):
Sure, we set it at 25% and the ratio dips just below 3. I'm not missing the point here, because in many real-world-relevant conditions, .04-.05 can be interpreted as evidence, even if we decide to adopt the highly arbitrary ratio of 3 as a cutoff. This directly contradicts your claim that "you should not think of p-values between .04-.05 as evidence."

Perhaps you meant "you should not NECESSARILY think of p-values between .04 and .05 as evidence, because the situation is complicated" - which nobody who knows stats would disagree with. But your statement implied the stronger step that we should ignore the .04-.05 range as evidence, full stop.

Fact is, the visualizations show that if power is between about 27% and 76%, Ha is at least 3 times as likely as Ho for p-values between .04-.05.

That's a rather large range of power for which the statement "you should not think of p-values between 0.04-0.05 as evidence" does not seem to be true. It spans about half of the logically possible values of power, and I'm guessing most of the real-world plausible values.

Daniel Lakens (2015-03-27 06:56):
Again, close. Use the visualization. You want power close to 25%? Use d = 0.4 with n = 20. You see you drop below 3. Bayesians call anything below 3 'anecdotal evidence', which I perhaps overstated as extremely weak (but perhaps not). If you continue to interpret p between 0.04-0.05 as evidence (3x), you've missed the point. If grad students say p = 0.04 supports the null, they are not specifying their priors either, and missing the point. Remember: very high power and very low power both lead to p = 0.04-0.05 being difficult to interpret as 'evidence'.

Anonymous (2015-03-27 06:40):
Seems to me that your moral isn't right either. Under realistic conditions (looking at our woefully underpowered literature), this suggests that a p-value of .04-.05 is better support for the alternative than the null by a factor of at least 3-4. Bayesians would be willing to talk about one hypothesis being 3x more likely; why should we call it "extremely weak evidence"?

Sure, a tiny minority of our literature might have power of 95%. But for the bulk of the literature, where power is closer to 25% than 95%, p-values near .04 do suggest evidence, at least in a relative sense. Yet I've heard grad students and others naively apply the logic presented here to say that p = .04 actually supports the null. The logic in this post clearly shows that to be untrue.

My 1 in 10,000 comment was tongue-in-cheek. But given that 242 per condition is what it takes to get 95% power for a median effect size in social psych, and we rarely see 242 per condition, I think it's pretty safe to say that very, very few social psych studies need to worry about what happens with 95% power. That's like worrying whether 100% solar-powered hovercars driven by robots will put taxi drivers out of business. Maybe some day we'll get there, but I'm not holding my breath ;)

Now, there are myriad problems with underpowered studies. But devaluing p = .04 doesn't seem to be one of the most pressing.

Daniel Lakens (2015-03-26 22:56):
Close. The moral is that you should not think of p-values between 0.04-0.05 as evidence. You should think of those p-values as 'extremely weak evidence, assuming power is not extremely high'. Your estimate of 10,000 for when you have power > .95 is off - I clearly mention you already achieve this with 242 participants in two conditions, which is rare, but occurs already.

Anonymous (2015-03-26 19:53):
So, one moral of this story is that, given the power we typically see in social psych studies (50% on a very good day), a p-value between .04 and .05 actually is providing evidence that's more supportive of the alternative than the null (given equal priors, blah blah blah).

I'll start worrying more about the pesky .04-.05 range as soon as I see more than 1 social psych paper in 10,000 with > 95% power. That'll be a great problem to have one day.

Expanding this to the .03-.05 range (which some have flagged as problematic), the alternative is more than 4x better than the null. And we don't see Ho = Ha until power exceeds 96%.

Thom Baguley (2015-03-22 12:40):
Sorry - not on twitter.
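The power band the commenters are debating can be checked directly. Under H0 the probability that p lands in (.04, .05] is exactly 1% (p is uniform under the null), while under H1 it is the difference between power at alpha = .05 and power at alpha = .04. A minimal sketch using base R's power.t.test; the d and n values below are illustrative, not from the post:

```r
# Ratio of P(.04 < p <= .05) under H1 versus under H0 (exactly 0.01)
# for a two-sided two-sample t-test. The d/n combinations are illustrative.
evidence.ratio <- function(d, n) {
  p05 <- power.t.test(n = n, delta = d, sd = 1, sig.level = 0.05)$power
  p04 <- power.t.test(n = n, delta = d, sd = 1, sig.level = 0.04)$power
  (p05 - p04) / 0.01
}

evidence.ratio(d = 0.4, n = 20)   # power near 25%: ratio dips below 3
evidence.ratio(d = 0.5, n = 32)   # power near 50%: ratio clearly above 3
evidence.ratio(d = 0.5, n = 150)  # power near 99%: ratio falls below 1
```

This reproduces the shape of the argument above: the ratio exceeds 3 only in a middle band of power, and at very high power a p-value in .04-.05 actually becomes more probable under the null.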
Maybe one day!

Anonymous (2015-03-21 17:11):
This actually came up on twitter just today: https://twitter.com/AlexanderEtz/status/579421141694951424

Three experiments:
n1 = n2 = 30, p = .031, d = .57
n1 = n2 = 50, p = .029, d = .44
n1 = n2 = 100, p = .024, d = .32

Even if we set the scale on a Cauchy prior to be exactly the observed ES, which of course is unreasonable, BFs are still roughly 2 - even for the experiment with the smallest n (30 per group) and largest ES (d = .57).

Goes to show that p doesn't just fail when n is overly large!

PS - Thom, are you on twitter?

Thom Baguley (2015-03-21 09:06):
Following Alexander's point about likelihood, it is fairly easy to show that the strongest evidence p = .05 can supply still leaves around a .128 chance that H0 is true (assuming both H1 and H0 are equally likely a priori). This is based on the likelihood ratio for a just-significant test, with H1 being that the parameter is at the maximum of the likelihood (the hypothesis with the strongest evidence relative to the null).

Anonymous (2015-03-20 14:52):
Hey Daniel,

Regarding your PPS:

The calculation of the posterior probability that a hypothesis is true is not always the goal of Bayesian inference. Hypothesis testing by Bayes factors, for example, does not tell you what the posterior probability of your hypothesis is, Pr(H|D). It also does not tell you the posterior odds if you are comparing two hypotheses, Pr(H1|D)/Pr(H2|D) (unless you start with 1:1 prior odds). Instead, hypothesis testing with Bayes factors tells you how you should update whatever prior odds you hold into posterior odds, now that you've seen the data. The *evaluation* of the evidential value of the data can stand apart from the prior or posterior probability/odds of a hypothesis.

Bayes factors tell you the relative predictive success of the two hypotheses under consideration, formulated as Pr(D|H1)/Pr(D|H2). The prior distributions on the parameters for your hypotheses can even be spike priors, like d = 0 and d = .3 in this post, and in that case the Bayes factor simplifies to the plain likelihood ratio.

I think you would really enjoy likelihoods, based on this post and other conversations we've had. They control for probabilities of misleading evidence and they only use spike priors :) Here are two links if you are interested:

http://www.stat.fi/isi99/proceedings/arkisto/varasto/roya0578.pdf

http://www.sortie-nd.org/lme/Bayesian%20methods%20in%20ecology/Royall_2004_Likelihood_Paradigm.pdf

PLUS - using likelihoods doesn't mean you are a Bayesian (Royall certainly was not), so you could stay on the NHST ship if you really wanted to.

Jan (2015-03-20 10:50):
I've got an example here, with mapply embedded in another function that is then run 10,000 times: http://janhove.github.io/analysis/2014/08/20/adjusted-pvalues-breakpoint-regression/

This is an easier example, though:

###
mapply(sum, c(1, 2, 3), c(4, 5, 6), c(7, 8, 9))
[1] 12 15 18
###

You apply the function over the first element of each argument (1+4+7), then over the second (2+5+8), etc.

Daniel Lakens (2015-03-20 09:38):
Thanks so much! I've seen the mapply function before, but never really understood how it should be used. I use variations of these simulations, and they indeed sometimes take a long time, so this will definitely be useful!

Jan (2015-03-20 09:12):
Thanks for the food for thought.
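The point about spike priors reducing the Bayes factor to a likelihood ratio can be sketched in a few lines of R. For two point hypotheses d = 0 and d = 0.3 in a two-sample design, the likelihoods are central and noncentral t densities at the observed t-statistic; the t-value and group size below are hypothetical illustrations, not numbers from the thread:

```r
# Likelihood ratio for two spike hypotheses, d = 0.3 vs d = 0, given an
# observed t-statistic. tobs and n are hypothetical illustrations.
n    <- 32                        # participants per group
tobs <- 2.0                       # hypothetical just-significant t-value
df   <- 2 * n - 2
ncp  <- 0.3 * sqrt(n / 2)         # noncentrality parameter under d = 0.3

lr <- dt(tobs, df, ncp) / dt(tobs, df)  # Pr(D | d = .3) / Pr(D | d = 0)
lr   # values above 1 mean the data favor d = .3 over d = 0
```

Because both hypotheses are point hypotheses, no integration over a prior distribution is needed; the Bayes factor is exactly this ratio of densities.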
I'll need to think about this some more before actually commenting, but in the meantime I wanted to give you a pointer about R code, if you don't mind.

The for-loop has the advantage that it's ubiquitous in basic programming courses and semantically fairly transparent, but it slows down your simulation (it takes about 16 seconds on my machine). Below is an alternative that takes less than 2 seconds.

First, it defines the variables of interest. Then, the code for a single simulation (one draw) is defined. This code is then run 10,000 times using the replicate function (see ?replicate or ?mapply for more complicated stuff), and the 10,000 p-values are stored in a vector. The rest is the same as in your code (I prefer sum(ps > lowp & ps <= highp) to length(p[p > lowp & p <= highp]) because it's a neat R trick :)).

It's no big deal for this simulation, but in case you wanted to run more arduous simulations, it cuts down on waiting time.

HTH - Jan

###
nSims <- 10000
lowp <- 0.04
highp <- 0.05

# Simulate one draw
oneRun.fnc <- function(N = 32) {
  x <- rnorm(n = N, mean = 100, sd = 20)
  y <- rnorm(n = N, mean = 110, sd = 20)
  z <- t.test(x, y)
  return(z$p.value)
}

# Simulate multiple draws
pvalues <- replicate(nSims, oneRun.fnc(N = 32))

# Calculate power in the simulation
cat("The power is", (sum(pvalues < 0.05) / nSims * 100), "%\n")

p2 <- sum(pvalues > lowp & pvalues <= highp)

cat("The probability of finding a p-value between", lowp, "and", highp, "is",
    (p2 / nSims * 100), "%,\n which makes it",
    ((p2 / nSims * 100) / ((highp - lowp) * 100)),
    "times more probable under the alternative hypothesis than the null hypothesis\n",
    "(numbers below 1 mean the observed p-value is more likely under the",
    "null hypothesis than under the alternative hypothesis)\n")

# Now plot a histogram of the p-values (the leftmost bar contains all
# p-values between 0.00 and 0.05)
hist(pvalues, main = "Histogram of p-values", xlab = "Observed p-value",
     breaks = 20)
###
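The probability the simulation above estimates can also be cross-checked analytically: under the null the p-value is uniform, so P(.04 < p <= .05) is exactly 1%, and under the alternative it equals the difference between power at alpha = .05 and power at alpha = .04. A sketch, assuming the same design as the simulation (n = 32 per group, means 100 vs 110, sd = 20, i.e. d = 0.5):

```r
# Analytic counterpart of the simulation: probability of .04 < p <= .05
# for two groups of n = 32 with means 100 and 110 and sd = 20 (d = 0.5).
lowp  <- 0.04
highp <- 0.05
p.high <- power.t.test(n = 32, delta = 10, sd = 20, sig.level = highp)$power
p.low  <- power.t.test(n = 32, delta = 10, sd = 20, sig.level = lowp)$power

prob  <- p.high - p.low            # P(lowp < p <= highp) under H1
ratio <- prob / (highp - lowp)     # times more probable than under H0
```

With nSims = 10,000 draws, the simulated percentage should land close to this analytic value, up to Monte Carlo error.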
Your...My third sentence in my comment at 7:26 AM. <br /><br />Your claim that "We can never know if the null is false", is either inconsistent with your previous post "The Null Is Always False (Except When It Is True)" (because what's the point of arguing whether null is false if we can't know) or irrelevant to our discussion (since I'm using the same language as you did in your previous post, regardless of whether this language super-accurate).matushttp://simkovic.github.io/noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-33159393702419006322015-03-20T07:36:57.443-07:002015-03-20T07:36:57.443-07:00which third sentence exactly? We can never know if...which third sentence exactly? We can never know if the null is false of the alternative is false. But we can use methods that make it unlikely we make too many errors, in the long term.Daniel Lakenshttps://www.blogger.com/profile/18143834258497875354noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-19541972803811485932015-03-20T07:28:22.985-07:002015-03-20T07:28:22.985-07:00The third sentence should read "We don't ...The third sentence should read "We don't know whether the null is TRUE".matushttp://simkovic.github.io/noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-61986083760625696272015-03-20T07:26:20.906-07:002015-03-20T07:26:20.906-07:00Nah, you haven't answered anything. You have j...Nah, you haven't answered anything. You have just shown that some tests in the many labs study fail to reject null. We don't know whether the null is false. It may be that just your sample size is not large enough to detect that particular effect. <br /><br />"But yes, after you have discovered something is a signal and not noise, move on to estimation." Why after? Why do we need this two step procedure? Seems like a completely unnecessary research slow-down to me. 
The ES estimate gives you the ability to make any comparison including a comparison with your null.matushttp://simkovic.github.io/noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-9706012440821671772015-03-20T07:23:40.347-07:002015-03-20T07:23:40.347-07:00Thanks, changed it to the 0.3 it should have been!...Thanks, changed it to the 0.3 it should have been! I'll leave your comment, credit where credit is due, thanks for taking the effort to point this out!Daniel Lakenshttps://www.blogger.com/profile/18143834258497875354noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-85681871957857547502015-03-20T07:21:09.761-07:002015-03-20T07:21:09.761-07:00and feel free to delete the smartypants comment af...and feel free to delete the smartypants comment after fixing it..Lucas Kellerhttps://www.blogger.com/profile/01115525602440772824noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-46693112370889921792015-03-20T07:20:25.024-07:002015-03-20T07:20:25.024-07:00there's a typo in one paragraph, albeit d = 0....there's a typo in one paragraph, albeit d = 0.03 is a small effect, 484 participants would not be enough to detect itLucas Kellerhttps://www.blogger.com/profile/01115525602440772824noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-91055464352011859852015-03-20T06:55:00.262-07:002015-03-20T06:55:00.262-07:00Hi, luckily I have already answered that question ...Hi, luckily I have already answered that question in this post: The Null Is Always False (Except When It Is True) http://daniellakens.blogspot.nl/2014/06/the-null-is-always-false-except-when-it.html. 
But yes, after you have discovered something is a signal and not noise, move on to estimation (if that answers a question you are interested in).Daniel Lakenshttps://www.blogger.com/profile/18143834258497875354noreply@blogger.comtag:blogger.com,1999:blog-987850932434001559.post-26233680105351892872015-03-20T06:52:24.100-07:002015-03-20T06:52:24.100-07:00P.P.P.S. But Daniel, why would anyone "try to...P.P.P.S. But Daniel, why would anyone "try to infer whether the observed effect is random noise"? We are not shuffling cards or throwing dice, we are testing human beings whose behavior (and hence your measurements) will always be a product of highly structured process. Your null hyp is always false. Either go back to studying card games or move on to parameter estimation. matushttp://simkovic.github.io/noreply@blogger.com