Comments on The 20% Statistician: "Observed power, and what to do if your editor asks for post-hoc power analyses"

Yes, you can always calculate the effect size you could detect with a certain level of power. But there is never information that goes beyond the p-value. So knowing how sensitive your design was is always good info to have, but it is difficult to use it as a way to draw inferences from data.
-- Daniel Lakens, 2017-12-13

I think there are some forms of post-hoc power analysis that are appropriate.

I agree wholeheartedly with everything you have said: calculating the power of the study from the sample size (and SD), the alpha, and the *observed* effect size is completely useless post hoc.

However, would it not be reasonable for the editor to ask for the following? In cases where a power/sample-size calculation was not performed in the original paper (perhaps because group sizes were determined by other factors), would it not be suitable for the editor to ask for a calculation of the *detectable* difference, i.e. put into the power calculation the alpha, the sample size, and a pre-agreed beta value, and see what size of difference the study would have been able to detect?

I understand that this method closely aligns with confidence intervals. However, I think it demonstrates under-powered studies with more impact.
Particularly in non-inferiority trials that claim non-inferiority when they are massively under-powered.
-- George Harvey, 2017-12-13
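The "detectable difference" calculation George Harvey describes can be sketched numerically. A minimal example, assuming a two-sided, two-sample t-test with equal group sizes and using scipy's noncentral t distribution; the function names are illustrative, not from the comment:

```python
import numpy as np
from scipy import stats, optimize

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test when the true effect size is d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|T| > t_crit) under the noncentral t distribution
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

def detectable_effect(n_per_group, alpha=0.05, power=0.80):
    """Smallest Cohen's d the design can detect with the pre-agreed power (1 - beta)."""
    return optimize.brentq(
        lambda d: power_two_sample_t(d, n_per_group, alpha) - power,
        1e-6, 10.0)

# Classic benchmark: with 64 participants per group, alpha = .05 and 80% power,
# the detectable effect is about d = 0.50.
print(round(detectable_effect(64), 2))
```

Note that this asks "what could the study have detected?", which depends only on the design, not on the observed result; that is what distinguishes it from the useless observed-power calculation.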
Interesting points!

But what if the editor is requesting a power analysis because he or she considers your sample size too small, and therefore suspects that the significant differences you found using ANOVA might not be trustworthy?
-- Elisa Seyboth, 2017-09-05

(1) Effect size is useful irrespective of whether the study is experimental or not, as it applies to the result and not to the methodology of the research. (2) Effect size is useful for accepting positive results. (3) I do not have the experience to comment on its role in negative results. Post-hoc power analysis of a negative result usually produces very low power when the sample size is modest (note that I did not say small). In our genetics case-control study we found a negative result with a sample of 100 per group. The result may not change even if we repeated it with, say, 1000 cases. But how do we establish this statistically? In another project, we found a negative result with the first 30 samples but a positive result after analyzing 100 samples.
-- Sharath B, 2017-06-22

Very nice blog! I have a question!
I have two independent groups. I have looked at the means of these two groups and ran t-tests to detect significant differences between them, but there were none to be found. I have not done anything experimental; it is an observational/comparative/cross-sectional (I don't know what to call it) study. Now I am asked to run a post-hoc power analysis (a power analysis wasn't done beforehand because this is a new field and data were lacking) to see whether it was even possible for me to detect any reasonable differences with my number of observations. Does this make sense? How could I do this? Is effect size even necessary in studies that are not experimental?
-- Thom, frustrated BSc student, 2017-06-20

It's not the best approach - you want to use equivalence testing. I explain why in detail here: https://osf.io/preprints/psyarxiv/97gpc/
-- Daniel Lakens, 2017-05-08

I wonder whether it would be legitimate to assess the "observed power" of an experiment by assuming a desired effect size.

What I mean by this: say I want to see whether a null finding might stem from a lack of power. I could define a desirable effect size that I want to be able to detect, and compute the power to observe such an effect given the sample size, alpha level, and experimental design of the study.
Would it be valid to reason that "we did not find a significant effect, even though we had a power of .95 to find an effect of size d = 0.25, therefore we assume that the nonsignificant finding is not the result of a lack of power"?
-- Anonymous, 2017-05-08
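The arithmetic behind a claim like "power of .95 to find d = 0.25" is easy to check. A minimal sketch, again assuming a two-sided, two-sample t-test with equal groups and scipy; the sample size of 420 per group is my illustration of roughly what that claim requires, not a figure from the comment:

```python
import numpy as np
from scipy import stats

def power_for_effect(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test if the true effect size is d."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# To have ~95% power for a small effect of d = 0.25 at alpha = .05,
# you need on the order of 420 participants per group:
print(round(power_for_effect(0.25, 420), 3))
```

As Lakens notes in his reply above, equivalence testing is the more direct way to argue for the absence of a meaningful effect; this calculation only quantifies the design's sensitivity for an effect size chosen in advance.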
The way the effect size was calculated by Rosenthal has been discredited (well, it may be harsh to say "discredited"). See: http://www.researchgate.net/publication/232494334_Is_psychological_research_really_as_good_as_medical_research_Effect_size_comparisons_between_psychology_and_medicine
-- Anonymous, 2015-01-30

Hi Roger, could you elaborate a little on "discredited examples like Rosenthal's aspirin study"? I am aware of the paper you're referring to, but not clear on what is discredited.
-- Jake Westfall, 2014-12-20

You are right to focus attention on the effect size rather than power as an argument for "proving the null" (even if only suggestively); but we have a long way to go in agreeing what effect size is indeed too small to matter, when discredited examples like Rosenthal's aspirin study keep circulating.

I would go further and say that a priori power analysis is useful for a direct replication; has only suggestive value for a conceptual replication; and is near-useless for the 90% (?)
of published research that rests on finding a novel effect, even one such as a moderation by context that may include an incidental replication (the power needed for the interaction will have little to do with that needed for the main effect).
-- Roger G-S, 2014-12-19

1. I would like to add that post-hoc power for a single statistical test is useless. However, post-hoc power provides valuable information for a set of independent statistical tests. I would not trust an article that reports 10 significant results when the median power is 60%. Even if median power is 60% and only 60% of the results are significant, post-hoc power is informative: it suggests that researchers have only a 60% chance of getting a significant effect in a replication study, and should increase power to make a replication effort worthwhile.

2. The skewed distribution of observed power when true power is unequal to .50 was discussed in Yuan and Maxwell (2005): http://scholar.google.ca/scholar_url?url=http://irt.com.ne.kr/data/on_the_pos-ences.pdf&hl=en&sa=X&scisig=AAGBfm1W3wVWuS3MecdWMU0dEkoVal4U3A&oi=scholarr&ei=wZOUVNuXD67IsASZ-YKIDQ&ved=0CB0QgAMoADAA

3. It was also discussed in Schimmack (2012) as a problem for averaging observed power as an estimate of true power, and was the reason why the replicability index uses the median to estimate the true (median) power of a set of studies: http://r-index.org/uploads/3/5/6/7/3567479/introduction_to_the_r-index__14-12-01.pdf
-- drreplicable, 2014-12-19
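The median-power point in the last comment can be illustrated with a small computation. A sketch assuming two-sided z-tests at alpha = .05, with the observed z recovered from reported two-sided p-values; this mirrors the general logic of the R-Index approach, but the code and the example p-values are mine, not from the R-Index materials:

```python
import numpy as np
from scipy import stats

ALPHA = 0.05
Z_CRIT = stats.norm.isf(ALPHA / 2)          # ~1.96 for a two-sided test

def observed_power(p_two_sided):
    """Post-hoc power, treating the observed z as if it were the true effect."""
    z = stats.norm.isf(p_two_sided / 2)     # |z| implied by the two-sided p-value
    return stats.norm.sf(Z_CRIT - z) + stats.norm.cdf(-Z_CRIT - z)

# A result that is just significant (p = .05) has observed power of ~50%:
print(round(observed_power(0.05), 2))

# Five results, all "significant", can still have a mediocre median power,
# which is informative about the replicability of the set as a whole:
p_values = [0.04, 0.03, 0.01, 0.02, 0.045]
print(round(float(np.median([observed_power(p) for p in p_values])), 2))
```

This also makes the single-test case concrete: observed power is a monotone transformation of the p-value (p = .05 maps to ~50% power), which is why it adds nothing for one test but can say something about a collection of tests.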