Comments on The 20% Statistician (Daniel Lakens)

ronkenett (2021-12-04 06:58):
Part 2: Interpreting the p-value of a statistical analysis depends on its mode of presentation. A verbal presentation can be derived from a statistical analysis, but it is a different presentation mode. Somehow this topic has not been discussed. A proposal based on alternative representations, with examples from pre-clinical and clinical research, can be found at https://www.dropbox.com/s/zfmuc81ho2yschm/Kenett%20Rubinstein%20Scientometrics%202021.pdf?dl=0

ronkenett (2021-12-04 06:22):
Part 1: Interpretation is the focus of this blog. As is often the case, the comments on the blog are also interesting. Stating the goal of the analysis is an obvious step toward clarifying the interpretation of the analysis results. We expanded on Hand's deconstruction paper and propose a framework of information quality. It has four components and eight dimensions, "goal" being the first component. http://infoq.galitshmueli.com/home

Peter (2021-11-29 20:25):
How about we first discuss what Fisher actually said before dismissing it without engaging with it? In any case, I would have expected an actual argument for why "Fisher is not really the best source on how to interpret test results"...

Daniel Lakens (2021-11-25 13:27):
Fisher is not really the best source on how to interpret test results. It is a lot simpler (and better) from a Neyman-Pearson approach. You conclude something *with a known maximum error rate*; so you draw a conclusion, but at the same time accept that in the long run you could be wrong at most, e.g., 5% of the time. Conclusions are, as I write in the blog, always tentative.

Peter (2021-11-24 14:23):
There is one thing I keep asking and never get an answer to, which is kind of weird since it's so obviously relevant and is a point that comes from one of the founders of significance testing. You say: "After observing a p-value smaller than the alpha level, one can therefore conclude..." How is that compatible with what Fisher said about significance tests (http://www.theopensociety.net/category/fisher-r-a/): "A scientific fact should be regarded as experimentally established only if a properly designed experiment *rarely fails* to give this level of significance"?

Do we all agree that Fisher can only have meant that after observing (obtaining, actually) a single p-value *we do not conclude anything*? But that we only conclude things after obtaining *many* p-values?
(As many as we deem necessary to be able to speak of "rarely fails".)

Sander Greenland (2021-11-22 11:31):
Part 2: The original observed-quantile conceptualization of P-values can conflict with the NPL/decision conceptualization (in e.g. Lehmann 1986) used, for example, in Schervish 1996. The latter paper showed how NPL P-values can be incoherent measures of support, with which I wholly agree. As I think both K. Pearson and Fisher saw, the value of P can only indicate the compatibility of data with models, and many conflicting models may be highly compatible with the data. But P-values can be transformed into measures of refutation, conflict, or countersupport, such as the binary S-value, the Shannon or surprisal transform -log2(p), as reviewed in the Greenland et al. cites above.

Schervish 1996 failed to recognize the Fisherian alternative derivation/definition of P-values and so wrote (as others do) as if the NPL formalization were the only one available or worth considering, a shortcoming quite at odds with sound advice like "there is no reason to limit oneself to a single tool or philosophy, and if anything, the recommendation is to use multiple approaches to statistical inferences." And while I hope everyone agrees that "It is not always interesting to ask what the p-value is when analyzing data, and it is often interesting to ask what the effect size is", I think it important to recognize that most of the time our "best" (by the usual statistical criteria) point estimates of effect sizes can be represented as maxima of two-sided P-value functions or crossing points of upper and lower P-value functions, and our "best" interval estimates can be read off the same P-functions.

I must add that I am surprised that so many otherwise perceptive writers keep repeating the absurd statement that "P-values overstate evidence", which I view as a classic example of the mind-projection fallacy. The P-value is just a number that sits there; any overstatement of its meaning in any context has to be on the part of the viewer. I suspect the overstatement claim arises because some are still subconsciously sensing P-values as some sort of posterior probability (even if consciously they would deny that vehemently). This problem indicates that attention should also be given to the ways in which P-values can supply interesting bounds on posterior probabilities, as shown in Casella & R. Berger 1987ab and reviewed in Greenland & Poole 2013ab (all are cited in Greenland 2019 above), and to how P-values can be rescaled as binary S-values -log2(p) to better perceive their information content (again as reviewed in the Greenland et al. citations above).

Sander Greenland (2021-11-22 11:29):
Part 1: I thought this post provided mostly good coverage under the Neyman-Pearson-Lehmann/decision-theory (NPL) concept of P-values as random variables whose single-trial realization is the smallest alpha level at which the tested hypothesis H could be rejected (given that all background assumptions hold). In this NPL vision, P-values are inessential add-ons that can be skipped if one wants to just check in which decision region the test statistic fell.

But I object to the coverage above, and in its cites, for not recognizing how the Pearson-Fisher P-value concept (which is the original form of their "value of P") differs in a crucial fashion from the NPL version.
Fisher strongly objected to the NP formalization of statistical testing, and I think his main reasons can be made precise when one considers alternative formalizations of how he described P-values. There is no agreed-upon formal definition of "evidence" or how to measure it, but in Fisher's conceptual framework P-values can indeed "measure evidence" in the sense of providing coherent summaries of the information against H contained in measures of divergence of data from models.

The Pearson and Fisher definition started from divergence measures in single trials, such as chi-squared or Z-statistics; P is then the observed divergence quantile (tail area) in a reference distribution under H. No alpha or decision need be in the offing, so those become the add-ons. For some review material see:
Greenland S. 2019. http://www.tandfonline.com/doi/pdf/10.1080/00031305.2018.1529625
Rafi & Greenland. https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-020-01105-9
Greenland & Rafi. https://arxiv.org/abs/2008.12991
Cole SR, Edwards J, Greenland S. 2021. https://academic.oup.com/aje/advance-article-abstract/doi/10.1093/aje/kwaa136/5869593
Related views are in e.g.:
Perezgonzalez JD. P-values as percentiles. Commentary on: "Null hypothesis significance tests. A mix-up of two different theories: the basis for widespread confusion and numerous misinterpretations". Front Psych 2015;6. https://doi.org/10.3389/fpsyg.2015.00341
Vos P, Holbert D. Frequentist inference without repeated sampling. ArXiv 2019. https://arxiv.org/abs/1906.08360

Sander Greenland (2021-11-22 10:54):
The original definition of the "value of P" in Pearson 1900, which became known as the P-value by the 1920s, is an observed tail area of a divergence statistic, while in the Neyman-Pearsonian definition assumed above, P is a random variable defined from a formal decision rule with known conditional error rates. The two concepts can come into conflict over the proper extension beyond simple hypotheses in basic models; see e.g. Robins et al. JASA 2000.

johann (2021-11-21 23:45):
Who would dispute the definition of a p-value? And who would dispute that it is in fact a value that is called p? The discussion is about what inferences to draw from a p-value, and whether such inferences are consistent with its definition. But there are no different ways to calculate a p-value.

Andy Grieve (2021-11-20 14:38):
"p-values should be interpreted as p-values" is no different to the former UK Prime Minister's comment "Brexit means Brexit". Since at the time there was no consensus as to the meaning of Brexit, the Brexit meme was meaningless.
The same may be true for this p-value meme, if such it is to become, since the "value" in p-value is itself disputed.

Daniel Lakens (2021-11-01 08:44):
Your first part about sequential analysis is not really a solid analysis; the problems you mention are all easily solved, see https://psyarxiv.com/x4azm/.

About the second part: preregistration is more than just being 'open' about a process. It is about allowing others to evaluate the severity of a test. This means you need to provide very specific information in the preregistration; a problem is that many are now too vague to allow evaluating the severity of a test.

Unknown (2021-10-31 16:51):
This is all fine except for the advice on the testing-running-testing cycle. While it's true that you can plan for this and avoid inflated Type I error rates, it still has problems. The first is that, in the long run, it is not more efficient. Sometimes you'll need to run more participants and sometimes fewer, but it doesn't make things more efficient in the long run. The second is that it generates a literature where all of the small studies have exaggerated effect sizes and the large ones have underestimated effect sizes. Consider the situation: you're going to run until you find an effect. If your initial sample underestimated that effect, perhaps even in the wrong direction, you'll need to run many participants in order to eventually find an effect, and the underestimation bias will never be eliminated.
If you start with an overestimate in your initial sample, you'll be done collecting data quickly and, again, will not have eliminated the overestimation bias in your sample.

Every other bit of overregularization mentioned here is spot on, though. I especially often run into the preregistration issue. I never explain it to my students as a way to avoid Type I errors. I only describe it as a way to be open about your process. With that mindset of doing open science, they don't worry about being able to solve every analysis problem prior to doing it.

Anonymous (2021-10-31 11:52):
Thanks for clarifying the issues around when p-hacking is/is not p-hacking, Daniël. I'm now considering how your take applies to our entry in the Catalogue of Bias: https://catalogofbias.org/biases/data-dredging-bias/

I wonder if you'd be kind enough to provide your view of our entry and where, if any, edits can be made to improve the accuracy of its content?

David
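The run-until-significant dynamic described in the testing-running-testing discussion above can be sketched in a small simulation. This is an editorial illustration, not part of the thread; the function name, sample sizes, and the true effect of 0.2 are all illustrative choices.

```python
import random
import statistics

def run_until_significant(true_effect, n_start=20, n_step=10, n_max=500,
                          z_crit=1.96, seed=None):
    """Add observations until |mean|/SE crosses z_crit, then report the
    observed effect (population SD fixed at 1, so SE = 1/sqrt(n))."""
    rng = random.Random(seed)
    data = [rng.gauss(true_effect, 1.0) for _ in range(n_start)]
    while True:
        m = statistics.mean(data)
        se = 1.0 / len(data) ** 0.5
        if abs(m) / se > z_crit or len(data) >= n_max:
            return m, len(data)
        data.extend(rng.gauss(true_effect, 1.0) for _ in range(n_step))

# Compare the reported effects of studies that stopped early vs. late.
results = [run_until_significant(0.2, seed=i) for i in range(2000)]
small = [m for m, n in results if n <= 50]    # stopped quickly
large = [m for m, n in results if n >= 200]   # had to keep collecting
print("true effect: 0.2")
print("mean effect in small studies:", round(statistics.mean(small), 2))
print("mean effect in large studies:", round(statistics.mean(large), 2))
```

Studies that stop early do so because an early overestimate crossed the threshold, so the small-n "significant" studies overstate the effect; studies forced to run long started from underestimates and stay biased low, which is exactly the literature-level pattern the commenter describes.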
Ulrich Schimmack (2021-09-21 23:20):
Thank you for sharing. He deserves more credit.

Stephen Senn (2021-09-21 10:31):
An interesting post, but misleading in some respects. Neyman deserves praise for his support of David Blackwell, but the implication that in this he was somehow different from Fisher is false. Fisher supported and collaborated with many Indian statisticians (see the Sankhya obituary here: http://www.senns.uk/Sankhya_Obit_RAF.pdf) and his doctoral students included C. R. Rao and the Ghanaian Ebenezer Laing. (It is interesting to note that P. V. Sukhatme, whose work Fisher liked, studied for a PhD with Neyman and a DSc with Fisher, consistent with the point of view that attitudes to non-European researchers do not really separate them.) Furthermore, Neyman's enthusiasm for communism had its negative side. It proved to be an embarrassment for Polish statisticians who had to live the reality of the paradise he imagined.
As regards significance and hypothesis testing, in my opinion,
the difference between Neyman and Pearson on the one hand and Fisher on the other has little to do with P-values versus rejection and more to do with the role of alternative hypotheses (crucial for Neyman, not needed by Fisher) and conditioning (important for Fisher, but less obviously so for Neyman).

(A small point: there is a typo; Neyman was born in 1894. Also, I suppose it is debatable whether Bender, Neyman's place of birth, should be described as being in Russia.)

Anonymous (2021-09-21 05:38):
Thanks, Daniel, for taking the time and effort to write this post. Very interesting and much appreciated. I see a typo in Neyman's birth year.
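As an editorial sketch of two quantities discussed in Greenland's comments (the function names are mine, not from the thread): the Pearson-Fisher P-value as an observed tail area of a divergence statistic, and its Shannon surprisal (S-value) transform -log2(p).

```python
import math

def two_sided_p(z):
    """Observed tail area of a Z divergence statistic under the test
    hypothesis (equivalently the upper tail of chi-squared, 1 df, at z**2)."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def s_value(p):
    """Surprisal transform -log2(p): bits of information against the
    test hypothesis supplied by the observed P-value."""
    return -math.log2(p)

p = two_sided_p(1.96)
print(round(p, 3))           # the familiar 0.05
print(round(s_value(p), 1))  # about 4.3 bits
```

On this scale p = 0.05 carries about as much refutational information as seeing four heads in a row from a fair coin, which is the kind of perceptual recalibration the S-value reviews cited above argue for.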