Comments on The 20% Statistician: One-sided F-tests and halving p-values
Blog author: Daniel Lakens

Rickard (2016-04-10):
Great that you clarified this.

Anonymous (2016-04-08):
Once again, nice post.
Incidentally, there is a context in which you employ a "proper two-sided F-test" (i.e., you look at both the left and the right tail of the F distribution): the variance ratio test. If you have two populations, both normally distributed with unknown variance, and you are interested in H0: sigma^2(A) = sigma^2(B) versus the two-sided alternative, the test statistic is F = var(A)/var(B). Values in the left 2.5% of the distribution indicate that sigma^2(A) is smaller than sigma^2(B); values in the far right tail indicate the opposite.
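A quick sketch of the variance ratio test described in this comment, using scipy; the samples and seed are invented purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)
a = rng.normal(loc=0.0, scale=1.0, size=30)  # sample from population A
b = rng.normal(loc=0.0, scale=1.5, size=25)  # sample from population B

# Variance ratio statistic with (n_A - 1, n_B - 1) degrees of freedom.
F = np.var(a, ddof=1) / np.var(b, ddof=1)
dfn, dfd = len(a) - 1, len(b) - 1

# Two-sided p-value: double the smaller of the two tail areas, so values
# in either 2.5% tail lead to rejection at alpha = .05.
p = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))
print(f"F = {F:.3f}, two-sided p = {p:.4f}")
```

Both tails of the F distribution are used here, unlike in the ANOVA F-test discussed in the post.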
Kevin McConway (2016-04-08):
Hi Daniel,
I haven't read that Fisher did this with Mendel's work using F tests, but he certainly did with chi-squared; see for example https://digital.library.adelaide.edu.au/dspace/bitstream/2440/15123/1/144.pdf . There is quite a bit more on this in the Mendel article on Wikipedia. Chi-squared is like F in that the statistic can't be negative.

Anonymous (2016-04-08):
Or, one could say the analysis evaluates whether the error we make when taking the grand mean as a model for the data (model H0) is equal to the error we make when we assume a separate mean for each group (model H1). Then the two-tailed test becomes rather silly: "we predict the group mean model is better AND worse than the grand mean model at the same time".
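The model-comparison framing in this comment can be made concrete: the F statistic compares the error of the grand-mean model (H0) with the error of the group-means model (H1). A minimal sketch with made-up data, checked against scipy's one-way ANOVA:

```python
import numpy as np
from scipy import stats

# Three hypothetical groups (numbers invented for illustration).
groups = [np.array([4.1, 5.0, 5.5, 4.7]),
          np.array([5.9, 6.3, 5.6, 6.8]),
          np.array([5.2, 4.9, 5.8, 5.4])]
y = np.concatenate(groups)

# Error of the grand-mean model (H0): total sum of squares.
ss_total = np.sum((y - y.mean()) ** 2)
# Error of the group-means model (H1): within-group sum of squares.
ss_within = sum(np.sum((g - g.mean()) ** 2) for g in groups)

k, n = len(groups), len(y)
# F = (reduction in error per extra parameter) / (remaining error per df).
F = ((ss_total - ss_within) / (k - 1)) / (ss_within / (n - k))
p = stats.f.sf(F, k - 1, n - k)  # right tail only

# Should match scipy's one-way ANOVA.
F_scipy, p_scipy = stats.f_oneway(*groups)
print(F, p, F_scipy, p_scipy)
```

Only the right tail matters here: a large F means the group-means model reduces the error substantially.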
Matt (2016-04-08):
Hi Daniel,
Interesting question. I might be pointing out the obvious here, but:

1) The F-test is one-sided in the sense that we only look at the extreme values on one side *of the F distribution*, but:

2) It is not one-sided in the sense of a one-sided t-test, where one is only looking at differences *between the actual sample means* that go in a particular direction.

Informally, this is fairly clear from the fact that a two-sided t-test and an F-test with 1 df give exactly the same p-values.

In a bit more detail: the p-value from an F-test is the probability of observing a sum of squared differences between the group means and the grand mean at least as large as the one observed. Because the differences from the grand mean are squared, the direction of the differences between sample means is ignored.

So in the sense that everyday researchers probably care about most (i.e., with respect to the actual hypothesis tested), an F-test probably isn't best described as "one-sided".

(I wouldn't exactly call it two-tailed either, of course, given that there can be many different sample means, any pair of which can differ from one another in either direction!)

Daniel Lakens (2016-04-07):
Hi Thom! Yes, see my sneaky use of 'default' F-test, and how we are interested in whether the F-value is larger than 1. It is possible to test for the other tail.
Uli Schimmack mentioned on Twitter something about how Fisher might have used it to examine Mendel's data - but I couldn't find anything in detail on it, and decided it wasn't part of the 80% I talk about on this blog ;)

thom (2016-04-07):
This was the subject of an argument between the two examiners of my PhD thesis (during my viva)... There is arguably a one-sided F-test that looks at F-values that are too small to be due to chance (F below 1) - I have a paper on it somewhere. Essentially you are testing whether there is less variability than expected by chance. I'm not sure it is often useful in practice, as it seems to rely on strong continuity assumptions and to require large samples.
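The left-tailed F-test thom describes (is there *less* variability between groups than chance predicts?) simply uses the other tail of the same distribution. A sketch, with a hypothetical F value and degrees of freedom:

```python
from scipy import stats

# Suppose a one-way ANOVA gave F = 0.12 with df = (3, 36);
# these numbers are invented for illustration.
F, dfn, dfd = 0.12, 3, 36

# Default right-tailed p-value: is F surprisingly large?
p_right = stats.f.sf(F, dfn, dfd)
# Left-tailed p-value: is F surprisingly small, i.e. is there less
# between-group variability than expected under H0?
p_left = stats.f.cdf(F, dfn, dfd)
print(p_right, p_left)
```

A very small `p_left` would suggest the group means are suspiciously similar, which is the pattern Fisher is said to have examined in Mendel's data.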
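Matt's observation above, that a two-sided t-test and an F-test with 1 numerator df give exactly the same p-values, is easy to check numerically; the data below are invented for illustration:

```python
import numpy as np
from scipy import stats

g1 = np.array([5.1, 4.8, 6.0, 5.5, 5.2])
g2 = np.array([6.2, 5.9, 6.5, 6.1, 7.0])

t, p_t = stats.ttest_ind(g1, g2)   # two-sided, pooled variance by default
F, p_f = stats.f_oneway(g1, g2)    # 1 numerator df for two groups

# The p-values should coincide (up to floating point), and F == t**2.
print(p_t, p_f)
```

Squaring the t statistic folds both directions of the mean difference into the right tail of the F distribution, which is why the "one-sided" F-test is directionless.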