Comments on The 20% Statistician: "Absence of evidence is not evidence of absence: Testing for equivalence"

Never heard about it, but why don't you just use the spreadsheet that comes with my 2017 article? And R is SUPER easy if you just want to use TOST. Like a simple calculator.
-- Daniel Lakens (2017-11-08)

Have you had experience using the XLSTAT 'add-on' software for Excel to calculate TOST? Their online tutorial makes it look simple. I am unfamiliar with R, and to save me learning it, I thought this might be useful for equivalence testing. Any thoughts? Many thanks in advance.
-- Anonymous (2017-11-08)

Very good question. I would think so as well, but I've not found a reference that does the simulations to show this. I will do them and let you know when I publish a paper about this.
-- Daniel Lakens (2016-10-29)

Hi,
I probably missed this, but wouldn't you need a Bonferroni-type correction when looking at two tests?
-- Anonymous (2016-10-29)
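A quick illustration of the multiplicity question above: TOST declares equivalence only when *both* one-sided tests reject (an intersection-union procedure), so no Bonferroni-type correction should be needed. A minimal Python simulation sketch, assuming a two-sample design with the true effect placed exactly on the d = 0.5 bound (the worst case); this is illustrative code, not the author's R script:

```python
import numpy as np
from scipy import stats

# Worst case for TOST: the true difference sits exactly on the equivalence
# bound. If no correction is needed, the rate of falsely declaring
# equivalence should stay at or below alpha.
rng = np.random.default_rng(42)
alpha, n, margin, sims = 0.05, 50, 0.5, 20_000
false_equivalence = 0
for _ in range(sims):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(margin, 1.0, n)          # true difference = +0.5 SD
    sp = np.sqrt(((n - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (2 * n - 2))
    se = sp * np.sqrt(2 / n)
    diff = y.mean() - x.mean()
    df = 2 * n - 2
    # Two one-sided tests against the bounds -margin and +margin
    p_upper = stats.t.cdf((diff - margin) / se, df)   # H0: diff >= +margin
    p_lower = stats.t.sf((diff + margin) / se, df)    # H0: diff <= -margin
    if max(p_upper, p_lower) < alpha:                 # both must reject
        false_equivalence += 1
print(false_equivalence / sims)   # near, but not above, alpha
```

Because the overall test rejects only when the larger of the two one-sided p-values is below alpha, its type I error rate is capped at alpha without any adjustment.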
I'm late to the game...

I agree with nearly everything written in the post, except one, I believe, crucial issue.

In the section under "Rejecting the presence of a meaningful effect" it reads:
"This means we can reject the null of an effect that is larger than d = 0.5 or smaller than d = -0.5 and conclude this effect is smaller than what we find meaningful (and you’ll be right 95% of the time, in the long run)."

The first part of the sentence is of course correct, but the second part makes a probabilistic claim about the truth of the alternative hypothesis, which cannot be made in the frequentist framework (yes, the sentence uses frequentist language, but the inference is about the truth of the hypothesis). If one wants to make such claims, one would need to use Bayes and a prior to go from p(data|H0) to p(H1|data).

I think a more accurate version of the cited sentence would be:

"This means we can reject the null of an effect that is larger than d = 0.5 or smaller than d = -0.5, because the probability of the observed data given the hypothesis that |d| > 0.5 is smaller than 5%."

Maybe that doesn't sound very satisfying, but if one likes to make statements about the probability of hypotheses, there is no way around a Bayesian approach.
-- guido (2016-09-15)

If you only do an equivalence test after p > 0.05, isn't alpha now inflated for that test?
-- Unknown (2016-07-19)

Hi, I'm thinking about turning this into a paper - will look into a power analysis for r for the script, indeed, makes sense to provide!
-- Daniel Lakens (2016-06-22)
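Pending such a feature, the power of an equivalence test for a correlation can be estimated by simulation using the Fisher z approximation (atanh(r) is roughly normal with standard error 1/sqrt(n - 3)). A sketch with illustrative defaults; the function name, bounds, and sample sizes are assumptions, not part of Lakens' scripts:

```python
import numpy as np
from scipy import stats

def tost_r_power(n, r_true=0.0, bound=0.2, alpha=0.05, sims=4000, seed=7):
    """Simulated power of a TOST equivalence test on a correlation.

    Tests H0: |rho| >= bound via two one-sided tests on Fisher z.
    All defaults here are illustrative.
    """
    rng = np.random.default_rng(seed)
    zb, se = np.arctanh(bound), 1.0 / np.sqrt(n - 3)
    cov = [[1.0, r_true], [r_true, 1.0]]
    hits = 0
    for _ in range(sims):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        z = np.arctanh(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1])
        p_upper = stats.norm.cdf((z - zb) / se)   # H0: rho >= +bound
        p_lower = stats.norm.sf((z + zb) / se)    # H0: rho <= -bound
        if max(p_upper, p_lower) < alpha:
            hits += 1
    return hits / sims

print(tost_r_power(n=200))
```

With n = 200, a true correlation of zero, and bounds of ±0.2, power under these settings comes out around 0.75-0.80; the same scheme extends to nonzero r_true or other bounds.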
Hi Daniel,
thanks for making it so easy to conduct these tests.

Are you planning on amending the syntax to provide a power analysis for TOST r (correlations)? That would be really useful for me.
-- Oli à LLN (2016-06-22)

Yes, the null of non-equivalence, or the null in equivalence tests, is correct; the null of equivalence is not correct.
-- Daniel Lakens (2016-06-14)

Hi, Daniel. Thanks for a very interesting post and for making your R code available.

In the conclusion, you refer to the "null of equivalence." Strictly speaking, shouldn't this be the null hypothesis of nonequivalence?

See Rogers, Howard, and Vessey (1993, p. 554): "There is a null hypothesis asserting that the difference between the two groups is at least as large as the one specified by the investigator [i.e., nonequivalence], and there is an alternative hypothesis asserting that the difference between two groups is smaller than the specified one [i.e., equivalence]."
-- Anonymous (2016-06-14)

Hi Daniel, interesting stuff! A couple of more or less random thoughts on this:

1) Equivalence testing is usually used only when a researcher actually hypothesises that a particular null is true. But it can be used more widely: there's no reason we couldn't generally approach inference about any parameter as a problem of working out whether we can conclude that a parameter is trivially small, conclude that it is reasonably large, or conclude that there is too much uncertainty to say. We can do that by combining equivalence testing with traditional NHST, or using something like magnitude-based inference as used in sports science - see http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0147311

2) The frequentist approach works fine, but one key advantage of a Bayesian approach here is that we can take into account the fact that most effects in psychology are small. I.e., we can place a prior that represents a belief that most effects aren't too far from zero. That makes it less likely that we'll conclude there's a substantial effect, but also more likely that we can conclude confidently that an effect is trivial.
-- Matt (2016-06-03)

Hi, if you set a smallest effect size of interest, power up for it, and don't find a significant result, you might, but don't automatically, have evidence for an effect SMALLER than your SESOI. You could be in the 'undetermined' condition visualized above.
-- Daniel Lakens (2016-05-27)

Hi Daniel. For good reason, Popper's principle of falsifiability has been a pillar of science, but even Gosset and Fisher recognised imperfections of 0.05 as a cut-off. As you state, over- and under-powered tests will mask meaningful effects. In medicine and sport, there are two key questions about interventions: first, does the treatment/training work, and second, if yes, how well? With equivalence-type trials, where effects of a new therapy are compared with those of usual care, the conventional null hypothesis testing approach can, perhaps, be retained via the minimum clinically (or practically) important difference that is declared at the outset and that must be exceeded before the new treatment can be considered an improvement. The next stage is to evaluate whether the improvement is cost-effective. Apologies if I have missed something, but that wasn't clear in your otherwise helpful account. Incidentally, the there-was-no-effect-(p > 0.05)-but-oh-yes-there-was-(d = 0.36) routine is the pantomime that arises from mixing null hypothesis significance testing and magnitude-based inferences, especially when alpha (0.05) is stated in the methods section. The authors of such a statement are using oleaginous Uriah Heep statements to cover their backs but, by so doing, in fact confuse both themselves and readers.
-- Edward M Winter (2016-05-26)
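The 'undetermined' condition mentioned above, combined with a traditional significance test, amounts to a small decision table over the two p-values. A hypothetical helper (function name and verdict labels are mine, not the post's):

```python
def verdict(p_nhst, p_equiv, alpha=0.05):
    """Combine a traditional null hypothesis test with an equivalence test.

    p_nhst: p-value for H0: effect = 0
    p_equiv: TOST p-value (the larger of the two one-sided p-values)
    """
    significant = p_nhst < alpha   # effect distinguishable from zero
    equivalent = p_equiv < alpha   # effect within the equivalence bounds
    if significant and equivalent:
        return "statistically significant but practically trivial"
    if significant:
        return "meaningful effect"
    if equivalent:
        return "no meaningful effect (equivalence)"
    return "undetermined: collect more data"

print(verdict(0.40, 0.01))   # -> "no meaningful effect (equivalence)"
```

The four branches correspond to the four scenarios visualized in the post: a significant meaningful effect, a significant-but-trivial effect, demonstrated equivalence, and the undetermined case where neither test rejects.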
Hi Remko, from my blog post above:

One thing I noticed while reading this literature is that TOST procedures, and power analyses for TOST, are not created to match the way psychologists design studies and think about meaningful effects. In medicine, equivalence is based on the raw data (a decrease of 10% compared to the default medicine), while we are more used to thinking in terms of standardized effect sizes (correlations or Cohen’s d). Biostatisticians are fine with estimating the pooled standard deviation for a future study when performing power analysis for TOST, but psychologists use standardized effect sizes to perform power analyses. Finally, the packages that exist in R (e.g., equivalence) or the software that does equivalence hypothesis tests (e.g., Minitab, which has TOST for t-tests, but not correlations) require that you use the raw data. In my experience (Lakens, 2013), researchers find it easier to use their own preferred software to handle their data, and then calculate additional statistics not provided by that software by typing summary statistics (means, standard deviations, and sample sizes per condition) into a spreadsheet. So my functions don't require access to the raw data (which is good for reviewers as well). Finally, the functions make a nice picture such as the one above, so you can see what you are doing.
-- Daniel Lakens (2016-05-24)

Thanks Nick for pointing out the very useful equivalence test. Perhaps you are not aware of the 'equivalence' R package, but if you are, how does your implementation differ?
-- RemkoDuursma (2016-05-24)

Thanks. Looks like I actually understood for once. :-) Your previous post(s) about one-tailed tests were a big part of why I asked.
-- Nick Brown (2016-05-22)

Hi Nick - yes, all that is possible (and makes sense). You can test for noninferiority, for example. You can also set the equivalence range any way you like (from -0.1 to 0.5). The symmetric situation is easiest; my code only works with symmetrical intervals (but I can update it). I discussed it in an earlier draft, but the blog was already so long I removed it. But you know I am a big fan of one-sided tests if you have one-sided hypotheses, and that generalizes to equivalence tests.
-- Daniel Lakens (2016-05-21)

Can you explain in nice small words why the equivalence range goes from -0.5 to +0.5, rather than from 0 to +0.5 or perhaps minus infinity to +0.5? That seems to imply that I'm equally interested in results in both directions. But if I'm testing medicines, for example, I don't really care about (i.e., I don't have to distinguish between) whether my new pill is less good than the old one, or no good at all, or kills people; I just want to know if it's better than the old one.

Maybe what I'm saying is, this all sounds a bit two-tailed, so how would it fit into a one-tailed world? Or (most likely) have I missed something?
-- Nick Brown (2016-05-21)

You'd have one of the two situations in the 4 graphs at the end of the post, right? So either a significant meaningful effect, or an undetermined situation. As far as I understand, you can perform both tests (NHST and EHT), and you interpret them both. And it seems stat training was a bit more complete where you had it than where I had it 10 years ago :)
-- Daniel Lakens (2016-05-20)

Similar to Rickard, I had equivalence testing in a stats intro course ca. 10 years ago :)

What if your equivalence test fails to reject the equivalence hypothesis? Would you perform a post-hoc test for a significant difference? Isn't this HARKing?

To be sure, Bayes factors can't avoid the inferential limbo either if the evidence isn't decisive (BF~1). But they at least can separate "non-sig difference due to small power" (BF~1) from the lack of difference (BF_01<1).

I recall reading about frequentist three-way hypothesis tests (reject H1, reject H0, needs more power), but I haven't seen them in use.
-- matus (2016-05-20)

As far as I know, Neyman recommends interpreting a p > 0.05 as either accepting the null, or 'remaining in doubt'. It seems to me that equivalence testing is a nice way to differentiate between the two, depending on the smallest effect size of interest (and assuming you did not have 99% power for that effect size). I don't know how it is related to a severity test - sounds like a useful blog post on your end!
-- Daniel Lakens (2016-05-20)

It's not "intentions" that change but rather the relevant error probabilities (as a result of things like optional stopping, cherry-picking, and biasing selection effects).

https://errorstatistics.com/2015/05/27/intentions-is-the-new-code-word-for-error-probabilities-allan-birnbaums-birthday/

It's very strange that users of tests wouldn't know how to interpret insignificant results when it's part of N-P testing. I'm curious as to how this use of equivalence testing compares with (a) power analysis and (b) a severity analysis of a negative result. See, for example, sections 3.1, 4.2 and 4.3 of Mayo and Spanos (for a one-sample Normal test). A severity analysis doesn't require that you set a range of interest or equivalence.
http://www.phil.vt.edu/dmayo/personal_website/2006Mayo_Spanos_severe_testing.pdf
-- MAYO:ERRORSTAT (2016-05-20)

Very nice post. I remember we had this in a course I TA'd like 7 years ago (I was an undergrad then), but I also remember I didn't really see the point over simply eyeballing the CI. And then I totally forgot about this until you brought it up.

(The reason why we had it was probably that they had Minitab as well as SPSS! But I honestly don't remember.)

What is, in your opinion, the benefit over simply calculating a 95% CI around the observed d? I can think of two: 1) eyeballing is not very precise; 2) p can be used as a continuous measure in an easier fashion. Are there any others? Am I missing something completely?
-- Rickard (2016-05-20)

Thank you for this nice post. An analogous procedure exists in Bayesian estimation: check whether the posterior credible interval falls within the SESOI. I like the Bayesian version better than the frequentist version because frequentist confidence intervals change when the stopping or testing intentions change, but Bayesian intervals don't depend on those intentions. If interested, see Ch. 12 of DBDA2E (https://sites.google.com/site/doingbayesiandataanalysis/), or pp. 16-17 of this manuscript: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2606016
-- John K. Kruschke (2016-05-20)
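The Bayesian ROPE check described in the last comment can be sketched in a few lines. This version uses a central percentile interval rather than the HDI that DBDA2E recommends, and the ROPE limits are placeholder values standing in for a real SESOI:

```python
import numpy as np

def rope_decision(posterior_samples, rope=(-0.1, 0.1), cred=0.95):
    """Kruschke-style ROPE decision from posterior samples (a sketch).

    Compares a central credible interval to a region of practical
    equivalence (ROPE); limits and labels here are illustrative.
    """
    lo, hi = np.percentile(posterior_samples,
                           [100 * (1 - cred) / 2, 100 * (1 + cred) / 2])
    if rope[0] < lo and hi < rope[1]:
        return "accept practical equivalence"   # interval wholly inside ROPE
    if hi < rope[0] or lo > rope[1]:
        return "credibly non-trivial effect"    # interval wholly outside ROPE
    return "undecided"

rng = np.random.default_rng(0)
print(rope_decision(rng.normal(0.0, 0.02, 10_000)))   # interval inside ROPE
```

In practice the samples would come from an MCMC fit of the model; the decision rule itself only needs draws from the posterior of the effect size.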