Comments on The 20% Statistician: Power analysis for default Bayesian t-tests

Daniel Lakens | 2018-05-24 19:53
Hi Timothy, Felix Schonbrodt and EJ Wagenmakers have papers on Bayesian Design Analysis - you should use those to plan your study.

Timothy Houtman | 2018-05-24 18:16
Hi Daniel,

Thank you for your post. This could be very helpful for me in my work and research.

However, I've run into some error messages when running the script. I am not very well versed in RStudio, so could you or someone else help me out in resolving this problem?

These are the messages I am getting:

Error in winProgressBar(title = "progress bar", min = 0, max = nSim, width = 300) :
  could not find function "winProgressBar"

Error in setWinProgressBar(pb, i, title = paste(round(i/nSim * 100, 1), :
  could not find function "setWinProgressBar"

Error in close(pb) : object 'pb' not found

Error in hist.default(log(bf), breaks = 20) : character(0)
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf

Thanks in advance,

Timothy

Daniel Lakens | 2016-01-15 15:49
I know! I'll try to improve. I'm trying to get David's code to run - no success yet - but I really need to learn how to optimize simulations, because I use them so often. Right now I often just run all of them overnight, after doing small numbers of simulations during the day to check the code. It works, but it's hardly optimal.

matus (http://simkovic.github.io) | 2016-01-15 15:36
Boohoo Daniel, that's a hell of an unoptimized code :D

Anonymous | 2016-01-15 12:27
Hi Daniel,

I found your post really interesting; however, the computing time (22 minutes on my machine) bugged me a lot (probably more than it should). I used snowfall and parallel load balancing to counter the issue and reduce the time to 25 seconds. You can find the code on my blog here: https://datashenanigan.wordpress.com/2016/01/15/speeding-bayesian-power-analysis-t-test-up-with-snowfall/

David

Anonymous | 2016-01-14 18:10
If you *want* a specific point effect size in the prior, you can specify scale = 0.01 and your desired effect size as the location parameter, so you approximate a delta function.

Anonymous | 2016-01-14 18:05
No, that's not how I interpret them. But maybe I missed a deep mathematical identity or heuristic somewhere; I think you should ask them. Because the Cauchy distribution does not even have a defined mean, it is not even possible to estimate the average effect size under this prior. All we know, in terms of summary statistics, is that the mode = 0. Of course, a bigger r does mean a wider spread, hence more mass at the higher effect sizes, but I don't think it is possible to equate that with a specific effect size.

Daniel Lakens | 2016-01-14 17:38
Thanks, you are right: it's a distribution, not a point. I still find it difficult to think about the differences between these two approaches, given that the observed (not true!) effect size also has a distribution.

Daniel Lakens | 2016-01-14 17:37
As Joe notes above, it should be a distribution, not a point hypothesis. But I don't really know how to talk about it correctly. As I understand the Rouder et al. paper, they use the effect size d as the r-scale, right?

Anonymous | 2016-01-14 17:26
Could you elaborate on why you equate the r parameter of the Cauchy prior with the effect size?

Joe | 2016-01-14 16:22
Quick point: I wish to clarify what the alternative hypothesis of delta ~ Cauchy(.707) means.

In the text, you say "The default Bayesian t-test originally used a r-scale of 1 (a very large effect), but the updated default test uses a r-scale of 0.707. This means that whenever you perform a study where you calculate the default Bayes Factor, and find a BF10 < 1/3, you have observed support for the null-hypothesis, relative to an alternative hypothesis of an effect of d = 0.707."

It's worth pointing out that the alternative hypothesis has a broad smear of probability, 50% of which is in the interval d = [-.707, .707]. So we are talking about an effect of approximate absolute magnitude .707, but potentially rather more or rather less. The text makes it sound like the alternative hypothesis is d = .707, a point-alternative hypothesis rather than an alternative distribution.

I spend a lot of time agonizing over what constitutes an appropriate scale for the Cauchy alternative, so I'm glad you're thinking about this too. I think that in the future we may see a greater emphasis on pre-registered one-tailed tests to make better use of how we spend the probability in our priors.
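[Editor's note] Timothy's "could not find function" errors point at a platform issue rather than a bug in the script: winProgressBar() and setWinProgressBar() are only available in R on Windows, so the script fails on macOS and Linux, and the later errors (object 'pb' not found, the empty histogram) likely cascade from that first failure leaving bf unfilled. A minimal cross-platform sketch, substituting base R's txtProgressBar() and using a placeholder nSim:

```r
# Hedged sketch: txtProgressBar() is the cross-platform counterpart of the
# Windows-only winProgressBar(). nSim stands in for the script's number of
# simulations; the loop body is where each simulated Bayes factor would go.
nSim <- 100
pb <- txtProgressBar(min = 0, max = nSim, style = 3)
for (i in seq_len(nSim)) {
  # ... simulate data and compute one Bayes factor here ...
  setTxtProgressBar(pb, i)
}
close(pb)
```

Once the progress bar no longer errors out, the simulation loop runs to completion and the hist(log(bf), breaks = 20) call should receive a filled vector.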
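[Editor's note] Joe's 50%-mass claim, and the delta-function approximation mentioned by the anonymous commenter, can both be verified with base R's pcauchy(); the d = 0.5 location below is an arbitrary illustration, not a value from the post:

```r
# A zero-centred Cauchy prior with scale r puts exactly half its mass
# in [-r, r]: pcauchy(r, 0, r) - pcauchy(-r, 0, r) = 2 * atan(1) / pi = 0.5.
r <- 0.707
pcauchy(r, location = 0, scale = r) - pcauchy(-r, location = 0, scale = r)
# [1] 0.5

# Shrinking the scale to 0.01 while centring the prior on a desired effect
# size (here d = 0.5, purely illustrative) approximates a point hypothesis:
pcauchy(0.6, location = 0.5, scale = 0.01) - pcauchy(0.4, location = 0.5, scale = 0.01)
# roughly 0.94 of the mass within 0.1 of d = 0.5
```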
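[Editor's note] David's linked snowfall solution is the canonical speed-up for this thread; as a rough alternative sketch, base R's parallel package achieves something similar. This assumes the simulation computes default Bayes factors with BayesFactor::ttestBF (as the post's script appears to); n, D, and nSim are placeholders for the script's sample size, true effect size, and simulation count:

```r
# Hedged sketch of a parallel version using base R's parallel package,
# in the spirit of David's snowfall code. Assumed parameters:
nSim <- 1000; n <- 50; D <- 0.5

library(parallel)
cl <- makeCluster(detectCores() - 1)
clusterEvalQ(cl, library(BayesFactor))   # load BayesFactor on each worker
clusterExport(cl, c("n", "D"))           # ship the simulation parameters over

bf <- parSapply(cl, seq_len(nSim), function(i) {
  x <- rnorm(n, mean = 0, sd = 1)
  y <- rnorm(n, mean = D, sd = 1)
  extractBF(ttestBF(x, y))$bf            # BF10 for the default two-sample test
})

stopCluster(cl)
hist(log(bf), breaks = 20)
```

For the load balancing David mentions, parLapplyLB() can replace parSapply(), at the cost of some scheduling overhead.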