Comments on The 20% Statistician: "How many participants should you collect? An alternative to the N * 2.5 rule" (Daniel Lakens)

Daniel Lakens (2018-03-28):
The Schönbrodt paper was published after my blog post. The scale factor of 1 is a nonsense prior and should never be used.

Anonymous (2018-03-27):
Hi Daniel,
Why the recommendation to stop at Bayes factors > 3 (with a scale r on the effect size of 0.5)? Schönbrodt et al. suggest BF > 5 (with a scale parameter r of 1).
Best regards,
/Bill

Anonymous (2015-08-14):
Rolf Zwaan also blogged about this: http://rolfzwaan.blogspot.co.uk/2015/05/p20-what-now-adventures-of-good-ship.html

Anonymous (2015-04-18):
(Other than this one, which is alluded to in Jeff's comment: http://www.ncbi.nlm.nih.gov/pubmed/24101570, http://www.ncbi.nlm.nih.gov/pubmed/24659049)

Anonymous (2015-04-18):
Thom,
Royall discusses this issue briefly in the rejoinder to his 2000 paper on misleading evidence (I see you've discussed this paper in your serious stats book). He does not go into much detail, other than to say that any "real" likelihood will necessarily satisfy the universal bound (p. 779). When Bayes factors are based on composite hypotheses, rather than simple hypotheses, they do not have to adhere to this bound. Profile likelihoods are in the same boat.

The extent to which they can exceed the universal bound is not explained in any detail. He only mentions it in passing, and I haven't seen any papers that give it a more rigorous look.

thom (2015-04-14):
Jeff,
Royall makes the case that likelihood ratios minimise both Type I and Type II errors (or at least their equivalents in the evidential approach).

So at least it seems to me that Bayes factors would inherit this behaviour (at least in a weak sense). I know of no formal analysis of this, but it would hold in simple cases where the BF and the LR are equivalent (or nearly equivalent).

Daniel Lakens (2015-04-12):
Thanks Jeff! I'll give it a try - this post seems to be very popular (for a Sunday!), so I'll probably turn it into a paper, and will take a shot at computing the probabilities you suggest!

Jeff Rouder (2015-04-12):
Fun post. I find it a bit surprising that the Bayesian approach, which is not motivated at all by control of errors in the limit of many replicate experiments, controls these errors reasonably well in the provided simulations. I think it reflects the conservatism of the 3-to-1 choice more than any deep properties of Bayes factors.

As an aside, the simulations here are not really appropriate for testing Bayesian stats and stopping rules. Instead, simulate data from d = 0 and d = .5. Consider all sample effect sizes in a common small range, say .3 to .31. Then compute how many truly came from each hypothesis; perhaps 10 times as many came from d = .5 as from d = 0. Now see if the BF captures this ratio. It should, assuming you had an equal number of simulations for d = 0 as for d = .5.
And it should do so regardless of the stopping rule. Computing probabilities on hypotheses from observed data is neat and fun, as well as a uniquely Bayesian concept. At least I found it neat and fun (http://pcl.missouri.edu/sites/default/files/Rouder-PBR-2014.pdf).

Daniel Lakens (2015-04-12):
Hi Thom, thanks - never heard of it! I get most of my information on this from medical statistics, and those statisticians have little need to keep things simple (they have a full-time job helping medical researchers do studies, and the medical researchers never have to do the statistics themselves). I'll look into it; it sounds like a similar idea!

thom (2015-04-12):
Are you familiar with the stopping rules proposed by Frick in the late 90s? The gist is this. You plan an initial n (say, for 80% power to detect an effect) and then add participants in increments of, say, 10. At each stage, if p > .33 or p < .01, you stop. For a wide range of tests this keeps alpha at around .05.

Various modifications have been proposed. A nice feature is that it is fairly robust to the initial n, the number of looks, and the increment size.

Thom

Daniel Lakens (2015-04-12):
Hi, thanks! The Pocock boundary is similar in its basic idea to a Bonferroni correction, but not in the calculation.
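The Frick-style rule thom describes above is easy to check with a quick Monte Carlo sketch. The .33 and .01 thresholds are taken from the comment; the initial n, increment, and maximum n are illustrative assumptions, not values from the thread. Under H0 (d = 0), the long-run rejection rate should come out near .05:

```python
# Monte Carlo sketch of a Frick-style sequential stopping rule.
# Thresholds (.33 / .01) come from thom's comment; n_init, step, and
# n_max are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def run_study(n_init=20, step=10, n_max=100, lo=.01, hi=.33):
    """One simulated two-group study under H0: keep adding participants
    until p < lo (stop and reject), p > hi (stop without rejecting),
    or n_max per group is reached."""
    x = rng.normal(size=n_init)
    y = rng.normal(size=n_init)
    while True:
        p = stats.ttest_ind(x, y).pvalue
        if p < lo:
            return True             # stopped early and rejected H0
        if p > hi or len(x) >= n_max:
            return False            # stopped without rejecting H0
        x = np.append(x, rng.normal(size=step))
        y = np.append(y, rng.normal(size=step))

# Empirical Type I error rate across many simulated studies
alpha = np.mean([run_study() for _ in range(5000)])
print(f"Empirical Type I error rate: {alpha:.3f}")
```

With these particular settings the empirical alpha will not be exactly .05, but it should land in that neighbourhood, consistent with the claim that the rule roughly controls the overall error rate across looks.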
See http://en.wikipedia.org/wiki/Pocock_boundary for some quick info, or read http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2333729 for a slightly more extensive introduction to sequential analyses.

Anonymous (2015-04-12):
This is an excellent post. It is a systematic way to do something that is quite intuitive and that we already do in my group; however, it seems like one could come up with a specific, formulaic way to do this routinely.
I have one question: can you explain why the p values for sequential looks are .018, .019, .020, and .021? What is the criterion for changing these thresholds? They are not Bonferroni corrected, so I am wondering what the criteria are.
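Jeff Rouder's suggested calibration check earlier in the thread can be sketched in a few lines: simulate equal numbers of studies from d = 0 and d = .5, then count how many observed effect sizes in a narrow window truly came from each hypothesis. The sample size and window below are illustrative assumptions (the window .30 to .31 is from the comment); with these numbers the ratio will not be exactly the "perhaps 10 times" mentioned, but the counting logic is the point:

```python
# Sketch of the calibration check Rouder suggests: for observed effect
# sizes in a narrow window, how many truly came from d = 0 vs d = .5?
# n per study is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(7)
n, sims = 50, 100_000

def observed_d(true_d):
    """Observed Cohen's d for one-sample studies of size n."""
    x = rng.normal(loc=true_d, size=(sims, n))
    return x.mean(axis=1) / x.std(axis=1, ddof=1)

d_null = observed_d(0.0)   # studies simulated under d = 0
d_alt = observed_d(0.5)    # studies simulated under d = .5

# Count true origins of effect sizes landing in the window [.30, .31)
window = (0.30, 0.31)
from_null = np.sum((d_null >= window[0]) & (d_null < window[1]))
from_alt = np.sum((d_alt >= window[0]) & (d_alt < window[1]))

# A well-calibrated Bayes factor should track this ratio, given equal
# numbers of simulations under each hypothesis.
print(f"From d = 0: {from_null}, from d = .5: {from_alt}, "
      f"ratio: {from_alt / from_null:.1f}")
```

The appeal of this check, as the comment notes, is that the ratio of true origins is what a Bayes factor is supposed to estimate, and the correspondence should hold regardless of the stopping rule used to collect each simulated study.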