Comments on The 20% Statistician: "P-values vs. Bayes Factors" (blog by Daniel Lakens, http://www.blogger.com/profile/18143834258497875354)

Comment by Whirly (Simon Myers), https://www.blogger.com/profile/18434933533969015033, 2022-08-26:

I guess this is an older post, but I hope you can get back to me. I found it (worryingly) convincing. Worryingly, because I use Bayes factors so often in my work (reporting them alongside the frequentist indices). I do not like the subjective Bayesian methodology, for the reasons you have argued, and I always try to use the hopefully reasonable "default" priors Erik and co. recommend (despite pushback from elsewhere regarding them that is above my pay grade).

But there are some practical (rather than philosophical) reasons why I have to go Bayesian sometimes. The models I use most are mixed models (repeated measurements from each participant), and sometimes even with a minimal random-effects structure (e.g. just random intercepts for subjects) they produce a singular fit. The only option I know from there is to switch to Bayesian mixed models, which is fine for linear models, but when I have binary outcomes I am a bit stuck, as there are no "defaults" yet.

What would you do in these situations? Also, did Erik write a response somewhere, even if it was just a Twitter thread? I can't find one, but it would be great to see more discussion between my two favorite stats guys.

Lastly, you mention that "Leonard Held brought up these Bayesian/frequentist compromise methods." Do you have a link to anything on this point? I currently just run both analyses and report the indices, but that approach sounds much better.

Thanks!

P.S. I recently completed my PhD, and your posts have always been a great help!

Comment by Anonymous, https://www.blogger.com/profile/06880761681096663210, 2021-09-02:

Hi all,

You don't have to be a proponent of the Neyman-Pearson approach to use p-values or power. All that is required is that you pit two models against each other: one for the null hypothesis of interest and one for a specific alternative hypothesis. A statistical criterion then gives you p and beta as conditional probabilities under the model assumptions. There is no need to interpret them as long-run error probabilities, because there is no need to make inferences beyond the data and the model. So what is my goal in a statistical test? Not to make inferences about future tests or about a Platonic "population", but simply to safeguard against chance by comparing models. Thinking of tests more in terms of "evaluation" than of "decision" turns p and beta into useful, standardized measures of data quality, and all the metaphysics of populations, true values, infinite experiments, long-run errors and so on (all quite bizarre from an ontological point of view) goes out the window.
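The two-model comparison this commenter describes can be made concrete with a small sketch: for a point null and a point alternative on a sample mean, both p and beta fall out as conditional probabilities under the respective models. The effect size, sample size, alpha, and observed mean below are made-up illustration values, not anything from the post or the comments.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Two point models for a sample mean: H0 (mu = 0) vs H1 (mu = 0.5),
# known sd = 1, n = 25, one-sided test at alpha = 0.05.
# All numbers are illustrative assumptions.
mu1, sd, n = 0.5, 1.0, 25
z_alpha = 1.6449                 # upper 5% point of the standard normal
se = sd / sqrt(n)                # standard error of the mean
crit = z_alpha * se              # rejection threshold for the mean under H0

beta = phi((crit - mu1) / se)    # P(mean below threshold | H1 model)
power = 1.0 - beta               # P(reject H0 | H1 model)

observed_mean = 0.42             # hypothetical observed sample mean
p = 1.0 - phi(observed_mean / se)  # one-sided p-value under the H0 model

print(f"beta = {beta:.3f}, power = {power:.3f}, p = {p:.3f}")
```

With these assumed values, beta is about 0.196 (power about 0.80) and the observed mean gives p of about 0.018 — both read directly off the two model distributions, with no appeal to long-run repetitions of the experiment.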