Comments on The 20% Statistician: "Verisimilitude, Belief, and Progress in Psychological Science" (Daniel Lakens)

Daniel Lakens (2017-11-14):
The tail of the test follows from the theoretical prediction you are testing. It is unrelated to belief (you can test a hypothesis you don't believe).

Gerben Mulder (2017-09-25):
Hi Daniel,
Great and thoughtful post. Just a quick question. (I know I'm three months "late".) You say "since your belief is not relevant for it, scientific realism suggests there is no rationale to include it in a statistical test." But isn't it true that belief enters the statistical test when you choose between one- and two-tailed tests, both of which make perfect sense from the perspective of error control (but not from a p-value-as-evidence perspective)? Or, to give another example, when you determine things like regions of practical equivalence (ROPE), or whatever you may want to call them, where you somehow have to believe that the end-points are practically equivalent to some null value? Just wondering.

Shelling (2017-08-08):
Being an empirical scientist doesn't necessarily determine which "-ist" we can be. The aim of science seems to be "to decide which features are present in our world", but this is already a philosophical (realist) statement. I believe we need to set aside some of the "common sense" of science temporarily, or avoid philosophical bias just as we avoid selection bias, in order to think about philosophy of science.

But anyway, it's a great post!

[A comment posted on 2017-07-18 was removed by a blog administrator.]

Annynomous (2017-06-22):
Dear Daniel, thank you very much for your effort here; a very constructive post. Two quick things.

First, when something is really unknown, one would probably prefer to run a "door-to-door" search for it using some initial clue (Bayesian inference) rather than take a null position and wait for null-falsifying evidence to reject that position (frequentist inference).

Second, inference is important **only after** correct probability modeling. A huge share of social and behavioral research uses measurement tools that are either dichotomously scored or on a Likert scale. Such data can only be modeled accurately with discrete probability models (e.g., negative binomial, hypergeometric) that take into account the over-dispersion almost always present in this type of research data. **After** we accurately model the data with an appropriate probability model, the issue of inference reasonably just starts.

I very much look forward to the day when two things happen in the social and behavioral sciences: (A) we stop using t-tests, (M)AN(C)OVAs, and linear models when the measurement tools used in social and behavioral research cry out for generalized linear models and discrete probability modeling; and (B) efforts at inference happen only after (A) is met.

Ben (2017-06-21):
I hope you are correct and that equivalence testing gains popularity. I fear that most practicing scientists have too strong an incentive to continue with "nil hypothesis" testing: it is easy to do, requires almost no understanding of what is actually being done, and it substantially increases the chances of getting a paper published. I appreciate your work in pushing for a much more philosophically sound alternative.

Daniel Lakens (2017-06-20):
It will become much easier, and we will see more of it, now that people are starting to use equivalence testing: http://journals.sagepub.com/doi/full/10.1177/1948550617697177

Daniel Lakens (2017-06-20):
There is a difference between accepting model assumptions and including belief in your model. You can believe there is a truth out there, but since your belief is not relevant for it, scientific realism suggests there is no rationale to include it in a statistical test.

Ben Prytherch (2017-06-20):
Regarding Meehl, you write:

"Meehl believes accepting or rejecting predictions is a sound procedure, as long as you test risky predictions in procedures with low error rates"

I agree, but I also take Meehl's position to mean that nearly all "significant" results are useless, given sufficient power. The error rates will be low, but the results will (perhaps ironically) tell you less and less the more power you have. From the abstract of "Theory-Testing in Psychology and Physics" (1967):

"Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching 1/2 of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics..."

So yes, Meehl would agree with the goal of error control, but I read the above quote as saying that you can't get error control AND the testing of risky predictions from a procedure that attempts to reject a special case of "not the hypothesis" instead of attempting to directly reject the hypothesis. Do you see many cases of NHST being used to test risky predictions, in which "reject H0" means "reject my scientific hypothesis"?

Ben Prytherch (2017-06-20):
Hi Daniel, I enjoy your blog and I appreciate you emphasizing the importance of philosophy in evaluating statistical inferences. You state that:

"From a scientific realism perspective, Bayes Factors or Bayesian posteriors do not provide an answer to the main question of interest, which is the verisimilitude of scientific theories."

I'm sure you've heard the similar Bayesian critique of frequentist methods: that p-values and decisions about statistical significance don't answer the question we are usually interested in. From talking to my non-statistician friends about how they interpret statistical results, I've found that they all want the p-value to be the probability that their results were due to chance, so that they can interpret a small p-value as the probability their research hypothesis is incorrect. This was Cohen's critique in "The Earth Is Round (p < .05)":

"What's wrong with NHST? Well, among many other things it does not tell us what we want to know, and we so much want to know what we want to know that, out of desperation, we nevertheless believe that it does!"

I've found that my students in introductory statistics also instinctively want to interpret the p-value as the probability of the null. This could be because they are just being introduced to NHST, whose logic is somewhat convoluted, and so they initially go with the simpler (and incorrect) interpretation of statistical significance. I suspect it is also because the incorrect interpretation makes the most intuitive sense and answers the question that is of most interest to them.

Of course, the clever students eventually learn the model and understand the logic of rules such as "we treat population parameters as having fixed but unknown values, and so we cannot make probabilistic statements about these values; it is only our data that are random, not the truth." But learning this is usually a struggle.

I know you qualified your statement with "from a scientific realism perspective"; does treating probability as epistemological rather than ontological mean having to rule out or suspend scientific realism? It seems to me you can both treat probability as referring to a state of knowledge *and* believe that there is a truth out there that is ultimately beyond our reach, even as we constantly strive to improve our understanding of it. I don't see the conflict. For example, I'm allowed to put a "normally distributed random error" term in a model even though I know that what I'm treating as "error" is really governed, at least in part, by other deterministic forces. In this sense, "normal random error" is a substitute for uncertainty: I know that I can't model everything and make perfect predictions, so I pretend that "normal random error" explains all of the observed variation my model fails to predict. It's certainly fine to call this a frequency. It's also fine to call it a model of uncertainty, without having to give up on objective reality.

Ignazio Ziano (2017-06-20):
It's a pleasure to read these posts where the contrast between methods and philosophy of science is underscored. Meehl's objection to "NHST everywhere" in psychology is a weaker version of Gelman's (there is no such thing as a "null effect" or a "null hypothesis", so why are you testing against it?) and very similar to Gigerenzer's in one of his recent talks (https://www.youtube.com/watch?v=4VSqfRnxvV8&t=1910s): NHST is perfectly fine and may add a lot to a theory, as long as you are pitting two proper alternative explanations against each other (his examples relate to the use of heuristics in accurate decision-making: instead of pitting heuristic A against H0, you should pit heuristic A against heuristic B and check which is more accurate). This gives incremental theoretical value to statistically significant results.

My position is this: I agree with Meehl and Gigerenzer (not with Gelman). But Feyerabend makes an extreme point we should be mindful of: there is no "one method" for doing science, and thus I remain open to NHST against a "pure H0", while perhaps asking for a higher burden of proof there than I would for NHST of "explanation 1 vs. explanation 2".

Daniel Lakens (2017-06-19):
I typically don't reply to anonymous comments.

Unknown (2017-06-19):
It seems quite a stretch to note that Meehl accepted Neyman-Pearson-type testing under certain conditions and then go on to argue that his writings support the idea that "error control is the most important goal in science."

Daniel Lakens (2017-06-19):
No, belief and truth-likeness are not the same. Note that the problem is not the relative likelihood (likelihoods are fine and can be used); the problem is the prior.

Daniel Lakens (2017-06-19):
Hi Robert, thanks for your comments (even though I'm pretty sure I didn't understand the second paragraph, but I'll google). I guess you are right that if the outcomes of the frequentist and Bayesian decision procedures are the same, there is only a philosophical difference, not one in practice. I think Bayesian updating can be combined with a decision threshold as long as the frequentist error rates are OK (if I understand your main point!).

Robert Bauer (2017-06-19):
This is a well-written, dense blog post. It seems to be a quite concise summary of your position. Thanks for writing it.

Well, you read van Fraassen and Feyerabend and still believe in scientific realism, so there is no need to recapitulate their arguments, I guess. If you want more food for thought, though, maybe try Adorno's Negative Dialectics for a very dense text on incommensurability.

Another of your points is whether Bayesian posteriors can map the verisimilitude of scientific theories. This is an intriguing question. I'd argue that if reality exists in a verisimilitude fashion, then only as Dirac or Kronecker delta functions. Consider that it is questionable whether any prior (other than the oracle prior) can ever converge to such a function in finite time, or in finitely many iterations of experiments; even more so if we assume that the delta function is non-stationary, or if the experiment generating the evidence is non-reproducible (e.g., predicting an election result). There could therefore be a set of statements about reality that might never be captured by Bayesian updating. In that regard, I fully agree with you that verisimilitude requires a leap of faith, perhaps via a threshold at which we treat a belief function as a delta function. But there are many ways this could be incorporated.

Consider that even hard-line Bayesians would accept that Trump won the election as an inevitable fact, i.e., their posterior is 1 on Trump and 0 on Hillary. So I might not really understand your line of reasoning against Bayesian updating here. Hm. Maybe you are rather wondering whether a Bayesian may use thresholding also for probabilistic statements, for which we could still perform reproducible experiments to gain further evidence?

Aurélien Allard (2017-06-19):
But I'm still puzzled: even in 1935, is it obvious that Wald had had time to read Neyman and Pearson's papers? And if he had, why doesn't Popper quote them in Logik der Forschung? Perhaps more importantly, I'm unsure whether the modification suggested by Wald is really fundamental; and if it isn't, we might think that Popper had already built up his own ideas independently of Neyman-Pearson ;) (it's just a detail, I'll admit!)

farid1323 (2017-06-19):
"From a scientific realism perspective, Bayes Factors or Bayesian posteriors do not provide an answer to the main question of interest, which is the verisimilitude of scientific theories. Belief can be used to decide which questions to examine, but it can not be used to determine the truth-likeness of a theory."

If Bayes factors tell you the plausibility of one hypothesis relative to another, doesn't that also imply that they tell you something about the truth-likeness or verisimilitude of the one hypothesis relative to the other (i.e., that the hypothesis with greater plausibility is closer to the truth, based on the observable data)?

Daniel Lakens (2017-06-19):
Maybe I should have added that I draw heavily on the third addendum in later editions of Popper's book. I cite the 2002 version intentionally.

Daniel Lakens (2017-06-19):
But Popper didn't die in 1934, and in later (translated and updated) editions he added the following footnote indicating he had talked to Wald:

"Here the word ‘all’ is, I now believe, mistaken, and should be replaced, to be a little more precise, by ‘all those . . . that might be used as gambling systems’. Abraham Wald showed me the need for this correction in 1935. Cf. footnotes *1 and *5 to section 58 above (and footnote 6, referring to A. Wald, in section *54 of my Postscript)"

If only falsifying hypotheses were so easy all the time ;)

Aurélien Allard (2017-06-19):
Very nice post! I'll need more time to think about the substantive issues, but here's some nitpicking: I'm not sure that "this methodological falsification (Lakatos, 1978) is clearly inspired by a Neyman-Pearson perspective on statistical inferences." Logik der Forschung was published in 1934, while Neyman and Pearson's papers were published in the 1930s (1933 for the paper you quote). Given the slow communication between Austria and Great Britain at that time, I think it's more likely that they developed their thinking independently of each other (I don't think Wald was already writing statistical papers at that time). But I'd be glad to be proved wrong!
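[Editor's note] Several comments in the thread discuss equivalence testing as an alternative to nil-hypothesis testing. A minimal sketch of the two one-sided tests (TOST) procedure for a single mean is given below; the simulated data and the equivalence bounds of ±0.5 are illustrative assumptions, not values from the post.

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided tests: can we reject that the mean lies outside (low, high)?"""
    # H01: mean <= low, tested against the alternative mean > low
    p_lower = stats.ttest_1samp(x, low, alternative='greater').pvalue
    # H02: mean >= high, tested against the alternative mean < high
    p_upper = stats.ttest_1samp(x, high, alternative='less').pvalue
    # Equivalence is declared only if BOTH one-sided nulls are rejected,
    # so the TOST p-value is the larger of the two.
    p_tost = max(p_lower, p_upper)
    return p_tost, p_tost < alpha

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=200)  # data consistent with a trivially small effect
p, equivalent = tost_one_sample(x, low=-0.5, high=0.5)
```

Because both one-sided tests are run at the full alpha, the overall Type 1 error rate stays controlled at alpha, which is what makes the procedure attractive from the error-control perspective argued for in the post.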
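[Editor's note] Ben Prytherch's point that a p-value is not the probability that the null hypothesis is true can be made concrete with a short simulation. The 50% base rate of true nulls, the effect size of 0.3, and the sample size of 30 below are illustrative assumptions; the fraction of significant results that come from true nulls depends on power and the base rate, and is not equal to alpha.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n, effect = 5000, 30, 0.3
null_true = rng.random(n_sim) < 0.5   # H0 is true in about half of the simulated studies

sig_total, sig_and_null = 0, 0
for h0_true in null_true:
    mu = 0.0 if h0_true else effect   # an effect exists only when H0 is false
    x = rng.normal(mu, 1.0, size=n)
    if stats.ttest_1samp(x, 0.0).pvalue < 0.05:
        sig_total += 1
        sig_and_null += int(h0_true)

# Proportion of significant results for which H0 was nonetheless true
prob_null_given_sig = sig_and_null / sig_total
```

Changing the base rate or the power changes `prob_null_given_sig` while alpha stays fixed at .05, which is exactly why the intuitive reading of p as P(H0 | data) fails.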
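[Editor's note] The over-dispersion that Annynomous mentions for count-like outcomes can be illustrated with a gamma-Poisson mixture, which is one standard construction of the negative binomial distribution; the mean, dispersion parameter, and sample size here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mean_rate, dispersion = 10_000, 4.0, 1.5

# Gamma-Poisson mixture: each subject's Poisson rate varies, producing
# negative binomial counts with variance mean + mean**2 / dispersion,
# i.e. strictly larger than the mean a pure Poisson model would assume.
rates = rng.gamma(shape=dispersion, scale=mean_rate / dispersion, size=n)
y = rng.poisson(rates)

m, v = y.mean(), y.var()
overdispersion_ratio = v / m  # ~= 1 for Poisson data; well above 1 here
```

A t-test or Poisson model applied to such data understates the true variability, which is the motivation for the generalized linear models (e.g., negative binomial regression) the comment calls for.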