Monday, May 9, 2022

Tukey on Decisions and Conclusions

In 1955 Tukey gave a dinner talk about the difference between decisions and conclusions at a meeting of the Section of Physical and Engineering Science of the American Statistical Association. The talk was published in 1960. The distinction relates directly to the different goals researchers might have when they collect data. This blog post is largely a summary of his paper.

 


Tukey was concerned about the ‘tendency of decision theory to attempt to conquer all of statistics’. In hindsight, he needn’t have worried. In the social sciences, most statistics textbooks do not even discuss decision theory. His goal was to distinguish decisions from conclusions, and to carve out a space for ‘conclusion theory’ to complement decision theory.

 

In practice, making a decision means to ‘decide to act for the present as if’. Possible actions are defined, possible states of nature identified, and we make an inference about each state of nature. Decisions can be made even when we remain extremely uncertain about any ‘truth’. In extreme cases we can make decisions without access to any data. We might even decide to act as if two mutually exclusive states of nature are true! For example, we might buy a train ticket for a holiday three months from now, but also take out life insurance in case we die tomorrow.

 

Conclusions differ from decisions. First, conclusions are established without taking consequences into consideration. Second, conclusions are used to build up a ‘fairly well-established body of knowledge’. As Tukey writes: “A conclusion is a statement which is to be accepted as applicable to the conditions of an experiment or observation unless and until unusually strong evidence to the contrary arises.” A conclusion is not a decision on how to act in the present. Conclusions are to be accepted, and thereby incorporated into what Frick (1996) calls a ‘corpus of findings’. According to Tukey, conclusions are used to narrow down the number of working hypotheses still considered consistent with observations. Conclusions should be reached not because of their consequences, but because of their lasting (though not everlasting, as conclusions can now and then be overturned by new evidence) contribution to scientific knowledge.

 

Tests of hypotheses

 

According to Tukey, a test of hypotheses can have two functions. The first function is as a decision procedure, and the second is to reach a conclusion. In a decision procedure the goal is to choose a course of action given an acceptable risk. This risk can be high. For example, a researcher might decide not to pursue a research idea after a first study, designed to have 80% power for a smallest effect size of interest, yields a non-significant result. The Type 2 error rate of this decision procedure is at most 20%, but the researcher might have enough good research ideas not to care.
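To make the arithmetic behind such a decision concrete, here is a minimal sketch in Python using statsmodels. It assumes a two-sided independent-samples t-test, a smallest effect size of interest of d = 0.5, and a 5% alpha level; these specific numbers are illustrative assumptions, not values from Tukey or from the example above (which only fixes power at 80%).

```python
# Minimal sketch: how many observations per group are needed so that a
# non-significant result lets a researcher drop an idea while keeping the
# Type 2 error risk at or below 20% (i.e., 80% power) for the smallest
# effect size of interest. Effect size (d = 0.5) and alpha (0.05) are
# assumed purely for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```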

 

The second function is to reach a conclusion. This is done, according to Tukey, by controlling the Type 1 and Type 2 error rates at ‘suitably low levels’. (Note: Tukey’s discussion of concluding that an effect is absent is hindered somewhat by the fact that equivalence tests were not yet widely established in 1955 – Hodges & Lehmann’s paper had only appeared in 1954.) Low error rates, such as the conventional 5% or 1% alpha level, are needed to draw conclusions that can enter the corpus of findings (even though some of these conclusions will turn out to be wrong, in the long run).
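As an aside, here is a minimal sketch of such an equivalence test (two one-sided tests, TOST) in Python, using statsmodels’ ttost_ind. The simulated data and the equivalence bounds of ±0.5 on the raw mean difference are illustrative assumptions, not part of Tukey’s paper or of this post.

```python
# Minimal sketch: two one-sided tests (TOST) against equivalence bounds
# placed at an assumed smallest effect size of interest. Data and bounds
# are illustrative assumptions only.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(42)
x1 = rng.normal(loc=0.0, scale=1.0, size=100)  # simulated group 1
x2 = rng.normal(loc=0.1, scale=1.0, size=100)  # simulated group 2

# Equivalence bounds on the raw mean difference (assumed to be ±0.5).
p_tost, lower, upper = ttost_ind(x1, x2, low=-0.5, upp=0.5)
print(f"TOST p-value: {p_tost:.3f}")  # p < .05 supports concluding that the
                                      # difference falls within the bounds
```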

 

Why would we need conclusions?

 

One might reasonably wonder if we need conclusions in science. Tukey also ponders this question in Appendix 2. He writes: “Science, in the broadest sense, is both one of the most successful of human affairs, and one of the most decentralized. In principle, each of us puts his evidence (his observations, experimental or not, and their discussion) before all the others, and in due course an adequate consensus of opinion develops.” He argues that conclusions are needed not for an epistemological reason, nor for a statistical reason, but for a sociological reason. Tukey writes: “There are four types of difficulty, then, ranging from communication through assessment to mathematical treatment, each of which by itself will be sufficient, for a long time, to prevent the replacement, in science, of the system of conclusions by a system based more closely on today’s decision theory.” He notes how scientists can no longer get together in a single room (as was somewhat possible in the early decades of the Royal Society of London) to reach consensus about decisions. Therefore, they need to communicate conclusions, as “In order to replace conclusions as the basic means of communication, it would be necessary to rearrange and replan the entire fabric of science.”

 

I hadn’t read Tukey’s paper when we wrote our preprint “The Epistemic and Pragmatic Function of Dichotomous Claims Based on Statistical Hypothesis Tests”. In this preprint, we also discuss a sociological reason for the presence of dichotomous claims in science. We also ask: “Would it be possible to organize science in a way that relies less on tests of competing theories to arrive at intersubjectively established facts about phenomena?” and similarly conclude: “Such alternative approaches seem feasible if stakeholders agree on the research questions that need to be investigated, and methods to be utilized, and coordinate their research efforts”.  We should add a citation to Tukey's 1960 paper.

 

Is the goal of a study a conclusion, a decision, or both?

 

Tukey writes he “looks forward to the day when the history and status of tests of hypotheses will have been disentangled.” I think that in 2022 that day has not yet come. At the same time, Tukey admits in Appendix 1 that decisions and conclusions are sometimes intertwined.

 

A situation Tukey does not discuss, but that I think is especially difficult to disentangle, is a cumulative line of research. Although I would prefer to build only on an established corpus of findings, this is simply not possible. Not all conclusions in the current literature are reached with low error rates. This is true both for claims about the absence of an effect (which are rarely based on an equivalence test against a smallest effect size of interest with a low error rate) and for claims about the presence of an effect, not just because of p-hacking, but also because I might want to build on an exploratory finding from a previous study. In such cases, I would like to be able to conclude that the effects I build on are established findings, but more often than not, I have to decide these effects are worth building on. The same holds for choices about the design of a set of studies in a research line. I might decide to include a factor in a subsequent study, or to drop it. Such decisions are based on conclusions with low error rates when I have the resources to collect large samples and perform replication studies, but at other times they are decisions about how to act in my next study that carry quite considerable risk.

 

We allow researchers to publish feasibility studies, pilot studies, and exploratory studies. We don’t require every study to be a Registered Report or a Phase 3 trial. Not all information in the literature that we build on has been established with the rigor Tukey associates with conclusions. And the replication crisis has taught us that more conclusions from the past are later rejected than we might have thought based on the alpha levels reported in the original articles. In some research areas, where data is scarce, we might need to accept that, if we want to learn anything, the conclusions will always be more tentative (and the error rates accepted in individual studies will be higher) than in research areas where data is abundant.

 

Even if decisions and conclusions cannot be completely disentangled, reflecting on their differences is very useful, as I think it can help us clarify the goal we have when we collect data.

 

For a 2013 blog post by Justin Esarey, who found the distinction a bit less useful than I found it, see https://polmeth.org/blog/scientific-conclusions-versus-scientific-decisions-or-we%E2%80%99re-having-tukey-thanksgiving

 

References

Frick, R. W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1(4), 379–390. https://doi.org/10.1037/1082-989X.1.4.379

Tukey, J. W. (1960). Conclusions vs decisions. Technometrics, 2(4), 423–433.

Uygun Tunç, D., Tunç, M. N., & Lakens, D. (2021). The Epistemic and Pragmatic Function of Dichotomous Claims Based on Statistical Hypothesis Tests. PsyArXiv. https://doi.org/10.31234/osf.io/af9by
