A blog on statistics, methods, and open science. Understanding 20% of statistics will improve 80% of your inferences.

Monday, June 30, 2014

Data peeking without p-hacking



You might have looked at your data while the data collection was still in progress, and have been tempted to stop the study because the result was already significant. Alternatively, you might have analyzed your data, only to find the result was not yet significant, and decided to collect additional data. There are good ethical arguments to do this. You should spend tax money in the most efficient manner, and if adding some data makes your study more informative, that's better than running a completely new and bigger study. Similarly, asking 200 people to spend 5 minutes thinking about their death in a mortality salience manipulation when you only needed 100 participants to do this depressing task is not desirable. However, if you peek at your data but don’t control the Type 1 error rate when deciding to terminate or continue the data collection, you are p-hacking.

No worries: There’s an easy way to peek at data the right way, and decide whether to continue the data collection or call it a day while controlling the Type 1 error rate. It’s called sequential analysis, has been used extensively in large medical trials, and the math to control the false positive rate is worked out pretty well (and there's easy-to-use software to perform the required calculations). If you’ve been reading this blog, you might have realized I think it’s important to point out what’s wrong, but even more important to prevent people from doing the wrong thing by explaining how to do the right thing.

Last week, Ryne Sherman posted a very cool R function to examine the effects of repeatedly analyzing data, and adding participants when the result is not yet significant. It allows you to specify how often you will collect additional samples, and gives the inflated alpha level and effect size. Here, I’m going to use his function to show how easy it is to prevent p-hacking while still being able to repeatedly analyze results while the data collection is in progress. I should point out that Bayesian analyses have no problem with repeated testing, so if you want to abandon NHST, that's also an option.

I’ve modified his original code slightly as follows:

res <- phack(initialN=50, hackrate=50, grp1M=0, grp2M=0, grp1SD=1, grp2SD=1, maxN=150, alpha=.0221, alternative="two.sided", graph=TRUE, sims=100000)

I’ve set the initial sample size (per condition) to 50, and the ‘hackrate’ (the number of additional participants that are collected in each group if the original sample is not significant) to 50. I’ve set maxN, the maximum sample size you are willing to collect, to 150. This means that you get three tries: after 50, after 100, and after 150 participants per condition. That’s not a p-hacking rampage (Ryne simulates results of checking after every 5 participants), but as we’ll see below, it’s enough to substantially inflate the Type 1 error rate. I also use ‘two.sided’ tests in this simulation, and increased the number of simulations from 1000 to 100000 for more stable results.

Most importantly, I have adjusted the alpha-level. Instead of the typical .05 level, I’ve lowered it to .0221. Before I explain why I adjusted the alpha level, let’s see if it works.

Running The Code


Make sure to have first installed and loaded the ‘psych’ package, and read in the p-hack function Ryne made:

install.packages("psych")  # install the psych package (only needed once)
library(psych)             # load the psych package
source("http://rynesherman.com/phack.r") # read in the p-hack function  

Then, run the code below (the set.seed(3) function makes sure you get the same result as in this example - remove it to simulate different random data).

set.seed(3)
res <- phack(initialN=50, hackrate=50, grp1M=0, grp2M=0, grp1SD=1, grp2SD=1, maxN=150, alpha=.0221, alternative="two.sided", graph=TRUE, sims=100000)

The output you get will tell you a lot of different things, but here I'm mainly interested in:

Proportion of Original Samples Statistically Significant = 0.02205 
Proportion of Samples Statistically Significant After Hacking = 0.04947 

This means that if we look at the data after the first 50 participants are in, only a proportion of 0.02205 (about 2.2%) of the studies reveal a statistically significant result. That’s pretty close to the significance level of .0221 (as it should be, when there is no true effect to be found). Now for the nice part: We see that after ‘p-hacking’ (looking at the data multiple times) the overall alpha level is approximately .0495. It stays nicely below the .05 significance level that we would adhere to if we had performed only a single statistical test.

What I have done here is formally called ‘sequential analysis’. I’ve applied Pocock’s (1977) boundary for three sequential analyses, and not surprisingly, it works very nicely. It lowers the alpha level for each analysis in such a way that the overall alpha level for three looks at the data stays below .05. If we hadn’t lowered the significance level (which you can try out by re-running the analysis, changing alpha=.0221 to alpha=.05), we would have found an overall Type 1 error rate of 10.7% - an inflated alpha level due to flexibility in the data analysis that can be quite problematic (see also Lakens & Evers, 2014).
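If you want to check the correction without Ryne's function, here's a minimal simulation sketch (my own, not from his code) of three looks at cumulative data, each tested at alpha = .0221, when the null is true:

```r
# Three interim looks at accumulating data (n = 50, 100, 150 per group),
# each tested at Pocock's per-look alpha of .0221, with no true effect.
set.seed(1)
sims <- 10000
looks <- c(50, 100, 150)     # cumulative sample size per group at each look
alpha_per_look <- .0221      # Pocock boundary for three looks

type1 <- replicate(sims, {
  g1 <- rnorm(150)           # group 1, null is true (both means are 0)
  g2 <- rnorm(150)           # group 2
  p <- sapply(looks, function(n) t.test(g1[1:n], g2[1:n])$p.value)
  any(p < alpha_per_look)    # was any of the three looks 'significant'?
})
mean(type1)                  # overall Type 1 error rate, close to (but below) .05
```

Note that the three tests are performed on the same accumulating data, which is exactly why the per-look alpha levels don't simply add up to the overall error rate.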

On page 7 of Simmons, Nelson, & Simonsohn (2011), the authors discuss correcting alpha levels (as we’ve done above), where they even refer to Pocock (1977). The paragraph reads a little bit like a reviewer made them write it, but in it, they say: “unless there is an explicit rule about exactly how to adjust alphas for each degree of freedom […] the additional ambiguity may make things worse by introducing new degrees of freedom.” I think there are good explicit rules that can be used in the specific case of repeatedly analyzing data and adding participants. Nevertheless, they are right that in sequential analyses, researchers need to determine the number of looks at the data and the alpha correction function. All of this could be an additional source of flexibility, and therefore I think sequential analyses need to be pre-registered. But with a pre-registered rule to determine the sample size, sequential analysis allows for surprising flexibility in the data collection, while controlling the Type 1 error rate.

Note that Pocock’s rule is actually not the one I would recommend, and it isn’t even the rule Pocock would recommend (!), but it’s the only one that uses the same alpha level for each intermittent test, and thus the only one I could demonstrate with the function Ryne Sherman wrote. I won’t go into too much detail about which adjustments to the alpha level you should make, because I’ve written a practical primer on sequential analyses in which this, and a lot more, is discussed.

Note that another adjustment of Ryne's code nicely reproduces the 'Situation B' in Simmons et al's False Positive Psychology paper of collecting 20 participants, and adding 10 if the test is not significant (for a significance level of .05):

res <- phack(initialN=20, hackrate=10, grp1M=0, grp2M=0, grp1SD=1, grp2SD=1, maxN=30, alpha=.05, alternative="two.sided", graph=TRUE, sims=100000)


When there is an effect to be found

I want to end by showing why sequential analyses can be very beneficial if there is a true effect to be found. Run the following code, where the grp1M (mean in group 1) is 0.4.

set.seed(3)
res <- phack(initialN=50, hackrate=50, grp1M=0.4, grp2M=0, grp1SD=1, grp2SD=1, maxN=150, alpha=.0221, alternative="two.sided", graph=TRUE, sims=100000)

This study has an effect size of d=0.4. Remember that in real life, the true effect size is not known, so you might have just chosen to collect a number of participants based on some convention (e.g., 50 participants in each condition) which would lead to an underpowered study. In situations when the true effect is uncertain, sequential analyses can have a real benefit. After running the script above, we get:

Proportion of Original Samples Statistically Significant = 0.37624
Proportion of Samples Statistically Significant After Hacking = 0.89686 

Now that there is a true effect of d=0.4, these numbers mean that in 37.6% of the studies, we got lucky and already observe a statistically significant difference after collecting only 50 participants in each condition. That’s efficient, and you can take an extra week off, because even though single studies are never enough to accurately estimate the true effect size, the data give an indication something might be going on. Note that this power is quite a lot lower than if we only look at the data once - alpha level corrections other than Pocock's have a lower cost in power. 

The data also tell us that after collecting 150 participants, we will have observed an effect in approximately 90% of the studies. If the difference happens not to be significant after running 100 participants, and you deem a significant difference to be important, you can continue collecting participants - without it being p-hacking - and improve your chances of observing a significant result.

If, after 50 participants in each condition, you observe a Cohen’s d of 0.001 (and you don’t have access to thousands and thousands of people on Facebook) you might decide you are not interested in pursuing this specific effect any further, or choose to increase the strength of your manipulation in a new study. That’s also more efficient than collecting 100 participants in each condition without looking at the data until you are done, and hoping for the best.

It was because of these efficiency benefits that Wald (1945), who published an early paper on sequential analyses, was kept from publicly sharing his results during wartime. These insights were judged to be sufficiently useful for the war effort to keep them out of the hands of the enemy.


Given how much more efficient sequential analyses are, it’s very surprising people don’t use them more often. If you want to get started, check out my practical primer on sequential analyses, which is in press in The European Journal of Social Psychology in a special issue on methodological improvements. If you want to listen to me explain it in person (or see what I look like when wearing a tie), you can listen to my talk about this at the European Association of Social Psychology conference (EASP 2014) in Amsterdam, Wednesday July 9th, 09:40 AM in room OMHP F0.02. But I would suggest you just read the paper. There’s an easy step-by-step instruction (also for calculations in R), and the time it takes is easily worth it, since your data collection will be much more efficient in the future, while you will be able to aim for well-powered studies at a lower cost. I call that a win-win situation.

Thanks to Ryne Sherman for his very useful function (which can be used to examine the effects of peeking at data, even when it's not p-hacking!). This was his first blog post, and if future posts are as useful, you will want to follow his blog or Twitter account.


References

Lakens, D. (in press). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology. DOI: 10.1002/ejsp.2023. Pre-print available at SSRN: http://ssrn.com/abstract=2333729 

Lakens, D., & Evers, E. R. K. (2014). Sailing from the seas of chaos into the corridor of stability: Practical recommendations to increase the informational value of studies. Perspectives on Psychological Science, 9(3), 278-292.

Pocock, S. J. (1977). Group sequential methods in the design and analysis of clinical trials. Biometrika, 64(2), 191-199. 
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.
Wald, A. (1945). Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics, 16(2), 117-186.

Friday, June 27, 2014

Too True to be Bad: When Sets of Studies with Significant and Non-Significant Findings Are Probably True


Most of this post is inspired by a lecture on probabilities by Ellen Evers during a PhD workshop we taught (together with Job van Wolferen and Anna van ‘t Veer) called ‘How do we know what’s likely to be true’. I’d heard this lecture before (we taught the same workshop at Eindhoven a year ago) but now she extended her talk to the probability of observing a mix of significant and non-significant findings. If this post is useful for you, credit goes to Ellen Evers.

A few days ago, I sent around some questions on Twitter (thanks for answering!) and in this blog post, I’d like to explain the answers. Understanding this is incredibly important and will change the way you look at sets of studies that contain a mix of significant and non-significant results, so you want to read until the end. It’s not that difficult, but you probably want to get a coffee. 42 people answered the questions, and all but 3 worked in science, anywhere from 1 to 26 years. If you want to do the questions before reading the explanations below (which I recommend), go here

I’ll start with the easiest question, and work towards the most difficult one.

Running a single study

I asked: You are planning a new study. Beforehand, you judge it is equally likely that the null-hypothesis is true, as that it is false (a uniform prior). You set the significance level at 0.05 (and pre-register this single confirmatory test to guarantee the Type 1 error rate). You design the study to have 80% power if there is a true effect (assume you succeed perfectly). What do you expect is the most likely outcome of this single study?

The four response options were:

1) It is most likely that you will observe a true positive (i.e., there is an effect, and the observed difference is significant).


2) It is most likely that you will observe a true negative (i.e., there is no effect, and the observed difference is not significant)


3) It is most likely that you will observe a false positive (i.e., there is no effect, but the observed difference is significant).


4) It is most likely that you will observe a false negative (i.e., there is an effect, but the observed difference is not significant)



59% of the people chose the correct answer: It’s most likely that you’ll observe a true negative. You might be surprised, because the scenario (5% significance level, 80% power, the null hypothesis (H0) and the alternative hypothesis (H1) are equally likely to be true) is pretty much the prototypical experiment. It thus means that a typical experiment (at least when you think your hypothesis is 50% likely to be true) is most likely not to reject the null-hypothesis (earlier, I wrote 'fail', but in the comments Ron Dotsch correctly points out not rejecting the null can be informative as well). Let’s break it down slowly.

If you perform a single study, the effect you are examining is either true or false, and the difference you observe is either significant or not significant. These four possible outcomes are referred to as true positives, false positives, true negatives, and false negatives. The percentage of false positives equals the Type 1 error rate (or α, the significance level), and false negatives (or Type 2 errors, β) equal 1 minus the power of the study. When the null hypothesis (H0) and the alternative hypothesis (H1) are a-priori equally likely, the significance level is 5%, and the study has 80% power, the relative likelihood of the four possible outcomes of this study before we collect the data is detailed in the table below.



                             H0 True                       H1 True
                             (A-Priori 50% Likely)         (A-Priori 50% Likely)
Significant Finding          False Positive (α): 2.5%      True Positive (1-β): 40%
Non-Significant Finding      True Negative (1-α): 47.5%    False Negative (β): 10%
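The four cell probabilities in the table above are simple products of the prior and the error rates; as a quick sketch in R:

```r
# The four outcome probabilities for a single study with a 50% prior
# that H1 is true, a 5% significance level, and 80% power.
prior_h1 <- .50
alpha    <- .05
power    <- .80

false_positive <- (1 - prior_h1) * alpha        # 0.025
true_negative  <- (1 - prior_h1) * (1 - alpha)  # 0.475  (the most likely outcome)
true_positive  <- prior_h1 * power              # 0.40
false_negative <- prior_h1 * (1 - power)        # 0.10
```

The four values sum to 1, and the true negative (47.5%) beats the true positive (40%) whenever power is below 1-α.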


The only way a true positive is most likely (the answer provided by 24% of the participants) given this a-priori likelihood of H0 is when the power is higher than 1-α, so in this example higher than 95%. After asking which outcome was most likely, I asked how likely this outcome was. In the sample of 42 people who filled out my questionnaire, there were people who responded intuitively, and those who did the math. Twelve people correctly reported 47.5%. What’s interesting is that 16 people (more than one-third) reported a percentage higher than 50%. These people might have simply ignored the information that the hypothesis was equally likely to be true as it was to be false (which implies no outcome can be more than 50% likely), and intuitively calculated probabilities assuming the effect was true, while ignoring the probability it was not true. The modal response of people who had indicated earlier that they thought it was most likely to observe a true positive also points to this, because they judged it would be 80% probable that this true positive was observed.

Then I asked: 

“Assume you performed the single study described above, and have observed a statistical difference (p < .05, but you don’t have any further details about effect sizes, exact p-values, or the sample size). Simply based on the fact that the study is statistically significant, how likely do you think it is you observed a significant difference because you were examining a true effect?”

Eight people (who did the math) answered 94.1%, the correct answer. All but two people who responded intuitively underestimated the correct answer (the average answer was 57%). The remaining two answered 95%, which indicates they might have made the common error of assuming that observing a significant result means it’s 95% likely the effect is true (it’s not, see Nickerson, 2000). It’s interesting that people who responded intuitively overestimated the a-priori chance of a specific outcome, but then massively underestimated the probability that the effect was true, given that a significant result was observed. The correct answer is 94.1% because now that we know we did not observe a non-significant effect, we are left with the remaining probabilities that the effect is significant. There was a 2.5% chance of a Type 1 error, and a 40% chance of a true positive. That means the probability that the significant result reflects a true effect is 40 divided by the total, which is 40+2.5. And 40/(40+2.5)=94.1%. Ioannidis (2005) calls this post-study probability that the effect is true the positive predictive value, PPV (thanks to Marcel van Assen for pointing this out).
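The same calculation in R, following the table above:

```r
# Positive predictive value (Ioannidis, 2005): the probability the effect
# is true, given a significant result, with a 50% prior, alpha = .05,
# and 80% power.
prior_h1 <- .50
true_positive  <- prior_h1 * .80          # 0.40
false_positive <- (1 - prior_h1) * .05    # 0.025
ppv <- true_positive / (true_positive + false_positive)
round(ppv, 3)                             # 0.941
```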

What happens if you run multiple studies?

Continuing the example as Ellen Evers taught it, I asked people to imagine they performed three of the studies described above, and found that two were significant but one was not. How likely would it be to observe this outcome if the alternative hypothesis is true? All people who did the math gave the answer 38.4%. This is the a-priori likelihood of finding 2 out of 3 studies to be significant with 80% power and a 5% significance level. If the effect is true, there’s an 80% probability of finding an effect, times an 80% probability of finding an effect, times a 20% probability of a Type 2 error: 0.8*0.8*0.2 = 12.8%. If you calculate the probability for the three ways to get two out of three significant results (S S NS; S NS S; NS S S), you multiply it by 3, and 3*12.8% gives 38.4%. Ellen prefers to focus on the single outcome you have observed, including the specific order in which it was observed.

I might not have formulated the question clearly enough (most probability statements are so unlike natural language that they can be difficult to formulate precisely), but I tried to ask not for the a-priori probability, but for the probability that, given these observations, the studies examined a true effect (similar to the single study case above, where the answer was not 80%, but 94.1%). In other words, the probability that H1 is true, conditional on the acceptance of H1, which Ioannidis (2005) calls the PPV. This is the likelihood of finding a true positive, divided by the total probability of finding a significant result (either a true positive or a false positive).

We therefore also need to know how likely it is to observe this finding when the null-hypothesis is true. In that case, we would find a Type 1 error (5%), another Type 1 error (5%), and a true negative (95%): 0.05*0.05*0.95 = 0.002375, or 0.2375%. There are three ways to get this pattern of results, so if you want the probability of 2 out of 3 significant findings under H0 irrespective of the order, this probability is 0.7125%. That’s not very likely at all. 

To answer the question, we need to calculate 12.8/(12.8+0.2375) (for the specific order in which the results were observed) or 38.4/(38.4+0.7125) (for any 2 out of 3 studies), and both calculations give us 98.18%. Although a-priori it is not extremely likely to observe 2 significant and 1 non-significant finding, after you have observed this outcome, it is more than 98% likely that you were examining a true effect (and thus only 1.82% likely that the effect is not true).
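These numbers fall out of the binomial distribution, so the whole calculation fits in a few lines of R:

```r
# Probability of exactly 2 significant results out of 3 studies, under H1
# (80% power) and under H0 (5% significance level), and the resulting
# probability the effect is true (assuming a 50% prior).
p_h1 <- dbinom(2, 3, 0.80)   # 0.384    (= 3 * 0.8 * 0.8 * 0.2)
p_h0 <- dbinom(2, 3, 0.05)   # 0.007125 (= 3 * 0.05 * 0.05 * 0.95)
ppv  <- p_h1 / (p_h1 + p_h0)
round(ppv, 4)                # 0.9818
```

Because dbinom already counts all orderings, the order-specific and order-irrespective calculations give the same PPV.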

The probability that, given that you observed a mix of significant and non-significant studies, the effect you observed was true, is important to understand correctly if you do research. In a time where sets of 5 or 6 significant low-powered studies are criticized for being ‘too good to be true’ it’s important that we know when a set of studies with a mix of significant and non-significant studies is ‘too true to be bad’. Ioannidis (2005) briefly mentions you can extend the calculations for multiple studies, but focuses too much on when findings are most likely to be false. What struck me from the lecture Ellen Evers gave, is how likely some sets of studies that include non-significant findings are to be true.

These calculations depend on the power, significance level, and a-priori likelihood that H0 is true. If Ellen and I ever find the time to work on a follow up to our recent article on Practical Recommendations to Increase the Informational Value of Studies, I would like to discuss these issues in more detail. To interpret whether 1 out of 2 studies is still support for your hypothesis, these values matter a lot, but to interpret whether 4 out of 6 studies are support for your hypothesis, they are almost completely irrelevant. This means that one or two non-significant findings in a larger set of studies do almost nothing to reduce the likelihood that you were examining a true effect. If you’ve performed three studies that all worked, and a close replication isn’t significant, don’t get distracted by looking for moderators, at least until the unexpected result is replicated.
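To see why these values barely matter for larger sets, here's the same calculation for 4 significant results out of 6 studies (under the same assumptions of 80% power, a 5% significance level, and a 50% prior):

```r
# Probability the effect is true after observing 4 significant and
# 2 non-significant results in 6 studies (80% power, alpha = .05,
# 50% prior on H1).
p_h1 <- dbinom(4, 6, 0.80)   # about 0.246
p_h0 <- dbinom(4, 6, 0.05)   # about 0.000085
ppv  <- p_h1 / (p_h1 + p_h0)
round(ppv, 4)                # 0.9997
```

Getting 4 out of 6 significant results is so improbable under H0 that the PPV is essentially 1, which is why the two non-significant studies do almost nothing to reduce the likelihood that the effect is true.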

I've taken the spreadsheet Ellen Evers made and shared with the PhD students, and extended it slightly. You can download it here, and use it to perform your own calculations with different levels of power, significance levels, and a-priori likelihoods of H0. On the second tab of the spreadsheet, you can perform these calculations for studies that have different power and significance levels.



If we want to reduce publication bias, understanding (I mean, really understanding) that sets of studies that include non-significant findings are extremely likely, assuming H1 is true, is a very important realization. Depending on the number of studies, their power, significance level, and the a-priori likelihood of the idea you were testing, it can be no problem to submit a set of studies with mixed significant and non-significant results for publication. If you do, make sure that the Type 1 error rate is controlled (e.g., by pre-registering your study design). 

I want to end with a big thanks to Ellen Evers for explaining this to me last week, and thanks so much to all of you who answered my questionnaire about probabilities.

Thursday, June 12, 2014

The Null Is Always False (Except When It Is True)

An often heard criticism of null-hypothesis significance testing is that the null is always false. The idea is that average differences between two samples will never be exactly zero (there will practically always be a tiny difference, even if it is only 0.001). Furthermore, if the sample size is large enough, tiny differences can be statistically significant. Both these statements are correct, but they do not mean the null is never true.

The null-hypothesis assumes the difference between the means in the two populations is exactly zero. However, the two means in the samples drawn from these two populations vary with each sample (and the less data you have, the greater the variance). When the null is true, the difference between the two sample means will get really close to zero as the sample size approaches infinity. This is a core assumption in Frequentist approaches to statistics. It’s therefore not important that the observed difference in your sample isn’t exactly zero, as long as the difference in the population is zero.

Some researchers, such as Cohen (1990) have expressed their doubt that the difference in the population is ever exactly zero. As Cohen says:

The null hypothesis, taken literally (and that's the only way you can take it in formal hypothesis testing), is always false in the real world. It can only be true in the bowels of a computer processor running a Monte Carlo study (and even then a stray electron may make it false). If it is false, even to a tiny degree, it must be the case that a large enough sample will produce a significant result and lead to its rejection. So if the null is always false, what’s the big deal about rejecting it? (p. 1308).

One ‘big deal’ about rejecting it, is that to reject a small difference (e.g., a Cohen’s d of 0.001) you need a sample size of at least 31 million participants to have a decent chance of observing such a statistical difference in a t-test. With such sample sizes, almost all statistics we use (e.g., checks for normality) break down and start to return meaningless results.
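The 31 million number is easy to check with the built-in power.t.test function in R:

```r
# Sample size needed per group to detect d = 0.001 with 80% power
# in a two-sided two-sample t-test at alpha = .05.
res <- power.t.test(delta = 0.001, sd = 1, sig.level = .05, power = .80)
ceiling(res$n)   # roughly 15.7 million per group, about 31 million in total
```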

Another ‘big deal’ is that we don’t know whether the observed difference will remain equally large irrespective of the increase in sample size (as should happen, when it is an accurately measured true effect) or whether it will become smaller and smaller, without ever becoming statistically significant, the more measurements are added (as should happen when there is actually no effect). Hagen (1997) explains this latter situation in his article ‘In Praise of the Null-Hypothesis Significance Test’ to prevent people from mistakenly assuming that every observed difference will become significant if you simply add participants. He writes:

‘Thus, although it may appear that larger and larger Ns are chasing smaller and smaller differences, when the null is true, the variance of the test statistic, which is doing the chasing, is a function of the variance of the differences it is chasing. Thus, the "chaser" never gets any closer to the "chasee."’
 

What’s a ‘real’ effect?

The more important question is whether it is true that there are always real differences in the real world, and what the ‘real world’ is. Let’s consider the population of people in the real world. While you read this sentence, some individuals in this population have died, and some were born. For most questions in psychology, the population is surprisingly similar to an eternally running Monte Carlo simulation. Even if you could measure all people in the world in a millisecond, and the test-retest correlation was perfect, the answer you would get now would be different from the answer you would get in an hour. Frequentists (the people that use NHST) are not specifically interested in the exact value now, or in one hour, or next week Thursday, but in the average value in the ‘long’ run. The value in the real world today might never be zero, but it’s never anything, because it’s continuously changing. If we want to make generalizable statements about the world, I think the fact that the null-hypothesis is never precisely true at any specific moment is not a problem. I’ll ignore more complex questions for now, such as how we can establish whether effects vary over time.

When perfect randomization to conditions is possible, and the null-hypothesis is true, every p-value is going to be just as likely. There is a great blog post by Jim Grange explaining, with simulations in R, that p-values are uniformly distributed if the null is true. Take the script from his blog, and change the sample size (e.g., to 100000 in each group), or change the variances, and as long as the means of the two groups remain identical, p-values will be uniformly distributed. Although it is theoretically possible that differences are randomly fluctuating around zero in the long term, some researchers have argued this is often not true. Especially in correlational research, or in any situation where participants are not randomly assigned to conditions, this is a real problem.
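A minimal version of such a simulation (my own sketch, not Jim Grange's script) looks like this:

```r
# When the null is true, p-values from a t-test are uniformly
# distributed: about 5% of them fall below .05, 10% below .10, etc.,
# regardless of the sample size.
set.seed(1)
p <- replicate(10000, t.test(rnorm(100), rnorm(100))$p.value)
mean(p < .05)   # close to .05
hist(p)         # roughly flat histogram between 0 and 1
```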

Meehl talks about how in psychology every individual-difference variable (e.g., trait, status, demographic) correlates with every other variable, which means the null is practically never true. In these situations, it’s not that testing against the null-hypothesis is meaningless, but it’s not informative. If everything correlates with everything else, you need to create good models, and test those. A simple null-hypothesis significance test will not get you very far. I agree.



Random Assignment vs. Crud

To illustrate when NHST can be used as a source of information in large samples, and when NHST is not informative in large samples, I’ll analyze data from a large dataset with 6344 participants from the Many Labs project. I’ve analyzed 10 dependent variables to see whether they were influenced by A) gender, and B) assignment to the high or low anchoring condition in the first study. Gender is a measured individual difference variable, not a manipulated variable, and might thus be affected by what Meehl calls the crud factor. Here, I want to illustrate that this is A) probably often true for individual difference variables, though perhaps not always, and B) probably never true when analyzing differences between groups individuals were randomly assigned to.

You can download the CleanedData.sav Many Labs Data here, and my analysis syntax here. I perform 8 t-tests and 2 Chi-square tests on 10 dependent variables, while the factor is either gender, or the random assignment to the high or low condition for the first question in the anchoring paradigm. You can download the output here. When we analyze the 10 dependent variables as a function of the anchoring condition, none of the differences are statistically significant (even though there are more than 6000 participants). You can play around with the script, repeating the analysis for the conditions related to the other three anchoring questions (remember to correct for multiple comparisons if you perform many tests), and see how randomization does a pretty good job at returning non-significant results even in very large sample sizes. If the null is always false, it is remarkably difficult to reject. Obviously, when we analyze the answer people gave on the first anchoring question, we find a huge effect of the high vs. low anchoring condition they were randomly assigned to. Here, NHST works. There is probably something going on. If the anchoring effect was a completely novel phenomenon, this would be an important first finding, to be followed by replications and extensions, and finally model building and testing.

The results change dramatically if we use gender as a factor. There are gender effects on dependent variables related to quote attribution, system justification, the gambler’s fallacy, imagined contact, the explicit evaluation of arts and math, and the norm of reciprocity. There are no significant differences in political identification (as conservative or liberal), on the response scale manipulation, or on gain vs. loss framing (even though p = .025, such a high p-value is stronger support for the null-hypothesis than for the alternative hypothesis with 5500 participants). It’s surprising that the null-hypothesis (gender does not influence the responses participants give) is rejected for seven out of ten effects. Personally (perhaps because I’ve got very little expertise in gender effects) I was actually extremely surprised, even though the effects are small (with Cohen’s ds of around 0.09). This, ironically, shows that NHST works - I've learned gender effects are much more widespread than I'd have thought before I wrote this blog post.


It also shows we have learned very little, because NHST when examining gender differences does not really tell us anything about WHY gender influences all these different dependent variables. We need better models to really know what’s going on. For the studies where there was no significant effect (such as political orientation), it is risky to conclude gender is irrelevant – perhaps there are moderators, and gender and political identification are related. 


Conclusion

We can reject the hypothesis that the null is always false. Generalizing statements about how the null-hypothesis is always false, and thus how null-hypothesis significance testing is a meaningless endeavor, are only partially accurate. The null hypothesis is always false, when it is false, but it’s true when it’s true. It's difficult to know whether a non-significant difference reflects a Type 2 error (there is an effect, but it will only become significant if the statistical power is increased, for example by collecting more data), or whether the null is actually true. Null-hypothesis significance testing cannot answer this question: NHST can only reject the null-hypothesis, and when observed differences are not statistically significant, the outcome of a significance test necessarily remains inconclusive. But testing against the null-hypothesis in exploratory research, at least in experiments where random assignment to conditions is possible, is a useful statistical tool.