15 to 42 percent of medical research findings are false positives (Yet Another Calculation)

A while ago I found a very interesting paper from Leah R. Jager and Jeffrey T. Leek via a post in the Simply Statistics blog arguing that most published medical research is true, with a rate of false positives among reported results of 14% ± 1%. Their paper came as a response to an essay from John P. A. Ioannidis, and to several other authors, claiming that most published research findings are false.

After dealing with some criticisms, Mr. Leek made a good point in his post:

“I also hope that by introducing a new estimator of the science-wise fdr we inspire more methodological development and that philosophical criticisms won’t prevent people from looking at the data in new ways.”

And thus, following this advice, I didn’t let criticisms prevent me from looking at the data in a new way. So for this problem I have devised a probability distribution for p-values, fitted it to the data via MLE, and inferred from there the rate of false positives.

[Figure: p-value PDF and CDF]

So this is my take: a 15.33% rate of false positives, with a worst-case scenario of 41.75% depending on how mischievous researchers are, but in any case, and contrary to what other authors claim, most medical research seems to be true.

Introduction

The methodology in Leah R. Jager and Jeffrey T. Leek’s paper has two main characteristics:

  1. It focuses on a method that can be applied only to reported p-values below 0.05.
  2. It models the distribution of p-values from 0 to 0.05 using the following mixture:
    • f(p|a,b,\pi_0) = \pi_0 U(0,0.05) + (1-\pi_0)t\beta(a,b; 0.05)

I would like to thank the authors for providing a very useful script to scrape p-values from Pubmed. This is what the Pubmed p-values histogram looks like in the range [0,0.1]:

[Figure: Pubmed p-values histogram in the range [0, 0.1]]

The first obvious issue the authors deal with is the conspicuous rounding of p-values when they are published. We can also see why it makes sense to cut the data at the 0.05 threshold: there is a “big jump” in frequency there, most probably due to publication bias and several flavors of p-hacking aimed at passing editors’ filters and getting research published.

Publication bias should not affect the density of p-values in the range [0,0.05], but p-hacking is another story because we can only guess what researchers are doing. This means that any calculation will have to be either robust to any kind of p-hacking or able to assess the p-hacking impact precisely, which I see as a difficult task.

Although Mr. Leek addresses this problem in his post, this is the first thing I will do differently in my approach: in order to avoid the p-hacking bias in the data as much as possible, instead of using the p-values in the range [0,0.05] we will use the p-values in the range (0.05,1]. This range also suffers from the “harmless” publication bias but, since there is no reason (at least not a convoluted one) to p-hack experiments into results above 0.05, we can fairly assume that the density of these p-values is more “honest”.

Besides, if some of these honest p-values close to 0.05 were artificially pushed by researchers below the significance threshold, the resulting altered densities would favor the pessimistic view of high rates of false positives, since they would increase the prior probability of studies not being statistically significant. Therefore, doing inference with these “honest” p-values guarantees conservative results.

Jager and Leek also use a mixture of a Beta and a Uniform distribution to model the p-values. This roughly splits experiments into those with some effect and those with no effect at all, which at first sight might be a bit unrealistic for a model, since experiments with no effect at all virtually do not exist. On the other hand, the flexibility the Beta distribution offers might be seen as more of a problem than a virtue in this context, since it can adapt to anything p-value-like regardless of where it comes from. However, Leek also offers some simulations in his post showing the strength of their approach when p-hacking is present.

Nonetheless, instead of using a mixture, we will construct from scratch a truncated statistical distribution based on the p-values obtained from one-sided Hypothesis Test experiments where the Null Hypothesis H_0 follows a standard normal distribution \mathcal{N}(0,1) and all the possible Alternative Hypotheses H_1 spread following a \mathcal{N}(D,1) with D \sim \mathcal{N}(\mu,\sigma), \; \mu \geq 0. This might result in a more realistic distribution to describe the p-values, since it has less flexibility than the Beta distribution and needs no mixture with a Uniform distribution to account for non-significant results… So let’s begin.

The Data

Next we can see the (0.05,1] “honest” p-values from Pubmed:

[Figure: raw p-values from Pubmed in the range (0.05, 1]]

We can first observe how researchers usually round the p-values to two digits, so in order to even out the distribution we will round all the available p-values to two digits. This is the result:

[Figure: rounded p-values from Pubmed in the range (0.05, 1]]

Now the data looks good except for tiny peaks showing that some researchers round the p-values to one digit instead of two. The effect is so small, though, that we might as well ignore it, since it would make no sense to lose further information by rounding all the data to one digit. We can instead adjust the values for the sequence 0.10, 0.20, …, 0.90 by averaging the two previous and two posterior rounded values and using that result instead, as sketched in the code below. The final result is shown next:

[Figure: rounded and fixed p-values from Pubmed in the range (0.05, 1]]
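For reference, a minimal sketch of this two-digit rounding and peak smoothing in Python (NumPy only); the input is assumed to be an array of the scraped p-values already restricted to (0.05, 1]:

```python
import numpy as np

def smooth_rounding_peaks(pvals):
    """Round the p-values to two digits, histogram them on a 0.01 grid and
    flatten the extra peaks at 0.10, 0.20, ..., 0.90 (caused by one-digit
    rounding) by averaging the two bins before and the two bins after."""
    cents = np.round(np.asarray(pvals) * 100).astype(int)   # ~6..100 for p in (0.05, 1]
    grid = np.arange(6, 101)                                 # 0.06, 0.07, ..., 1.00
    counts = np.array([(cents == g).sum() for g in grid])
    for peak in range(10, 100, 10):                          # 0.10, 0.20, ..., 0.90
        i = peak - 6
        counts[i] = int(round(counts[[i - 2, i - 1, i + 1, i + 2]].mean()))
    return grid / 100.0, counts
```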

Once these rounding peaks have been smoothed out we have the data ready to work with. Now we just need a meaningful probability distribution to fit the data and do some inference.

The P-Value Distribution

Let’s consider a collection of experiments where the Null Hypothesis H_0 follows a \mathcal{N}(0,1) and all the possible Alternative Hypotheses H_1 spread following a \mathcal{N}(D,1) with D \sim \mathcal{N}(\mu,\sigma), \; \mu \geq 0. Such a scenario will look something like this:

[Figure: H_0 versus multiple H_1 distributions]

So we will assume that each p-value in our data comes from one of these H_1 experiments. If we now consider the collection of experiments as a whole, we have that

\mathcal{N}(D,1) \equiv \mathcal{N}(\mu, \sqrt{\sigma^2+1})

Which means we can simplify the collection of experiments into one unique normal distribution that fully characterizes the collection as we can see in the next figure:

[Figure: H_0 versus the resulting combined H_1]

So now, instead of treating each p-value as coming from a different experiment, we can consider all of them as coming from repetitions of one unique experiment without losing generality. This insight will allow us to easily treat the p-values as a random variable and to produce a CDF for them based on this scenario.
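A quick simulation sketch of this equivalence (Python with NumPy/SciPy); the \mu and \sigma values here are illustrative, not the ones fitted later:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 1.5, 1.2                # illustrative hyper-parameters for D
n = 1_000_000

# hierarchical model: one effect D per experiment, one test statistic X per experiment
D = rng.normal(mu, sigma, size=n)
X = rng.normal(D, 1.0)

print(X.mean(), X.std())            # ~mu and ~sqrt(sigma**2 + 1)
print(np.sqrt(sigma ** 2 + 1))      # scale of the collapsed N(mu, sqrt(sigma^2 + 1))

# the corresponding one-sided p-values against H_0 ~ N(0, 1)
pvals = stats.norm.sf(X)            # 1 - Phi(X)
```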

P-Value CDF and PDF

To figure out the CDF F_P(p) and the PDF f_P(p) for a random variable P behaving as the p-values previously described we can proceed as follows:

\begin{array}{l l} X \sim \mathcal{N}(\mu, \sigma) \\ P = 1-\Phi(X) \Rightarrow X = \Phi^{-1}(1-P)\\ \end{array}

Since \Pr(P \leq p) = \Pr\left(X \geq \Phi^{-1}(1-p)\right), we can now calculate the CDF and PDF for P:

\begin{array}{l l} F_P(p) = \Phi\left(\frac{\mu-\Phi^{-1}(1-p)}{\sigma}\right) \Rightarrow\\ f_P(p) =\partial_p F_P(p) = \frac{1}{\sigma }e^{\left(\text{erfc}^{-1}(2 (1-p))^2-\frac{\left(\sqrt{2} \text{erfc}^{-1}(2 (1-p))+\mu \right)^2}{2 \sigma ^2}\right)} \end{array}
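Both expressions are easy to check numerically; a minimal sketch (Python with SciPy) using the equivalent \Phi form of the PDF:

```python
import numpy as np
from scipy import stats

def pvalue_cdf(p, mu, sigma):
    # F_P(p) = Phi((mu - Phi^{-1}(1 - p)) / sigma)
    return stats.norm.cdf((mu - stats.norm.ppf(1.0 - p)) / sigma)

def pvalue_pdf(p, mu, sigma):
    # f_P(p) = phi((mu - z) / sigma) / (sigma * phi(z)),  with z = Phi^{-1}(1 - p)
    z = stats.norm.ppf(1.0 - p)
    return stats.norm.pdf((mu - z) / sigma) / (sigma * stats.norm.pdf(z))

p = np.linspace(0.001, 0.999, 999)
print(np.allclose(pvalue_pdf(p, 0.0, 1.0), 1.0))   # H_1 == H_0 gives the uniform density
print(pvalue_cdf(0.05, 0.0, 1.0))                  # 0.05 under the null, as it should be
print(pvalue_cdf(0.05, 2.0, 1.5))                  # ~0.59: more mass below 0.05 as the effect grows
```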

The next plot shows how the p-value PDF behaves for a variety of H_1 scenarios:

[Figure: p-value PDFs for several H_1 scenarios]

Just as expected, once H_1 behaves as H_0 we have a uniform distribution \mathcal{U}(0,1) and, the bigger the effect size, the closer the PDF looks to the p-values we collected from Pubmed.

So now we need to figure out which among all possible H_1 scenarios best describes the p-values we have from Pubmed, and we will do so via MLE.

Likelihood Function for P-Values

Considering P_i a collection of i.i.d. random variables following the p-value distribution previously described, we have the following joint PDF:

f_P(p_1,p_2,...,p_n | \mu,\sigma) = \prod_{i=1}^n f_P(p_i | \mu, \sigma)

Thus the likelihood function will be:

\mathcal{L}(\mu,\sigma | p_1,p_2, ..., p_n) = f_P(p_1,p_2,...,p_n | \mu,\sigma)

In order to maximize the likelihood for our data we will instead use the log-likelihood, for computational reasons:

\ell(\mu,\sigma | p_1,p_2,...,p_n ) = \log \prod_{i=1}^n f_P(p_i | \mu, \sigma) = \sum_{i=1}^n \log f_P(p_i | \mu, \sigma)

So to obtain the MLE estimators for \mu and \sigma we simply need to maximize the log-likelihood:

\max_{\mu,\sigma} \ell(\mu,\sigma | p_1,p_2,...,p_n )

Now, since we are only going to use honest p-values, that is, p-values in the range (0.05,1], we need to use the truncated version of the PDF to estimate the H_1 parameters. Therefore, if p \in [0,0.05] the truncated PDF is zero; otherwise:

f_{P(0.05,1]}(p | \mu, \sigma) = \frac{f_P(p | \mu, \sigma)}{1-F_P(0.05 | \mu, \sigma)}

And the log-likelihood function that we need to maximize is

\ell_{(0.05,1]}(\mu,\sigma | p_1,p_2,...,p_n ) = \sum_{i=1}^n \log f_{P(0.05,1]}(p_i | \mu, \sigma)
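There is no closed-form solution that I know of, so the maximization has to be done numerically; a minimal sketch with SciPy, where honest_pvalues is a hypothetical array holding the (0.05,1] p-values (values of exactly 1.0 are nudged away from the boundary):

```python
import numpy as np
from scipy import stats, optimize

def pvalue_cdf(p, mu, sigma):
    # F_P(p) = Phi((mu - Phi^{-1}(1 - p)) / sigma)
    return stats.norm.cdf((mu - stats.norm.ppf(1.0 - p)) / sigma)

def pvalue_logpdf(p, mu, sigma):
    # log f_P(p), with z = Phi^{-1}(1 - p)
    z = stats.norm.ppf(1.0 - p)
    return stats.norm.logpdf((mu - z) / sigma) - np.log(sigma) - stats.norm.logpdf(z)

def negative_loglik_truncated(theta, p, cutoff=0.05):
    """Negative log-likelihood of the p-value PDF truncated to (cutoff, 1]."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    log_norm = np.log1p(-pvalue_cdf(cutoff, mu, sigma))      # log(1 - F_P(cutoff))
    return -(pvalue_logpdf(p, mu, sigma) - log_norm).sum()

# p = np.clip(honest_pvalues, 0.05 + 1e-6, 1 - 1e-6)         # honest_pvalues is hypothetical
# fit = optimize.minimize(negative_loglik_truncated, x0=[1.0, 1.5],
#                         args=(p,), method="Nelder-Mead")
# print(fit.x)                                               # MLE estimates of (mu, sigma)
```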

And if we do so with the honest p-values from Pubmed we obtain the following result:

[Figure: MLE fit over the rounded and fixed p-values from Pubmed in the range (0.05, 1]]

Which means that with \mu = 1.870551, \; \sigma=1.883980 the honest p-values coming from Pubmed suggest that medical experiments as a whole behave like multiple repetitions of a Hypothesis Test experiment where H_0 \sim \mathcal{N}(0,1) and H_1 \sim \mathcal{N}(1.870551,1.883980). The next figure shows this scenario:

[Figure: H_0 versus the optimal combined H_1]

And this scenario is equivalent to having the p-values come from a collection of H_1 \sim \mathcal{N}(D,1) with D \sim \mathcal{N}(1.870551, \sqrt{1.883980^2-1}=1.596678), as shown in the next plot:

[Figure: H_0 versus the optimal set of H_1 distributions]

The percentage of experiments with significant expected values at \alpha = 0.05 can be inferred from the power of the test when H_1 is considered as a unique experiment, or simply by calculating F_P(0.05). This percentage is ~54.768%.
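This number follows directly from the fitted CDF; a quick check with SciPy:

```python
from scipy import stats

mu, sigma = 1.870551, 1.883980            # MLE fit from above
z = stats.norm.ppf(1 - 0.05)              # one-sided critical value, ~1.645
print(stats.norm.cdf((mu - z) / sigma))   # F_P(0.05) ~ 0.54768
```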

Rate of False Positives

The model fitted with the honest p-values suggests that they come from a framework where 55% of the hypotheses tested are statistically significant with \alpha = 0.05. Therefore, if there were no p-hacking or publication bias, the percentage of statistically significant medical literature in Pubmed should be around 55%. However, the actual percentage of statistically significant literature in Pubmed is 80.7%… suspiciously close to the 80% power with which most hypothesis tests are designed.

This 25% gap might be explained by editors rejecting non-significant papers to achieve the suspicious 80% of statistically significant literature in their journals, by researchers torturing their experiments to achieve significance or, the most likely reason, a combination of both.

[Figure: honest model versus the p-values from Pubmed]

If all the bias were due to editors’ choices then the rate of false positives would be unaffected. The total rate of false positives coming from the model can be calculated via

\begin{array}{l l} D_i \sim \mathcal{N}(\mu=1.87,\sigma=1.60)\\ \lim_{n \rightarrow \infty }\frac{1}{n}\sum_{i=1}^n \left(1-\Phi\left(\Phi^{-1}(1-0.05)-D_i\right)\right)\mathbb{1}\left[D_i<\Phi^{-1}(1-0.05)\right]\approx 0.084 \end{array}
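One way to reproduce this number is a plain Monte Carlo average over simulated effects, counting only the significant results produced by experiments whose expected effect falls below the critical value (my reading of the expression above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_D, sigma_D = 1.870551, 1.596678      # fitted distribution of the effects D
z = stats.norm.ppf(1 - 0.05)            # one-sided critical value, ~1.645

D = rng.normal(mu_D, sigma_D, size=2_000_000)
power = stats.norm.cdf(D - z)           # Pr(p < 0.05 | effect D) = 1 - Phi(z - D)
fp_rate = np.mean(power * (D < z))      # significant results coming from "non-significant" effects
print(fp_rate)                          # ~0.084
```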

And since the model fitted with the honest p-values suggests that about 55% of results are statistically significant, while about 8.4% of all experiments end up as false positives, the rate of false positives relative to the significant results is 0.084/0.548 ≈ 15.33%.

On the other hand, if we consider the worst possible scenario, where all the 25% gap is caused by p-hacking non-statistically-significant experiments, then we would have 25% + 8.4% = 33.4% of false positives out of the 80% of significant literature, which means a relative rate of 41.75% of false positives. So, even in the worst case, we would still have a majority of around 58% of correct statistically significant publications.
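The arithmetic behind these two headline numbers, spelled out (the 0.55 and 0.80 shares are the rounded values used in the text):

```python
significant_model = 0.54768      # share of significant results implied by the model
fp_all = 0.084                   # share of all experiments ending up as false positives
print(fp_all / significant_model)                     # ~0.1533 -> 15.33% (no p-hacking)

significant_observed = 0.80      # share of significant literature actually observed
gap = significant_observed - 0.55                     # the ~25% gap, here blamed on p-hacking
print((gap + fp_all) / significant_observed)          # ~0.4175 -> 41.75% (worst case)
```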

Discussion

The 15.33% false positive rate we have obtained for a non p-hacking scenario falls quite close to the 14% ± 1% from Jager and Leek. However, since it is just as hard to believe that the 25% gap from the estimated 55% to the real 80% is all due to p-hacking, as that there is no p-hacking at all, it is reasonable to assume that the real percentage should be something in between the mildly optimistic 15.33% and the overly pessimistic 41.75%.

In any case the approach presented in this post vindicates the main claim from Jager and Leek about most medical research being true.

Studies arguing that most research is false are based on research having a low prior probability of being statistically significant, which might not be a realistic scenario, since researchers do not randomly try substances to see what happens; I mean, it is not as if one day a researcher decides to test whether Cocoa Puffs cure cancer.

UPDATE: I have reasons to believe that the honest p-values are not that honest after all; revisiting the calculations for the MLE fitting, I removed the 0.9-1.0 p-values when fitting, and the resulting PDF does not nicely fit the histogram in the higher p-value range:

[Figure: MLE fit on p-values in the range (0.05, 0.9]]

So, if we assume that researchers might be tempted to publish results close to the statistically significant ones (or editors tempted to choose those), it turns out that I will need to repeat the calculation with another set of honest p-values, that is, those in the range (0.1,0.9]. Now, when doing the fitting in this range, this is what we see:

[Figure: MLE fit on p-values in the range (0.1, 0.9]]

And this means that the \mu and \sigma (1.870551, 1.883980) obtained before turn into (0.6283843, 1.3576996), which means I need to recalculate the whole thing and update the results in the post… yaaay!
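For this refit, the only change with respect to the (0.05,1] case is that the density has to be renormalized on both sides; mirroring the truncation used above, the likelihood would be built from

f_{P(0.1,0.9]}(p | \mu, \sigma) = \frac{f_P(p | \mu, \sigma)}{F_P(0.9 | \mu, \sigma)-F_P(0.1 | \mu, \sigma)}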
