When combining p-values in a meta-analysis framework there are many different methods we can apply, depending on where the p-values come from and how they are related. The best-known are Fisher's method and Stouffer's method.
So I thought I would add one more, just for fun and because why not: the Bayesian P-Value method!
So this is how it goes…
Turns out that thanks to Sellke, Bayarri & Berger we have a cute little formula that turns a p-value into the posterior probability of H0. It arises from a bound on the Bayes factor, together with the assumption that H0 and H1 have equal prior probabilities of 1/2.
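For reference, the standard form of that calibration (valid for p ≤ 1/e) can be written as:

```latex
P(H_0 \mid p) \;=\; \left( 1 + \left[ -e \, p \ln p \right]^{-1} \right)^{-1}
```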
Cool, huh? The "only" caveat is that this formula works just for p-values lower than or equal to 1/e ≈ 0.368, so for greater p-values we are out of luck. But considering that we usually want to combine small p-values, this limitation might not be much of a problem in many situations.
All right. Now that we have turned the p-values into Bayesian posterior probabilities, we can use a little naive Bayes to combine those probabilities into one, and then apply the inverse of our little formula above to recover the p-value that corresponds to the combined probability.
Let's say we have three p-values coming from three identical experiments with results 0.05, 0.1 and 0.2, and let's apply the previous steps to them.
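The calibrate–combine–invert pipeline can be sketched in stdlib-only Python. This is my own sketch of the steps described above (the function names are mine), assuming equal 1/2 priors and independent experiments:

```python
import math

def p_to_posterior(p):
    """Sellke-Bayarri-Berger calibration: p-value -> posterior P(H0).
    Valid for 0 < p <= 1/e, assuming P(H0) = P(H1) = 1/2."""
    assert 0 < p <= 1 / math.e
    return 1.0 / (1.0 + 1.0 / (-math.e * p * math.log(p)))

def naive_bayes_combine(posteriors):
    """Naive-Bayes combination of per-experiment posteriors of H0
    (treats the experiments as independent)."""
    num = math.prod(posteriors)
    den = math.prod(1 - q for q in posteriors)
    return num / (num + den)

def posterior_to_p(post):
    """Invert the calibration numerically; the calibration is
    monotone increasing on (0, 1/e], so bisection works."""
    lo, hi = 1e-12, 1 / math.e
    for _ in range(200):
        mid = (lo + hi) / 2
        if p_to_posterior(mid) < post:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

pvals = [0.05, 0.1, 0.2]
posteriors = [p_to_posterior(p) for p in pvals]
combined = naive_bayes_combine(posteriors)
print(combined)                  # combined posterior of H0, ~0.1823
print(posterior_to_p(combined))  # back to a p-value, ~0.0213
```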
These are the results of combining these three p-values with three different methods:
| Method | P-Values Set | Naïve Bayes | Combined P-Value |
|---|---|---|---|
| Fisher's | 0.05, 0.1, 0.2 | – | 0.0317663 |
| Stouffer's | 0.05, 0.1, 0.2 | – | 0.01479742 |
| Bayesian P-Value | 0.05, 0.1, 0.2 | 0.1823287 | 0.02132706 |
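For comparison, the two classical methods can also be reproduced with stdlib-only Python (again a sketch of mine, not the post's original code):

```python
import math
from statistics import NormalDist

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(log p) is chi-square with 2k df
    under H0. For even df the survival function has a closed form."""
    x = -2 * sum(math.log(p) for p in pvals)
    half = x / 2
    k = len(pvals)  # df = 2k
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

def stouffer_combine(pvals):
    """Stouffer's method: sum the z-scores, scale by sqrt(k),
    and convert the result back to a p-value."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in pvals) / math.sqrt(len(pvals))
    return 1 - nd.cdf(z)

pvals = [0.05, 0.1, 0.2]
print(fisher_combine(pvals))    # ~0.0318
print(stouffer_combine(pvals))  # ~0.0148
```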
And as we can see, the result for the Bayesian P-Value falls almost exactly between Fisher's and Stouffer's results in this example. Oh, well.