If you play with your Prior you’ll go blind


And thus, the Huffington Post predicted a 98% probability for Hillary Clinton to be the next President of the United States. Amen… Let’s tease them a little bit, shall we?

My Bayesian friends, I understand that playing with your priors is a very joyful activity, but you see, it leads to blindness. It allows you to believe, let me cap & bold this one, BELIEVE that Hillary's chances of being the next President of the United States were 98%! No wonder betting sites heavily favored Hillary's side in the days before the election! I mean, 98%! Who wouldn't put some money on that? Right?

But you know, a 98% probability coming from a Bayesian means very little unless, of course, they do some math pirouette to guarantee that the probability has frequentist properties. But then, if they do that, why bother going Bayesian in the first place?

If a frequentist tells you there is a 98% probability of an event happening, he/she means that in 98 out of 100 situations like the one you are in, the event will occur. Now, if a Bayesian tells you there is a 98% probability, he/she means that this is his/her degree of belief (wot?) that the event will happen… Amen again.
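To make the frequentist reading concrete, here is a minimal simulation sketch (plain Python, with a made-up event probability) checking the long-run frequency claim: an event with probability 0.98 should occur in roughly 98 out of every 100 repetitions of the same situation.

```python
import random

random.seed(42)

P_EVENT = 0.98       # the claimed probability (hypothetical)
N_REPEATS = 100_000  # number of identical "situations" we simulate

# Frequentist reading: across repeated identical situations,
# the event should occur in roughly 98% of them.
occurrences = sum(random.random() < P_EVENT for _ in range(N_REPEATS))

print(f"Observed frequency: {occurrences / N_REPEATS:.4f}")  # close to 0.98
```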

In other words, Bayesian results are as credible as the beliefs of the Bayesian statistician making the calculations. Now we can understand why they calculate credible intervals instead of confidence ones.
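And to see how much the prior alone can move the answer, here is a small Beta-Binomial sketch with invented poll numbers: the same data, filtered through two different priors, yields two rather different posterior probabilities and credible intervals.

```python
from scipy import stats

# Hypothetical data: 52 "wins" out of 100 polled voters (made up).
wins, n = 52, 100

# Two analysts, two priors over the win probability p:
priors = {
    "flat prior Beta(1, 1)":         (1, 1),
    "optimistic prior Beta(80, 20)": (80, 20),  # strong prior belief that p is around 0.8
}

for name, (a, b) in priors.items():
    # Beta prior + Binomial likelihood -> Beta posterior (conjugacy)
    post = stats.beta(a + wins, b + n - wins)
    prob_win = 1 - post.cdf(0.5)       # posterior P(p > 0.5)
    lo, hi = post.ppf([0.025, 0.975])  # 95% credible interval
    print(f"{name}: P(p > 0.5) = {prob_win:.3f}, 95% credible interval = [{lo:.3f}, {hi:.3f}]")
```

With the flat prior the posterior only mildly favors a win; with the optimistic prior the very same 52/100 looks like a near-certainty, which is precisely the "playing with your prior" effect.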

If we check the Huffpo methodology, we can read:

Many Bayesian models ― including the Pollster averaging model as it’s implemented for our charts ― use “uninformed” priors that don’t affect the model or provide any background information.

However, we do use information from previous elections in these priors to make predictions in our presidential model.

Ba dum tsssss

Much has been written on the pros and cons of going Bayesian and how evil Frequentists are, but this amazing Bayesian result from Huffpo was just too good to pass up as a beautiful example of how blind you can go when playing with your priors.

Social Network Analysis & GOP Verbal Attacks

Not that I know anything about the GOP debates or candidates, but I happened to see, in a CNN post, this nice visualization of verbal attacks during the RL GOP Debate, and I thought I would do a little SNA and try to draw conclusions about the debate WITHOUT actually having seen it…

Let's see how it goes and, please, if you've seen the debate and know better than I do, let me know if I am very wrong 🙂
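For flavor, here is a minimal sketch of the kind of SNA one could run on a directed "who attacked whom" graph; the edge list and weights below are invented placeholders, not taken from the CNN visualization.

```python
import networkx as nx

# Hypothetical "who attacked whom" edges with attack counts as weights.
# Names and numbers are placeholders, NOT data from the CNN post.
attacks = [
    ("CandidateA", "CandidateB", 4),
    ("CandidateB", "CandidateA", 3),
    ("CandidateC", "CandidateD", 2),
    ("CandidateD", "CandidateC", 2),
    ("CandidateA", "CandidateC", 1),
]

G = nx.DiGraph()
G.add_weighted_edges_from(attacks)

# Weighted out-degree: how much each candidate attacks.
# Weighted in-degree: how much each candidate is attacked.
out_deg = dict(G.out_degree(weight="weight"))
in_deg = dict(G.in_degree(weight="weight"))

print("Most aggressive:", max(out_deg, key=out_deg.get))
print("Most attacked:  ", max(in_deg, key=in_deg.get))
```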


Media on “Video Games & Violence”

[Video: click to watch]

These are the numerical results the researchers used to draw their "ridiculous" conclusions in their paper:

The ANOVA procedure for repeated-measures designs yields significant results for the

  • dACC (Wilks' Λ = 0.33, F = 4.59, p < .027, η² = 0.67)
  • rACC (Wilks' Λ = 0.19, F = 9.55, p < .003, η² = 0.81)
  • amygdala (Wilks' Λ = 0.28, F = 5.75, p < .014, η² = 0.72)

Tests for linear trends were significant in the three ROIs:

  • dACC: F = 8.28, p < .014;
  • rACC: F = 17.97, p < .001;
  • amygdala: F = 30.02, p < .001

but not for higher-order trends.

The other study they mention does not involve any experiment and is merely a review of other studies.

Whether the significance in the study is significant for science is up to the researchers but, yeah, sure, we can draw conclusions from a sample of just 13. Interestingly, others would regard having "too many" people in a sample as a manipulation to achieve significance. So I guess that when we don't like something, we can always find reasons to complain about it.
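For reference, here is a sketch of the kind of repeated-measures ANOVA reported above, run on simulated data for 13 subjects using statsmodels' AnovaRM; the subjects, sessions, effect size, and noise levels are all made up, not the study's.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Simulated design: 13 subjects, 4 repeated sessions, one ROI signal.
# A small linear drift across sessions stands in for the reported trend.
n_subjects, sessions = 13, [1, 2, 3, 4]
rows = []
for subj in range(n_subjects):
    baseline = rng.normal(0, 1)          # subject-specific baseline
    for s in sessions:
        rows.append({
            "subject": subj,
            "session": s,
            "activation": baseline + 0.4 * s + rng.normal(0, 1),
        })

df = pd.DataFrame(rows)

# Repeated-measures ANOVA: activation ~ session, with subject as the repeated factor.
res = AnovaRM(df, depvar="activation", subject="subject", within=["session"]).fit()
print(res)
```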