In the previous post I showed how James Hansen at NASA GISS clearly overestimated global warming in the late 1980s due to the modeling choices he made. To make a point about how influential the choice of a model is, in this post I will make modeling choices that allow us to claim that global warming can be explained as a fluke in a random process.
I like to explain the relationship between data and models by saying that data is the shadow reality casts, and models are what we believe is casting the shadow. So once we have a model we can use it to cast shadows (make predictions), like the one James Hansen made, which could be read in newspapers in 1986:
Hansen predicted global temperatures should be nearly 2 degrees higher in 20 years, "which is about the warmest the earth has been in the last 100,000 years."
Interestingly, James Hansen downgraded his prediction in a 1988 paper from nearly two degrees higher to one degree higher. Though to be fair, I would not be surprised if the media misquoted him; I might not trust scientists, but I absolutely distrust the media.
Anyhow, let’s now compare NASA’s prediction in this 1988 paper (in red) to what actually happened years later (in blue):
Hansen works in his paper with three scenarios, and this is how Hansen describes scenario A:
Scenario A assumes that growth rates of trace gas emissions typical of the 1970s and 1980s will continue indefinitely; the assumed annual growth averages 1.5% of current emissions, so the net greenhouse forcing increases exponentially.
The previous plot shows that this model overestimates reality; it departs from the real temperatures right from the start, and it seems to stick closely to the assumed exponential growth of CO2 emissions. Let's check the plot with Hansen's prediction scenarios up to 2060:
So there you go, if Hansen's model is right and we continue to pollute I guess we are pretty much… how to say it in English… screwed, yeah, that's the word. Fortunately Hansen's model diverges from reality right from the start, and for the last decade or so we have experienced a mild decline in global temperatures. Hansen's model casts the wrong shadow, and models pay this scientific sin of ugliness with their lives… or plastic surgery if they are pretty enough, which is usually what happens, since scientists love their babies.
But this is anyway how science works if you agree with Karl Popper; we design falsifiable models to describe reality, we make predictions with them, they fail, we upgrade our models to fit the new data, and we make new predictions. Nonetheless, some climatologists have the nasty habit of shaping models that always overestimate temperatures, which makes the rest of us wary of the next end-of-the-world prediction coming from them.
So now that I have devilishly planted a seed of doubt about the models used by climatologists, I am going to test the hypothesis that the increase in temperatures in the last century is nothing more than a random fluke.
Come on! It’s growing! That can’t possibly be random!
Well, actually, a random walk can easily show growing trends and yet be a completely random process. So do temperatures behave like a random walk?
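To see this, here is a short sketch (in Python with numpy; a hypothetical illustration, not part of the original analysis) that generates a pure random walk and then measures the apparent "trend" in it with a least-squares line:

```python
import numpy as np

# A random walk: each step is pure noise, x_t = x_{t-1} + e_t,
# yet the resulting path can appear to trend for long stretches.
rng = np.random.default_rng(42)
steps = rng.normal(0.0, 1.0, size=1000)
walk = np.cumsum(steps)

# Measure the apparent "trend": slope of a least-squares line through the walk.
t = np.arange(len(walk))
slope = np.polyfit(t, walk, 1)[0]
print(f"apparent trend: {slope:.4f} units per step")
```

Even though there is no mechanism driving the series anywhere, the fitted slope is generally nonzero, and rerunning with different seeds produces runs that drift up or down for hundreds of steps.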
Whatever temperature we have today is likely to influence the temperature we have tomorrow; in other words, it can be reasoned that temperatures can be described with an autoregressive model of the form Tt = c + φ1·Tt−1 + … + φp·Tt−p + εt, where εt is white noise.
So let's fit the simplest version of this model using the yearly aggregated temperatures (not to be confused with the yearly running average) instead of the monthly aggregated temperatures, since monthly seasonal patterns are of no interest in this analysis. Let's also use temperatures only from 1880 on, since this is when measurements become more reliable. The fitting yields the following results:
```
ARIMA(1,0,0) with non-zero mean

Coefficients:
         ar1  intercept
      0.8671     0.0364
s.e.  0.0449     0.1317

sigma^2 estimated as 0.04435:  log likelihood=17.63
AIC=-29.26   AICc=-29.07   BIC=-20.61
```
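The fit above was presumably produced with R. As a rough illustration of what "fitting an AR(1)" means, here is a minimal Python/numpy sketch that estimates the coefficient by ordinary least squares on synthetic data (the series and its true coefficient of 0.87 are made up for the demo; this is not the temperature data):

```python
import numpy as np

# Fit an AR(1) model  T_t = c + phi * T_{t-1} + e_t  by ordinary least
# squares, regressing each value on the previous one.
def fit_ar1(y):
    x, target = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])  # intercept + lagged value
    (c, phi), *_ = np.linalg.lstsq(X, target, rcond=None)
    return c, phi

# Demo on a synthetic AR(1) series with a known coefficient phi = 0.87.
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = 0.87 * y[t - 1] + rng.normal(0, 0.2)

c, phi = fit_ar1(y)
print(f"estimated phi = {phi:.3f}")  # lands near the true 0.87
```

With enough data the least-squares estimate recovers the true coefficient closely, which is essentially what the ar1 = 0.8671 figure above is reporting for the temperature series.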
We can see that the autoregressive coefficient (ar1) is dangerously close to one, and I say “dangerously” because if it actually was one the autoregressive model would be describing a random walk. We can actually test if this coefficient is one with a unit root test.
```
Augmented Dickey-Fuller Test

Test Results:
  PARAMETER:
    Lag Order: 1
  STATISTIC:
    Dickey-Fuller: -1.5483
  P VALUE:
    0.1203
```
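For intuition about what this test does, here is a numpy sketch of the regression behind the lag-1 augmented Dickey-Fuller test (a simplified illustration: a real implementation also compares the statistic against the special Dickey-Fuller critical values, roughly −2.87 at the 5% level with a constant, to produce the p-value):

```python
import numpy as np

# The lag-1 ADF test fits   dy_t = a + rho * y_{t-1} + b * dy_{t-1} + e_t
# and reports the t-ratio of rho. Under a unit root (random walk), rho = 0.
def adf_stat(y):
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy) - 1),  # intercept
                         y[1:-1],               # lagged level
                         dy[:-1]])              # one lagged difference
    target = dy[1:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    sigma2 = resid @ resid / (len(target) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])  # t-ratio of rho

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))     # true unit root
stationary = np.zeros(500)
for t in range(1, 500):                    # strongly mean-reverting AR(1)
    stationary[t] = 0.5 * stationary[t - 1] + rng.normal()

print(adf_stat(walk), adf_stat(stationary))
```

The random walk produces a statistic near zero (a unit root cannot be rejected), while the stationary series produces a strongly negative one. The temperature series' statistic of −1.55 sits in the first camp.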
So the p-value for this unit root test (0.12) is not low enough to reject the hypothesis that the temperatures, in the context of an autoregressive model, have a unit root; therefore, considering data from 1880 on, we cannot reject the possibility that global warming is caused by a natural random process with no underlying cause.
NASA model vs Random model
Okay, so how good would a simple autoregressive model be at predicting temperatures? Would it yield much worse predictions than James Hansen's models? After all, a lot of science has gone into NASA's simulations. Well… let's see. Considering the results of the unit root test, we can fit a simple AR model embedded in a simple seasonal AR model with a period of eleven years (the solar activity cycle). If we fit the model with temperatures from 1880 to 1988, when Hansen published his predictions, we have:
```
ARIMA(1,1,0)(1,1,0)[11]

Coefficients:
          ar1     sar1
      -0.3386  -0.4803
s.e.   0.0969   0.0887

sigma^2 estimated as 0.05727:  log likelihood=-0.44
AIC=6.89   AICc=7.15   BIC=14.58
```
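For readers unfamiliar with the notation, an ARIMA(1,1,0)(1,1,0) model with period 11 first differences the series once at lag 1 and once at lag 11, and then fits the autoregressive terms to what remains. Here is a numpy sketch of that transformation on a toy series (hypothetical data; the post's fit used the yearly temperatures from 1880 to 1988, and this sketch shows only the differencing step, not the full fit or forecast):

```python
import numpy as np

# Apply the ordinary and seasonal differencing implied by ARIMA(p,1,q)(P,1,Q)[11].
def seasonal_difference(y, period=11):
    d1 = np.diff(y)                    # ordinary difference: removes the trend
    return d1[period:] - d1[:-period]  # seasonal difference at lag `period`

rng = np.random.default_rng(2)
years = np.arange(1880, 1989)  # 109 yearly observations, as in the post's fit
# Toy series: linear trend + 11-year cycle + noise
y = (0.005 * (years - 1880)
     + 0.1 * np.sin(2 * np.pi * years / 11)
     + rng.normal(0, 0.1, len(years)))

z = seasonal_difference(y)
print(len(y), len(z))  # 109 -> 97 after losing 1 + 11 observations
```

After both differences the trend and the 11-year cycle are removed, and the AR terms (ar1 and sar1 above) model the correlation left in the residual series.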
And if we now plot side by side the forecast of this autoregressive model and the forecast of NASA's model on top of the actual values from 1988 until 2011, we have the following:
Well, it turns out that the autoregressive model does a better job predicting temperatures than the NASA simulations!
I find it very interesting that the eleven-year seasonal period in the ARIMA model yields such a good fit, considering that eleven years is the average time it takes the Sun to reach maximum solar activity (periods outside the 9–13 year solar cycle range yield bad predictions)… Serendipity feeling here.
So is global warming a random fluke?
I personally don't think it is. Climatologists' overestimation is most likely due to their natural bias toward end-of-the-world scenarios when modeling global interactions.
On the other hand, it is a bit embarrassing that a simple ARIMA model does a better job predicting temperatures than a full-fledged scientific NASA model. It makes you wonder how much climatologists really know about climate.
Sure, by now climatologists probably have new models that cast more precise shadows; models that explain the past and present really well. But until they predict the future really well, they are not something we should bet our houses on. You see, models are like witches from the Middle Ages; if you torture them enough you can get them to say anything you want.
I was going to end this post by making the point that, simply by making modeling choices, we could justify global warming as a random fluke, but since the ARIMA model fits so well I think I will use it to predict future global temperatures until 2060, just as Hansen did.
It's worth noting that even though the ARIMA predictions are very good, the really important thing in this case is the prediction intervals, since they tell us how good or bad things can get. The ARIMA model shows a mean yearly trend of 0.028 ºC increase in global temperatures but, if the trend changes, so will the prediction intervals when updating the ARIMA model.
So, will the ARIMA model still have the upper hand over NASA's fancy 1988 models in the near future? Time will tell… Any bets? 🙂