# Climategate (1/3): be careful what you model for because you might get it

Deception is all around us, in every little parcel of our lives; from our personal Bart Simpson's "It wasn't me" to our local TV news host selling us the latest "You're not going to believe this" that we eventually do believe. One might wish there existed communities out there with higher standards, like, for example, Christian priests, but nope: they cover up pedophile networks in order to preserve The Church's "good" name. But how about the atheist priests, a.k.a. scientists? How about their standards?

Well, unfortunately the community of scientists might have more to do with priesthood than one might expect or desire, and a nice example of this is Climategate (or the "Climatic Research Unit email controversy," as some people had the kindness to rename the Climategate article on Wikipedia, following Fox News' motto "fair and balanced").

So in this post I am going to replicate earlier studies on global warming to uncover how overly pessimistic the mathematical models of the past were, but I will also talk about human weakness, and scientists are human… for now.

We cannot be experts in every complex subject out there, and sooner or later we have to rely on someone we trust to form an opinion, so when it comes to global climate issues I trust Dr. Richard A. Muller. He was one of the very few scientists with the decency to denounce the bad science displayed in "Mike's Trick" and to speak up regardless of what the truth might do to the "good" name of The Church of Science. He does believe that man-made CO2 is the most likely cause of global warming, but he also recognizes bad science for what it is.

After Climategate, Dr. Muller led a study to verify that what we thought we knew about global warming was not "enhanced" with tricks similar to Mike's. This study is open to public scrutiny at The Berkeley Earth Surface Temperature project, where we can freely download all the data, code, and relevant information to play with and have some fun… And I will do just that by analyzing the earliest and latest works of James Hansen of NASA GISS on global temperature. James Hansen is a well-known climatologist and activist keen to talk about end-of-the-world scenarios, which raises fair questions: what are scientists who love mother nature so much capable of? Will they have a greater bias when approaching environmental issues? What are they willing to do if they think that, by doing so, they are saving humanity? And this is not just a figure of speech; they literally believe they are saving humanity: Dr. Hansen recently published a book titled Storms of My Grandchildren: The Truth About the Coming Climate Catastrophe and Our Last Chance to Save Humanity. So, in the process of having fun, I will show how Dr. Hansen, back in the late 80s, rendered an overly pessimistic view of global warming.

# The Data

Unfortunately, being able to access raw data from studies is often close to impossible. In fact, after reading some of the infamous leaked Climategate emails, with the hiding of declining proxy temperatures, the gaming of the peer-review process to keep studies that contradicted their work from seeing the light, and the petitions to colleagues to delete emails to avoid further embarrassment under the FOI, I can only be thankful that projects like The Berkeley Earth Surface Temperature project exist and that people like Dr. Muller lead them.

To study Hansen's papers I am going to use the TAVG Berkeley Earth Dataset: Seasonality Removed and Quality Controlled, but instead of using Kriging to model and interpolate the planet's temperature, as the B.E.S.T. team wonderfully describe on their site, I will try to stay close to Hansen's methodology for comparison's sake.

# The Hansen Papers

So let's go back in history and start with this 1987 NASA paper: Global trends of measured surface air temperature (J. Hansen, S. Lebedeff, Journal of Geophysical Research, 1987).

In this paper Hansen divides a cylindrical projection of planet Earth into 80 boxes of equal Earth surface area, then further divides every box into a 10×10 lattice of squares and, when data is available, estimates for each of these 8,000 squares the temperature by weighting readings from stations within a radius of 1,200 km of the square's center.

The first thing I don't like about Hansen's approach is that, though every box has the same area, boxes near the poles have elongated, triangle-like shapes, whereas boxes near the equator have a more square shape. In the averaging process every box is treated the same and, though Hansen acknowledges this in the paper, he does not explain why treating such different geometries equally should not affect the final results.

My guess is that, since the boxes are 200 km wide and his method takes in stations as far away as 1,200 km, he concluded that the elongated shapes of these "small" squares would not much affect the calculations.
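To make the station-selection step concrete, here is a minimal sketch of picking the stations that contribute to a grid node under Hansen's 1,200 km cutoff. The function names and the plain `(lat, lon)` tuple representation are my own for illustration; this is not Hansen's or B.E.S.T.'s actual code, and, like the rest of this post, it assumes a perfectly spherical Earth:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; we assume a perfect sphere
MAX_DIST_KM = 1200.0      # Hansen's cutoff radius for contributing stations

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def stations_in_range(node, stations):
    """Return (station, distance) pairs within the 1,200 km radius of a node.

    `node` is a (lat, lon) tuple; `stations` is an iterable of (lat, lon) tuples.
    """
    selected = []
    for s in stations:
        d = great_circle_km(node[0], node[1], s[0], s[1])
        if d < MAX_DIST_KM:
            selected.append((s, d))
    return selected
```

The same selection works unchanged whether the nodes are the centers of Hansen's squares or the icosadeltahedral points introduced below, which is what makes the two gridding schemes easy to compare.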

Nonetheless, the geometry in which Earth is segmented is the first thing I am going to change in the calculations: instead of using the centers of these 8,000 Hansen squares, I will use the 8,672 points of a symmetric icosadeltahedral solution to the Thomson Problem. This solution divides planet Earth into hexagonal shapes of equal surface area (with a very few pentagonal exceptions) and removes the possible bias that the elongated polar Hansen squares might introduce into the calculations.
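For readers who want to reproduce the idea without computing a Thomson-problem solution, a Fibonacci lattice is a quick stand-in: it is not the symmetric icosadeltahedral grid used in this post, but it also spreads any number of nodes almost uniformly over the sphere, avoiding the elongated polar boxes. A minimal sketch:

```python
from math import pi, sqrt, asin, degrees

def fibonacci_lattice(n=8672):
    """Nearly uniform (lat, lon) nodes on a sphere via a Fibonacci lattice.

    NOT the icosadeltahedral Thomson-problem solution used in the post,
    just a simple approximation that also spaces n points almost evenly.
    """
    golden = (1 + sqrt(5)) / 2
    nodes = []
    for i in range(n):
        z = 1 - (2 * i + 1) / n                          # uniform in (-1, 1)
        lon = degrees((2 * pi * i / golden) % (2 * pi)) - 180.0
        lat = degrees(asin(z))
        nodes.append((lat, lon))
    return nodes
```

Swapping these nodes for the Thomson points should change the results only marginally; the key property in both cases is that each node represents roughly the same patch of surface area.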

The second thing that bugs me in this paper is that, though the temperatures for each node are weighted linearly based on the fact that the closer two stations are, the more correlated their temperatures, the formulas overlook the fact that this correlation fades away from the poles, down to a factor of 0.4 at the equator. Hansen's temperature weighting formula goes as follows:

$W_n = (D-d_n)/D$

Where $D$ is the 1,200 km maximum radius for stations to be weighted and $d_n$ is the station's distance to the center of a Hansen square. To account for the fading correlation with latitude, I will instead use the formula:

$W_n = \frac{D-d_n}{D}\left( \frac{0.6}{90}|{latitude_n}|+0.4 \right )$
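The two formulas above are simple enough to sketch directly in code (the function names are mine; the constants are the ones from the post): Hansen's weight decays linearly from 1 at the node to 0 at 1,200 km, and the corrected weight scales it by a latitude factor going from 0.4 at the equator to 1.0 at the poles.

```python
D = 1200.0  # km, maximum radius for a station to contribute to a node

def hansen_weight(d):
    """Hansen's linear distance weight: W_n = (D - d_n) / D."""
    return max(0.0, (D - d) / D)

def latitude_corrected_weight(d, lat):
    """Distance weight scaled by the latitude-dependent correlation factor,
    fading from 1.0 at the poles (|lat| = 90) to 0.4 at the equator."""
    return hansen_weight(d) * (0.6 / 90.0 * abs(lat) + 0.4)
```

For example, a station sitting right on an equatorial node gets weight 1.0 under Hansen's formula but only 0.4 under the corrected one, which is exactly where the two methods diverge most.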

I will also merge temperature series differently. Hansen uses one temperature series as a baseline and merges it with the next series by accounting for their mean differences. I simply take first differences of all series, since my interest is focused on how much temperature increases or decreases, not on its absolute value. I will also assume for the calculations that Earth is perfectly spherical and that station altitude affects only the absolute value of temperature, not its increase or decrease, and finally I will ignore the urban heat island effect.
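The differencing approach can be sketched as follows. This is my own minimal illustration, not the post's actual code: each series is a hypothetical `{year: temperature}` dict, and instead of offsetting series against a baseline (Hansen's approach), we average the first differences available for each pair of consecutive years and cumulate them into an anomaly curve anchored at 0 in the first year.

```python
def merge_by_differences(series_list):
    """Combine station series by averaging their year-to-year differences.

    Each series is a dict {year: temperature}. Returns a {year: anomaly}
    dict, with the anomaly of the earliest year fixed at 0.0.
    """
    years = sorted({y for s in series_list for y in s})
    anomaly = {years[0]: 0.0}
    for prev, cur in zip(years, years[1:]):
        # only series reporting both years contribute to this step
        diffs = [s[cur] - s[prev] for s in series_list if prev in s and cur in s]
        step = sum(diffs) / len(diffs) if diffs else 0.0
        anomaly[cur] = anomaly[prev] + step
    return anomaly
```

Notice that absolute station temperatures never enter the result, only their changes, which is why altitude and other level offsets drop out of this calculation by construction.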

Let's then name this modified method to average surface temperatures the Icosadeltahedral Bias Average, and let's check in the next plot how its results compare to Hansen's approach:

The results are simply pasted on top of the plot found in Hansen's paper. In his study, Hansen claims global warming of 0.7 ºC from 1880 to 1985 when the heat island effect is ignored, but that increase is twice as big as the one we get with the simple, common-sense enhancements to his methodology described above! With those in place, the increase over that century is a meager 0.3 ºC.

So which method is better, the Hansen Bias method or the Icosadeltahedral Bias method? The Icosadeltahedral one. How do I know? Check the following plot from Hansen's later 2010 paper, Global surface temperature change (J. Hansen, R. Ruedy, M. Sato, K. Lo).

This time the Icosadeltahedral results are in blue, but notice that after Hansen applies his new methodologies, the temperature increase over the period 1880–1985 is… that's right, around 0.3 ºC. So now a question might come to mind… why didn't NASA get it right back in 1987?

Well, let's speculate… Maybe NASA didn't have the resources to access good-quality data? Doubtful. Maybe people at NASA didn't know how to segment a sphere into equal shapes to avoid bias at the poles? Doubtful. Maybe nobody thought of correcting the temperature weights for the fading correlation across Earth's latitudes, which Hansen himself describes in his paper? Doubtful. Nor is it about computing power in the late 80s: I chose a symmetric icosadeltahedral grid with roughly the same number of nodes as Hansen's segmentation precisely because I did not want any extra advantage in my calculations. So, really, why didn't NASA get it right back in 1987?

It seems that, back then, a tiny 0.3 ºC per century, with 1985 maximum temperatures at the same level as those of the 1940s, was a hard sell for the environmentalist agenda and, since in their hearts they knew global warming was happening, they might have pushed the process and the models to fit their hearts… This raises a moral issue beyond science: should scientists relax their standards when they believe they are saving the world?

One of my favorite pieces of Chinese wisdom says: "Be careful what you wish for because you might get it." Well, it turns out this wisdom applies to science as well as to any other aspect of life.

We must be very careful what we model for, because we might get it. Our desire to find results can push us to build procedures and models that favor our hypotheses and, when we add the ideological and activist component to the recipe, the result is a well-deserved lack of public confidence in scientists when Climategates blow up. But don't get me wrong; I trust science… I just don't trust scientists.