Climate Observations

Meteorologist Chris Brierley on the first measurements of climate, weather forecasts, and how we can double-check the accuracy of old climate readings

December 2, 2020

I’d like to tell you a bit about global warming: how we know the world has warmed, and how we know the climate itself has changed over the course of the past 150 years or so.

So, the world has warmed by about a degree on average since the pre-industrial period. The pre-industrial period stretches from the 1600s through to 1850–1900, with 1850–1900 being the official baseline definition. Nonetheless, we only really have good information from direct measurements from the 1850s onwards. Those very first measurements were taken with thermometers: clearly, any measurement of temperature is taken with some kind of thermometer, because that’s what a thermometer is, but here I mean instrumental mercury thermometers.

The longest record we have is based in central England: it covers a notional triangle containing Oxford, London and bits of Gloucester, and that’s where interested amateurs with their little thermometers were going out and reading the weather every day. We have enough to be able to say how cold it was at a monthly resolution back to about the 1660s, which is when these instruments were really being invented, and enough accurate readings to know the temperature of the central England area on a daily basis going back to the 1770s. I think those are the longest instrumental records. We also have records of other observations that are clearly associated with climate and weather, like when cherry trees blossom: there’s a lovely record of that from Japan that goes back to the 800s. So we have material that clearly tells you about climate.

But actually, direct measurements of climate only go back to the 1660s, and only really in this small portion of central England. Then you start getting more and more thermometers coming online as you move through the scientific revolution associated with the Industrial Revolution. Once you get to about the 1850s, we start having enough measurements to at least have a guess at what the global temperature is. These temperatures come from a rather technical blend of measurements: on land they are taken from the air in a standard weather enclosure called a Stevenson screen, a box about 1.8 meters high that you can look into and read the temperature from. If you’re on a ship, the measurement is taken from the ocean surface, so it’s a sea surface temperature. Although the two temperatures differ, the anomalies line up: on a warm day it’s going to be warm at the surface of the ocean and warm in the atmosphere above it, even though the absolute temperatures themselves might be slightly offset.
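The reason offset records can be blended is the anomaly method: subtract each site’s own long-term average, so that records that differ in absolute temperature become comparable departures from “normal”. A toy sketch of the idea, with entirely made-up numbers:

```python
# Sketch of the anomaly method: subtract each site's own long-term
# average (its "climatology") so that sea surface and air temperature
# records, offset in absolute terms, become comparable anomalies.

def anomalies(readings, climatology):
    """Departures of each reading from the site's long-term average."""
    return [r - climatology for r in readings]

sea = [18.4, 18.9, 19.1]   # hypothetical sea surface temps, deg C
air = [14.2, 14.7, 14.9]   # hypothetical air temps at a nearby coast

# Both series show the same warm spell once expressed as anomalies.
print([round(a, 1) for a in anomalies(sea, 18.5)])  # [-0.1, 0.4, 0.6]
print([round(a, 1) for a in anomalies(air, 14.3)])  # [-0.1, 0.4, 0.6]
```

Even though the two series are offset by about four degrees in absolute terms, their anomalies agree, which is what makes a blended land-and-ocean record possible.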

People have done really diligent work digging out all of these records and digitizing them. There’s a process referred to as homogenization. You know that the instruments and their surroundings change slightly over time. An example close to us here in London is the weather station at Kew Gardens, which sits in a wonderful arboretum on the edge of London. It was right at the outskirts of London when it was set up, but London has expanded. The station is still in an arboretum, still with trees directly around it, but clearly there is a lot more London in the area, and that means the urban heat island can affect these temperature readings. We need to know how to remove those effects. That is what homogenization does: if it were done purely by computer, you might call it machine learning, but it is often done by hand, looking at these time series, picking out errors in the raw data and correcting for them.
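One common homogenization step is to compare a candidate station against a nearby reference station and look for step changes in the difference series, which flag a relocation, instrument change or urban influence. The sketch below is a deliberately minimal, hypothetical version of that idea, not any operational algorithm:

```python
# Toy homogenization sketch: find the single largest step change in
# the difference between a candidate station and a nearby reference.
# Real schemes are far more sophisticated; this just shows the idea.

def find_breakpoint(candidate, reference):
    """Return (index, shift) of the largest mean shift in the difference series."""
    diff = [c - r for c, r in zip(candidate, reference)]
    best_k, best_shift = None, 0.0
    for k in range(1, len(diff)):          # try every possible split point
        before = sum(diff[:k]) / k
        after = sum(diff[k:]) / (len(diff) - k)
        if abs(after - before) > abs(best_shift):
            best_k, best_shift = k, after - before
    return best_k, best_shift

# Synthetic example: a 0.5-degree jump introduced at index 30,
# mimicking, say, a station relocation.
reference = [10.0] * 60
candidate = [10.0] * 30 + [10.5] * 30
k, shift = find_breakpoint(candidate, reference)
print(k, round(shift, 2))  # 30 0.5
```

Once a breakpoint and its size are identified, the earlier (or later) segment can be adjusted by that shift so the whole series is consistent.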

I gave the example of Kew Gardens, but that’s possibly not even the best example. The one I’ve always been most amazed by concerns how, around the Second World War, people were recording the temperature of the surface of the ocean. The way you do that is to throw a bucket over the side, pull the bucket up, stick a thermometer in it and measure the water. That’s fairly standard, and it’s the way it was always done until maybe 30 years ago, when more sophisticated techniques were introduced to do it automatically.

But during and after the Second World War, ships stopped using wooden buckets and started using aluminium or other metal buckets. As you pull a metal bucket up, it loses more heat, so there have actually been really very diligent studies: repeatedly throwing both a wooden bucket and a metal bucket over the side, pulling them both up, and working out precisely what sort of bias that might introduce into the global record, then working out when different ships and different navies changed what kind of bucket they were using, and bringing even that very technical detail into the record.
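Once the side-by-side trials give an estimate of how much extra heat a metal bucket loses, the correction itself is conceptually simple: add the estimated loss back to the readings flagged as coming from metal buckets. The sketch below uses an illustrative 0.3-degree figure that is purely hypothetical:

```python
# Hypothetical sketch of a bucket-bias adjustment. The cooling value
# is illustrative only, not a real published correction.

METAL_BUCKET_COOLING = 0.3  # deg C lost while hauling up a metal bucket (made up)

def adjust_sst(readings):
    """readings: list of (temperature, bucket_type) tuples; returns adjusted temps."""
    adjusted = []
    for temp, bucket in readings:
        if bucket == "metal":
            adjusted.append(temp + METAL_BUCKET_COOLING)  # undo the extra heat loss
        else:
            adjusted.append(temp)                          # wooden buckets left as-is
    return adjusted

raw = [(15.2, "wooden"), (14.9, "metal"), (15.0, "metal")]
print([round(t, 1) for t in adjust_sst(raw)])  # [15.2, 15.2, 15.3]
```

The hard part historically was not this arithmetic but working out, ship by ship and navy by navy, which bucket was in use when.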

As I said, the world has warmed by about a degree, but on any given day the weather may warm by 10 degrees over the course of that single day. So there is an obvious warming of the climate, but it’s quite small compared to the noise, and you need to be really careful about how you treat the readings to pick it out.
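A simple least-squares fit illustrates why a small trend is still recoverable from much larger swings: over a long enough record, cyclical variability averages out while the trend accumulates. This is a toy sketch with synthetic data, not any real climate series:

```python
# Sketch: recovering a small imposed warming trend (0.01 deg/year)
# from synthetic monthly data whose seasonal cycle swings +/- 8 degrees,
# using an ordinary least-squares slope.
import math

years = [k / 12 for k in range(150 * 12)]  # monthly steps over 150 years
temps = [10.0 + 0.01 * t + 8.0 * math.sin(2 * math.pi * t) for t in years]

n = len(years)
mean_t = sum(years) / n
mean_T = sum(temps) / n
slope = sum((t - mean_t) * (T - mean_T) for t, T in zip(years, temps)) \
        / sum((t - mean_t) ** 2 for t in years)
print(round(slope, 3))  # close to the imposed 0.01 deg/year
```

The fitted slope comes out close to the imposed 0.01 degrees per year even though any individual month is dominated by the seasonal cycle; a century and a half of data is what makes the signal separable from the noise.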

That sort of accurate measurement through weather observation is still continuing. The number of weather stations peaked around the 1980s, and since then we’ve had other ways of measuring weather, so we’ve scaled back on stations. Those other ways are either more automated or, more importantly, come from satellites, the other major source. In the 1950s, not only did the space race mean we were sending things up into space, but people realized quite quickly that once something is up there, you don’t have to look out at space; you can turn around and look at the Earth, see what’s going on, and make some quite snazzy measurements.

So the satellite era started in the 1970s, but alongside the space race came real pushes forward in computing, and with them the birth of numerical weather prediction. Prior to that, the way you would do a weather forecast would be to record all of the observations and have them telephoned or telegraphed to your weather forecasters, who would either look through some charts or, more usually, use some mental filing system and think: this looks like what happened on the 7th of May 40 years ago, so it’s most likely going to progress that way, with some expert judgment going into it.

But from the 1950s onwards, you really start seeing the birth of numerical weather forecasting and the real power of computers. One of the wonderful things about computers is that they can ingest an awful lot more information. That information comes in through a process called data assimilation: the satellite talks directly to the weather forecasting model, and you need to account for the fact that the satellite information might contain errors, and you need to know when it came in. There is a very sophisticated set of mathematical algorithms to deal with this, and that is the data assimilation process.

What that does is take your model forecast and nudge it towards the observations to give your best guess of the atmospheric state at this very instant. When you want to make a forecast going forward, you need to know what the weather is now; that’s the key thing, so data assimilation provides an analysis of now, your best starting point for the forecast. That same machinery has now been applied to look backwards, and the earliest of these reanalyses go back to about the 1870s. There’s not as much data being ingested back then, so there’s more of the model and it’s less constrained by the data, simply because there’s not as much data. But now we’re seeing homogeneous datasets of not just the surface warming but of the whole atmosphere, with a physically-based weather forecasting model making the guesses in between rather than, say, some simple interpolation scheme. There are big citizen-science efforts, such as weather rescue projects, where volunteers transcribe as many old observations as possible to be fed into these reanalyses of the climate system.
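The nudging idea at the heart of data assimilation can be reduced to a single weighted blend: the analysis is the forecast pulled part-way towards each observation, with the weight reflecting how much you trust the observation relative to the model. This is a toy scalar sketch; real schemes (variational assimilation, Kalman filtering) generalise this weight across millions of variables:

```python
# Minimal sketch of "nudging" in data assimilation: blend a model
# forecast with an observation, weighted by trust in the observation.
# A toy scalar version, not an operational scheme.

def assimilate(forecast, observation, obs_weight):
    """Analysis = forecast pulled towards the observation (0 <= obs_weight <= 1)."""
    return forecast + obs_weight * (observation - forecast)

# Toy assimilation cycle: step a trivial "model" forward, then nudge
# the state towards each incoming observation in turn.
state = 12.0                     # initial temperature guess, deg C
observations = [12.6, 13.1, 12.8]
for obs in observations:
    forecast = state + 0.2       # stand-in for one model forecast step
    state = assimilate(forecast, obs, obs_weight=0.5)
print(round(state, 2))
```

With `obs_weight=0` the observations are ignored entirely (pure model), and with `obs_weight=1` the model is overwritten by the observations; reanalysis of the sparsely observed 1870s sits much closer to the model end of that spectrum than a modern analysis does.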

I hope that’s described a little of how we take the temperature of the climate and the amount of effort we need to put in to do it accurately enough to detect these climate changes. But I can’t help feeling that it’s a bit like a doctor working out how ill a patient is getting and just watching that process: just watching the world get warmer and warmer, charting its demise. What is quite important is tying that more detailed knowledge of how the world is changing, and has changed, to justifying action going forward, so that we don’t just keep watching it warm up and up and up, but actually start mitigating climate change, reducing our emissions and getting the temperature to level off. Then, hopefully, we can follow the temperature as it levels off rather than just watching it climb.


PhD in Meteorology, University College London