Climate models: an essential tool for guiding policy

Climate models have come under scrutiny. Stephen Belcher, the Met Office’s Chief Scientist, explains the nuances of the science and why it is imperative now to reduce atmospheric carbon.

Are climate models accurately predicting global warming?

This question has been hotly debated this week. Whilst climate models do not represent every aspect of our climate, they provide an essential tool for guiding policy by simulating the warming we have seen since the pre-industrial era. So how well do they do?

The figure below shows observations of global warming (in black) compared to the range projected by the group of models submitted to CMIP5, which was used by the IPCC for their latest assessment report on climate change. The 1880-1919 period was chosen here to represent pre-industrial temperatures so that three observational data sets could be used in the comparison. The observations lie comfortably within the modelled range. For example, the warming seen between the 1880-1919 period and 2016 is 1.10 °C in HadCRUT4, 1.19 °C in GISTEMP and 1.12 °C in NOAAGlobalTemp, which compares well with 1.14 °C, the average of the CMIP5 models (which have a 2.5-97.5% range of 0.70-1.58 °C).
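The comparison above can be sketched numerically. This is a minimal illustration using only the figures quoted in the paragraph; it simply checks each observed warming value against the 2.5-97.5% spread of the CMIP5 models:

```python
# Observed 1880-1919 to 2016 warming, in deg C, as quoted in the text.
observed = {"HadCRUT4": 1.10, "GISTEMP": 1.19, "NOAAGlobalTemp": 1.12}

cmip5_mean = 1.14           # average of the CMIP5 models (deg C)
cmip5_range = (0.70, 1.58)  # 2.5-97.5 percentile range of the models (deg C)

for name, warming in observed.items():
    # An observation is "comfortably within" the modelled range if it
    # falls between the 2.5th and 97.5th percentiles.
    inside = cmip5_range[0] <= warming <= cmip5_range[1]
    print(f"{name}: {warming:.2f} deg C, within model range: {inside}")
```

All three datasets fall well inside the modelled range, which is the basis of the comparison in the figure.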

Comparison of simulated temperatures from CMIP5 models (historical and RCP4.5 experiments) with three observed near-surface temperature datasets, as changes from the 1880-1919 baseline. The observational datasets are HadCRUT4 (with an estimate of the uncertainty on the dataset) produced by the Met Office Hadley Centre and the Climatic Research Unit; GISTEMP produced by NASA GISS; and NOAAGlobalTemp produced by NOAA. The model and observational data have been re-analysed to the same coverage as HadCRUT4 to enable fair comparison.

What factors need to be considered in making comparisons between observed and modelled warming?

First, during the so-called slowdown of the 2000s the observations sit within the lower half of the model range. This has prompted questions about whether the models have been warming too much. Analysis at the Met Office shows that these variations are consistent with natural variability associated with the Pacific Decadal Oscillation. We have since seen record-breaking temperatures in 2014, 2015 and 2016, which signal an end to the slowdown and a return to higher warming rates.

Second, the CMIP5 simulations were initiated in 2005, with emissions of greenhouse gases from 2005 onwards estimated from scenarios of socio-economic development. The CMIP5 simulations therefore do not account for the cooling effects of the small volcanic eruptions since 2005, which recent research shows slightly reduce observed warming relative to the models. (Further information on these topics can be found in Ed Hawkins’ blog, and in the Nature journal paper by Medhaug et al.)

Third, the observations themselves contain uncertainties, for example due to the difficulty of estimating temperature changes in poorly-observed regions such as the Arctic and Antarctic, and due to the definition of the pre-industrial period, when the observations have greater uncertainties.

So can we limit warming to the ambitions of the Paris agreement?

Policy makers need to have reliable information about greenhouse gas emissions because there is a direct link between the total amount of CO2 we have emitted and the amount our world warms. By studying this link, it is possible to estimate how much more carbon we can emit while remaining within given levels of warming. There is currently an intense research effort to reduce the uncertainties in this carbon budget.
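The link described above is often summarised as the near-linear relationship between cumulative CO2 emissions and warming. A minimal sketch of the budget arithmetic follows; every number here is an illustrative assumption, not a Met Office figure, and the TCRE-style coefficient is purely hypothetical:

```python
# Illustrative sketch of a remaining carbon budget, assuming a near-linear
# relationship between cumulative CO2 emissions and global warming.
# ALL values below are assumptions chosen for illustration only.

warming_per_1000_gtco2 = 0.45  # assumed deg C of warming per 1000 GtCO2 emitted
warming_so_far = 1.1           # assumed deg C above pre-industrial to date
target = 1.5                   # deg C, the Paris Agreement ambition

# Remaining budget = allowable further warming divided by warming per unit emitted.
remaining_budget = (target - warming_so_far) / warming_per_1000_gtco2 * 1000

print(f"Illustrative remaining budget: ~{remaining_budget:.0f} GtCO2")
```

The point of the sketch is only that small changes in either the warming-per-emission coefficient or the estimate of warming to date shift the remaining budget considerably, which is why reducing those uncertainties matters.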

One recent study has suggested that remaining carbon budgets may be larger than previously thought. (The close agreement between the models and the observations in the figure above does not change this conclusion.) Meanwhile other areas of research suggest they may be smaller. For example, the next generation of climate models will include processes such as the impacts of thawing permafrost, changes to wetlands, and the impact of the nitrogen cycle on plant growth. Early evidence suggests that accounting for these processes will reduce the amount of carbon we can emit while staying within the overall carbon budget and the global warming targets.

The Paris Agreement to limit warming to ‘well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase even further to 1.5 °C’ is driving a need for greater precision in climate science, both in the need to assess warming above pre-industrial levels, and in the need to assess carbon budgets consistent with these targets.

What remains clear, however, is that the aim of limiting warming to 1.5 °C remains a huge global challenge that requires rapid cuts to greenhouse gas emissions.

This entry was posted in Met Office News.

5 Responses to Climate models: an essential tool for guiding policy

  1. craigm350 says:

    Reblogged this on WeatherAction News and commented:
    Climate models have no place guiding policy, unless you are a priest engulfing your hands in entrails and demanding payment for your services. You could at least grow your hair and beards so the public know what we are facing

When I first started in climate modeling as a graduate student in the early 1990s it was understood that if you ever claimed that climate models could actually predict climate it would be the end of your career — the same as if you proposed perpetual motion or cold fusion. Climate models were for studying the various processes of climate…That there are numerous models, that they originally (before comparison) gave significantly different results, and that they are compared, with none hailed as definitive, shows that climate modelers know no model can predict climate, they just want to have their own, for publication and funding advantage. Climate modelers then say that because most of these models now show global warming that proves global warming. However, this ignores that scientists are all-too-human and don’t want to be outliers — after comparison they literally tuned their climate models (this is easily done) to give results more like the rest of the herd. Plus they even started with an assumed result (warming), which is well-known in science to skew results toward that assumption.

  2. nuwurld says:

The graph you are using required the recent El Niño to put models ‘back on track’. For well over a decade the Earth has not been warming as predicted by the models, and two natural events, the PDO and El Niño, are held responsible for the observed temperatures.

Interestingly, for well over a decade (2001 to end 2013), despite all the GHG ‘forcing’, the atmosphere gained no energy, as shown by 300 mb geopotential height anomalies. The atmosphere (70% by mass at 300 mb) cannot gain energy without increasing volume. And between the above dates there was no net expansion as measured by radiosonde, microwave sounding, lidar, infrared and radio occultation techniques.
    This stagnation ended with a failure of central Pacific trade winds allowing warm water to spread with reduced cold water upwelling. This warm water heated the atmosphere which distributed the warm anomaly poleward.

After the El Niño, temperatures have started to fall again, and this year they have fallen significantly from the 2016 peak, as shown in your own more recent HadCRUT4 data.

The credibility of the models now rests again on whether another El Niño builds or, more likely, on how the current central Pacific cold region develops.

But all you have to hold on to is faith that GHGs warm in the manner you programmed into the models.

    The PDO is returning to lower levels whilst the Atlantic remains warm. Should the Atlantic enter its cool phase then the predicted warming pattern of the models is lost.

    Best of luck.

  3. nuwurld says:

    Having replied once already to this post I believe another matter needs to be addressed.

The global mean near-surface temperature anomaly figure is referenced to 1880-1919 as if this represents something significant: some important stable period that the Earth should be at this point in time. This referencing does nothing but create the illusion of unprecedented warming, pure and simple.

Many sensibly constructed Holocene reconstructions place your reference anomaly zero point well below the average temperature of this interglacial. Many geological and biological studies place Little Ice Age ocean temperatures at 1.2 to 1.4 °C below the Holocene Climatic Optimum.

The reference anomaly point you have chosen is then clearly well below the Holocene average temperature and must as a result incorporate a significant bias.

    All persons who have studied climate variations over the Holocene period will be aware of this to some degree.

Natural variation has been swept away by starting at the beginning of the industrial revolution, and by doing this you are creating an illusion: that nature cannot do this (unproven!), that man can (unproven!), and that this is the only available conclusion, which is just plain false and purposefully misleading.

  4. scottdenning says:

    Dear MetOffice Press Office: What’s the big downward jump in the envelope of model simulations around 1960? Are these different simulations before and after that time?

    Thanks for any information. I’m pretty familiar with these models and I don’t think I’ve seen that before. Thanks for getting back to me if you can.

    • Hello Scott
      Apologies for the delay in replying. The dip after 1960 in both the models and the observations is most likely due to the eruption of Mt Agung (1963), but it is possible that there are other factors that contribute. For example, aerosol emissions from industrial pollution in Europe and the USA increased considerably in the 1960s.
