“Greenhouse gas emission targets for limiting global warming to 2°C”

Meinshausen, Meinshausen, Hare, Raper, Frieler, Knutti, Frame and Allen.


More than 100 countries have adopted a global warming limit of 2 °C or below (relative to pre-industrial levels) as a guiding principle for mitigation efforts to reduce climate change risks, impacts and damages. However, the greenhouse-gas (GHG) emissions corresponding to a specified maximum warming are poorly known owing to uncertainties in the carbon cycle and the climate response. Here we provide a comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000–50 period that would limit warming throughout the twenty-first century to below 2 °C, based on a combination of published distributions of climate system properties and observational constraints. We show that, for the chosen class of emission scenarios, both cumulative emissions up to 2050 and emission levels in 2050 are robust indicators of the probability that twenty-first-century warming will not exceed 2 °C relative to pre-industrial temperatures. Limiting cumulative CO2 emissions over 2000–50 to 1,000 Gt CO2 yields a 25% probability of warming exceeding 2 °C, and a limit of 1,440 Gt CO2 yields a 50% probability, given a representative estimate of the distribution of climate system properties. As known 2000–06 CO2 emissions were 234 Gt CO2, less than half the proven economically recoverable oil, gas and coal reserves can still be emitted up to 2050 to achieve such a goal. Recent G8 Communiqués envisage halved global GHG emissions by 2050, for which we estimate a 12–45% probability of exceeding 2 °C, assuming 1990 as the emission base year and a range of published climate sensitivity distributions. Emission levels in 2020 are a less robust indicator, but for the scenarios considered, the probability of exceeding 2 °C rises to 53–87% if global GHG emissions are still more than 25% above 2000 levels in 2020.
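The remaining budgets implied by the abstract's figures can be checked with simple arithmetic (all numbers are taken from the abstract; the reserve comparison is not reproduced here, as the reserve total is not stated):

```python
# Remaining 2007-50 CO2 budgets implied by the figures in the abstract.
emitted_2000_06 = 234      # Gt CO2 already emitted over 2000-06

budget_25pct = 1000        # Gt CO2 budget (2000-50) for a 25% chance of exceeding 2 degC
budget_50pct = 1440        # Gt CO2 budget (2000-50) for a 50% chance of exceeding 2 degC

remaining_25pct = budget_25pct - emitted_2000_06   # budget left after 2006
remaining_50pct = budget_50pct - emitted_2000_06

print(remaining_25pct, remaining_50pct)  # 766 1206
```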

Determining probabilistic climate change for future emission scenarios is challenging, as it requires a synthesis of uncertainties along the cause–effect chain from emissions to temperatures, for example uncertainties in the carbon cycle, radiative forcing and the climate response. Uncertainties in future climate projections can be quantified by constraining climate model parameters to reproduce historical observations of temperature, ocean heat uptake and independent estimates of radiative forcing. By focusing on emission budgets (the cumulative emissions that keep warming below a certain level) and their probabilistic implications for the climate, we build on pioneering mitigation studies. Previous probabilistic studies, while sometimes based on more complex models, either considered uncertainties in only a few forcing components, applied relatively simple likelihood estimators that ignore the correlation structure of the observational errors, or constrained only model parameters such as climate sensitivity rather than allowed emissions.


We use a reduced-complexity coupled carbon cycle–climate model (MAGICC 6.0) that takes (hemispheric) emissions of greenhouse gases, aerosols and tropospheric ozone precursors as its main inputs and calculates atmospheric concentrations, radiative forcings, surface air temperatures and ocean heat uptake. MAGICC has been used extensively in past IPCC assessment reports to emulate higher-complexity models, and is able to closely emulate both the IPCC AR4 CMIP3 AOGCMs and the C4MIP carbon cycle models. We use MAGICC 6.0 here both for future climate projections based on historical constraints and for emulating more complex AOGCMs or carbon cycle models.



The model contains many parameters whose values are uncertain. We examined the impact of 82 parameters on model behaviour. They include the main sources of uncertainty for temperature projections, such as equilibrium climate sensitivity, the dependence of the effective climate sensitivity on forcing, ocean diffusivity and its dependence on the ocean’s vertical warming gradient, the forcing of CO2 at doubled concentrations, the indirect aerosol forcing in 2005 and the respective contributions from SOx, NOx, black carbon and organic carbon emissions. Nine parameters are crucial for the simple energy-balance upwelling-diffusion-entrainment climate model in MAGICC 6.0, while 33 parameters relate to the different gas cycles in MAGICC (not counting the 18 carbon cycle parameters that are calibrated separately1 in order to emulate individual C4MIP carbon cycle models) as well as to global radiative forcing parameters. The remaining parameters that were varied relate to the relative patterns of the radiative forcings across the four spatial boxes: Northern Hemisphere land (NL) and ocean (NO), and Southern Hemisphere land (SL) and ocean (SO) (Table S1). All 82 parameters are summarized in the vector Θ.



Observational Constraints

As one set of observational constraints, we use yearly averaged temperatures in our four grid boxes (Northern and Southern Hemisphere land and ocean) as provided by Brohan et al.2 for the years 1850–2006. We arrange those measurements in a 628-dimensional vector T. Uncertainty in these measurements comprises both an independent error, reflecting measurement uncertainties, and up to two potential bias sources, reflecting urbanization effects and thermometer exposure changes. The independent measurement errors and the bias errors are assumed to follow ‘skewed normal’ distributions with appropriate time-varying variance. Here, a random variable X is called ‘skewed normal’ if log(X−c) has a Gaussian distribution with mean μ and variance σ2. An appropriate choice of c allows a continuous approximation between a Gaussian distribution of X and skewed log-normal distributions. The constants c, μ and σ are chosen so that the reported quantiles of the distribution of X are matched; they are uniquely determined this way. An exception is the possible urbanization bias over land areas, for which we use a truncated normal distribution whose mass on the negative axis is concentrated at zero. The respective space–time dependency is obtained from Brohan et al.2. The contributions from each of the two bias sources are assumed to be perfectly correlated between different years and land or ocean areas, respectively, while the two bias sources are assumed to be independent of each other.
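When the reported 5%, 50% and 95% quantiles of X are given, the three constants have a closed-form solution. The sketch below illustrates this; the function names are ours, and it assumes right-skewed quantiles (for near-symmetric quantiles, c tends to −∞ and the distribution approaches a Gaussian):

```python
import numpy as np

Z95 = 1.6448536269514722  # standard-normal 95% quantile


def fit_skewed_normal(q05, q50, q95):
    """Fit c, mu, sigma of a 'skewed normal' X, i.e. log(X - c) ~ N(mu, sigma^2),
    so that the 5%, 50% and 95% quantiles of X match the reported values.
    Quantiles of X are q_p = c + exp(mu + z_p * sigma), so the ratio of the
    upper to the lower half-range eliminates c and mu."""
    r = (q95 - q50) / (q50 - q05)   # equals exp(Z95 * sigma); requires r > 1
    sigma = np.log(r) / Z95
    mu = np.log((q95 - q50) / (np.exp(Z95 * sigma) - 1.0))
    c = q50 - np.exp(mu)
    return c, mu, sigma


def sample_skewed_normal(c, mu, sigma, size, rng):
    """Draw samples of X = c + exp(N(mu, sigma^2))."""
    return c + np.exp(rng.normal(mu, sigma, size))
```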



Internal climate variability has to be taken into account when comparing model results under various parameter settings to observed surface temperatures. We use the full-length control runs of all AOGCMs available at PCMDI (http://www-pcmdi.llnl.gov/, as of mid 2007) to assess internal variability. For each AOGCM, we estimate the 628×628-dimensional covariance matrix Σ of the observations T resulting from measurement error and internal climate variability, under the constraint of stationarity of the temperature time series.
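One simple way to impose stationarity when estimating a covariance matrix from a control run is to assume that the covariance between two years depends only on their lag, which yields a Toeplitz matrix. The sketch below illustrates this for a single grid box; it is an illustration of the idea, not the paper's exact estimator (which also folds in the measurement-error covariance):

```python
import numpy as np
from scipy.linalg import toeplitz


def stationary_covariance(control_run, n_years, max_lag=None):
    """Estimate an n_years x n_years covariance matrix for one grid box from a
    long control run, assuming stationarity: Cov(T_i, T_j) depends only on |i - j|."""
    x = np.asarray(control_run, dtype=float)
    x = x - x.mean()
    n = len(x)
    max_lag = n_years - 1 if max_lag is None else max_lag
    acov = np.zeros(n_years)
    for lag in range(min(n_years, max_lag + 1)):
        # sample autocovariance at this lag (biased estimator, divides by n)
        acov[lag] = np.dot(x[:n - lag], x[lag:]) / n
    return toeplitz(acov)
```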



Even if we were assuming a multivariate normal distribution for the internal climate variability of T (which we do not), we would still be faced with estimating a high-dimensional covariance matrix Σ and using its inverse in the calculation of the likelihood. As the control runs are of limited length, the associated structural errors in the estimate of internal climate variability can affect the inverse of Σ and hence the likelihood substantially. We address this by projecting the 628-dimensional vector of temperature measurements onto a low-dimensional subspace. Optimizing a signal-to-noise ratio for this projection is dangerous, as discussed in Allen & Tett3, as one is then likely to pick up directions which do not show sufficient variance in the GCM control runs. As we also want to avoid projecting onto dimensions that cannot be modelled by MAGICC, we simply search for the m-dimensional projection (with low m) which best preserves the MAGICC signal. To this end, we draw 5,000 parameter vectors Θ from the prior distribution of Θ, compute the MAGICC output T under all these parameter sets and perform a principal component analysis (PCA). We choose m so that 99.95% of the MAGICC variance is preserved and find that m=8 is sufficient; our findings are insensitive to this choice. We then find the m×628-dimensional matrix Pm, which corresponds to the projection of T onto the space spanned by the first m PCA components. The likelihood is finally based on the m-dimensional vector Tm=PmT instead of the 628-dimensional vector T. Note that each PCA component is spatio-temporal in nature, to achieve maximum dimensionality reduction while retaining most of the signal in MAGICC.
We now assume that the internal variability of Tm has a Gaussian distribution (which is more plausible than assuming a Gaussian distribution for T directly, owing to the projection involved) and estimate the m×m-dimensional covariance matrix Σm as Pm Σ Pmᵀ, where Σ is the previously derived covariance matrix of the observations and Pmᵀ denotes the transpose of Pm.
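This projection step can be sketched as follows, with toy dimensions standing in for the 5,000 MAGICC runs and the 628 observations; all names are ours:

```python
import numpy as np


def projection_matrix(ensemble, var_retained=0.9995):
    """Given an (n_samples x n_obs) ensemble of model outputs under parameter
    draws from the prior, return the (m x n_obs) projection matrix P_m onto the
    leading principal components retaining `var_retained` of the ensemble variance."""
    X = ensemble - ensemble.mean(axis=0)
    # SVD of the centred ensemble: rows of Vt are the principal directions.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)          # cumulative variance fraction
    m = int(np.searchsorted(frac, var_retained)) + 1
    return Vt[:m]                                   # P_m


def project(P_m, T, Sigma):
    """Project observations and their covariance: T_m = P_m T, Sigma_m = P_m Sigma P_m^T."""
    return P_m @ T, P_m @ Sigma @ P_m.T
```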



Ocean heat uptake is considered only via its linear trend Z1 of +0.3721 × 10²² J yr⁻¹ (1σ: ±0.0698 × 10²² J yr⁻¹) for the heat content over 1961–2003 down to 700 m depth4. See Figure S2 for the match between the constrained model results and the observational data by Levitus et al.5 as well as more recent results by Domingues et al.4.

Radiative forcing estimates as listed in Forster et al. (Table 2.12 therein)6 provide an additional set of 17 constraints Z2,...,Z18 (Table S2). The error of 14 of these radiative forcing estimates is assumed to have a Gaussian distribution, as the 5% and 95% quantiles of the error distribution given in Forster et al.6 do not suggest skewness of the error. The remaining 3 observational constraints are modelled by a ‘skewed normal’ (see above). Using a Gaussian distribution instead yields very similar results, however, and none of our conclusions is affected by the details of the distribution chosen. All radiative forcing uncertainties are assumed to be independent.

Given that MAGICC 6.0 has substantially more freedom to change the effective climate sensitivity over time1 than what is observed from AOGCM diagnostics, we introduce another constraint Z19. This constraint limits the twenty-first-century change in effective climate sensitivity, expressed as the ratio of the average effective climate sensitivities in the periods 2050–2100 and 1950–2000. Based on AOGCM CMIP3 model emulations1, we derive a distribution for this constraint with a median at 1.23 and a 90% range from 1.06 to 1.51 under the SRES A1B scenario.

Likelihood Estimation

To calculate the likelihood, the observations are split into the projected temperature observations Tm and the remaining observational constraints Z1,...,Z19. Let f be the density of temperature observations under a given parameter setting Θ, taking into account both the measurement errors and internal climate variability. Let hk, k=1,…,19, be the density functions of the remaining observational constraints. Under independence of Z1,...,Z19 and T, the likelihood L(Θ) of model parameters Θ is given by:

L(Θ) = f(Tm | Θ) · ∏k=1,…,19 hk(Zk | Θ).
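Such a product of independent densities is most conveniently evaluated as a sum of log-densities. The sketch below assumes, for simplicity, a multivariate-normal density for Tm and Gaussian densities for all 19 constraints (three are ‘skewed normal’ in the paper); `run_model` stands in for MAGICC and all names are ours:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm


def log_likelihood(theta, T_obs_m, Sigma_m, z_obs, run_model):
    """Log of L(theta) = f(T_m | theta) * prod_k h_k(Z_k | theta):
    a multivariate-normal term for the projected temperatures T_m plus
    independent (here Gaussian) terms for the additional constraints Z_k."""
    # Model response under this parameter setting: projected temperatures,
    # plus central values and 1-sigma widths for each constraint.
    T_model_m, z_model, z_sigma = run_model(theta)
    logL = multivariate_normal.logpdf(T_obs_m, mean=T_model_m, cov=Sigma_m)
    logL += np.sum(norm.logpdf(z_obs, loc=z_model, scale=z_sigma))
    return logL
```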
