Daily Archives: May 4th, 2012

HadSST3: A detailed look

Skepticalscience

Posted on 5 May 2012 by Kevin C

The Hadley Centre of the UK Meteorological Office has for a number of years maintained a dataset of sea surface temperatures (SSTs), HadSST2, which has formed the basis for estimating global surface temperatures. The HadSST2 dataset was used in the widely quoted HadCRUT3 temperature record, as well as forming the basis for an interpolated record, HadISST, which is used along with ERSST in NASA’s GISTEMP record. The source data come from the International Comprehensive Ocean Atmosphere Data Set (ICOADS), which includes historical records from many sources.

The SST data are a little more complex than the weather station data with which most of us are familiar: whereas temperature measurements at weather stations have been performed according to a standard protocol for over a century, measurement methods for SST data have changed significantly over the same period. Early measurements were taken using a canvas bucket trailed in the water, or, later, a better-insulated wooden or rubber bucket. Later measurements were taken from engine room intakes, hull sensors, or buoys. The different methods have different biases, and thus significant corrections are required to produce a stable temperature series. The HadSST2 record included a ‘bucket correction’ for data collected before 1942, to correct for a known cool bias in the data.

This year, the Hadley Centre released a new version of this dataset, HadSST3, based on additional data and, more importantly, some additional corrections. These are described in Kennedy et al. (2012).

Temperature bias

A number of studies have looked at the sources of bias in SST measurements. Kennedy et al provide a review of the literature, and identify the following key issues:

  • Canvas buckets allow significant evaporation while hauling in the bucket, cooling the water and leading to a measurement which is biased low.
  • Wooden or rubber buckets reduce the evaporation effect.
  • Engine room sensors take their samples from deeper water, but the water is heated by the pipework, and so the resulting measurements are most often biased high.
  • Hull sensors suffer similar problems to engine room sensors, although the effect is probably smaller – this has not been widely studied.
  • Buoys tend to be more consistent.

These effects have been quantified; for example, engine room intake (ERI) temperatures have been compared to bucket measurements by a number of studies. Engine room temperatures can also be checked directly against buoy measurements by mining the ICOADS data for examples of a ship passing close to a buoy. Reasonable estimates are therefore available for the temperature biases; however, there are still uncertainties, with ERI measurements in particular varying from ship to ship and with loading.
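As an illustration of the kind of co-location comparison described above, here is a minimal Python sketch. The field names and thresholds are illustrative assumptions, not the actual ICOADS record format or the Hadley Centre's processing code; it simply pairs ship and buoy reports falling within chosen distance and time windows and averages the temperature differences:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points, in kilometres.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2)**2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2)**2
        return 2 * 6371.0 * asin(sqrt(a))

    def mean_eri_minus_buoy(ship_obs, buoy_obs, max_km=50.0, max_hours=6.0):
        # Each observation is a dict with 'lat', 'lon', 'time' (hours) and 'sst' keys
        # (an assumed structure for this sketch). Returns the mean ERI-minus-buoy
        # difference over all co-located pairs, or None if no pairs are found.
        diffs = []
        for s in ship_obs:
            for b in buoy_obs:
                if (abs(s['time'] - b['time']) <= max_hours and
                        haversine_km(s['lat'], s['lon'], b['lat'], b['lon']) <= max_km):
                    diffs.append(s['sst'] - b['sst'])
        return sum(diffs) / len(diffs) if diffs else None

A positive mean difference would indicate the warm bias expected from engine room intakes; in practice the matching would be done ship by ship, since the ERI bias varies from vessel to vessel.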

How often is each method used?

The effect of these biases will vary according to how often each method is used at any point in time. In many cases the measurement method is recorded in ICOADS metadata; in other cases it must be inferred from other sources, such as the standard operating procedures of the nation operating the ship. Kennedy et al have examined the available records and devised a set of rules to determine which method is most likely to have been used for an unclassified measurement. Combining these classifications with the known cases gives rise to an estimate of the proportion of measurements made using a given method in each year. This is illustrated in Figure 1.

Figure 1: Proportional usage of different measurement types

This is a re-plotting of Kennedy et al Figure 2, to sort the measurement types according to bias. The grey region represents measurements of unknown type.

The black line in Figure 1 gives an approximate indication of the correction required. When all the data come from cool-biased buckets, a positive adjustment is required; when they come from warm-biased engine room intakes, a negative adjustment is required. There is a big shift from buckets to engine room intakes in 1941-1942; this is the ‘bucket correction’ implemented in existing datasets such as HadSST2. The big change identified in this work is a shift back to buckets in the mid-1940s. This corresponds to a switch from using US ships to UK ships, with a corresponding change in operating procedures. The resulting discontinuity in the temperature record has been known for a while, but the cause was first identified by Thompson et al (2008).
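To make the idea behind that black line concrete, here is a minimal sketch of a usage-weighted correction. The bias values below are illustrative placeholders only, not the values estimated by Kennedy et al; the point is simply that the required adjustment is roughly minus the usage-weighted mean bias of the methods in use in a given year:

    # Illustrative method biases in deg C (placeholders, not Kennedy et al's values):
    # canvas buckets read cool, engine room intakes read warm, buoys are near neutral.
    method_bias = {'canvas_bucket': -0.3, 'insulated_bucket': -0.1,
                   'eri': +0.2, 'hull': +0.1, 'buoy': 0.0}

    def net_adjustment(usage_fractions, biases=method_bias):
        # Approximate correction = minus the usage-weighted mean bias.
        return -sum(frac * biases[method] for method, frac in usage_fractions.items())

    # A bucket-dominated year needs a positive adjustment,
    # an ERI-dominated year a negative one.
    print(net_adjustment({'canvas_bucket': 0.8, 'eri': 0.2}))   # ~ +0.20
    print(net_adjustment({'canvas_bucket': 0.2, 'eri': 0.8}))   # ~ -0.10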

Note however that this figure does not tell the whole story – for example it ignores the distribution of measurements across the globe, and the transition from canvas to insulated buckets between 1954 and 1970. Kennedy et al take these and other factors into account, and the resulting adjustment is shown in Figure 2.

Figure 2: Bias adjustments in HadSST2 and HadSST3

The HadSST2 and HadSST3 adjustments are in good agreement until 1940; after that HadSST2 assumes that the data are homogeneous, while HadSST3 makes corrections for the continuing changes in the mix of observations. From the late 1940s the increasing use of first insulated buckets and then buoys means that the size of the adjustment has declined. However, in recent decades the switch from engine room temperatures to buoy measurements has introduced a small shift towards a cool bias, and a correspondingly positive correction.

How does this play out in the global SST series? The difference between HadSST3 and HadSST2 is shown in Figure 3, along with the change in the adjustments.

Figure 3: Change in adjustment from HadSST2 to HadSST3

The green line in Figure 3 is the difference between the red and green lines in Figure 2. Clearly the bulk of the difference between the two datasets is due to the new adjustments. The remainder comes from additional records which have been digitized and added to the ICOADS database.

Why stop at 2006?

The currently released data runs up to 2006. After that time ship identifiers were removed from the ICOADS records for security reasons.

Uncertainties

One more complex feature of the HadSST3 data is the provision of an ensemble of possible ‘realizations’. This is related to determining the uncertainty in an estimate of global mean sea surface temperature, or of a trend over multiple years. The simplest approach is to attach an uncertainty estimate to each map grid cell for each month. That works fine if the errors behave like measurement errors, because measurement errors are independent – if you average them over the globe they tend to cancel out, and the global mean thus has a lower uncertainty than an individual measurement. Some biases (referred to as microbiases in the paper) also behave in this way. However others, such as the bias due to sampling method, may affect whole groups of observations in the same way.

The corrections for these biases also have uncertainties. If a bias correction is made which applies to all the measurements in a particular month, the resulting uncertainty in the global temperature will be just as big as the corresponding uncertainty in any individual cell. The issue becomes even more complex with bias corrections which are correlated from month to month. Bias corrections which are stable over time will have a big effect on time averages, but no effect on trends. Conversely, bias corrections which vary over time can have a big effect on trends.
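A toy numerical illustration of that distinction (a sketch, not taken from the paper): averaging N cells with independent errors shrinks the resulting uncertainty roughly as 1/sqrt(N), whereas an error shared by every cell does not shrink at all, so it dominates the uncertainty of the global mean:

    import random

    N, sigma_indep, sigma_shared = 1000, 0.5, 0.1
    trials = []
    for _ in range(5000):
        shared = random.gauss(0.0, sigma_shared)           # one bias hitting every cell
        cells = [shared + random.gauss(0.0, sigma_indep)   # plus independent noise per cell
                 for _ in range(N)]
        trials.append(sum(cells) / N)                      # error of the "global mean"

    mean = sum(trials) / len(trials)
    spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
    # Expected spread ~ sqrt(sigma_shared**2 + sigma_indep**2 / N) ~ 0.10:
    # dominated by the shared (correlated) term, not the 1000 independent errors.
    print(round(spread, 3))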

A common mathematical approach to this problem is to calculate the ‘covariances’ of all the variables – i.e. how the temperature of every map cell for every month is related to every other through the common corrections. For the SST problem this approach is infeasible, requiring of the order of a billion covariances. Instead, Kennedy et al have created 100 ‘realizations’ of the data using different values for the various bias corrections, sampling the uncertainty range of each correction. To estimate the uncertainty in an average or a trend, all that is required is to calculate the average or trend for each of the 100 realizations, and estimate the uncertainty from the distribution of the results.
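The following sketch shows how such an ensemble is meant to be used; the data here are synthetic stand-ins generated around an assumed trend, not the released HadSST3 realizations. The statistic of interest (here a linear trend) is computed for each member, and the median and a 2.5th–97.5th percentile range are read off the resulting distribution:

    import random

    def ols_slope(years, values):
        # Ordinary least-squares slope, in deg C per year.
        n = len(years)
        ybar, vbar = sum(years) / n, sum(values) / n
        return (sum((y - ybar) * (v - vbar) for y, v in zip(years, values)) /
                sum((y - ybar) ** 2 for y in years))

    years = list(range(1998, 2007))
    # Stand-in ensemble: 100 synthetic realizations around an assumed 0.012 C/yr trend.
    ensemble = [[0.012 * (y - 1998) + random.gauss(0.0, 0.03) for y in years]
                for _ in range(100)]

    trends = sorted(ols_slope(years, series) for series in ensemble)
    median = trends[len(trends) // 2]
    low, high = trends[2], trends[97]        # ~2.5th and ~97.5th percentiles of 100 members
    print(median * 10, low * 10, high * 10)  # expressed as deg C per decade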

An important implication is that uncertainties due to uncorrected biases (which are often correlated over time) will not usually be picked up by post-hoc uncertainty estimates, such as the estimates produced by the Skeptical Science trend calculator.

Implications

A single temperature series can be constructed from the ensemble of realizations by some kind of average – in the case of HadSST3 the median (a robust estimator) is used. The resulting series for HadSST3 and HadSST2 are compared in Figure 4. (Note that these series are global averages as opposed to the NH/SH average usually quoted by Hadley/CRU.)

Figure 4: Comparison between HadSST2 and HadSST3

The most important result from a climate perspective is the correction of the discontinuity in the mid 40’s. Climate models have consistently failed to reproduce this feature of the temperature record from any known set of climate forcings. If the adjustment incorporated by Kennedy et al is correct then the discrepancy is at least partly accounted for by bias in the temperature record rather than a problem with the models. However natural variability and uncertainties in the observations and forcings mean that the issue is far from clear-cut.

A more topical issue is the temperature trend over the last decade or so, with the trend since 1998 receiving frequent attention. The HadSST2 trend over the 9-year period 1998-2006 is 0.08°C/decade. The median trend from the HadSST3 ensemble is 0.12°C/decade; however, the 95% uncertainty interval obtained from the whole HadSST3 ensemble ranges from 0.10 to 0.16°C/decade, and this does not include the substantial statistical uncertainty in the trend itself. Thus, while it is more likely that the recent trend has been under- rather than overestimated, there remains significant uncertainty. This highlights the need for longer periods of data when assessing climate trends.

Conclusions

Oceans make up 70% of the Earth’s surface, and so accurate sea surface temperature measurements are important in understanding the temperature changes of the last century and a half. HadSST3 is an important contribution to that understanding. However, the problem is not simple, and the bias problems present a very different challenge from that posed by weather station records. Kennedy et al summarize the current state of knowledge in the following way:

"It should be noted that the adjustments presented here and their uncertainties represent a first attempt to produce an SST data set that has been homogenized from 1850 to 2006. Therefore, the uncertainties ought to be considered incomplete until other independent attempts have been made to assess the biases and their uncertainties using different approaches to those described here."

Acknowledgements

The author would like to thank John Kennedy at the Hadley Centre for data and suggestions concerning this article.

View article…

Hate campaign against climate scientists hits the denier spin-cycle « Graham Readfearn

http://www.readfearn.com

RIGHT now, as I type, we’re in the middle of the global dissemination of a gross misrepresentation of facts.

The line currently being spun by climate change sceptic commentators and bloggers is that climate change scientists have lied about getting death threats.

At the same time a campaign of systematic abuse of climate scientists in an attempt to get them to withdraw from public debate is being ignored.

This spin-cycle started yesterday in The Australian, with a story reporting the findings of a report from the Privacy Commissioner Timothy Pilgrim.

Mr Pilgrim ordered that 11 documents turned up through a Freedom of Information request to the Australian National University could, against the wishes of the university, be released to the public.

Mr Pilgrim concluded that 10 of the 11 documents “contain abuse in the sense that they contain insulting and offensive language” but did not contain “threats to kill or threats of harm”.

Oh. Well that’s OK then?

One email, the commissioner said, described an “exchange” during an “off-campus” event. The commissioner said the exchange “could be regarded as intimidating and at its highest perhaps alluding to a threat”, adding that the “danger to life or physical safety” was “only a possibility, not a real chance”.

In the report, Mr Pilgrim added: “In my view, there is a risk that release of the documents could lead to further insulting or offensive communication being directed at ANU personnel or expressed through social media. However, there is no evidence to suggest disclosure would, or could reasonably be expected to, endanger the life or physical safety of any person.”

Climate sceptic commentators and bloggers have taken this decision to mean that climate scientists have not received death threats and, on the face of it, that might seem like a fair conclusion.

Except they’ve ignored two key facts which undermine their conclusion.

The first is that the FOI request only asked for correspondence covering a six-month period from January to June 2011. What’s more, the request only asked for correspondence regarding six ANU academics. The report from the Privacy Commissioner made this clear.

Secondly, the original investigation which sparked the FOI request, published in The Canberra Times, found more than 30 climate scientists had received threats or abuse of one kind or another at universities across Australia and that this campaign had been going on for years. It wasn’t news to some of us. None of the emails I published on my blog were from scientists at ANU.

Despite the narrow nature of the FOI request and the foul nature of the campaign, sceptic blogger Jo Nova was utterly beside herself claiming the Privacy Commissioner’s report had shown that the campaign of intimidation didn’t exist.

Anthony Watts wrote the claims were entirely “manufactured” with “not a single document” to back it up.

James Delingpole said there had been no death threats “whatsoever” during the campaign, and then went on to trivialise reports that Professor Phil Jones, of the University of East Anglia, had considered suicide.

At Catallaxy Files, senior IPA fellow Sinclair Davidson said the threats “never happened” and were a lie.

All of these reports, no doubt hastily compiled but with a total lack of care or compassion, failed to take into account that the FOI request was so narrow that it couldn’t possibly back up their conclusions.

Sounds to me a little bit like cherry-picking one particular piece of climate data to try and construct an argument, while ignoring all the other evidence around them.

We still don’t even know what the documents in this selective trove actually say because the ANU has not yet released them, saying instead that it is “reviewing the report” and “considering our options”.

The question of whether the abuse constitutes a “death threat” is a red herring.

When climate researchers have their children threatened with sexual abuse, have their cars smeared with excrement and get emails telling them they’re going to “end up collateral damage”, then what else is it but a hate campaign?

In my view, the campaign of abuse is designed to intimidate climate scientists, discourage them from engaging with the public and discourage them from carrying out their research. Failing to condemn it shows just how low the climate change debate has become.

Read more

PHYSorg.com : Ocean acidification will likely reduce diversity, resiliency in coral reef ecosystems: …

BIGTIX recommends the following story from Phys.Org:

Ocean acidification will likely reduce diversity, resiliency in coral reef ecosystems: new study http://phys.org/news/2011-05-ocean-acidification-diversity-resiliency-coral.html

BIGTIX’s comment:
no comment.

PHYSorg.com : Ocean acidification turns climate change winners into losers: research

BIGTIX recommends the following story from Phys.Org:

Ocean acidification turns climate change winners into losers: research http://phys.org/news/2012-02-ocean-acidification-climate-winners-losers.html

BIGTIX’s comment:
Let’s say Ocean Neutralization.
The seawater pH is generally higher than 7; it fluctuates between about 7.8 and 8.1. Precipitation would cause pH to decline. And this is not acidification. This is neutralization.

PHYSorg.com : Climate change study warns against one-off experiments

BIGTIX recommends the following story from Phys.Org:

Climate change study warns against one-off experiments
http://phys.org/news/2012-02-climate-one-off.html

BIGTIX’s comment:
no comment

PHYSorg.com : Global change puts plankton under threat

BIGTIX recommends the following story from Phys.Org:

Global change puts plankton under threat
http://phys.org/news/2012-05-global-plankton-threat.html

BIGTIX’s comment:

Unlocking the secrets to ending an Ice Age

http://www.realclimate.org

Guest Commentary by Chris Colose, SUNY Albany

It has long been known that characteristics of the Earth’s orbit (its eccentricity, the degree to which it is tilted, and its “wobble”) are slightly altered on timescales of tens to hundreds of thousands of years. Such variations, collectively known as Milankovitch cycles, conspire to pace the timing of glacial-to-interglacial variations.

Despite the immense explanatory power that this hypothesis has provided, some big questions still remain. For one, the relative roles of eccentricity, obliquity, and precession in controlling glacial onsets/terminations are still debated. While the local, seasonal climate forcing by the Milankovitch cycles is large (of the order of 30 W/m2), the net forcing provided by Milankovitch is close to zero in the global mean, requiring other radiative terms (like albedo or greenhouse gas anomalies) to force global-mean temperature change.

The last deglaciation was a long process stretching from peak glacial conditions (~26,000-20,000 years ago) to the Holocene (~10,000 years ago). Explaining this evolution is not trivial. Variations in the orbit cause opposite changes in the intensity of solar radiation during the summer between the Northern and Southern hemispheres, yet ice age terminations seem synchronous between hemispheres. This could be explained by the role of the greenhouse gas CO2, which varies in abundance in the atmosphere in sync with the glacial cycles and thus acts as a “globaliser” of glacial cycles, as it is well-mixed throughout the atmosphere. However, if CO2 plays this role it is surprising that climatic proxies indicate that Antarctica seems to have warmed prior to the Northern Hemisphere, yet glacial cycles follow in phase with Northern insolation (“INcoming SOLar radiATION”) patterns, raising questions as to what communication mechanism links the hemispheres.

There have been multiple hypotheses to explain this apparent paradox. One is that the length of the austral summer co-varies with boreal summer intensity, such that local insolation forcings could result in synchronous deglaciations in each hemisphere (Huybers and Denton, 2008). A related idea is that austral spring insolation co-varies with summer duration, and could have forced sea ice retreat in the Southern Ocean and greenhouse gas feedbacks (e.g., Stott et al., 2007).

Based on transient climate model simulations of glacial-interglacial transitions (rather than “snapshots” of different modeled climate states), Ganopolski and Roche (2009) proposed that in addition to CO2, changes in ocean heat transport provide a critical link between northern and southern hemispheres, able to explain the apparent lag of CO2 behind Antarctic temperature. Recently, an elaborate data analysis published in Nature by Shakun et al., 2012 (pdf) has provided strong support for these model predictions. Shakun et al. attempt to interrogate the spatial and temporal patterns associated with the last deglaciation; in doing so, they analyze global-scale patterns (not just records from Antarctica). This is a formidable task, given the need to synchronize many marine, terrestrial, and ice core records.

The evolution of deglaciation

By analyzing 80 proxy records from around the globe (generally with resolutions better than 500 years) the authors are able to evaluate the changes occurring during different time periods in order to characterize the spatial and temporal structure of the deglacial evolution.

Shakun et al. confirm Ganopolski’s and Roche’s proposition that warming of the Southern Hemisphere during the last deglaciation is, in part, attributable to a bipolar-seesaw response to variations in the Atlantic Meridional Overturning Circulation (AMOC). This is hypothesized to result from fresh water input into the Northern Hemisphere (although it is worth noting that the transient simulations of this sort fix the magnitude of the freshwater perturbation, so this doesn’t necessarily mean that the model has the correct sensitivity to freshwater input).

The bipolar seesaw is usually associated with the higher-frequency abrupt climate changes (e.g., Dansgaard-Oeschger and Heinrich events) that are embedded within the longer, orbital-timescale variations. However, numerous studies have indicated that it also sets the stage for initiating the full deglaciation process. In this scenario, the increase in boreal summer insolation melts enough NH ice to trigger a strong AMOC reduction, which cools the North at the expense of warming the South. The changes in Antarctica are lagged somewhat due to the thermal inertia of the Southern Ocean, but eventually the result is degassing of CO2 from the Southern Ocean and global warming. In particular, CO2 levels started to rise from full glacial levels of about 180 parts per million (ppm), reaching 265 ppm 10,000 years ago (or ~2.1 W/m2 of radiative forcing), with another slow ~15 ppm rise during the Holocene.
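As a quick check on that forcing figure (an aside, not a calculation from Shakun et al): using the standard simplified expression for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m2, the glacial-to-early-Holocene rise gives

    import math

    dF = 5.35 * math.log(265.0 / 180.0)   # simplified CO2 forcing expression, W/m2
    print(round(dF, 2))                   # ~2.07, consistent with the ~2.1 W/m2 quoted above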

https://i2.wp.com/feeds.feedburner.com/images/shakun_fig1.jpg
Figure 1: Simplified schematic of the deglacial evolution according to Shakun et al (2012). kya = kiloyears ago; NH = Northern Hemisphere

The evolution of temperature as a function of latitude and the timing of the CO2 rise are shown below (at two different time periods in part a; see the caption). There is considerable spatial and temporal structure in how the changes occur during deglaciation. There is also a long-term warming trend superimposed on higher-frequency “abrupt climate changes” associated with AMOC-induced heat redistributions.

https://i1.wp.com/feeds.feedburner.com/images/shakun_fig2.jpg
Figure 2: Temperature change before increase in CO2 concentration. a, Linear temperature trends in the proxy records from 21.5–19 kyr ago (red) and 19–17.5 kyr ago (blue) averaged in 10° latitude bins with 1σ uncertainties. b, Proxy temperature stacks for 30° latitude bands with 1σ uncertainties. The stacks have been normalized by the glacial–interglacial (G–IG) range in each time series to facilitate comparison. From Shakun et al (2012)

What causes the CO2 rise?

The ultimate trigger of the CO2 increase is still a topic of interesting research. Some popular discussions like to invoke simple explanations, such as the fact that warmer water will expel CO2, but this is probably a minor effect (Sigman and Boyle, 2000). More than likely, the isotopic signal (the distribution of 13C-depleted carbon that invaded the atmosphere) indicates that carbon should have been “mined” from the Southern Ocean as a result of the displacement of southern winds, sea ice, and perturbations to the ocean’s biological pump (e.g., Anderson et al., 2009).

This view has been supported by another recent paper (Schmitt et al., 2012) that represents a key scientific advance in dissecting this problem. Until recently, analytical issues in the ice core measurements provided a limitation on assessing the deglacial isotopic evolution of 13C. Because carbon cycle processes such as photosynthesis fractionate the heavy isotope 13C from the lighter 12C, isotopic analysis can usually be used to “trace” sources and sinks of carbon. A rapid depletion in 13C between about 17,500 and 14,000 years ago, simultaneous with a time when the CO2 concentration rose substantially, is consistent with release of CO2 from an isolated deep-ocean source that accumulated carbon due to the sinking of organic material from the surface.

https://i2.wp.com/feeds.feedburner.com/images/shakun_fig3.jpg
Figure 3: Ice core reconstructions of atmospheric δ13C and CO2 concentration covering the last 24 kyr, see Schmitt et al (2012)

Skeptics, CO2 lags, and all that…

Not surprisingly, several people don’t like this paper because it reaffirms that CO2 is important for climate. The criticisms have ranged from the absurd (water vapor is still not 95% of the greenhouse effect, particularly in a glacial world where one expects a drier atmosphere) to somewhat more technical sounding (like criticizing the way they did the weighting of their proxy records, though the results aren’t too sensitive to their averaging method). There’s also been confusion in how the results of Shakun et al. fit in with previous results that identified a lag between CO2 and Antarctic temperatures (e.g., Caillon et al., 2003).

Unlike the claims of some that these authors are trying to get rid of the “lag,” Shakun et al. fully support the notion that Antarctic temperature change did in fact precede the CO2 increase. This is not surprising, since we fully expect the carbon cycle to respond to radical alterations to the climate. Moreover, there is no mechanism that would force CO2 to change on its own (in preferred cycles) without any previous alterations to the climate. Instead, Shakun et al. show that while CO2 lagged Antarctic temperatures, it led the major changes in the global average temperature (including many regions in the Northern Hemisphere and tropics).

It is important to realize that the nature of CO2’s lead/lag relationship with Antarctica is insightful for our understanding of carbon cycle dynamics and the sequence of events that occur during a deglaciation, but it yields very little information about climate sensitivity. If the CO2 rise is a carbon cycle feedback, this is still perfectly compatible with its role as a radiative agent and can thus “trigger” the traditional feedbacks that determine sensitivity (like water vapor, lapse rate, etc). Ganopolski and Roche (2009), for example, made it clear that one should be careful in using simple lead and lags to infer the nature of causality. If one takes the simple view that deglaciation is forced by only global ice volume change and greenhouse feedbacks, then one would be forced to conclude that Antarctic temperature change led all of its forcings! The communication between the NH and Antarctica via ocean circulation is one way to resolve this, and is also supported by the modeling efforts of Ganopolski and Roche. This also helps clear up some confusion about whether the south provides the leading role for the onset or demise of glacial cycles (it apparently doesn’t).

A number of legitimate issues still remain in exploring the physics of deglaciation. For instance, the commentary piece by Eric Wolff references earlier deglaciations and points out that solar insolation may have increased in the boreal summer during the most recent event, but was still not as high as during previous deglacial intervals. It will be interesting to see how these issues play out over the next few years.

References

  1. P. Huybers, and G. Denton, "Antarctic temperature at orbital timescales controlled by local summer duration", Nature Geoscience, vol. 1, 2008, pp. 787-792. DOI.
  2. L. Stott, A. Timmermann, and R. Thunell, "Southern Hemisphere and Deep-Sea Warming Led Deglacial Atmospheric CO2 Rise and Tropical Warming", Science, vol. 318, 2007, pp. 435-438. DOI.
  3. A. Ganopolski, and D.M. Roche, "On the nature of lead–lag relationships during glacial–interglacial climate transitions", Quaternary Science Reviews, vol. 28, 2009, pp. 3361-3378. DOI.
  4. J.D. Shakun, P.U. Clark, F. He, S.A. Marcott, A.C. Mix, Z. Liu, B. Otto-Bliesner, A. Schmittner, and E. Bard, "Global warming preceded by increasing carbon dioxide concentrations during the last deglaciation", Nature, vol. 484, 2012, pp. 49-54. DOI.
  5. D.M. Sigman, and E.A. Boyle, "Glacial/interglacial variations in atmospheric carbon dioxide", Nature, vol. 407, 2000, pp. 859-869. DOI.
  6. R.F. Anderson, S. Ali, L.I. Bradtmiller, S.H.H. Nielsen, M.Q. Fleisher, B.E. Anderson, and L.H. Burckle, "Wind-Driven Upwelling in the Southern Ocean and the Deglacial Rise in Atmospheric CO2", Science, vol. 323, 2009, pp. 1443-1448. DOI.
  7. J. Schmitt, R. Schneider, J. Elsig, D. Leuenberger, A. Lourantou, J. Chappellaz, P. Kohler, F. Joos, T.F. Stocker, M. Leuenberger, and H. Fischer, "Carbon Isotope Constraints on the Deglacial CO2 Rise from Ice Cores", Science. DOI.
  8. N. Caillon, "Timing of Atmospheric CO2 and Antarctic Temperature Changes Across Termination III", Science, vol. 299, 2003, pp. 1728-1731. DOI.

The legend of the Titanic

http://www.realclimate.org

It’s 100 years since the Titanic sank in the North Atlantic, and it’s still remembered today. It was one of those landmark events that make a deep impression on people. It also fits a pattern of how we respond to different conditions, according to a recent book about the impact of environmental science on society (Gudmund Hernes, Hot Topic – Cold Comfort): major events are the stimulus and the change of mind is the response.

Hernes suggests that one of those turning moments that made us realize our true position in the universe was when we for the first time saw our own planet from space.

https://i0.wp.com/www.nasa.gov/images/content/297752main_image_1249_946-710.jpg

NASA Earth rise

He observes that

[t]he change in mindset has not so much been the result of meticulous information dissemination, scientific discourse and everyday reasoning as driven by occurrences that in a striking way has disclosed what was not previously realized or only obscurely seen.

Does he make a valid point? If the scientific information looks anything like the situation in a funny animation made by Alister Doyle (Dummiez: climate change and electric cars), then it is understandable.

Moreover, he is not the only person arguing that our minds are steered by big events – the importance of big events was even acknowledged in the fiction ‘State of Fear‘.

A recent paper by Brulle et al (2012) also suggests that the provision of information has less impact than what opinion leaders (top politicians) say.

However, if the notion that information makes little impact is correct, one may wonder what the point would be in having a debate about climate change, and why certain organisations would put so much effort into denial, as described in books such as The Heat is On, Climate Cover-up, The Republican War on Science, Merchants of Doubt, and The Hockey Stick and the Climate Wars. Why then, would there be such things as ‘the Heartland Institute’, ‘NIPCC’, climateaudit, WUWT, climatedepot, and FoS, if they had no effect? And indeed, the IPCC reports and the reports from the National Academy of Sciences? One could even ask whether the effort that we have put into RealClimate has been in vain.

Then again, could the analysis presented in Brulle et al. be misguided because the covariates used in their study did not provide a sufficiently good representation of important factors? Or could the results be contaminated by disinformation campaigns?

Their results and Hernes’ assertion may furthermore suggest that there are different rules for different groups of people: what works for scientists doesn’t work for lay people. It is clear from the IPCC and international scientific academies that climate scientists in general are swayed by the accumulating information (Oreskes, 2004).

Hernes does, however, acknowledge that background knowledge is present and may play a role in interpreting events, which means that most of us no longer blame the gods for calamities (in the time before the Enlightenment, there were witch hunts and sacrifices to the gods). The presence of this knowledge now provides a rational background, which sometimes seems to be taken for granted.

Maybe it should be no surprise that the situation is as described by Hernes and Brulle et al., because historically science communication hasn’t really been appreciated by the science community (according to ‘Don’t be such a scientist‘) and has not been enthusiastically embraced by the media. There is a barrier to information flow, and Somerville and Hassol (2011) observe that a rational voice of scientists is sorely needed.

The rationale of Hernes’ argument, however, is that swaying people does not only concern rational and intellectual ideas, but also an emotional dimension. The mindset influences a person’s identity and character, and is bundled together with their social network. Hence, people who change their views on the world may also distance themselves from some friends and connect with new people. A new standpoint will involve a change in their social connections in addition to a change in rational views. Events such as the Titanic, Earth rise, 9/11, and Hurricane Katrina influence many people both through rational thought and emotions, where people’s frame of mind shifts together with their friends’.

What do I think? Public opinion is changed not by big events as such, but by the public interpretation of those events. Whether a major event like hurricane Katrina or the Moscow heat wave changes attitudes towards climate change is determined by people’s interpretation of this event, and whether they draw a connection to climate change – though not necessarily directly. I see this as a major reason why organisations such as the Heartland are fighting their PR battle by claiming that such events are all natural and have nothing to do with emissions.

The similarity between these organisations and the Titanic legend is that there was a widespread misconception that it could not sink (and hence its fame) and now organisations like the Heartland make dismissive claims about any connection between big events and climate change. However, new and emerging science is suggesting that there may indeed be some connections between global warming and heat waves and between trends in mean precipitation and more extreme rainfall.

References

  1. R.J. Brulle, J. Carmichael, and J.C. Jenkins, "Shifting public opinion on climate change: an empirical assessment of factors influencing concern over climate change in the U.S., 2002–2010", Climatic Change. DOI.
  2. N. Oreskes, "BEYOND THE IVORY TOWER: The Scientific Consensus on Climate Change", Science, vol. 306, 2004, pp. 1686-1686. DOI.
  3. R.C.J. Somerville, and S.J. Hassol, "Communicating the science of climate change", Physics Today, vol. 64, 2011, pp. 48-. DOI.

View article…

Expert credibility in climate change

Department of Biology, Stanford University, Stanford, CA 94305; Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada M5S 3G4; William and Flora Hewlett Foundation, Palo Alto, CA 94025; and Woods Institute for the Environment, Stanford University, Stanford, CA 94305

Contributed by Stephen H. Schneider, April 9, 2010 (sent for review December 22, 2009)

Although preliminary estimates from published literature and expert surveys suggest striking agreement among climate scientists on the tenets of anthropogenic climate change (ACC), the American public expresses substantial doubt about both the anthropogenic cause and the level of scientific agreement underpinning ACC. A broad analysis of the climate scientist community itself, the distribution of credibility of dissenting researchers relative to agreeing researchers, and the level of agreement among top climate experts has not been conducted and would inform future ACC discussions.
Here, we use an extensive dataset of 1,372 climate researchers and their publication and citation data to show that (i) 97–98% of the climate researchers most actively publishing in the field support the tenets of ACC outlined by the Intergovernmental Panel on Climate Change, and (ii) the relative climate expertise and scientific prominence of the researchers unconvinced of ACC are substantially below that of the convinced researchers.

Source

Why Natural Gas Is Cheap and Oil Isn’t

No interpretation required.

The CNG Times

Floyd Norris explains the basic reasons for the divergence at The New York Times:

Crude oil is a relatively efficient international market, in which the product moves around the globe in tankers that can be diverted from one destination to another almost instantaneously in response to shifts in demand. A sharp change in demand or supply in any region of the globe is likely to show up in prices everywhere.

Oil prices can also be affected by geopolitical concerns well before actual events take place. These days, it appears that oil prices have been pushed up by worries that Israel might attack Iran, leading to a drastic reduction in Iranian oil exports.

The natural gas market, on the other hand, is not a global one. There is a limited trade in liquefied natural gas, which can be transported in tankers, but mostly natural gas must move in pipelines over…

View original post 190 more words

“Yes” to Carbon

Yes to Tar sands and No to Coal? What’s going on!

Solar

I went sailing once. Using the inordinate amount of tact I clearly possess, I mentioned to my partner–a girl who happened to have a dramatic lazy eye–that she would be an optimal sailing buddy as she could keep one eye on the tiller while simultaneously watching the ropes.

After reading an article in Inside Climate News on Obama’s “Yes” to tar sands and “No” to coal, I was reminded of my unfortunate comment to my sailing partner: Obama is trying to keep one eye on his election polls and one eye on his long-term carbon and energy goals. Unlike my lazy-eyed friend, however, his vision seems to be mostly focused on the polls.

People are a bit flabbergasted with Obama at present: he is endorsing the tar-sands-carrying Keystone XL pipeline while at the same time creating initiatives that will put more stringent regulations on future power plants. A portion of…

View original post 467 more words

US’s Environmental Protection Agency proposal for carbon emissions from US power plants

Carbon dioxide is not a pollutant.

trading powers

The US’s Environmental Protection Agency proposal regarding power plants in the US. (Excerpt follows).

“Currently, there is no uniform national limit on the amount of carbon pollution new power plants can emit. As a direct result of the Supreme Court’s 2007 ruling, EPA in 2009 determined that greenhouse gas pollution threatens Americans’ health and welfare by leading to long lasting changes in our climate that can have a range of negative effects on human health and the environment”.

via 03/27/2012: EPA Proposes First Carbon Pollution Standard for Future Power Plants/Achievable standard is in line with investments already being made and will inform the building of new plants moving forward.

View original post

Is the sun the answer to India’s energy problems?

Any comment on this would require examining the economics of the plan.

waltieainsworth

Solar power in India  – Waiting for the sun

Is the sun the answer to India’s energy problems?

Apr 28th 2012 | CHARANKA, GUJARAT – The Economist Magazine

ON A salt plain near the border with Pakistan lies half a billion dollars’ worth of solar-energy kit paid for by firms from all over the world. A million panels stretch as far as the eye can see. Past a dishevelled brass band is a tent crammed with 5,000 people who cheer when Narendra Modi, the chief minister of Gujarat, declares the solar park open: “I pray, sun god, that today Gujarat will show the way to the rest of the world for solar energy.”

Despite the uncomfortable cult of personality around Mr Modi, Gujarat is an easy place to do business. And solar power would appear to be an obvious winner for India. The country has plenty of sun and flat…

View original post 79 more words

A Tennessee Fireman’s Solution to Climate Change – Forbes

Steve Zwick

Steve Zwick, Contributor

I write about the economic value of nature’s services.

Will anyone who is coming to this post after reading about it on other blogs please read the whole thing — including the addendum and the PS and the PPS at the end — before commenting? If you disagree, please disagree with what I said, and not with what you imagine I said.

We know who the active denialists are – not the people who buy the lies, mind you, but the people who create the lies.  Let’s start keeping track of them now, and when the famines come, let’s make them pay.

Read more

Small Nuclear Power Reactors.

  • There is a revival of interest in smaller and simpler units for generating electricity from nuclear power, and for process heat.
  • This interest in small and medium nuclear power reactors is driven both by a desire to reduce the impact of capital costs and by the need to provide power away from large grid systems.
  • The technologies involved are very diverse.

As nuclear power generation has become established since the 1950s, the size of reactor units has grown from 60 MWe to more than 1600 MWe, with corresponding economies of scale in operation. At the same time there have been many hundreds of smaller power reactors built both for naval use (up to 190 MW thermal) and as neutron sources, yielding enormous expertise in the engineering of small units. The International Atomic Energy Agency (IAEA) defines ‘small’ as under 300 MWe, and up to 700 MWe as ‘medium’ – including many operational units from the 20th century. Together they are now referred to by the IAEA as small and medium reactors (SMRs). However, ‘SMR’ is more commonly used as an acronym for Small Modular Reactors.
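As a trivial illustration of those size categories (a sketch using only the MWe thresholds quoted above):

    def iaea_size_category(capacity_mwe):
        # Classify a reactor by net electrical capacity, per the definitions above:
        # under 300 MWe is 'small', up to 700 MWe is 'medium'.
        if capacity_mwe < 300:
            return 'small'
        if capacity_mwe <= 700:
            return 'medium'
        return 'large'

    print(iaea_size_category(45))    # small  (e.g. a 45 MWe PWR module)
    print(iaea_size_category(600))   # medium
    print(iaea_size_category(1600))  # large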

Today, due partly to the high capital cost of large power reactors generating electricity via the steam cycle and partly to the need to service small electricity grids under about 4 GWe, there is a move to develop smaller units. These may be built independently or as modules in a larger complex, with capacity added incrementally as required (see section below on Modular construction using small reactor units). Economies of scale are provided by the numbers produced. There are also moves to develop small units for remote sites. Small units are seen as a much more manageable investment than big ones, whose cost rivals the capitalization of the utilities concerned.

This paper focuses on advanced designs in the small category, i.e. those now being built for the first time or still on the drawing board, and some larger ones which are outside the mainstream categories dealt with in the Advanced Reactors paper. Note that many of the designs described here are not yet actually taking shape. Three main options are being pursued: light water reactors, fast neutron reactors, and graphite-moderated high temperature reactors. The first has the lowest technological risk, but the second (FNR) can be smaller and simpler, and can operate longer before refueling.

Generally, modern small reactors for power generation are expected to have greater simplicity of design, economy of mass production, and reduced siting costs. Most are also designed for a high level of passive or inherent safety in the event of malfunction. A 2010 report by a special committee convened by the American Nuclear Society showed that many safety provisions necessary, or at least prudent, in large reactors are not necessary in the small designs forthcoming.

A 2009 assessment by the IAEA under its Innovative Nuclear Power Reactors & Fuel Cycle (INPRO) program concluded that there could be 96 small modular reactors (SMRs) in operation around the world by 2030 in its ‘high’ case, and 43 units in the ‘low’ case, none of them in the USA.  (In 2009 there were 133 units up to 700 MWe in operation and 16 under construction, in 28 countries, totaling 60.3 GWe capacity.)

A 2011 report for the US DOE by the University of Chicago Energy Policy Institute says that development of small reactors can create an opportunity for the United States to recapture a slice of the nuclear technology market that has eroded over the last several decades as companies in other countries have expanded into full-scale reactors for domestic and export purposes. However, it points out that detailed engineering data for most small reactor designs are only 10 to 20 percent complete, only limited cost data are available, and no US factory has advanced beyond the planning stages. In general, however, the report says small reactors could significantly mitigate the financial risk associated with full-scale plants, potentially allowing small reactors to compete effectively with other energy sources. In January 2012 the DOE called for applications from industry to support the development of one or two US light-water reactor designs, allocating $452 million over five years. Other SMR designs will have modest support through the Reactor Concepts RD&D program.

In March 2012 the US DOE signed agreements with three companies interested in constructing demonstration SMRs at its Savannah River site in South Carolina. The three companies and reactors are: Hyperion with a 25 MWe fast reactor, Holtec with a 140 MWe PWR, and NuScale with a 45 MWe PWR. DOE is discussing similar arrangements with four further SMR developers, aiming to have in 10-15 years a suite of SMRs providing power for the DOE complex. DOE is committing land but not finance. (Over 1953-1991, Savannah River was where a number of production reactors for weapons plutonium and tritium were built and run.)

The most advanced modular project is in China, where Chinergy is starting to build the 210 MWe HTR-PM, which consists of twin 250 MWt high-temperature gas-cooled reactors (HTRs) which build on the experience of several innovative reactors in the 1960s and 1970s.

Another significant line of development is in very small fast reactors of under 50 MWe. Some are conceived for areas away from transmission grids and with small loads; others are designed to operate in clusters in competition with large units.

Already operating in a remote corner of Siberia are four small units at the Bilibino co-generation plant. These four 62 MWt (thermal) units are an unusual graphite-moderated boiling water design with water/steam channels through the moderator. They produce steam for district heating and 11 MWe (net) electricity each. They have performed well since 1976, much more cheaply than fossil fuel alternatives in the Arctic region.

Also in the small reactor category are the Indian 220 MWe pressurised heavy water reactors (PHWRs) based on Canadian technology, and the Chinese 300-325 MWe PWR built at Qinshan Phase I and at Chashma in Pakistan, now designated CNP-300. These designs are not detailed in this paper simply because they are well established. The Nuclear Power Corporation of India (NPCIL) is now focusing on 540 MWe and 700 MWe versions of its PHWR, and is offering both the 220 and 540 MWe versions internationally. These small established designs are relevant to situations requiring small to medium units, though they are not state-of-the-art technology.

Other, mostly larger new designs are described in the information page on Advanced Nuclear Power Reactors.

Medium and small reactors (25 MWe and up) with development well advanced

Name | Capacity | Type | Developer
KLT-40S | 35 MWe | PWR | OKBM, Russia
VK-300 | 300 MWe | BWR | Atomenergoproekt, Russia
CAREM | 27-100 MWe | PWR | CNEA & INVAP, Argentina
IRIS | 100-335 MWe | PWR | Westinghouse-led, international
Westinghouse SMR | 200 MWe | PWR | Westinghouse, USA
mPower | 125-180 MWe | PWR | Babcock & Wilcox + Bechtel, USA
SMR-160 | 160 MWe | PWR | Holtec, USA
SMART | 100 MWe | PWR | KAERI, South Korea
NuScale | 45 MWe | PWR | NuScale Power + Fluor, USA
CAP-100/ACP100 | 100 MWe | PWR | CNNC & Guodian, China
HTR-PM | 2×105 MWe | HTR | INET & Huaneng, China
EM2 | 240 MWe | HTR | General Atomics, USA
SC-HTGR (Antares) | 250 MWe | HTR | Areva
BREST | 300 MWe | FNR | RDIPE, Russia
SVBR-100 | 100 MWe | FNR | AKME-engineering (Rosatom/En+), Russia
Gen4 module | 25 MWe | FNR | Gen4 (Hyperion), USA
Prism | 311 MWe | FNR | GE-Hitachi, USA
FUJI | 100 MWe | MSR | ITHMSO, Japan-Russia-USA
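
For readers who want to work with this list, the rows above can be transcribed into a small Python structure and filtered, for example by reactor type. The sketch below is purely illustrative; the data are copied from the table, with capacity ranges kept as (min, max) pairs in MWe.

```python
# Illustrative transcription of the table above; capacities in MWe,
# ranges stored as (min, max) and single values repeated for both bounds.
REACTORS = [
    ("KLT-40S",           (35, 35),   "PWR", "OKBM, Russia"),
    ("VK-300",            (300, 300), "BWR", "Atomenergoproekt, Russia"),
    ("CAREM",             (27, 100),  "PWR", "CNEA & INVAP, Argentina"),
    ("IRIS",              (100, 335), "PWR", "Westinghouse-led, international"),
    ("Westinghouse SMR",  (200, 200), "PWR", "Westinghouse, USA"),
    ("mPower",            (125, 180), "PWR", "Babcock & Wilcox + Bechtel, USA"),
    ("SMR-160",           (160, 160), "PWR", "Holtec, USA"),
    ("SMART",             (100, 100), "PWR", "KAERI, South Korea"),
    ("NuScale",           (45, 45),   "PWR", "NuScale Power + Fluor, USA"),
    ("CAP-100/ACP100",    (100, 100), "PWR", "CNNC & Guodian, China"),
    ("HTR-PM",            (210, 210), "HTR", "INET & Huaneng, China"),  # 2 x 105 MWe
    ("EM2",               (240, 240), "HTR", "General Atomics, USA"),
    ("SC-HTGR (Antares)", (250, 250), "HTR", "Areva"),
    ("BREST",             (300, 300), "FNR", "RDIPE, Russia"),
    ("SVBR-100",          (100, 100), "FNR", "AKME-engineering (Rosatom/En+), Russia"),
    ("Gen4 module",       (25, 25),   "FNR", "Gen4 (Hyperion), USA"),
    ("Prism",             (311, 311), "FNR", "GE-Hitachi, USA"),
    ("FUJI",              (100, 100), "MSR", "ITHMSO, Japan-Russia-USA"),
]

def by_type(reactor_type):
    """Names of the listed designs matching a given reactor type, e.g. 'FNR'."""
    return [name for name, _, rtype, _ in REACTORS if rtype == reactor_type]

print("Fast neutron designs:", ", ".join(by_type("FNR")))
```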


Light water reactors

These are moderated and cooled by ordinary water and have the lowest technological risk, being similar to most operating power and naval reactors today. They mostly use fuel enriched to less than 5% U-235, with refuelling intervals of no more than six years, and their regulatory hurdles are likely to be the lowest of any SMR type.
US experience of small light water reactors (LWRs) has largely been with very small military power plants, such as the 11 MWt, 1.5 MWe (net) PM-3A reactor which operated at McMurdo Sound in Antarctica from 1962 to 1972, generating a total of 78 million kWh. There was also an Army program for small reactor development, most recently the DEER (deployable electric energy reactor) concept, which is being commercialised by Radix Power & Energy. DEER would be portable and sealed, for forward military bases. Some successful small reactors also came out of the main national program, which commenced in the 1950s. One was the Big Rock Point BWR of 67 MWe, which operated for 35 years to 1997. There is now a revival of interest in small LWRs in the USA, and some budget assistance in licensing two designs is proposed.

Of the following designs, the KLT and VBER have conventional pressure vessels plus external steam generators (PV/loop design). The others mostly have the steam supply system inside the reactor pressure vessel (‘integral’ PWR design). All have enhanced safety features relative to current LWRs.  All require conventional cooling of the steam condenser.

In the USA major engineering and construction companies have taken active shares in two projects: Fluor in NuScale, and Bechtel in B&W mPower.

Two new concepts offer alternatives to conventional land-based nuclear power plants: Russia’s floating nuclear power plant (FNPP), with a pair of PWRs derived from icebreaker reactors, and France’s submerged Flexblue power plant, using a 50-250 MWe reactor possibly to be derived from Areva’s latest naval design. The first is described briefly below and in the Russia paper; the second is mainly described in the France paper, since details of the actual reactor are scant.

KLT-40S

Russia’s KLT-40S from OKBM Afrikantov is a reactor well proven in icebreakers and now – with low-enriched fuel – proposed for wider use in desalination and, on barges, for remote area power supply. Here a 150 MWt unit produces 35 MWe (gross) as well as up to 35 MW of heat for desalination or district heating (or 38.5 MWe gross if power only). These are designed to run 3-4 years between refuelling with on-board refuelling capability and used fuel storage. At the end of a 12-year operating cycle the whole plant is taken to a central facility for overhaul and storage of used fuel. Two units will be mounted on a 20,000 tonne barge to allow for outages (70% capacity factor).
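
As a rough cross-check of these figures, the gross steam-cycle efficiency and the expected annual output of a two-unit barge can be worked out directly. The Python sketch below is a back-of-envelope calculation using only the numbers quoted in this section; it is not design data.

```python
# Back-of-envelope check on the KLT-40S figures quoted above (illustrative only).
thermal_mw = 150.0          # thermal rating per unit, MWt
electric_gross_mw = 38.5    # gross electrical output in power-only mode, MWe
capacity_factor = 0.70      # quoted for the two-unit barge, allowing for outages
hours_per_year = 8760

# Gross efficiency in power-only mode works out to roughly 26%,
# typical of a small pressurised water reactor steam cycle.
efficiency = electric_gross_mw / thermal_mw
print(f"Gross efficiency: {efficiency:.1%}")

# Expected annual gross generation from the two units mounted on the barge.
annual_gwh = 2 * electric_gross_mw * capacity_factor * hours_per_year / 1000
print(f"Two-unit barge, annual gross output: about {annual_gwh:.0f} GWh")
```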

Although the reactor core is normally cooled by forced circulation (four-loop), the design relies on convection for emergency cooling. Fuel is uranium aluminium silicide with enrichment levels of up to 20%, giving up to four-year refuelling intervals. A variant of this is the KLT-20, designed specifically for FNPPs; it is a two-loop version with the same enrichment but a ten-year refuelling interval.

The first floating nuclear power plant, the Akademik Lomonosov, commenced construction in 2007 and is planned to be located near Vilyuchinsk. Due to the insolvency of the shipyard, the plant is now expected to be completed in 2014. See also the Floating Nuclear Power Plants section in the information page on Nuclear Power in Russia.

RITM-200

OKBM Afrikantov is developing a new icebreaker reactor – RITM-200 – to replace the KLT reactors and to serve in floating nuclear power plants. This is an integral 210 MWt, 55 MWe PWR with inherent safety features. A single compact RITM-200 could replace twin KLT-40S units.

Read more

Tough Love for Renewable Energy | Foreign Affairs

Making Wind and Solar Power Affordable

Article Summary and Author Biography

Over the past decade, governments around the world threw money at renewable power. Private investors followed, hoping to cash in on what looked like an imminent epic shift in the way the world produced electricity. It all seemed intoxicating and revolutionary: a way to boost jobs, temper fossil-fuel prices, and curb global warming, while minting new fortunes in the process.

Much of that enthusiasm has now fizzled. Natural gas prices have plummeted in the United States, the result of technology that has unlocked vast supplies of a fuel that is cleaner than coal. The global recession has nudged global warming far down the political agenda and led cash-strapped countries to yank back renewable-energy subsidies. And some big government bets on renewable power have gone bad, most spectacularly the bet on Solyndra, the California solar-panel maker that received a $535 million loan guarantee from the U.S. Department of Energy before going bankrupt last fall.

Critics of taxpayer-sponsored investment in renewable energy point to Solyndra as an example of how misguided the push for solar and wind power has become. Indeed, the drive has been sloppy, failing to derive the most bang for the buck. In the United States, the government has schizophrenically ramped up and down support for renewable power, confusing investors and inhibiting the technologies’ development; it has also structured its subsidies in inefficient ways. In Europe, where support for renewable power has been more sustained, governments have often been too generous, doling out subsidies so juicy they have proved unaffordable. And in China, the new epicenter of the global renewable-power push, a national drive to build up indigenous wind and solar companies has spurred U.S. allegations of trade violations and has done little to curb China’s reliance on fossil fuels.

Read more

Use of public and private dollars for scaling up clean energy needs a reality check, say Stanford scholars

In a post-Solyndra, budget-constrained world, the transition to a decarbonized energy system faces great hurdles. Overcoming these hurdles will require smarter and more focused policies. Two Stanford writers outline their visions in a pair of high-profile analyses.

By Mark Golden
An array of solar panels. (Photo: NAIT / Creative Commons)

In the fast-globalizing clean-energy industry, the U.S. should press its advantage in engineering, high-value manufacturing, installation and finance, writes Stanford researcher Jeffrey Ball.

America’s approach to clean energy needs to be reformed if it is to meaningfully affect energy security or the environment, according to two new articles by Stanford writers.

The debate over how to fundamentally change the world’s massive energy system comes amid taxpayers’ $500 million tab for the bankruptcy of Fremont, Calif., solar company Solyndra, the global recession, government budget cuts and plunging U.S. prices for natural gas. Making the change cost-effectively will be crucial, write Jeffrey Ball and Kassia Yanosek, both based at Stanford University’s Steyer-Taylor Center for Energy Policy and Finance.

Ball, scholar-in-residence at the Stanford center and former energy reporter and environment editor for the Wall Street Journal, writes in the current edition of Foreign Affairs that the world’s renewable-energy push has been sloppy so far. It can be fixed through a new approach that forces these technologies to become more economically efficient, he writes in the article, “Tough Love for Renewable Energy.”

“It is time to push harder for renewable power, but to push in a smarter way,” Ball writes.

Kassia Yanosek, entrepreneur-in-residence at the Stanford center and a private-equity investor, writes in Daedalus, the journal of the American Academy of Arts and Sciences, that attempting to accelerate a transition to a low-carbon economy is expensive and risky. Policymakers, says Yanosek, need to realize that achieving a transition with government-aided commercialization programs will require putting billions of taxpayer dollars at risk, often in a high-profile way.

“If government officials wish to accelerate the next energy transition, they will need a different strategy to develop an industry that can survive without major subsidies, one that prioritizes funding to commercialize decarbonized energy technologies that can compete dollar-for-dollar against carbon-based energy,” Yanosek said.

With natural gas prices so low due to huge new supplies of shale gas, besting the current energy system has become tougher.

Reinvention, not rejection

Ball writes that governments and investors have spent big money on renewable power, slashing the cost of many renewable technologies and creating jobs. And yet, he notes, modern renewables remain a very small percentage of the global energy mix.

“Wind and solar power will never reach the scale necessary to make a difference to national security or the environment unless they can be produced economically,” he writes. “The objective is not wind turbines or solar panels. It is an affordable, convenient, secure, and sustainable stream of electrons.”

Taken together, the analyses by Ball and Yanosek argue for driving down the costs of key technologies and speeding up their deployment, said Dan Reicher, executive director of the Steyer-Taylor Center, which was launched a little more than a year ago at Stanford Law School and the Stanford Graduate School of Business.

“This will require the right mix of targeted government policy and hard-nosed private sector investment,” said Reicher, also a Stanford law professor and business school lecturer, and formerly an assistant U.S. energy secretary and private-equity investor.

Ball, in Foreign Affairs, writes that rationalizing “the conflicting patchwork of energy subsidies that has been stitched together over the decades” is essential. Supporters of renewable energy point out that public subsidies for these technologies are a fraction of those for fossil fuels, both globally and in the United States. Realistically, Ball figures, subsidies should be examined not just in total dollar amounts, but also per unit of energy produced. This more apples-to-apples comparison would help foster an honest debate about which subsidies best promote the type of energy system countries want.
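
The per-unit-of-energy comparison Ball describes is straightforward to set up. The sketch below uses entirely hypothetical subsidy and generation figures, not numbers from the article, simply to show how a larger total subsidy can still be smaller per megawatt-hour.

```python
# Hypothetical figures only, to illustrate comparing subsidies per unit of
# energy rather than in total dollars; these are NOT values from Ball's article.
sources = {
    # name: (annual subsidy in US$ billions, annual generation in TWh)
    "Incumbent fuels": (10.0, 2500.0),
    "Renewables":      (5.0,   150.0),
}

for name, (subsidy_bn, generation_twh) in sources.items():
    # Convert billions of dollars and TWh into dollars per MWh.
    per_mwh = (subsidy_bn * 1e9) / (generation_twh * 1e6)
    print(f"{name}: ${subsidy_bn:.1f}bn total, ${per_mwh:.2f} per MWh")
```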

Also key to America pursuing clean energy in the most economically efficient way is for the country to exploit globalization rather than fight it, Ball writes. Despite mounting trade-war tensions with China over wind and solar power, he writes: “If the goal of the renewable-power push is a cleaner, more diversified power supply, then low-cost solar equipment, from China or anywhere else, is a good thing.”

In the fast-globalizing clean-energy industry, Ball writes, the United States should press its advantage in engineering, high-value manufacturing, installation and finance. “Much of the machinery used in Chinese solar-panel factories today is made in America,” he writes. Installation remains a domestic business, and the U.S. financial system allows homeowners to install rooftop solar panels at no upfront cost. Ball notes that two other energy shifts will be at least as important as renewable sources: cleaning up the burning of fossil fuels, which provide most of the world’s energy; and using energy from all sources more efficiently.

Nevertheless, Ball writes, America’s renewable-energy tax credits need to be changed. He and Yanosek agree the current credits have contributed to an inefficient, boom-and-bust approach to renewable energy.

Yanosek writes that smarter government polices could help innovative technologies overcome what she describes as the main financial barrier – the “commercialization gap.” To do this, though, politicians and taxpayers must realize that government efforts to help accelerate an energy transition will require massive and risky investments, she says. A project like building a next-generation nuclear power station or a new type of utility-scale solar thermal plant can require hundreds of millions, or even billions, of dollars.

The commercialization gap

After developers show that new technologies can work in prototype, they often cannot get the backing of traditional investors to build the first commercial project because the risk/return profile is not attractive to private investors, writes Yanosek, who invests in the energy sector at Quadrant Management. Such projects require more money than venture capital investors are willing to bet. But, says Yanosek, the risks of failure in such first-time projects are too great for private equity funds or corporate balance sheets.

If policymakers decide that funding commercialization is a priority, Yanosek’s article provides a roadmap for government support. First, limited public dollars would be best spent moving a bunch of promising new technologies to the next stage.

That leads to Yanosek’s next rule of the road: government support for clean energy technologies must not become hostage to stimulus spending and job-creation objectives. The legitimate beneficiaries of commercialization-gap support are promising but unproven technologies with no steady revenue stream. They have the potential to cut prices, but by nature are unlikely to ramp up employment significantly until after they have successfully crossed the commercialization gap.

Loan guarantees in many cases are not the best structure for funding companies that push the boundaries of cost and efficiency, Yanosek argues. Instead, the government should invest equity and thus profit proportionately when a beneficiary succeeds, setting up a revenue stream for continued funding. The funding body, furthermore, should take advantage of private-sector expertise and maintain independence from the Department of Energy, where awards can be slow in coming and may be politicized.

Ultimately, Yanosek says, policymakers and taxpayers must embrace the incremental advances and understand that there will be failures along the way. For government to push an energy transition faster than the historical pace, it cannot remove the steps, but only hope to take them more quickly.

Mark Golden works in communications at the Precourt Institute for Energy at Stanford University.

Media Contact

Mark Golden, Precourt Institute for Energy: (650) 724-1629, mark.golden@stanford.edu

Dan Stober, Stanford News Service, (650) 721-6965, dstober@stanford.edu
http://news.stanford.edu/news/2012/may/scaling-clean-energy-050112.html
