Feed: Skeptical Science
Posted on: 06 May, 2012 2:19 PM
Author: Skeptical Science
This is a reprint of a news release posted by the US National Research Council on May 2, 2012.
A new National Research Council report says that budget shortfalls, cost-estimate growth, launch failures, and changes in mission design and scope have left U.S. earth observation systems in a more precarious position than they were five years ago. The report cautions that the nation’s earth observing system is beginning a rapid decline in capability, as long-running missions end and key new missions are delayed, lost, or cancelled.
This photo from NASA’s Suomi NPP satellite shows the Eastern Hemisphere of Earth in "Blue Marble" view. The photo, released Feb. 2, 2012, is a companion to a NASA image showing the Western Hemisphere in the same stunning detail. This photo was taken on Jan. 23. CREDIT: NASA/NOAA
"The projected loss of observing capability will have profound consequences on science and society, from weather forecasting to responding to natural hazards," said Dennis Hartmann, professor of atmospheric sciences at the University of Washington, Seattle, and chair of the committee that wrote the report. "Our ability to measure and understand changes in Earth’s climate and life support systems will also degrade."
The report comes five years after the Research Council published "Earth Science and Applications From Space: National Imperatives for the Next Decade and Beyond," a decadal survey that generated consensus recommendations from the earth and environmental science and applications community for a renewed program of earth observations. The new report finds that although NASA responded favorably and aggressively to the decadal survey, the required budget was not achieved, greatly slowing progress. Changes in program scope without commensurate funding, directed by the Office of Management and Budget and by Congress, also slowed progress. A further impediment, the report says, is the absence of a highly reliable and affordable medium-class launch capability.
Despite these challenges, NASA has been successful in launching some of the missions in development when the survey report was published. It has also made notable progress in establishing the "Venture-class" program, as recommended in the decadal survey. The suborbital program and the airborne science program are additional areas where significant progress is being made. In accord with the decadal survey’s recommendations, NASA also aggressively pursued international partnerships to mitigate shortfalls and stretch resources.
In the near term, the report concludes, budgets for NASA’s earth science program will remain inadequate to meet pressing national needs. Therefore the agency should focus on two necessary actions: defining and implementing a cost-constrained approach to mission development, and identifying and empowering a cross-mission earth system science and engineering team to advise on the execution of decadal survey missions.
The report also reviews the state of NOAA’s satellite earth observation program, an integral part of the decadal survey’s overall strategy and tied to the success of NASA’s program. Budget shortfalls and cost overruns in NOAA’s next generation of polar environmental satellites account for the slow rate of progress. An interagency framework, recommended in the decadal survey to assist NASA and NOAA in optimizing resources, has yet to be realized. This framework is even more crucial now that both agencies face fiscal constraints, and its importance is reiterated in the present report.
The study was sponsored by NASA. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies. They are private, nonprofit institutions that provide science, technology, and health policy advice under a congressional charter. The National Research Council is the principal operating agency of the National Academy of Sciences and the National Academy of Engineering. For more information, visit http://national-academies.org.
Posted on: 11 May, 2012 3:55 AM
Steve McIntyre is free to do any analysis he wants on any data he can find. But when he ladles his work with unjustified and false accusations of misconduct and deception, he demeans both himself and his contributions. The idea that scientists should be bullied into doing analyses McIntyre wants and delivering the results to him prior to publication out of fear of very public attacks on their integrity is ludicrous.
By rights we should be outraged and appalled that (yet again) unfounded claims of scientific misconduct and dishonesty are buzzing around the blogosphere, once again initiated by Steve McIntyre, and unfailingly and uncritically promoted by the usual supporters. However, this has become such a common occurrence that we are no longer shocked or surprised that misinformation based on nothing but prior assumptions gains an easy toehold on the contrarian blogs (especially at times when they are keen to ‘move on’ from more discomforting events).
So instead of outrage, we’ll settle for simply making a few observations that undermine the narrative that McIntyre and company are trying to put out.
First of all, it should be made clear that McIntyre’s FOI requests on the subject of Yamal are not for raw data, nor for the code or analysis methodology behind a published result, but for an analysis of publicly available data that has not been completed and has not yet been published. To be clear, these requests are for unpublished work.
Second, the unpublished work in question is a reconstruction of regional temperatures from the region of Yamal in Siberia. Regional reconstructions are generally more worthwhile than reconstructions from a single site because, if there is shared variance, the regional result is likely to be more robust and be more representative – and that makes it more valuable for continental and hemispheric comparisons. The key issues are whether all the trees (or some subset of them) share a common signal (are they mostly temperature sensitive? are some localities anomalous? etc.). It isn’t as simple as just averaging all the trees in a grid box or two. The history of such efforts follows a mostly standard path – local chronologies are put together, different ‘standardisation’ techniques are applied, more data is collected, wider collations are put together, and then regional reconstructions start to appear. Places that are remote (like Yamal) have the advantage of a lack of local human interference, and plenty of fossil material, but they are tricky to get to and data collection can be slow (not least because of the political situation in recent decades).
UK FOI legislation (quite sensibly) specifically exempts unpublished work from release provided the results are being prepared for publication. So McIntyre’s appeals have tried to insinuate that no such publication is in progress (which is false) or that the public interest in knowing about a regional tree ring reconstruction from an obscure part of Siberia trumps the obvious interest that academics have in being able to work on projects exclusively prior to publication. This is a hard sell, unless of course one greatly exaggerates the importance of a single proxy record – but who would do that? (Oh yes: YAD06 – the most important tree in the world, The global warming industry is based on one MASSIVE lie etc.). Note that premature public access to unpublished work is something that many people (including Anthony Watts) feel quite strongly about.
Worse, McIntyre has claimed in his appeal that the length of time since the Briffa et al (2008) paper implies that the regional Yamal reconstruction has been suppressed for nefarious motives. But I find it a little rich that the instigator of a multitude of FOI requests, appeals, inquiries, appeals about inquiries, FOIs about appeals, inquiries into FOI appeals etc. is now using the CRU’s lack of productivity as a reason to support more FOI releases. This is actually quite funny.
Furthermore, McIntyre is using the fact that Briffa and colleagues responded online to his last deceptive claims about Yamal to claim that all Yamal-related info must now be placed in the public domain (including, as mentioned above, unpublished reconstructions being prepared for a paper). How this will encourage scientists to be open to real-time discussions with critics is a little puzzling. Mention some partial analysis online, and be hit immediately with an FOI for the rest…?
The history of this oddity (and it is odd) dates back to McIntyre’s early obsession with a reconstruction called the “Polar Urals” Briffa et al. (1995). This was a very early attempt at a local multi-proxy reconstruction, using a regression of both tree-ring widths and densities. McIntyre has previously objected to observations that 1032 was a particularly cold year in this reconstruction (though it was), that the dating of the trees was suspect (though it wasn’t), and that no-one revisited this reconstruction when reprocessed chronologies became available. [Little-known fact: McIntyre and McKitrick submitted a comment to Nature complaining about the dating issues in the 1995 paper around Dec 2005/Jan 2006, which was rejected upon receipt of Briffa’s response (which was an attachment in the second tranche of CRU emails). Neither this submission, the rejection (for good cause), nor the Polar Urals dating issue have been mentioned on Climate Audit subsequently.]
Around this point, McIntyre got the erroneous idea that studies were being done, but were being suppressed if they showed something ‘inconvenient’. This is of course a classic conspiracy theory and one that can’t be easily disproved. Accusation: you did something and then hid it. Response: No I didn’t, take a look. Accusation: You just hid it somewhere else. Etc. However, this is Keith Briffa we are talking about: the lead author of Briffa et al. (1998) (pdf) describing the “inconvenient” divergence problem in some tree ring density records, a subject that has been described and taken up by multiple authors – Jacoby, D’Arrigo, Esper, Wilson etc. Why McIntyre thought (thinks?) that one single reconstruction was so special that people would go to any lengths to protect it, while at the same time the same people were openly discussing problems in reconstructions across the whole northern hemisphere, remains mysterious.
Similarly, McIntyre recently accused Eric Steig of suppressing ‘inconvenient’ results from an ice core record from Siple Dome (Antarctica). Examination of the record in question actually demonstrates that it has exceptionally high values in the late 20th Century (reflecting the highest temperatures in at least the last 700 years, Mayewski et al.), exactly counter to McIntyre’s theory. McIntyre made these accusations public “a couple of days” – his words – after requesting the data, since apparently university professors have nothing more pressing to do than respond instantly to McIntyre’s requests. In short, you have to give McIntyre what he wants within 48 hours or he will publicly attack your integrity. Unsurprisingly, no apology for that unjustified smear has been forthcoming.
So on to Yamal. The original data for the Yamal series came from two Russian researchers (Rashit Hantemirov and Stepan Shiyatov), and was given to CRU for collation with other tree-ring reconstructions (Briffa, 2000). As a small part of that paper, Briffa reprocessed the raw Yamal data with the regional curve standardisation (RCS) technique. The Russians published their version of the chronology with a different standardization a little later (Hantemirov and Shiyatov, 2002). McIntyre is accusing Briffa of ‘deception’ in stating that he did not ‘consider’ doing a larger more regional reconstruction at that time. However, it is clear from the 2000 paper that the point was to show hemispheric coherence across multiple tree ring records, not to create regional chronologies. Nothing was being ‘deceptively’ hidden and the Yamal curve is only a small part of the paper in any case.
Another little appreciated fact: When McIntyre started to get interested in this, he asked Briffa for the underlying measurement data from Yamal and two other locations whose reconstructions were used in Osborn and Briffa (2006). In May 2006, Briffa politely replied:
Briffa was conforming to the standard protocol that directs people to the originators of data series for access to the underlying data, as opposed to the reconstructions which had been archived with the paper. McIntyre expressed great exasperation at this point, which is odd because in email 1548, McIntyre is quoted (from Sep 26, 2009 (and note the divergence in post URL and actual title)):
To which Rashit Hantemirov responds:
Thus at the time McIntyre was haranguing Briffa and Osborn, McIntyre had actually had the raw Yamal data for over 2 years (again, unmentioned on Climate Audit), and he had had them for over 5 years when he declared that he had finally got them in 2009 (immediately prior to his accusations (again false) against Briffa of inappropriate selection of trees in his Yamal chronology).
Back to the main story. Of course, regional reconstructions are a definite goal of the dendro-climatology community and Briffa and colleagues have been working on these for years. Some of those results were published in Briffa et al (2008) as part of a special issue on the boreal forest and global change. Special issues come with deadlines, and as explained in a submission to the Muir Russell inquiry, a regional Yamal reconstruction putting together multiple sources of tree ring data was indeed ‘considered’ but wasn’t finished in time. McIntyre’s claim of deception comes from a strained reading of the MR submission (it is actually quite good reading). In response to extended (and yet again false) accusations from Ross McKitrick in the Financial Post:
So, Briffa et al did consider a regional reconstruction and are indeed working on it for publication, and it didn’t get into the 2008 paper due to time constraints. Clear, no?
However, a little later on in the submission, there is this paragraph:
This is clearly a response to McKitrick’s unjustified accusations, and its reference to the 2008 paper is a little at odds with the paragraphs above, which were much more explicit about the background and purpose of the 2008 paper. However, to take a slight mis-statement in a single sentence, when copious other information was being provided in the same submission, and accuse people of deliberate deception is a huge overreach. Were they trying to deceive only the people who hadn’t read the previous page? It makes no sense at all. Instead, McIntyre conflates the situation at the time of the 2000 paper with the very different situation around 2008 in order to paint an imaginary picture of perfidy.
The one new element this week is the UK ICO partial ruling on McIntyre’s appeal for access to the (still unpublished) regional Yamal reconstruction. For reasons that are as yet unclear (since the full ICO ruling has not yet been issued), the list of components from which the regional reconstruction might be built was released by UEA. All of this data is already public domain. And of course, since Briffa et al have been working on regional reconstructions since prior to the 2008 paper, it is unsurprising that they have such a list. McIntyre then quotes an email from Osborn sent in 2006 in support of his claim that the reconstructions were finished at that point, but that is again a very strained reading. Osborn only lists the areas (and grid boxes) in which regional reconstructions might be attempted since “most of the trees lie within those boxes”. It makes no statement whatsoever about the work having already been done.
McIntyre’s subsequent insta-reconstruction from the list is apparently the ‘smoking gun’ that the results are being withheld because they are inconvenient, but if any actual scientist had produced such a poorly explained, unvalidated, uncalibrated reconstruction with no error bars or bootstrapping or demonstrations of common signals etc., McIntyre would have been (rightly) scornful. Though apparently, scientists are supposed to accept his reconstruction at face value. The irony is of course that the demonstration that a regional reconstruction is valid takes effort, and needs to be properly documented. That requires a paper in the technical literature, and the only way for Briffa et al to now defend themselves against McIntyre’s accusations is to publish that paper (which one can guarantee will have different results to what McIntyre has thrown together). In the meantime, they can’t discuss it online or defend themselves, because the issue with the FOI appeal is precisely their ability to work on projects prior to publication without being forced to go public before they are finished.
Finally, a couple of observations regarding the follow-through from Andrew Montford and Anthony Watts. Montford’s summary is an easier read than anything McIntyre writes, but it is clear Montford’s talents lie in the direction of fiction, not documentary work. All of his claims of “why paleoclimatologists found the series so alluring”, or that the publication “must have been a severe blow”, or that “another hockey stick” was “made almost to order to meet the requirements of the paleoclimate community”, and other accusations are simply products of his imagination. He also makes up claims: that, for instance, McIntyre’s request to Briffa for the Yamal data “was, as expected, turned down flat” (not true – the actual response was given above), and he imagines even more ‘deceptions’ than McIntyre. Since he assumes the worst of the people involved, everything he sees is twisted to conform to his prior assumptions – if there is an innocent explanation, he expends no time considering it. As for Watts, the funny thing is that he immediately thinks that Michael Mann needs to answer these accusations, and attempts a Twitter campaign of harassment when Mike, rightly, points out that Yamal doesn’t actually have that much impact and, in any case, it has nothing to do with him at all. Watts is clearly a cheerleader for the ‘Blame Mike First’ campaign, so maybe his next post will be on why Mike is responsible for the Greek bank default (have you seen those bond yield curves?!?).
It should also go without saying that sometimes life gets in the way of work, and suggestions that academics have to work on issues according to a timetable dictated by hostile and abusive commentators are completely antithetical to the notion of free inquiry, and ignore the inevitable constraints of real life. McIntyre is of course free to do any analysis he wants, but he has no right to demand that other people do work for him under fear of highly public false accusations of dishonesty. We can nonetheless look forward to more of these episodes, mainly because they serve their purpose so well.
Feed: Skeptical Science
Posted on: 11 April, 2012 4:59 AM
Author: Skeptical Science
Sudden spikes in global temperatures that occurred 50-55 million years ago were caused by thawing of permafrost in Antarctica and northern high latitudes, according to recent research. The trigger for this sudden destabilization was a variation in orbital configurations that resulted in warmer polar summers. This model also provides an analogue for the releases of carbon from modern permafrost caused by current man-made global warming. Modern permafrost volumes are smaller than the estimates for those of 55 million years ago, but will nevertheless amplify the climatic effect of fossil fuel consumption and will provide continuing warming feedbacks for centuries after human emissions cease.
The Paleocene-Eocene hyperthermal events
The Paleocene-Eocene Thermal Maximum (PETM) is an extreme global warming event or hyperthermal that occurred 55.5 million years ago when a sudden “burp” of a huge quantity of carbon was emitted into the atmosphere, causing global temperatures to rise by five degrees Celsius and the oceans to become more acidic. Because of the rapidity of the carbon burp, the event has often been considered an analogue of what might happen as humans release a comparable slug of CO2 into the atmosphere. The PETM was discovered in 1991 by Kennett and Stott, and since then over 400 scientific papers have been written on the subject. There is a good review of the PETM by McInerney and Wing (2011). See also Rob Painting’s post CO2 Currently Rising Faster Than The PETM Extinction Event and the comments in the discussion that follows it.
There are uncertainties about where the carbon in the PETM came from and what triggered its sudden release. One of the leading hypotheses (e.g. Dickens, 2003) has been that the carbon was released from methane hydrates, ice-like accumulations of methane and water that were present in sediments below the deep ocean floor. According to this hypothesis, the hydrates became destabilized as sea water temperatures gradually rose. The release of the hydrates into the atmosphere increased the greenhouse effect, which led to more hydrate destabilization. See also Lunt et al (2011).
One problem with the hydrate release model is that carbon from hydrates is particularly low in the heavier C13 isotope (the hydrate δ13C value—the amount in thousandths by which methane hydrate is depleted in the heavier isotope relative to marine carbon—is about -60‰). To achieve the level of carbon isotopes observed at the height of the PETM (the so-called Carbon Isotope Excursion or CIE) requires the release of ~2000 Pg (2000 billion metric tons) of carbon from hydrates, which may be insufficient to account for the extra greenhouse warming observed at the PETM. It is worth noting that the size of the PETM CIE has not been determined exactly, with estimates varying by more than a factor of two: marine carbonates yield lower estimates, while terrestrial carbon samples point to a larger CIE.
The PETM was the biggest, and is certainly the best-known, event of its kind, but it was followed by smaller, lesser-known but similar events over the subsequent 3 million years. These are known as ETM2 and ETM3; the PETM is also sometimes called ETM1.
Other carbon release models for the PETM include invoking a comet impact, or massive peat and coal fires (δ13C value is about -22‰), or thermogenic methane releases in the North Atlantic (from volcanic intrusions into mudstones rich in organic matter) with a δ13C value of about -30‰. (The more negative the δ13C value, the less emitted carbon is needed to lighten the atmosphere to the range of CIEs observed at PETM times.) The problem with all these hypotheses is that they rely upon one-off mechanisms that, even if they work for the PETM, could not plausibly be invoked again to account for the ETM2 and ETM3 events.
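The mass-balance reasoning behind these source comparisons can be sketched as a simple two-endmember mixing calculation. The sketch below is illustrative only: the background pool size, its δ13C, and the target CIE are round assumed values for demonstration, not figures from the papers cited.

```python
def carbon_needed_for_cie(cie, source_d13c, pool_mass=50000.0, pool_d13c=0.0):
    """Mass of carbon (PgC) that must be added to a well-mixed exogenic
    carbon pool to shift its delta-13C by `cie` permil.

    Mixing: final d13C = (M0*d0 + Ma*da) / (M0 + Ma); with
    CIE = final - d0, solve for the added mass Ma.

    cie         : target carbon isotope excursion, e.g. -3.0 (permil)
    source_d13c : d13C of the released carbon (permil)
    pool_mass   : assumed pre-event exogenic carbon pool (PgC)
    pool_d13c   : assumed pre-event pool d13C (permil)
    """
    return pool_mass * cie / (source_d13c - pool_d13c - cie)

# Lighter (more negative) sources need less carbon for the same excursion:
for source, d13c in [("methane hydrate", -60.0),
                     ("thermogenic methane", -30.0),
                     ("peat/coal fires", -22.0)]:
    mass = carbon_needed_for_cie(-3.0, d13c)
    print(f"{source:20s} ({d13c:+5.0f} permil): ~{mass:5.0f} PgC")
```

With these assumed inputs, the hydrate case comes out in the low thousands of PgC, consistent with the ~2000 Pg figure quoted above, while the isotopically heavier sources require two to three times more carbon.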
A paper just published in Nature (subscription only) by Robert DeConto (DC12) and seven co-authors proposes a model that, the authors claim, can account for the CIE, the extreme warming, the repetitive nature of these hyperthermal events, and the fact that the events become progressively less prominent with time.
The permafrost feedback model
There is a rock outcrop near the town of Gubbio, in central Italy, that has played a key role in three major geological controversies: continental drift/paleomagnetism; the Cretaceous-Tertiary (K-T) boundary layer, related to the comet impact and extinction of the dinosaurs; and, more recently, paleoclimate studies of the Paleocene and Eocene. If there was a prize for the “most informative rock outcrop”, the Gubbio section would have been a multiple winner.
The Gubbio rocks are a continuous sequence, some tens of metres thick, of pelagic limestones and marls formed by the shells of foraminifera that were deposited on the deep ocean floor. Galeotti et al (2010) studied this sequence in great detail using microfossils, carbon isotopes, calcium carbonate concentrations and magnetostratigraphy. They were able to correlate the hyperthermal events (PETM, ETM2 and ETM3) with orbital cycles.
DeConto et al (DC12) used these results to develop a model for the hyperthermal events that involves orbital configurations acting as a trigger. A combination of high obliquity (the tilt of the Earth’s axis of rotation responsible for the seasons) and high eccentricity (non-circular Earth orbits around the Sun), results in periods of greater insolation (more sunlight) at high latitudes. This provoked the thawing of Antarctic and Arctic permafrost, releasing carbon into the atmosphere, causing increased greenhouse warming, which, in turn, triggered more permafrost carbon release until the permafrost carbon was used up. Once the peak of the hyperthermal was reached, rock weathering drew down the CO2, cooling the Earth until permafrost started to form again near the poles, sequestering carbon in frozen soils.
This orbitally triggered cycle is similar in some ways to the better-known and more recent ice-age cycles. For example, see Dana Nuccitelli’s post on the recent Shakun et al paper. However, the ice-age cycles were partly a result of albedo feedbacks from advancing or retreating icesheets—there were no continental icesheets in the Paleogene (the term for the Paleocene, Eocene and Oligocene combined)—and the ice-age CO2 feedbacks primarily involved inorganic carbon dissolved in the oceans. The timing of the cycles is also very different: all eight ice-age cycles occurred over a span of 800,000 years, whereas the gaps between the three Paleogene hyperthermals were 1.8 and 1.2 million years long.
Simulations of biomes (left) and permafrost (right) for three different forcing models: 900 ppm CO2 equivalent, mean orbit (top); 900 ppm CO2 eq., warm southern summer orbit (middle); 2680 ppm CO2 eq., mean orbit (bottom). A comparison with a model run on the modern Earth can be seen here. From DC12 and DC12 supplementary material. [Note: 900 ppm CO2 eq. includes 760 ppm CO2, plus CH4 and N2O; 2680 ppm CO2 eq. comprises 2000 ppm CO2 plus the other GHGs.]
DeConto and his co-workers tested a variety of atmospheric and orbital parameters using a Global Climate Model with coupled biosphere and soil components. Three of the model runs (#2, #5 and #7) are shown above. In model #2, global average temperatures are 6°C warmer than today, yet the area of permafrost is 22.4 million km2, which is roughly equivalent to today’s area of 19 million km2. At first sight, it may appear odd that the area of permafrost in the Paleogene hothouse climate was similar to the area in today’s icehouse climate. The problem is resolved by considering that the continent of Antarctica was unglaciated 55 million years ago and its land area was bigger than today’s due to relative isostatic uplift, providing room for large permafrost areas that compensate for the much smaller permafrost areas, compared to the modern situation, in the Paleogene north.
In model #5, the higher obliquity and eccentricity configurations increase summer insolation at both poles, reducing the permafrost area by 33%. The emissions associated with this thaw produce a positive greenhouse feedback, leading, eventually, to a situation like model #7, where atmospheric greenhouse gases are at a level of 2680 ppm CO2 equivalent, global mean temperatures have risen by 6°C above the late Paleocene norm, and permafrost has all but disappeared from the planet.
A schematic of the permafrost-hyperthermal mechanism
The red and green peaks on the upper panel show a series of hyperthermal spikes built upon a rising base trend of increasing CO2. The lower schematics illustrate how the permafrost-hyperthermal feedback cycles work. From the supplemental material of DC12.
In the graphic above, the initial release of carbon (between times 1 and 3) takes 10,000 years. Following this release, the very high ensuing global temperatures increase the rate of the chemical reactions involved in the weathering of silicate rocks. These reactions reduce the CO2 in the air over periods of hundreds of thousands of years, gradually cooling the climate. Eventually, it gets cool enough at higher elevations near the poles for permafrost to re-form, which further draws down the CO2. Because of the longer-term background trend of increasing CO2 concentrations and a warming climate, less permafrost re-formed each time, leaving less carbon available to drive the later, smaller hyperthermals ETM2 and ETM3.
The new model of DeConto et al will not, of course, be the final word on the subject of the Paleogene hyperthermals. Climate models are unable yet to fully account for all the features of the Paleogene climate. Current proxy measurements of CO2 concentrations and regional temperatures are highly uncertain and some important factors, such as methane concentrations, cannot yet be measured at all.
As noted earlier, there is also uncertainty over the size of the PETM carbon isotope excursion. Figure 3 from the review article of McInerney and Wing (2011) shows that a release of 3434 +/- 951 PgC (billion tonnes) of permafrost carbon, as proposed in DC12, would account for only the lowest estimates of the PETM CIE. It is perhaps likely that releases of C13-depleted methane hydrates played at least a supporting role, maybe as a feedback, in these extreme warming events.
Volumes and emission rates, then and now
The required mass of carbon to be added to the atmosphere between stages #5 and #7 is estimated to be 3434 (+/- 951) billion tonnes, with the total pre-PETM permafrost carbon pool being 3728 (+/- 1033) billion tonnes. This figure is more than double the 1700 billion tonnes estimated to be in place in modern permafrost. Yet, as remarked previously, the areas of permafrost then and now are roughly equivalent. This apparent discrepancy is accounted for in DC12 by the authors assuming that:
a) The depth of permafrost in the Paleocene was thicker than today’s, because a much longer preceding cool period, perhaps a million years before the PETM, allowed ample time for the large volumes of carbon to be deposited; whereas modern permafrost carbon deposition (in most areas) occurred only during brief (~10,000 year) interglacial periods.
b) The interior of Antarctica was a flat plain, prone to hosting wetlands with associated carbon-rich peat deposition, a situation that would have allowed for the accumulation of much more carbon per unit area than the modern average.
The rate of carbon release in the DC12 model is estimated at up to 1.5 billion tonnes per year, comparable to the rate calculated by modelling modern permafrost under the IPCC A1B scenario (Schaefer et al, 2011), but spread over thousands of years for the PETM case, whereas the Schaefer model has carbon emissions lasting for just 200 years. Our current global emissions, for comparison, are over 9 billion tonnes of carbon per year, some six times the maximum rate modelled for the PETM or for permafrost later this century.
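Putting the rates and totals quoted above side by side is simple arithmetic; the sketch below uses only figures from the text, and the duration it prints is a lower bound (release at the peak rate throughout), not a model result.

```python
# Rates in GtC/yr (billion tonnes of carbon per year), as quoted in the text
petm_peak_rate = 1.5    # maximum permafrost release rate in the DC12 model
modern_ff_rate = 9.0    # approximate current global fossil-fuel emissions

print(f"Current emissions / PETM peak rate: {modern_ff_rate / petm_peak_rate:.0f}x")

# At the peak rate, emptying the modelled PETM permafrost release would take
# at least this long, consistent with a release spread over thousands of years:
dc12_release_total = 3434.0  # GtC released between model stages 5 and 7
print(f"Minimum release duration: {dc12_release_total / petm_peak_rate:,.0f} years")
```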
The not-so-permafrosting on the fossil fuel cake
Schuur and Abbott (2011) polled 41 permafrost experts for their estimates of the future release of permafrost carbon. Below is the range of estimates for cumulative emissions by certain dates:
Date Estimate (billion tonnes C)
The great majority of this release of permafrost carbon will be in the form of CO2, but Schuur and Abbott estimate that 2.7% will be as methane (CH4). Because methane has a much higher global-warming potential, about half of the climate forcing induced by permafrost emissions will come from that gas alone.
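The claim that a small methane fraction can dominate the forcing can be illustrated with a back-of-the-envelope CO2-equivalent calculation. The GWP values below are assumptions for illustration, not taken from Schuur and Abbott, and the answer depends strongly on the time horizon chosen.

```python
def ch4_forcing_share(f_ch4, gwp_ch4):
    """Fraction of CO2-equivalent forcing carried by the CH4 portion of a
    carbon release. f_ch4 is the fraction of carbon (by mass of C) emitted
    as CH4; gwp_ch4 is a mass-based GWP of CH4 relative to CO2."""
    # One tonne of carbon becomes 16/12 t of CH4 or 44/12 t of CO2.
    ch4_eq = f_ch4 * (16.0 / 12.0) * gwp_ch4    # CO2-eq from the CH4 share
    co2_eq = (1.0 - f_ch4) * (44.0 / 12.0)      # CO2-eq from the CO2 share
    return ch4_eq / (ch4_eq + co2_eq)

# 2.7% of permafrost carbon emitted as CH4, at two illustrative GWP horizons:
print(f"GWP 86 (20-year horizon):  {ch4_forcing_share(0.027, 86):.0%}")
print(f"GWP 34 (100-year horizon): {ch4_forcing_share(0.027, 34):.0%}")
```

On a short (20-year) horizon the methane share comes out near half, while on a 100-year horizon it is closer to a quarter, which shows how the "about half" figure hinges on the accounting choices made.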
Within the next 90 years, therefore, we can expect permafrost carbon emissions of a little less than one-tenth of the total amount estimated by DC12 for the PETM. After 2100, cumulative emissions are expected to more than double over the following two centuries. Over a longer timeframe of many centuries, we can expect a further contribution of carbon to the atmosphere from the destabilization of gas hydrates. Some researchers argue that hydrate releases may already have started.
The potential for a major permafrost carbon release in this century is quite real and the hypothesis outlined in DC12, for the first time, provides an ancient analogue of a permafrost release causing a major global warming event. Nevertheless, we are not likely to get a PETM-sized carbon release because the amount of carbon in current permafrost stores is relatively small compared to the Paleogene. An ETM2 or ETM3-sized hyperthermal may be more plausible, however.
The estimates of permafrost emissions can be compared with the carbon in proven remaining reserves of oil, gas, coal and the Alberta bitumen sands (amounts are from reserves figures from BP and Alberta’s ERCB, as converted to carbon emissions by Swart and Weaver, 2012 (subs. only)). These reserve estimates sum to about 900 billion tonnes of carbon in proven fossil fuel reserves. At current levels of consumption, these stores of carbon will be released into the atmosphere over the next century or so. (BP’s proven reserves-to-production ratio for oil is 46 years; for gas, 59; and for coal, 118.) As the conventional reserves are consumed, they will be partly replenished by new discoveries and reserves growth. Technological improvements and rising prices will enable the exploitation of the huge resource base of coal and unconventional gas and oil. The 900 billion tonne figure is thus best regarded as a minimum case on a business-as-usual path. Our cumulative emissions from continuing fossil-fuel use over this century are therefore likely to be much bigger even than the worst-case permafrost release three centuries from now.
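The timescale implied by those figures is simple division; a sketch using only the numbers quoted above:

```python
# Depletion timescale implied by the reserves and emission figures above.
reserves_gt_c = 900   # billion tonnes C in proven fossil-fuel reserves
emissions_gt_c = 9    # billion tonnes C emitted per year at current rates

years = reserves_gt_c / emissions_gt_c
print(f"Proven reserves last about {years:.0f} years at current rates")

# The per-fuel BP reserves-to-production ratios quoted in the text, in years:
rp_years = {"oil": 46, "gas": 59, "coal": 118}
```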
Positive net carbon-cycle feedbacks from thawing permafrost may start as early as 2020, which will make emissions mitigation efforts more difficult. By 2100, net emission rates from permafrost could reach 1.6 billion tonnes of carbon per year (Schaefer et al., 2011), which exceeds the current annual emissions of the United States. Once started, permafrost emissions would be impossible to reverse and would endure for centuries, making climate change steadily worse even if we had already stopped all emissions from fossil fuels. The only way to prevent permafrost feedbacks is to limit climate change enough not to trigger them in the first place. The Eemian interglacial may have been slightly warmer than today and did not provoke a runaway permafrost feedback, so it may not be too late.
In conclusion, disquieting though the threat of future permafrost feedbacks undoubtedly is, we should perhaps worry more about the far greater stores of fossil carbon that we are now quite deliberately exhuming and putting into the atmosphere over a much shorter time period than any projected releases from thawing frozen ground. Indeed, as I argued in a previous article, our business-as-usual emissions are on track to return our atmosphere to the baseline levels of the Eocene and Paleocene. That is plenty to be worried about. By the time we get to the stage of having doubled or tripled pre-industrial levels of greenhouse gases, any additional carbon releases from thawing long-frozen ground will just be the icing on an over-baked cake.
Thanks to Rob Painting, Alex C and Dana Nuccitelli for helpful comments.
Feed: Skeptical Science
Posted on: 12 April, 2012 10:31 AM
Author: Skeptical Science
|Note: this post has been re-published by The Guardian and Climate Progress and cribbed by The Huffington Post
Almost exactly two years ago, John Cook wrote about the 5 characteristics of science denialism. The second point on the list involved fake experts.
We have seen many examples of climate denialists producing long lists of fake experts, for example the Oregon Petition and the Wall Street Journal 16. Now we have yet another of these lists of fake experts. 49 former National Aeronautics and Space Administration (NASA) employees (led by Harrison Schmitt, who was also one of the Wall Street Journal 16) have registered their objection to mainstream climate science through the most popular medium of expressing climate contrarianism – a letter. As is usually the case in these climate contrarian letters, this one has no scientific content, and is written by individuals with not an ounce of climate science expertise, but who nevertheless have the audacity to tell climate scientists what they should think about climate science.
It’s worth noting that when the signatories Meet The Denominator, as is also always the case, their numbers are revealed as quite unimpressive. For example, over 18,000 people currently work for NASA. Without even considering the pool of retired NASA employees (all signatories of this list are former NASA employees), just as with the Oregon Petition, the list accounts for a fraction of a percent of the available pool of people.
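The denominator arithmetic is trivial but worth making explicit; the workforce figure is the "over 18,000" quoted above:

```python
# "Meet the Denominator": 49 signatories against NASA's current workforce.
signatories = 49
nasa_workforce = 18_000  # "over 18,000 people currently work for NASA"

fraction = signatories / nasa_workforce
print(f"Signatories as a share of the workforce: {fraction:.2%}")
# ...and that is before counting the far larger pool of NASA retirees.
```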
This letter, as these letters always do, has gone viral in the climate denial blogosphere, and even in the climate denial mainstream media (Fox News). But why exactly is this letter being treated as major news? That is something of a mystery. Or it would be, if the behavior of the climate denial community weren’t so predictable.
Obviously this letter first gained attention because the signatories are former NASA employees. They are being touted as "top astronauts, scientists, and engineers" and "NASA experts, with more than 1000 years of combined professional experience." Okay, but in what fields does their expertise lie?
Based on the job titles listed in the letter signatures, by my count they include 23 administrators, 8 astronauts, 7 engineers, 5 technicians, and 4 scientists/mathematicians of one sort or another (none of those sorts having the slightest relation to climate science). Amongst the signatories and their 1,000 years of combined professional experience, that appears to include a grand total of zero hours of climate research experience, and zero peer-reviewed climate science papers. You can review the signatories for yourself here.
Contrarians for Censoring Climate Science
These 49 former NASA employees wrote this letter to the current NASA administrator requesting that he effectively muzzle the climate scientists at NASA Goddard Institute for Space Studies (GISS).
Since nothing in science is ever proven, apparently these individuals simply don’t want NASA GISS to discuss science in their public releases or websites anymore. What specifically do they object to?
Ah yes, the ever-more-popular goalpost shift of "catastrophic climate change". The letter of course provides no examples of NASA GISS public releases or websites claiming that CO2 is having a catastrophic impact on climate change, and of course provides zero examples of these mysterious "hundreds of well-known climate scientists and tens of thousands of other scientists" who disbelieve these unspecified catastrophic claims. As is always the case with these types of letters, it is all rhetoric and no substance.
As Skeptical Science readers are undoubtedly well aware, the impact of natural climate drivers has been very thoroughly studied, and they simply cannot account for the observed global warming or climate change, especially over the past 50-65 years (Figure 1).
Figure 1: Net human and natural percent contributions to the observed global surface warming over the past 50-65 years according to Tett et al. 2000 (T00, dark blue), Meehl et al. 2004 (M04, red), Stone et al. 2007 (S07, green), Lean and Rind 2008 (LR08, purple), Huber and Knutti 2011 (HK11, light blue), and Gillett et al. 2012 (G12, orange).
The contrarians continue:
If NASA administrators were to censor the organization’s climate scientists at the behest of a few of its former employees who have less climate science experience and expertise combined than the summer interns at NASA GISS, that would really damage NASA’s exemplary reputation.
Let’s be explicit about our choice here.
Amongst those individuals at NASA GISS are some of the world’s foremost climate scientists. They include James Hansen, who created one of the earliest global climate models in the 1980s, which has turned out to be remarkably accurate (Figure 2).
Figure 2: Observed temperature change (GISTEMP, blue) and with solar, volcanic and El Niño Southern Oscillation effects removed by Foster and Rahmstorf (green) vs. Hansen Scenario B trend adjusted downward 16% to reflect the observed changes in radiative forcings since 1988, using a 1986 to 1990 baseline.
This is not a difficult choice for NASA Administrator Charles Bolden, Jr. We would not be surprised if he gave the ‘skeptic’ letter one look and tossed it in the recycle bin.
Climate contrarians clearly disagree, but in the real world, expertise matters. The fact that these 49 individuals used to work at NASA does not make them experts in everything NASA does. If the issue at hand were another moon landing, then by all means, the opinions of many of these individuals would be well worth considering. But we’re not talking about space shuttle launches or moon landings here, we’re talking about climate science. This is a subject which, to be blunt, these 49 individuals clearly don’t know the first thing about.
To those who are making so much noise about this letter – the next time you are at a medical center in need of major surgery, will you go see a pediatrician? Or as a more relevant analogy, will you visit your neighbor, the retired dentist, and ask him to perform the surgery for you?
Somehow we suspect you will insist that the surgery be performed by a surgeon with relevant expertise. The reason is of course that expertise matters. Perhaps you would be wise to consider that fact the next time a group of climate contrarians with little to no expertise publish another of these letters.
As we suggested to William Happer, if climate contrarians want their opinions to be taken seriously, they should engage in real science within the peer-review system that works for every scientific field. That is how science advances – not through letters filled with empty rhetoric, regardless of how many inexpert retirees sign them.
Feed: Skeptical Science
Posted on: 16 April, 2012 1:56 AM
Author: Skeptical Science
|This is a reprint of a press release posted by the University of California Los Angeles (UCLA) on April 12, 2012.
Wilted leaves during Hawaiian drought
New research by UCLA life scientists could lead to predictions of which plant species will escape extinction from climate change.
Droughts are worsening around the world, posing a great challenge to plants in all ecosystems, said Lawren Sack, a UCLA professor of ecology and evolutionary biology and senior author of the research. Scientists have debated for more than a century how to predict which species are most vulnerable.
Sack and two members of his laboratory have made a fundamental discovery that resolves this debate and allows for the prediction of how diverse plant species and vegetation types worldwide will tolerate drought, which is critical given the threats posed by climate change, he said.
The research is currently available in the online edition of Ecology Letters, a prestigious ecology journal, and will be published in an upcoming print edition.
Why does a sunflower wilt and desiccate quickly when the soil dries, while the native chaparral shrubs of California survive long dry seasons with their evergreen leaves? Since there are many mechanisms involved in determining the drought tolerance of plants, there has been vigorous debate among plant scientists over which trait is most important. The UCLA team, funded by the National Science Foundation, focused on a trait called "turgor loss point", which had never before been proven to predict drought tolerance across plant species and ecosystems.
A fundamental difference between plants and animals is that plant cells are enclosed by cell walls while animal cells are not. To keep their cells functional, plants depend on "turgor pressure" — pressure produced in cells by internal salty water pushing against and holding up the cell walls. When leaves open their pores, or stomata, to capture carbon dioxide for photosynthesis, they lose a considerable amount of this water to evaporation. This dehydrates the cells, inducing a loss of pressure.
During drought, the cell’s water becomes harder to replace. The turgor loss point is reached when leaf cells get to a point at which their walls become flaccid; this cell-level loss of turgor causes the leaf to become limp and wilted, and the plant cannot grow, Sack said.
"Drying soil may cause a plant’s cells to reach turgor loss point, and the plant will be faced with the choice of either closing its stomata and risking starvation or photosynthesizing with wilted leaves and risking damaging its cell walls and metabolic proteins," Sack said. "To be more drought-tolerant, the plant needs to change its turgor loss point so that its cells will be able to keep their turgor even when soil is dry."
The biologists showed that within ecosystems and around the world, plants that are more drought-tolerant had lower turgor loss points; they could maintain their turgor despite drier soil.
The team also resolved additional decades-old controversies, overturning the long-held assumptions of many scientists about the traits that determine turgor loss point and drought tolerance. Two traits related to plant cells have been thought to affect plants’ turgor loss point and improve drought tolerance: plants can make their cell walls stiffer, or they can make their cells saltier by loading them with dissolved solutes. Many prominent scientists have leaned toward the "stiff cell wall" explanation because plants in dry zones around the globe tend to have small, tough leaves. Stiff cell walls might allow the leaf to avoid wilting and to hold onto its water during dry times, scientists reasoned. Little had been known about the saltiness of cells for plants around the world.
The UCLA team has now demonstrated conclusively that it is the saltiness of the cell sap that explains drought tolerance across species. Their first approach was mathematical; the team revisited the fundamental equations that govern wilting behavior and solved them for the first time. Their mathematical solution pointed to the importance of saltier cell sap. Saltier cell sap in each plant cell allows the plant to maintain turgor pressure during dry times and to continue photosynthesizing and growing as drought ensues. The equation showed that thick cell walls do not contribute directly to preventing wilting, although they provide indirect benefits that can be important in some cases — protection from excessive cell shrinking and from damage due to the elements or insects and mammals.
The team also collected for the first time drought-tolerance trait data for species worldwide, which confirmed their result. Across species within geographic areas and across the globe, drought tolerance was correlated with the saltiness of the cell sap and not with the stiffness of cell walls. In fact, species with stiff cell walls were found not only in arid zones but also in wet systems like rainforests, because here too, evolution favors long-lived leaves protected from damage.
The pinpointing of cell saltiness as the main driver of drought tolerance cleared away major controversies, and it opens the way to predictions of which species could escape extinction from climate change, Sack said.
"The salt concentrated in cells holds on to water more tightly and directly allows plants to maintain turgor during drought," said research co-author Christine Scoffoni, a UCLA doctoral student in the department of ecology and evolutionary biology.
The role of the stiff cell wall was more elusive.
"We were surprised to see that having a stiffer cell wall actually reduced drought tolerance slightly — contrary to received wisdom — but that many drought-tolerant plants with lots of salt also had stiff cell walls," said lead author Megan Bartlett, a UCLA graduate student in the department of ecology and evolutionary biology.
This seeming contradiction is explained by the secondary need of drought-tolerant plants to protect their dehydrating cells from shrinking as they lose turgor pressure, the researchers said.
"While a stiff wall doesn’t maintain the cell turgor, it prevents the cells from shrinking as the turgor decreases and holds in water so that cells are still large and hydrated, even at turgor loss point," Bartlett explained. "So the ideal combination for a plant is to have a high solute concentration to keep turgor pressure and a stiff cell wall to prevent it from losing too much water and shrinking as the leaf water pressure drops. But even drought-sensitive plants often have thick cell walls because the tough leaves are also good protection against herbivores and everyday wear and tear."
Even though the team showed that turgor loss point and salty cell sap have exceptional power to predict a plant’s drought tolerance, some of the most famous and diverse desert plants — including cacti, yuccas and agaves — exhibit the opposite design, with many flexible-walled cells that hold dilute sap and would lose turgor rapidly, Sack said.
"These succulents are actually terrible at tolerating drought, and instead they avoid it," he said. "Because much of their tissue is water storage cells, they can open their stomata minimally during the day or at night and survive with their stored water until it rains. Flexible cell walls help them release water to the rest of the plant."
This new study showed that the saltiness of cells in plant leaves can explain where plants live and the kinds of plants that dominate ecosystems around the world. The team is working with collaborators at the Xishuangbanna Tropical Botanical Gardens in Yunnan, China, to develop a new method for rapidly measuring turgor loss point across a large number of species and make possible the critical assessment of drought tolerance for thousands of species for the first time.
"We’re excited to have such a powerful drought indicator that we can measure easily," Bartlett said. "We can apply this across entire ecosystems or plant families to see how plants have adapted to their environment and to develop better strategies for their conservation in the face of climate change."
Megan K. Bartlett, Christine Scoffoni and Lawren Sack, The determinants of leaf turgor loss point and prediction of drought tolerance of species and biomes: a global meta-analysis, Ecology Letters 15
Article first published online: 22 MAR 2012 | DOI: 10.1111/j.1461-0248.2012.01751.x
To access the paper, click here.
Feed: Skeptical Science
Posted on: 24 April, 2012 12:29 AM
Author: Skeptical Science
|One of the fallback positions of climate denial, after the assertions that "It’s not happening" and "It’s not us" fail, is "It’s not bad." The latest incarnation of this myth, courtesy of Pat Michaels’ serial data deletion colleague Chip Knappenberger, argues that those who seek to mitigate global warming are actually endangering public health because – believe it or not, this is a direct quote:
Here is the specific argument Knappenberger makes in attempting to defend this seemingly absurd thesis:
By this logic gang violence is great because it makes people more adept at dodging bullets.
Knappenberger cites a paper he co-authored, Davis et al. (2003), which found that heat-related deaths are less common in hotter cities. This makes sense, as hotter cities have the infrastructure (e.g. air conditioning) to cope with high temperatures. They do not have to adapt – they are already adapted to the heat, whereas most heat-related deaths occur in regions that experience uncommon heat events (but are now experiencing them more and more frequently due to global warming). Thus this point does not support Knappenberger’s argument that more heat waves would be beneficial.
Knappenberger describes the second point in Davis et al. as follows.
This point, however, is contradicted by more recent research.
Zanobetti et al. on Short-Term Heat Impacts
Zanobetti et al. (2012) take an interesting approach to investigating the relationship between hot weather events and mortality. Since the age group most at risk of heat death is the elderly (those over 65 years of age) with pre-existing illnesses, Zanobetti compared Medicare data from 1985 to 2006 from 135 U.S. cities against summer temperatures. The authors explain the reasoning behind their approach:
Zanobetti et al. find that larger summer temperature variability leads to more deaths among the elderly. Each 1°C increase in summer temperature variability increased the death rate for elderly with chronic conditions between 2.8% and 4.0%, depending on the condition (emphasis added):
Sherwood and Huber on Long-Term Heat Impacts
A 2009 paper by Sherwood and Huber examines a worst-case scenario in which the average global surface temperature warms by around 10°C a few centuries in the future. They note that a wet-bulb temperature (Tw) exceedance of 35°C for extended periods should induce hyperthermia in humans and other mammals, as they become unable to dissipate heat sufficiently. In short, if Tw(max) of a particular region were to exceed 35°C for long periods of time, that region would effectively become uninhabitable to mammals.
Based on their climate model simulations, Sherwood and Huber found that Tw increases somewhat more slowly than the average global surface temperature, such that a 1°C average global warming corresponds to a 0.75 to 1°C Tw increase. Therefore, an 8.5°C Tw increase would require approximately 11°C global warming.
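That scaling is just a ratio, but it is worth spelling out, using the low-end 0.75°C-per-degree figure quoted above:

```python
# How much global-mean warming is needed for a given wet-bulb (Tw) increase,
# using the low-end scaling quoted from Sherwood and Huber.
tw_increase = 8.5      # degC rise in Tw(max) considered in the text
tw_per_degree = 0.75   # degC of Tw increase per degC of global-mean warming

warming_needed = tw_increase / tw_per_degree
print(round(warming_needed, 1))  # approximately 11 degC
```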
In short, Sherwood and Huber find that there is a limit to what humans and other mammals can adapt to in terms of rising temperatures. It will likely take a few centuries for global temperatures to reach that limit, but eventually large regions of the planet could become effectively uninhabitable, beyond what mammals can adapt to.
McInerney & Wing (2011) also examined the Paleocene-Eocene Thermal Maximum (PETM), a period about 56 million years ago during which global temperatures increased 5 to 8°C over a period of about 200,000 years. They found that most species were able to avoid extinction by adapting to the increasing temperatures, for example by becoming smaller (increasing their surface-area-to-volume ratio and thereby being better able to shed body heat). Secord et al. (2012) similarly concluded that many species became smaller during the PETM and grew larger after the PETM (Figure 1).
Figure 1: Summary of percent mean body size change in genera that exhibit change from the latest Paleocene to the PETM (left), and from the PETM to the post-PETM (right). No genus exhibits a size increase in the PETM or a decrease after the PETM. Compiled from published sources, except for Sifrhippus from this study. Asterisks indicate genera that first appear in the PETM (Secord et al. 2012).
Our problem is that current climate change is occurring much faster, over just centuries rather than the millennia of the PETM, and thus species will not have sufficient time to evolve in this manner.
Warmer is Not Better
The Knappenberger argument that higher temperatures will decrease heat-related deaths and thus benefit humanity suffers from two major flaws. The first is that while fewer heat-related deaths occur where humans are adapted to high local temperatures, heat-related deaths will nevertheless rise in unprepared regions until they adapt to those rising temperatures (i.e. by installing the necessary cooling infrastructure). Zanobetti et al. show that rising heat-related mortality is already a reality.
The argument also neglects the long-term limit – there is a point at which temperatures can become too hot for humans and other mammals to survive. If we continue on a business-as-usual path as Knappenberger promotes, we will likely reach that point within a few centuries, and the costs of losing the habitability of large regions of the planet are incalculable.
Feed: Skeptical Science
Posted on: 24 April, 2012 2:59 PM
Author: Skeptical Science
|I’m sick and tired of coming up with something witty and funny week after week for these introductions, so now I’ll just write this boring summary: themes of this week are mapping, Arctic sea ice, non-Arctic air traffic, greenhouse gases (which, by the way, have nothing to do with gardeners’ stomach problems), paleoclimate, biosphere, groundwater, seawater, groundweather, seaweather, and what else? Oh yes, and climate, of course. All this in just 15 little studies plus one classic.
Feed: Skeptical Science
Posted on: 07 May, 2012 12:47 PM
Author: Skeptical Science
In the second installment of the Why Are We Sure We’re Right? series, SkS authors Rob Honeycutt, Dana Nuccitelli, and Andy S. explain why they embrace what mainstream scientists are telling us about climate change. Needless to say, this post generated the highest number of comments during the week. Coming in second and third respectively were Dana’s John Nielsen-Gammon Comments on Continued Global Warming and John Mason’s Two Centuries of Climate Science: part two – Hulburt to Keeling, 1931- 1965.
Toon of the Week
Another climate change solution
Source: Royalty Free Cartoons
Quote of the week
Source: "Clouds’ Effect on Climate Change Is Last Bastion for Dissenters" by Justin Gillis, New York Times, Apr 30, 2012
Issue of the Week
When it comes to manmade climate change, what do you consider to be the most significant "canary in the coal mine"?
Words of the Week
Greenhouse effect: Greenhouse gases effectively absorb thermal infrared radiation, emitted by the Earth’s surface, by the atmosphere itself due to the same gases, and by clouds. Atmospheric radiation is emitted to all sides, including downward to the Earth’s surface. Thus, greenhouse gases trap heat within the surface-troposphere system. This is called the greenhouse effect. Thermal infrared radiation in the troposphere is strongly coupled to the temperature of the atmosphere at the altitude at which it is emitted. In the troposphere, the temperature generally decreases with height. Effectively, infrared radiation emitted to space originates from an altitude with a temperature of, on average, –19°C, in balance with the net incoming solar radiation, whereas the Earth’s surface is kept at a much higher temperature of, on average, +14°C. An increase in the concentration of greenhouse gases leads to an increased infrared opacity of the atmosphere, and therefore to an effective radiation into space from a higher altitude at a lower temperature. This causes a radiative forcing that leads to an enhancement of the greenhouse effect, the so-called enhanced greenhouse effect.
Greenhouse gas (GHG): Greenhouse gases are those gaseous constituents of the atmosphere, both natural and anthropogenic, that absorb and emit radiation at specific wavelengths within the spectrum of thermal infrared radiation emitted by the Earth’s surface, the atmosphere itself, and clouds. This property causes the greenhouse effect. Water vapour (H2O), carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4) and ozone (O3) are the primary greenhouse gases in the Earth’s atmosphere. Moreover, there are a number of entirely human-made greenhouse gases in the atmosphere, such as the halocarbons and other chlorine- and bromine-containing substances, dealt with under the Montreal Protocol. Besides CO2, N2O and CH4, the Kyoto Protocol deals with the greenhouse gases sulphur hexafluoride (SF6), hydrofluorocarbons (HFCs) and perfluorocarbons (PFCs).
Source: Annex I (Glossary) to Climate Change 2007: Working Group I: The Physical Science Basis, IPCC Fourth Assessment Report.
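The –19°C effective emission temperature in the glossary entry above can be sanity-checked with the Stefan-Boltzmann law. The solar-constant and albedo values below are standard textbook figures, not taken from the glossary:

```python
# Balance absorbed sunlight against blackbody emission to recover the
# effective emission temperature of about -19 degC.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # W m^-2 (assumed textbook value)
ALBEDO = 0.30            # planetary albedo (assumed textbook value)

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4   # averaged over the whole sphere
t_eff_kelvin = (absorbed / SIGMA) ** 0.25      # blackbody emission temperature
print(round(t_eff_kelvin - 273.15))            # about -19
```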
The Week in Review
A complete listing of the articles posted on SkS during the past week.
A list of articles that are in the SkS pipeline. Most of these articles, but not necessarily all, will be posted during the week.
Simple Myth Debunking of the Week
Anyone who argues There is no consensus is denying the many different studies and surveys which demonstrate that approximately 97% of climate scientists agree that humans are the main cause of the current global warming.
The Max Planck Institute for Meteorology (MPI-M), located in Hamburg, Germany, is an internationally renowned institute for climate research. Its mission is to understand Earth’s changing climate.
The MPI-M comprises three departments:
and three independent research groups:
Scientists at the MPI-M investigate what determines the sensitivity of the Earth system to perturbations such as the changing composition of its atmosphere, and work toward establishing the sources and limits of predictability within the Earth system. MPI-M develops and analyses sophisticated models of the Earth system, which simulate the processes within atmosphere, land and ocean. Such models have developed into important tools for understanding the behaviour of our climate, and they form the basis for international assessments of climate change. Targeted in-situ measurements and satellite observations complement the model simulations.
Together with several other non-university research institutions the MPI-M and the University of Hamburg constitute the KlimaCampus, a centre of excellence for climate research and education in Hamburg, Germany.
Feed: Skeptical Science
Posted on: 09 May, 2012 10:16 AM
Author: Skeptical Science
|The Australian Department of Climate Change and Energy Efficiency have put together a handy and recommended resource: Accurate Answers to Professor Plimer’s 101 Climate Change Science Questions (direct link to 1.4Mb PDF). This is in response to Plimer’s book How To Get Expelled From School, a compilation of climate misinformation targeted at school children. One section of the book features 101 questions that he suggests children ask their teachers. The DCCEE summarise it well:
As well as being a direct response to Plimer’s misleading questions, the document is an interesting and useful resource in its own right. As I explain in a workshop with the Climate Literacy & Energy Awareness Network (CLEAN), responding to misinformation can be an educational opportunity, a chance to put myths in proper context and explain the science. So I recommend reading through the document, which explains the science behind many common climate myths.
There are also numerous references to Skeptical Science graphs and resources, as we have already tackled many of the myths propounded by Plimer. In fact, SkS content has been appearing in a number of sources of late. Most notably, weather website Wunderground has published a section on climate myths, reproducing the SkS rebuttals of the top climate myths. Apparently they tweaked some of the text where they thought they could improve on our content (I haven’t got around to checking where exactly; it will be interesting to find out).
SkS material is also being adopted in colleges and universities, with our rebuttals and graphs included in textbooks covering topics such as geology, climate and psychology. The Debunking Handbook has been adopted into the curriculum of a philosophy course at Portland State University and a Science Communication course at the University of Western Australia. So kudos and credit must go to the SkS team, who continue to produce high-quality content in quantity that is not only being read on the SkS website but also being reproduced by scientists, communicators and educators across the globe.
Note: an alternative response to Plimer’s 101 questions is provided by Ian Enting in the document Ian Plimer’s questions deconstructed: Analysis of ‘How to get expelled from school…’
Feed: Skeptical Science
Posted on: 09 May, 2012 10:13 PM
Author: Skeptical Science
|Satellite measurements of temperatures near wind farms in Texas from 2003-2011 suggest that wind turbines mix up the nighttime atmosphere, bringing warmer air down to the ground (Zhou et al., 2012). When you look at the physics, it turns out that this implies the chance of a (very, very small) global cooling effect.
The satellites show that, downwind from wind farms, the surface is warming faster than nearby areas, but only at night. In the windier Texan summer the night warming has been 0.73°C per decade, while in the calmer winter months it has risen at only 0.46°C per decade.
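Trends like these come from fitting a straight line to a time series of temperature anomalies. Here is a minimal sketch of that calculation using ordinary least squares, with made-up numbers chosen only to illustrate the method (they are not the actual Zhou et al. data):

```python
# Hypothetical summer nighttime temperature anomalies (degrees C),
# one value per year 2003-2011 -- illustrative numbers only.
years = list(range(2003, 2012))
anoms = [0.00, 0.05, 0.16, 0.21, 0.30, 0.35, 0.45, 0.51, 0.58]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(anoms) / n

# Ordinary least-squares slope: cov(x, y) / var(x), in degrees C per year.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anoms))
         / sum((x - mean_x) ** 2 for x in years))

trend_per_decade = slope * 10
print(round(trend_per_decade, 2))  # ~0.74 C/decade for these made-up numbers
```

The real analysis is of course more involved (gridded satellite retrievals, seasonal splits, significance testing), but the decadal trend itself is just a slope like this one, multiplied by ten.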
The authors blame wind farms because the warming happened where turbines were built, as shown in Figure 1 below. The patterns also match expectations from physics: the effect is stronger when it’s windier, downwind from turbines and at night.
Figure 1 – map of changes in temperature across Texas in degrees Celsius. Crosses represent places where there are wind turbines, and the prevailing wind is from the south. The area average temperature has been subtracted from each point, so a blue area doesn’t necessarily mean it cooled, just that it warmed less quickly than the turbine areas.
At night the Earth’s surface cools quickly as it radiates efficiently to space and the cold dense air right next to it can’t rise. The air above is warmer and less dense so it floats on top and heat becomes ‘trapped’ (rather like the alcohol content in the layers of a B-52 cocktail) unless something helps mix it up – and this is where the chopping blades of wind turbines come in (stirring a B-52 does the same job).
During the day the Sun quickly warms the surface and the lower air heats up and rises. The extra chopping of the turbines doesn’t make a difference because the atmosphere is already well mixed (to try this at home, try pouring a B-52 the wrong way round and too quickly, then see how much difference stirring it makes!).
What does this mean for climate change?
Lead scientist Liming Zhou of the University at Albany released a Q&A explaining that the effect is local rather than global.
However, newspaper headline writers and political commentators disagreed with the physics, the measurements and the scientific experts, declaring that the results mean "Wind Farms Cause Global Warming". As Professor Zhou explained, this just ain’t the case.
Tiny and local effect, but physics suggests wind farms cause small global cooling!
It’s possible that the global effect of this would be a (very small) cooling. It works like this: the Earth’s surface can emit in the atmospheric ‘windows’ where heat leaks easily to space. This is why the Earth’s surface cools down faster at night than the atmosphere.
Typically the upper air stores this heat at night, but if it’s mixed up by wind turbines, it loses the heat to the ground, and the ground then leaks it efficiently into space. Like the radiator fins on a car: the fins warm up, but they help keep the whole engine cool.
When contacted, Professor Zhou responded that "Your explanation is interesting and physically correct to me but the warming-induced emission is very small." He also commented that there might be other effects related to the efficiency of Earth’s heat loss which could work in the opposite direction and that more research is needed. Regardless, the effect on global temperatures will be too tiny to measure.
Effect important locally, and a good reminder to stay skeptical
The nighttime warming measured by satellite is quite large compared with the global warming signal of ~0.2°C per decade. But it’s only in a very small region and doesn’t have much effect on global warming.
As Professor Zhou says, more research is needed. Is this effect widespread in other places? Does it explain some of the night warming measured by near-surface thermometers? The affected layer of the atmosphere sits too low for satellites to resolve, but perhaps nearby weather balloons could be used to check these results.
The media confusion shows how important it is to stay skeptical and to consider the whole picture. The turbines are just mixing up heat that was already there; if you only measure where that heat is moving to, you can easily get confused and draw the wrong conclusions.
Feed: Skeptical Science
Posted on: 10 May, 2012 11:19 AM
Author: Skeptical Science
A few months ago, the Canadian Committee for the Advancement of Scientific Skepticism (CASS) issued a report regarding a slew of climate misinformation being taught at Carleton University in Ottawa, Canada by Tom Harris. Somehow Harris, who is an engineer and communications specialist with zero climate research experience, and is the Executive Director of the Heartland Institute-funded International Climate Science Coalition, was put in the position to teach a class on climate and Earth science at Carleton University. More recently, Harris has taken to denouncing what he terms "climatism," which appears to be a disparaging synonym for "climate science." How an anti-climate science engineer was made lecturer of a climate science class at Carleton University is something of a mystery, and a poor decision by the university.
The CASS report followed the growing trend of climate misinformation debunkings using the Skeptical Science database. In this post we will examine just a few of the myths identified by CASS as regrettably being used by Harris to misinform Carleton University students.
A popular climate myth, coming in at #21 on the list of most used climate myths, is that global warming is being caused by galactic cosmic rays. The reason this myth is so popular is that it’s a relatively new hypothesis, and thus has only been investigated by climate researchers in recent years. However, the vast majority of studies on the subject have found little if any relationship between galactic cosmic rays and global temperatures.
That didn’t stop Harris from exhibiting one of the 5 characteristics of scientific denialism to claim otherwise in his lectures – cherrypicking. As John Cook wrote two years ago in describing this denialism characteristic,
This is precisely how Harris taught his class, picking out the few scientific studies which seemed to indicate a relationship between cosmic rays and global temperatures, and completely neglecting to mention the vast majority of studies finding little to no correlation between the two. More importantly, galactic cosmic ray flux on Earth has been flat on average over the past six decades, and therefore could not be responsible for a long-term global warming trend over that period. As Figure 1 shows, starting around 1925, the cosmic ray increase lagged behind the temperature increase, and temperatures continued to rise after 1970 while cosmic ray flux did not.
Additionally, there was a record high cosmic ray flux observed in 2009 (Figure 2). The hypothesis espoused by Harris claims that cosmic rays are supposed to cause cooling by increasing cloud cover, yet 2009 was one of the hottest years on record.
Figure 2: Record cosmic ray flux observed in 2009 by the Advanced Composition Explorer (NASA)
Harris simply chose not to teach these inconvenient facts to his students, instead presenting only the cherrypicked bits of information which seemed to support the case he wanted to make.
Downplaying the Human Influence on Climate
Along similar lines as his cosmic ray cherrypick, Harris decided to toss physics aside and claim that human effects on the climate are too small to measure.
This myth fails on two levels. First, as Richard Alley has discussed, CO2 has historically been the principal control knob for the Earth’s temperature, and the current human influence on the climate involves a rapid increase in the amount of CO2 and other greenhouse gases in the atmosphere. Second, many different studies have shown that the human influence on global temperatures has dominated the natural influence over the past century, especially over the past half-century (Figure 3). Once again, Harris neglects to mention the vast body of scientific research and literature which contradicts the assertions he makes in his lectures.
Figure 3: Net human and natural percent contributions to the observed global surface warming over the past 50-65 years according to Tett et al. 2000 (T00, dark blue), Meehl et al. 2004 (M04, red), Stone et al. 2007 (S07, green), Lean and Rind 2008 (LR08, purple), Huber and Knutti 2011 (HK11, light blue), and Gillett et al. 2012 (G12, orange).
Denying the CO2 Control Knob Again
CASS caught Harris downplaying the influence of CO2 on climate once again, repeating the myth that water vapor is the most important greenhouse gas.
Harris of course neglects to mention that the amount of water vapor in the atmosphere is dictated by the temperature of the atmosphere – water vapor is a feedback, not a forcing. Thus something else has to initiate global warming before water vapor can kick in and amplify it. This inconvenient fact makes it very difficult to argue that water vapor is "the most important" greenhouse gas.
Indeed, as Lacis et al. (2010) noted in a paper titled Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature (emphasis added),
Once again, Harris failed to accurately portray the body of scientific research and evidence, which contradicts the information he taught his students.
CO2 is a Pollutant
Harris also decided to delve into the realm of laws and policy, repeatedly claiming that CO2 is not a pollutant.
In fact, CASS documents Harris claiming that CO2 is not a pollutant at least four times, showing two additional videos in which it is again claimed that CO2 is not a pollutant, and also twice claiming that another greenhouse gas – nitrous oxide – is not a pollutant.
Why, in a class entitled "Climate Change: An Earth Sciences Perspective", is Harris so focused on the legal and policy issue of the definition of a pollutant? As it so happens, CO2 and other greenhouse gases are indeed pollutants, at least based on the U.S. Clean Air Act. The U.S. Environmental Protection Agency correctly determined that greenhouse gases meet the definition of air pollutants because they pose a threat to public health and welfare through climate change, and Canadian scientific organizations certainly concur with this conclusion. However, the definition of this term is primarily a legal question, and thus it is puzzling why Harris placed so much emphasis on the issue in a climate and Earth science class.
It’s also worth noting that while we often hear the argument from climate denialists that CO2 isn’t a pollutant because it’s tasteless and odorless, human sensory perceptions generally have little to do with whether a substance is deemed a pollutant. As noted above, pollutants are characterized by the threat they pose to public health and welfare, not by whether they’re ugly or stinky.
Denying Settled Science
In his lectures, Harris even goes as far as to deny some of the most settled aspects of climate science.
Of course we know for a fact that the planet is warming, and similarly there simply is no question that the CO2 increase is due to human emissions. The easiest way to demonstrate this is through a simple accounting approach (e.g. Cawley 2011): human emissions are roughly twice as large as the increase in atmospheric CO2 (most of the rest is absorbed by the oceans, which leads to ocean acidification). Harris disputes this with yet another cherrypick:
Once again Harris rejects the body of scientific evidence in favor of one paper which somebody emailed him about, and apparently expects his students to exhibit the same lack of skepticism.
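The accounting argument above can be made concrete with rough round numbers. The figures below are illustrative magnitudes in the spirit of the Cawley (2011) argument, not precise carbon-budget values:

```python
# Rough annual fluxes in gigatonnes of CO2 -- illustrative round numbers,
# not exact carbon-budget figures.
human_emissions = 34.0       # fossil fuels + land use, GtCO2/yr
atmospheric_increase = 17.0  # observed rise in atmospheric CO2, GtCO2/yr

# Conservation of mass: whatever didn't stay in the air must have been
# taken up by natural sinks (oceans and land biosphere).
natural_uptake = human_emissions - atmospheric_increase

# If nature absorbs more CO2 than it releases, it cannot be the source
# of the atmospheric increase.
print(natural_uptake > 0)                       # True: nature is a net sink
print(atmospheric_increase / human_emissions)   # airborne fraction ~0.5
```

The conclusion doesn’t depend on the exact numbers: as long as emissions exceed the atmospheric increase, natural sinks are net absorbers, so the increase must come from human emissions.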
Climate Denial 101
We have only touched on a few of the many climate myths identified in Harris’ lectures by CASS, who in fact documented over 100 such myths in their report. Tamino also takes Harris to task for his misinformation regarding warming in the United States in a recent Open Mind post.
Harris’ Carleton University course was clearly less a climate and Earth science class and more a climate denial class, teaching students how to ignore the vast body of scientific evidence and consider only the few bits of information which appear to support one’s pre-determined conclusion. Such denial is entirely anti-science, and this behavior should certainly not be taught in a university classroom. Hopefully Carleton University will learn from this error and disservice to their students.
Feed: Skeptical Science
Posted on: 11 May, 2012 11:50 PM
Author: Skeptical Science
This is a re-post from the Arctic Sea Ice blog.
I’m starting this blog post off with a conclusion that was reached a while back already: sea ice on the Atlantic and Russian side of the Arctic is looking vulnerable going into the melting season.
Right, with that out of the way we can now look at various aspects of the 2011/2012 freezing season, and compare them to previous years, to be precise the previous freezing season of 2010/2011, and the freezing seasons leading up to and following that other record year: 2006/2007 and 2007/2008. Simply put: I’ll be comparing 2007, 2008, 2011 and 2012 before their respective start of the melting season.
I’ll try not to use too many words, but I’ll be using a lot of images. A bigger version of these images can be found by clicking on them in the original blog post.
I’ll start with the AARI ice age maps. These images are for the end of April, and they look upside down because they’re drawn from the perspective of the Russians who produced them:
This year, at the end of April, the Arctic seems to hold less of the brown ‘old ice’ than last year and 2007 (older version), and a tad more than 2008, which had relatively little multi-year ice (MYI) after the 2007 melting season/massacre.
Another source, already mentioned in the A first clue blog post, was this set of images based on data compiled by NASA senior research scientist Josefino Comiso from NASA’s Nimbus-7 satellite and the U.S. Department of Defense’s Defense Meteorological Satellite Program (credit: NASA/Goddard Scientific Visualization Studio). The images show the amount of MYI at its maximum, I presume:
These images look similar to the ones from AARI, with 2012 showing less old ice/MYI than 2007 and 2011, and a bit more than 2008 (look at the graph in the bottom right image). However, at the time a flag was raised by Spanish blogger Diablobanquisa on his excellent blog, who maintained there was more MYI than AARI and Comiso indicated. He based this claim on ASCAT radar images, on which slightly brighter white represents older ice. The following image shows March 16th 2011 and March 15th 2012 side by side (unfortunately no radar images are available from 2007 and 2008), with 2012 merging into an image made by Diablobanquisa showing which part was missing from AARI and Comiso:
In my view he was proven right when James Maslanik and Chuck Fowler produced their bi-yearly graph/map for NSIDC, showing March ice age distribution, compared here with our other years of interest:
Here we see the zone delineated by Diablobanquisa on the ASCAT radar images, reaching much further towards the East Siberian Sea. Could it be that Comiso and AARI overlooked it because it stands out less clearly than the rest of the MYI on the radar images? Maybe there’s a difference in the way the respective teams define old ice/MYI. Either way, it still looks like 2012 has less old ice/MYI than 2011 and 2007, but more than 2008.
This isn’t surprising as there has been a lot of transport of ice towards Greenland and the Canadian Archipelago, and through Fram Strait.
Sea level pressure and ice drift
The movement of ice floes is largely determined by wind, and wind is largely determined by sea level pressure gradients. So let’s first have a look at SLP maps from NOAA’s Earth Science Research Laboratory (daily mean composites page). I have divided the freezing season up into 3 parts with a duration of two months each:
Obviously the mean of two months of SLP patterns will look similar from year to year, but there is still some interesting info here. Take a look for instance at the purple-blue region of low pressure around Greenland. Low pressure means winds blowing counter-clockwise, so the intensity of this low pressure area tells us something about ice transport through Fram Strait and towards Greenland and the Canadian Archipelago. Darker purple means more transport, and particularly the Dec-Jan row looks intense in this sense for this winter and the winter preceding the 2007 melting season.
Also noteworthy is how far the purple blot stretches towards Siberia. Looking at Dec-Jan for this year and last year we see that the low doesn’t stretch all the way over Novaya Zemlya, which partially explains why that region showed a retreat of ice earlier on in 2012 and 2011: westerly winds blowing between high and low pressure systems.
One last thing I noted is that comparing Dec-Jan from year to year, and also Feb-Mar from year to year, the pressure over Siberia seems to be getting higher every winter. Whether this means anything with regards to the WACC theory (Warm Arctic, Cold Continents), I wouldn’t know. Either way, it’s not relevant to this Winter Analysis.
The effect of the various SLP patterns can also be seen on these excellent IFREMER/CERSAT sea ice drift maps (hat-tip yet again to Diablobanquisa). I’ve made an animation covering the October-March period of 2011/2012:
In December and January there are a lot of long arrows, pointing towards Fram Strait, but also to Greenland, the Canadian Archipelago and the Beaufort Sea. This explains in large part why the ice pack looks vulnerable on the Atlantic/Siberian side of the Arctic, and should be stronger on the Pacific/North American side of the Arctic. But there are other factors as well.
Air and sea surface temperatures
For SAT and SST images we turn again towards that most excellent tool: the daily mean composites page from NOAA’s Earth Science Research Laboratory. First of all the surface air temperatures of the four freezing seasons, divided into two periods:
With regards to the first half of the freezing season we see that in 2006/2007 and 2007/2008 large parts of the Arctic are anomalously warm, and the freezing season of 2010/2011 had a very big anomaly over Baffin Bay and the Canadian Archipelago. In the second half of the last freezing season the contrast between the positive anomaly in the Barentsz/Kara region and the negative anomaly in the Bering Sea is very pronounced.
And now for the SSTs:
I’m not really sure how useful this is, because it would seem to me that satellites can’t measure SSTs when the sea is covered by ice (maybe I did something wrong while entering the parameters on the daily mean composites page), but nevertheless we see again a big contrast between the Atlantic and Pacific sides of the Arctic for the second half of the past freezing season, a contrast that translated into record anomalies in the Barentsz and Bering Seas.
One final comparison to look at are the thickness maps generated by the Naval Research Laboratory’s PIPS model and its follow-up, the Arctic Cap HYCOM/CICE/NCODA Nowcast/Forecast System (ACNFS):
There seems to be a lot more of the thickest 4-5 meter ice north of Greenland and the Canadian Archipelago when compared to 2008. But then again, PIPS wasn’t the best tool for ice thickness projections, so I’ll just have a look at the higher-resolution ACNFS images and compare 2012 to 2011.
There seems to be less of the thickest ice now than last year, but overall the pack is thicker, which makes sense after all those winds pushing the ice from Siberia towards Greenland, the Canadian Archipelago and the Beaufort Sea in December and January. Here too the Atlantic side of the Arctic looks vulnerable compared to last year, and there’s a lot more (thin) ice in the Bering Sea.
So that was all the evidence for the conclusion given at the start of this blog post. The big questions now are of course:
How this plays out mostly depends on the weather, albeit less so than in the past, when the ice was thicker. We will be keeping a close watch through the (bi-)weekly ASI updates, the monthly NSIDC and PIOMAS updates, comparisons of sea ice concentration maps, and an animation here and there.