In September 2018, as Hurricane Florence was heading towards landfall in North Carolina, a team of researchers announced that the storm would be 80 kilometers larger and drop 50% more rainfall due to “human-induced climate change.”
In a study published last week, the researchers shared that their initial numbers were wildly off base.
The new study explained: “The quantitative aspects of our forecasted attribution statements fall outside broad confidence intervals of our hindcasted statements and are quite different from the hindcasted best estimates.”
In plain English, that means: “We were really, really wrong.”
Scott Johnson, the editor of Climate Feedback, explains the significance of a mistake in the research team’s initial analysis and the consequence of using methods that generated huge error bars:
“Rather than something like 50 percent of the rainfall being the result of a warmer world, the models actually show about five percent (and that’s ±5%). And rather than a storm that is 80 kilometers wider because of climate change, it was about nine kilometers (±6km) wider.”
Johnson says the rush to get the initial analysis out likely contributed to the flawed numbers.
He questioned whether trading scientific accuracy for speed is worthwhile: “Whether there’s sufficient value in getting a less reliable answer faster is another question.”
Both studies of Hurricane Florence reflect a recently developed approach that seeks to quantify the influence of human-caused climate change on individual weather events.
Such “event attribution” studies typically use models to produce results under two different conditions: the real world and a counterfactual world in which climate change is not present.
The differences between the two worlds are used to make statements about the connection between climate change and the specific event.
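The core of this two-worlds comparison can be sketched in a few lines. The numbers, ensemble sizes, and distributions below are invented purely for illustration (real studies use large ensembles of climate model runs); the point is only to show how a “percent of rainfall attributable to climate change” statement, and its error bars, arise from comparing the two ensembles:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensembles of simulated storm rainfall (mm): one set of model
# runs under observed ("factual") conditions, one under a counterfactual
# climate with human influences removed. Values are illustrative only.
factual = rng.normal(loc=520.0, scale=40.0, size=50)
counterfactual = rng.normal(loc=495.0, scale=40.0, size=50)

# Best-estimate attributable change: percent difference of ensemble means.
pct_change = 100.0 * (factual.mean() - counterfactual.mean()) / counterfactual.mean()

# A simple bootstrap gives a rough confidence interval on that estimate,
# illustrating how wide the error bars on such statements can be.
boot = []
for _ in range(5000):
    f = rng.choice(factual, size=factual.size, replace=True)
    c = rng.choice(counterfactual, size=counterfactual.size, replace=True)
    boot.append(100.0 * (f.mean() - c.mean()) / c.mean())
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"attributable change: {pct_change:.1f}% (95% CI {lo:.1f}% to {hi:.1f}%)")
```

Everything downstream of this comparison — the headline percentage, the confidence interval, even the sign of the effect — depends on choices made in constructing the counterfactual world, which is why those choices matter so much.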
Such studies are increasingly easy to do and extremely media-friendly, and as a consequence they have become commonplace.
The major error in the initial attribution study of Hurricane Florence provides an opportunity to offer some guidance on how to interpret such studies.
Here I offer three rules to guide the production and interpretation of such event attribution studies. …snip…
The world has evolved differently than it would have otherwise due to the significant human influence, which notably includes the emission of carbon dioxide from the burning of fossil fuels, but through other influences as well, such as changes to the land surface.
We should, therefore, be cautious when climate change is associated with individual extreme events through weasel words that imply a connection in only a trivial, non-specific way.
Such weasel words include claims that a particular extreme event has been “linked,” “connected,” or “cited” (you can likely think of other examples) in connection with human-caused climate change.
Of course, the prevalence of weasel words reflects intense political overlay on the association of extreme weather events and climate change.
For instance, one of the scientists who performed the initial Hurricane Florence analysis, Michael Wehner of Lawrence Berkeley National Laboratory, openly expressed his desire to get the initial analysis in the news.
According to Buzzfeed, “Wehner admitted that he and his colleagues are sticking their necks out in making an estimate of the effect of climate change before the storm makes landfall. But he said that it’s important to provide answers when a hurricane is in the news, not months later when most people are thinking about other issues.”
Wehner also expressed a political motivation: “This kind of study really brings home the point that dangerous climate change is here now.”
The publicity campaign was enormously successful. Many news outlets ran with the sensational story.
For instance, the Guardian proclaimed: “Climate change means Hurricane Florence will dump 50% more rain.”
Newsweek announced: “How Global Warming Is Turbocharging Monster Storms Like Hurricane Florence.”
Wehner was contrasted with President Trump by the Center for American Progress and he told them: “The most important message from this (and previous) analyses is that ‘Dangerous climate change is here now!’ It is not a distant threat in the future but today’s reality” (emphasis in original).
Apart from media sensationalism and efforts to shape public and policymaker opinion on climate change, the attribution of extreme weather to causal factors (including the emission of greenhouse gases) is important for actual decision making related to disaster planning and climate adaptation.
In such contexts, science should be more than just a symbol. Here scientific quality actually matters.
To ensure rigor in its work, the Intergovernmental Panel on Climate Change (IPCC) employs a statistical framework for concluding that extreme weather phenomena have actually increased (or decreased) and for identifying the factors responsible for such changes.
Detection requires quantifying a change in the statistics of weather extremes over climate time scales of 30 years or longer.
Once a change is detected, scientists seek to attribute it to particular causes, including the accumulation of carbon dioxide in the atmosphere.
When it comes to many types of extreme events, the IPCC has for decades been unable to conclusively detect changes in their frequency or intensity.
For instance, the IPCC has reported increases in heatwaves and in heavy precipitation, but not tropical cyclones (including hurricanes), floods, tornadoes or drought.
The rise of individual “event attribution” studies coincides with frustration that the IPCC has not definitively concluded that many types of extreme weather have become more common.
Elizabeth Lloyd, a philosopher of biology, and Naomi Oreskes, a science historian, expressed this frustration in a 2018 paper in the journal Reviews of Geophysics:
“The traditional risk‐based approach to extreme events [detection and attribution under the IPCC] may lead to a challenge in communication, and to the impression that climate science is less epistemically secure than it actually is… Because no event can be attributed to climate change without an attribution study, this effectively means that scientists following community norms will nearly always convey the message that individual events are not related to climate change—or at least, that we cannot say if they are. In short, it conveys the impression that we just do not know, which feeds into both contrarian claims that climate science is in a state of high uncertainty, doubt, or incompleteness, and the general tendency of humans to discount threats that are not imminent.”
The rise of “event attribution” studies offers comfort and support to those focused on climate advocacy by establishing the linkage (weasel word) of specific extreme events and climate change.
It is not clear however that such studies offer much in the way of empirical rigor, particularly as compared to the conventional IPCC detection and attribution framework.
As one climate scientist observes of the event attribution methods, “it is important to appreciate that being quantitative is not necessarily the same thing as being rigorous.”
With a focus on scientific rigor, here are three rules for evaluating the coming avalanche of “event attribution” studies that will, without a doubt, connect (weasel word) nearly every extreme weather event with climate change.
Rule Number One: Any model used in an event attribution study to quantify a linkage (weasel word) between climate change and a specific extreme event should also produce accurate historical climate trends for the relevant phenomena.
The claim that rainfall from Hurricane Florence was boosted 50% by climate change should have raised immediate doubts because observations have not shown an increase in rainfall related to landfalling hurricanes.
Any event attribution study that cannot accurately replicate historical trends using the same model and methods is clearly fatally flawed. A comparison of observations and modeled climate history with respect to the extreme weather phenomena under study should always be included in event attribution results.
Rule Number Two: All event attribution studies should be preregistered, which means “committing to analytic steps without advance knowledge of the research outcomes.”
All methodological choices should be made transparent in advance of any event attribution study and submitted to an independent registry (there are many examples).
All analyses should be subsequently published, including null- and non-findings. Such preregistration can improve the rigor of research.
As one event attribution study concluded: “any event attribution statement can—and will—critically depend on the researcher’s decision regarding the framing of the attribution analysis, in particular with respect to the choice of model, counterfactual climate, and boundary conditions.”
Preregistration will make such choices transparent. Any event attribution study conducted in the absence of preregistration is of questionable value.
Rule Number Three: All event attribution studies should integrate their findings with the IPCC’s traditional approach to detection and attribution. Event attribution studies often result in what is called “attribution without detection.”
This means linking (weasel word) a specific extreme event with climate change in the absence of any detected increase in the relevant characteristics of such events, as with attributing Hurricane Florence’s rainfall (or some fraction of it) to climate change without detecting a corresponding long-term increase in rainfall in the climatological record of U.S. hurricanes.
Event attribution and the conventional IPCC approach can be integrated by calculating the emergence timescale of trends in the characteristics of the extreme weather event in question, using the same model and methods of the event attribution study.
For instance, any event attribution study of a single hurricane’s rainfall should always be accompanied by a quantitative estimate of when changes over time across all hurricanes should be detectable under the conventional IPCC framework. In this way, event attribution studies can be made fully consistent with the IPCC approach.
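One crude way to estimate such an emergence timescale, under the strong simplifying assumptions of a linear trend buried in white interannual noise and detection defined as the trend exceeding two standard errors, is to solve for the record length at which the signal-to-noise ratio of an ordinary least-squares trend crosses that threshold. The function and the input numbers below are hypothetical illustrations, not the method of any published study:

```python
import math

def emergence_years(trend_per_year: float, noise_sd: float, z: float = 2.0) -> int:
    """Smallest record length (in years) at which a linear trend would be
    statistically detectable against white year-to-year noise, i.e. when the
    OLS slope exceeds z standard errors. For unit-spaced observations,
    SE(slope) = noise_sd * sqrt(12 / (n**3 - n))."""
    n = 3  # need at least 3 points for a meaningful trend
    while True:
        se = noise_sd * math.sqrt(12.0 / (n**3 - n))
        if abs(trend_per_year) / se >= z:
            return n
        n += 1

# Illustrative numbers only: a 0.5-percentage-point-per-year increase in
# hurricane rainfall against 20% year-to-year variability.
print(emergence_years(0.5, 20.0))  # → 43 (years)
```

A result like this makes the “attribution without detection” tension concrete: the same model that attributes part of one storm’s rainfall to climate change may also imply that the corresponding long-term trend would not be detectable for decades.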
Individual event attribution studies are here to stay. They fill a strong demand from advocacy and politics. Meeting such demand should be fully compatible with basic standards of scientific quality.
For event attribution studies to be conducted with the highest degree of rigor they should (1) demonstrate consistency with historical observations, (2) be the product of preregistered studies, and (3) be fully integrated with the conventional methodologies of the IPCC.
Until event attribution studies meet these basic rules, they will serve the purposes of advocacy better than those of science.
Roger Pielke Jr. has been a professor at the University of Colorado since 2001. Previously, he was a staff scientist in the Environmental and Societal Impacts Group of the National Center for Atmospheric Research. He has degrees in mathematics, public policy, and political science, and is the author of numerous books.
Read more at Forbes Blogs