By: Rebecca Cerio
One of the more scientifically bizarre stories lately has been the conviction of Italian scientists and engineers in the L’Aquila earthquake trial. To summarize: during a swarm of small earthquakes, a government-sponsored panel told the people of the L’Aquila region that the tremors were nothing to worry about and were believed to disperse energy, reducing the chance of a larger earthquake. (Historically, only a tiny fraction of such swarms have been followed by large earthquakes.) Six days later, a large earthquake hit the region, killing over 300 people. The scientists were tried for the deaths of about 30 of those people who, reassured by the scientists’ words, stayed in their homes when the quake struck instead of rushing outside to safer, more open ground.
Scientists, predictably, have shaken their heads in dismay at the Italian court’s verdict (manslaughter convictions and six-year sentences). They have, understandably, pointed out that there was no way the scientists could have predicted an earthquake and that they should not be punished for giving the best advice they could with the data they had. The prosecution has countered that the defendants were not charged with incorrectly predicting an earthquake but with incorrectly communicating the RISK of an earthquake. In essence, the scientists were charged with and found guilty of giving people a false sense of security that convinced the victims to change their behavior in an ultimately lethal way.
Whether the scientists gave people bad advice, or good advice that simply turned out to be wrong, is still unclear, and is perhaps something only Mother Nature could testify about. But the case gets right at the crux of a very pointed issue: how should scientists convey the risk and uncertainty in their data to the public, particularly in life-and-death scenarios? How much responsibility do scientists have to convey that risk accurately? And what legal blame must scientists accept when people interpret and use that data to justify acting in ways that lead to injury or death?
Several excellent pieces have since been written about various aspects of this issue. Erik Klemetti argued in Wired that ultimately, the public needs to understand that no prediction is 100% accurate and act accordingly.
If you do live in a region of high geologic hazard, then you should be prepared for such eventualities, and if you can’t make the preparations, then you should be making sure your government does. […] However, when it comes down to it, a lot of the responsibility falls on the public to be better educated about the hazards they face. Some of that needs to come from the officials and scientists in charge – better outreach, clearer statements, more research – but some of this needs to come from the grassroots where children learn science and hazards.
Graeme Archer in the Daily Telegraph has also offered food for thought on whether science is good enough to base public policy on at all. If one cannot give information good enough to bet people’s lives on…should it be used to make public policy about such things at all? In his piece “The L’Aquila earthquake trial reminds us that scientific evidence shouldn’t determine public policy,” Archer points out that scientists need to understand the gravity of how their information may be interpreted, even if what they are saying is perfectly correct.
Experts must remember that the estimation of risk is the first step in a process, which then proceeds to use those estimates to evaluate the cost of various political actions. Political decisions have consequences which are impossible to determine accurately; something scientists – now understandably clear that they don’t wish to pay the price for making the wrong call – should remember. L’Aquila was a tragedy for which the physicists weren’t to blame. But who’s to blame for thinking that scientific evidence should determine policy?
Personally, I find the dichotomy of these two opinions interesting, because it deals with such issues as knowledge, power, and the ethical use of both for the public good…and to the public’s liking. Scientists, through their experience and expertise, can understand and interpret data in ways the public generally cannot. In that, they hold knowledge which the public asks for, receives, and then decides, sometimes on an individual basis, how to use.
How would the public have reacted if the scientists above had said, “We can’t say whether there will or will not be an earthquake. There’s a chance that an earthquake might happen. There’s a larger chance that it won’t. We can’t predict these things”? Would the public have accepted that (quite valid) uncertainty? Would they have responded with “just tell us what the chances are” or “you can’t do better than that”? Would public support for earthquake science have dimmed, given such a wishy-washy answer? Would some of those 30 people have left their houses the night of the earthquake and not died? Would the scientists be on trial? We will never know.
And, looking forward, how will this precedent affect other scientists’ communications with the public? Italy is only one country, but I would argue that the issues raised here are universal. Everywhere, scientists are called upon to be experts by political bodies that then use this expert testimony to form public policy. Sometimes the experts will, through no fault of their own, predict risk incorrectly. If scientists have been successfully prosecuted previously in your country for giving advice that underestimates risk, how will subsequent panels respond? In the case of the notoriously inaccurate science of earthquake prediction, Klemetti suggests, this environment may “create a situation where hazard geoscientists are caught between a literal rock and a hard place – don’t emphasize enough and something happens, you go to prison; overemphasize and cause panic, you lose the public’s trust.”
Many would argue that there is a happy medium between these two, but finding that balance, especially in relation to events that are unlikely but still possible, like the L’Aquila earthquake, is notoriously difficult for both scientists and the public.
Knowledge, unfortunately, is not the same as foresight. The loss of life in L’Aquila is testament to that.
(crossposted to Science Policy for All)