Did Houston Officials Learn Too Much from Katrina? The Salience Fallacy and What to Do About It

By MICHAEL C. DORF |
|
Tuesday, Sep. 27, 2005 |
This is one in a special series of columns on legal issues arising in the aftermath of Hurricane Katrina. - Ed.
For persons of good will, the sight of millions of Texans stuck in traffic jams last week as they desperately tried to evacuate before the onrushing Hurricane Rita must have evoked sympathy, apprehension and that sinking feeling of "here we go again." Had government officials learned nothing from the devastation so recently wrought by Katrina?
And yet, the better question may have been more nearly the opposite: Had government officials and the public learned too much from Katrina? Having so recently seen in New Orleans the danger that arises from waiting too long to evacuate residents, Houston officials overlooked the danger of panicked evacuation.
When Houston Mayor Bill White announced that the "time for waiting was over," he predictably caused his own flood--of cars, rather than water--leaving drivers stranded without adequate supplies of food, water or gasoline, and within range of the fast-approaching storm.
Of course, Mayor White was not wrong to urge Houstonians to flee Rita's wrath, even though the hurricane ultimately weakened and shifted course. Rather, his error--and the error of other local, state and federal officials--was to focus attention on a risk that had so recently, and so spectacularly, come to fruition, while neglecting other predictable risks.
The harms that might come to pass from a failure to evacuate in the face of a hurricane were highly salient in the minds of policymakers and the public; in contrast, the risk that people might perish in a traffic jam or, as tragically came to pass, a bus explosion, was not.
But let's put aside the question of whether Houston officials acted rashly or prudently in this particular situation--and turn to the general principle this example illustrates: excessive focus on risks made salient by recent, spectacular events sometimes causes ordinary citizens, lawmakers and others to act in ways that are, when viewed in the sober light of reason, irrational.
In the balance of this column, I will provide some examples of this widespread phenomenon, which I shall call "the salience fallacy." I'll then pose a challenge to the conventional academic wisdom regarding what to do about the salience fallacy.
The Maginot Line: Fortifying Against Some Risks, While Ignoring Others
Perhaps the best-known historical example of the salience fallacy is the Maginot Line, an elaborate network of fortifications, obstacles and troop transport facilities that France erected along its border with Germany in the wake of World War I. The "war to end all wars" had been characterized by trench warfare, and after the Treaty of Versailles, French military planners resolved that they would so fortify their front as to make France impregnable to a German invasion.
However, when Germany attacked France in 1940, German forces did not hazard a direct assault on the Maginot Line. Instead, Germany sent its troops through the Low Countries, where the Line had been extended only belatedly and was less heavily fortified than along the German border, and through the Ardennes Forest, which French military planners had mistakenly considered impassable by modern military equipment.
France had miscalculated terribly--with the lessons of World War I perhaps too much in mind--and within weeks, had to sign a one-sided armistice with Germany.
To be sure, some historical revisionists argue that the Maginot Line was not the blunder it is routinely made out to be. The Line worked exactly as intended, they say, deterring a frontal German assault.
Yet this defense of the Line entirely misses the point of its critics: No one claimed that the Maginot Line was defective because it was inadequately fortified. The objection was, and remains, that it guarded against only some of the military risks France faced, while neglecting others. And guarding against some risks, while leaving others unaddressed, meant that the measures that were taken proved futile.
Accordingly, since the Second World War, the Maginot Line has appropriately become synonymous with the fallacy of "fighting the last war"--that is, preparing for eventualities made salient by recent events, while neglecting other dangers.
Thus, for example, the Strategic Defense Initiative (popularly dubbed "Star Wars"), launched by the Reagan Administration and revived under the current Bush Administration, has been derided as a "Maginot Line in space." By devoting billions of dollars to developing a capacity to intercept nuclear missiles, it is claimed, the U.S. government simply invites would-be attackers to use low-flying sea-launched or hand-delivered weapons, which could not be detected or destroyed by even a perfect missile defense.
Driving Versus Flying: Choosing a Riskier, But Seemingly Safer, Activity
The salience fallacy was also at work in the immediate wake of September 11, 2001, when millions of Americans cancelled air travel plans and opted to drive whenever possible. The skittishness about flying was certainly understandable; yet by shifting to automobile travel, Americans predictably increased, rather than decreased, the risks they faced. Per mile traveled, cars are roughly ten times more dangerous than airplanes, so that all that extra driving resulted in more, rather than fewer, fatalities than would otherwise have occurred.
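To make the arithmetic concrete, consider a back-of-the-envelope calculation of what happens when travel shifts from air to road. The sketch below is purely illustrative; the per-mile fatality rates and the volume of diverted travel are hypothetical placeholders chosen only to reflect the rough ten-to-one ratio described above, not actual statistics.

```python
# Purely illustrative: hypothetical per-mile fatality rates, chosen only to
# reflect the rough ten-to-one ratio between driving and flying noted above.
CAR_FATALITIES_PER_MILE = 1.5e-8   # assumed rate for automobile travel
AIR_FATALITIES_PER_MILE = 1.5e-9   # assumed rate for commercial air travel

def expected_fatalities(passenger_miles: float, rate_per_mile: float) -> float:
    """Expected number of fatalities over a given volume of travel."""
    return passenger_miles * rate_per_mile

# Suppose (hypothetically) one billion passenger-miles shift from air to road.
shifted_miles = 1_000_000_000

by_air = expected_fatalities(shifted_miles, AIR_FATALITIES_PER_MILE)
by_car = expected_fatalities(shifted_miles, CAR_FATALITIES_PER_MILE)

print(f"Expected fatalities if flown:  {by_air:.1f}")
print(f"Expected fatalities if driven: {by_car:.1f}")
print(f"Net increase from switching:   {by_car - by_air:.1f}")
```

On these assumed numbers, the same travel produces roughly ten times as many expected deaths on the road as in the air, which is the whole point of the comparison.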
Of course, in the immediate aftermath of 9/11, travelers may have reasonably believed that the odds had changed. Never before had four commercial airplanes been hijacked and crashed on the same day, and prudent persons may well have thought that air travel had become more than ten times as dangerous as it had previously been. In the face of the unknown (and, in some sense, unknowable) risk of hijacking, it was not necessarily irrational to accept the known risk associated with driving.
But if we can excuse or understand the travel choices made by individual Americans post-9/11, the same cannot be said for a choice by the Federal Aviation Administration (FAA), which, not long ago, proposed mandating that all children flying on airplanes be seated in child safety seats, as in cars.
The available data showed that such a regulation could prevent as many as one to two children's deaths per year. As a percentage of children flying annually, that sounds like (and, of course, is) a small number, but every avoidable death is a tragedy, and so the proposed FAA rule may have seemed sensible.
Sensible, that is, if one only focuses on the problem as defined narrowly by the cases that seemed salient to the FAA--children injured or killed during airplane crashes because they were inadequately restrained.
But as an October 2003 analysis in the Archives of Pediatrics & Adolescent Medicine concluded, the proposed FAA rule would likely have increased, rather than decreased, the net number of juvenile fatalities.
Why? Because the FAA rule would have marked a change from current policy, which permits passengers to hold children under the age of two on their laps. Mandating that all children and infants sit in child safety seats would have required parents who previously held children on their laps to purchase additional seats for those children. And the cost of those additional seats, in turn, would have been sufficient to motivate some number of passengers to forgo air travel entirely, and opt for the less safe option of driving.
The Archives article concluded that if only five to ten percent of putative air passengers switched to automobile travel, rather than purchase another ticket for an infant, then the proposed rule's net effect would be to increase juvenile fatalities.
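The structure of that trade-off can be sketched in a few lines. In the calculation below, only the "one to two lives saved per year" figure comes from the article as described above; every other number is a hypothetical assumption, included solely to show how even a modest diversion rate can flip the sign of the net effect.

```python
# A minimal sketch of the trade-off described above. Apart from the one-to-two
# lives-saved figure quoted in the text, every number here is a hypothetical
# assumption, chosen only to illustrate the structure of the calculation.
LIVES_SAVED_IN_AIR_PER_YEAR = 1.5            # midpoint of the cited 1-2 figure
AFFECTED_FAMILY_TRIPS_PER_YEAR = 4_000_000   # assumed trips with a lap child
AVG_SUBSTITUTE_DRIVE_MILES = 600             # assumed length of a replacement drive
CAR_FATALITIES_PER_MILE = 1.5e-8             # assumed automobile fatality rate

def net_lives_saved(diversion_rate: float) -> float:
    """Net lives saved per year if `diversion_rate` of affected trips
    switch from flying to driving because of the seat mandate."""
    diverted_trips = AFFECTED_FAMILY_TRIPS_PER_YEAR * diversion_rate
    added_road_deaths = (diverted_trips * AVG_SUBSTITUTE_DRIVE_MILES
                         * CAR_FATALITIES_PER_MILE)
    return LIVES_SAVED_IN_AIR_PER_YEAR - added_road_deaths

for rate in (0.00, 0.05, 0.10):
    print(f"Diversion rate {rate:.0%}: net lives saved per year = {net_lives_saved(rate):+.2f}")
```

Under these assumed figures, the rule saves lives on net only if essentially no one switches to driving; at five to ten percent diversion, the net effect turns negative.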
Accordingly, and at least for the time being, the FAA has not mandated the use of child safety seats in aircraft. Instead, it has strongly urged parents who do choose to bring their children on airplanes to use child safety seats voluntarily, while still permitting infants to travel on laps.
How to Balance Risks: Choose Solutions Based on Careful Cost-Benefit Analysis
I do not mean to suggest that proposed solutions invariably cause greater harm than the problems they aim to solve.
For example, one fallacious argument against mandatory seat-belt use in automobiles is that in certain categories of accidents, a passenger restrained by a seat belt will be unable to exit a vehicle before drowning or perishing in a fire. That is certainly true, but statistically, seat belt use saves many more lives than it costs. And because no one can know in advance whether he will be involved in no accident, an accident in which a seat belt will prevent or mitigate injury, or the rare sort of accident in which a seat belt will make matters worse, it makes sense to buckle up.
More generally, what a prudent individual or regulator wants to know is whether some course of action will increase or decrease the aggregate risk. Seat-belt use in cars reduces aggregate risk. Mandatory child safety restraints in airplanes, in contrast, may well increase the aggregate risk.
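A crude expected-value comparison shows what "aggregate risk" means here. The scenario probabilities and fatality likelihoods below are hypothetical, chosen only to illustrate how a measure can make one rare scenario worse and still lower the overall chance of death.

```python
# Hypothetical numbers only: each scenario maps to (probability of the
# scenario, chance of death if belted, chance of death if unbelted).
scenarios = {
    "no accident":        (0.989, 0.000, 0.000),
    "ordinary crash":     (0.010, 0.002, 0.010),
    "fire or submersion": (0.001, 0.005, 0.003),  # belt assumed worse here
}

def aggregate_risk(belted: bool) -> float:
    """Overall probability of death across all scenarios."""
    total = 0.0
    for p_scenario, p_death_belted, p_death_unbelted in scenarios.values():
        total += p_scenario * (p_death_belted if belted else p_death_unbelted)
    return total

print(f"Aggregate risk, belted:   {aggregate_risk(True):.6f}")
print(f"Aggregate risk, unbelted: {aggregate_risk(False):.6f}")
```

Even though the belt is assumed to be worse in the rare fire-or-submersion scenario, the aggregate risk is lower when belted, which is the sense in which buckling up "makes sense" in the paragraph above.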
The Paternalistic Response to Cognitive Errors
But what about the salience fallacy, or as some writers in the field call it, the "salience heuristic," the "vividness heuristic," or the "availability heuristic"?
The fallacy is this: Most people act in ways that they think will minimize risks that are cognitively available--in other words, those that readily come to mind. Risks that are vivid, or that have lately received a great deal of publicity, or both, readily come to mind; less salient risks are ignored. And the result may be that when people choose a course of action, aggregate risk actually increases (or, at least, is not decreased as much as it might be by a different approach).
In the literature of "behavioral economics," the standard remedy for judgment errors like the salience fallacy is paternalism or technocratic rule. If ordinary people's judgments are distorted by cognitive biases, then experts armed with statistics can correct for those biases.
That approach certainly has something to be said for it, in many contexts. Putting aside libertarian objections, there are undoubtedly circumstances in which people, left to their own devices, make poor choices that the law may legitimately override.
Consider an example that involves a different cognitive bias: the fact that most people are overconfident in their own abilities. Suppose the law permitted people to drive under the influence of alcohol, and only punished those drivers who actually caused injury to life, limb or property as a result of their drunk driving. Under such a regime, accidents would undoubtedly increase.
Why? Because some people who now give their keys to the designated driver, would succumb to overconfidence and get behind the wheel. To correct for the cognitive bias of overconfidence, the law legitimately proscribes all drunk driving.
Participatory Democracy as a Response to the Salience Fallacy
But paternalism, or rule by experts, is no panacea, especially as experts themselves can easily succumb to the salience fallacy. The military planners who focused French defenses on the Maginot Line, to the neglect of the Ardennes Forest and the borders with Belgium and the Netherlands, were experts. So too were the FAA officials who proposed mandating child safety restraints on airplanes. And whether or not he called on them, the Mayor of Houston had access to emergency management experts.
The best antidote to the salience fallacy may well be the opposite of technocratic rule--namely, participatory democracy. For although anyone can succumb to the salience fallacy, usually there will be at least some people who do not--some people who, for whatever reason, are able to call to mind risks beyond those that have most recently been highly publicized.
The key to effective public decision making is for policy makers to find ways to elicit information about risks (and by the same token, benefits) that they may have overlooked, and then to pay proper attention to the additional information. To do so, they must provide those who will raise less obviously salient risks with an avenue to speak--and perhaps also an incentive to do so--and must listen to them when they do speak.
For example, the publication of the FAA proposal to mandate child safety seats on airplanes called forth private knowledge; three doctors wrote the paper about the potential harm from diversion to automobile travel. And to its credit, the FAA is, for now at least, paying attention to the point they made.
More generally, administrative law in the United States is supposed to make use of dispersed knowledge in just this way. In most contexts, before a new regulation goes into effect, members of the public receive notice that it is planned, and are invited to comment on it. (Administrative lawyers refer to the process as "notice and comment" rulemaking.)
To be sure, much of the commentary that is actually elicited consists of self-serving objections by companies and others who do not wish to bear the cost of the proposed regulation. But even information produced for selfish motives can bear on a decision, and thus combat the salience fallacy. And public interest groups, too, may provide comments, as may altruistic private individuals.
"Those who cannot remember the past," George Santayana famously wrote, "are condemned to repeat it." Perhaps, but human history also teaches that those whose minds focus only on the recent past, are condemned to neglect the risks of an uncertain future.
Before the next hurricane strikes, let's hope that public officials have a plan to minimize the aggregate risk, not just those risks most recently in the news.