Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277.
Subjects are reluctant to vaccinate a (hypothetical) child when the vaccination itself can cause death, even when this is much less likely than death from the disease prevented. This effect is even greater when there is a `risk group' for death (with its overall probability held constant), even though the test for membership in the risk group is unavailable. This effect cannot be explained in terms of a tendency to assume that the child is in the risk group. A risk group for death from the disease has no effect on reluctance to vaccinate. The reluctance is an example of omission bias (Spranca, Minsk, & Baron, 1990), an overgeneralization of a distinction between commissions and omissions to a case in which it is irrelevant. Likewise, it would ordinarily be prudent to find out whether a child is in a risk group before acting, but in this case it is impossible, so knowledge of the existence of the risk group is irrelevant. The risk-group effect is consistent with Frisch and Baron's (1988) interpretation of ambiguity.

The present study concerns the role of two biases in hypothetical decisions about vaccinations. One bias is the tendency to favor omissions over commissions, especially when either one might cause harm. We show that some people think that it is worse to vaccinate a child when the vaccination can cause harm than not to vaccinate, even though vaccination reduces the risk of harm overall. The other bias is the tendency to withhold action when missing information about probabilities is salient - such as whether the child is in a risk group susceptible to harm from the vaccine - even though the missing information cannot be obtained. We show that this bias is found even when the overall probability of each outcome is clearly constant across the conditions compared. We take both of these effects to be overgeneralizations, to situations in which they are not useful, of principles or heuristics that are generally useful.
Consider first what we shall call omission bias, the tendency to favor omissions (such as letting someone die) over otherwise equivalent commissions (such as killing someone actively). In most cases, we have good reasons for the distinction between omissions and commissions: omissions may result from ignorance, and commissions usually do not; commissions usually involve more malicious intentions than the corresponding omissions; and commissions usually involve more effort, itself a sign of stronger intentions. In addition, when people know that harmful omissions are socially acceptable, they look out for themselves; this self-help principle is, arguably, sometimes the most efficient way to prevent harm. In some cases, however, these relevant differences between omissions and commissions seem to be absent. For example, choices about euthanasia usually involve similar intentions whether the euthanasia is active (e.g., from a lethal drug) or passive (e.g., orders not to resuscitate). In such cases, when knowledge and intentions are held constant, omissions and commissions are morally equivalent (see Spranca, Minsk, & Baron, 1990, for discussion). Yet many people continue to treat them differently - not everyone, to be sure, but enough people to influence policy decisions. We suggest that these people are often overgeneralizing the distinction to cases in which it does not apply. The intuition that commissions are worse, valid as it may be in most cases, is no longer valid when knowledge and intention are known to be the same for both omission and commission or when a decision maker must choose between an omission and a commission, knowing the consequences of both (as in the studies reported here). If you have a choice of killing 5 or letting 10 people die, assuming (for present purposes) that all are drawn at random from the same population, you should kill the five. 
Each member of the population has twice the chance of death from your omission as from your commission, and each would therefore prefer you to act. If you choose not to act, you are hurting all by going against their preferences. Any principle that tries to justify the omission here would have to have a strong justification, for that principle will have a price in lives. When attempts are made to formulate a principle that can justify a bias toward omissions, the very distinction between omission and commission becomes unclear, and the distinctions that can be maintained have no clear moral significance (Bennett, 1966, 1981 - for example, one distinction is that there are more ways of not doing something than doing it, yet the number of ways of doing something has no normative significance). Arguments in favor of the distinction (e.g., Kagan, 1988; Kamm, 1986; Steinbock, 1980) fall back on intuitive judgments about cases. The correctness of intuition, however, is exactly what is at issue: to appeal to intuition is to beg the question. One might argue that intuition is relevant because it affects the regret that people feel about different outcomes. Active killing of one person might cause more regret than passive killing of two. Our answer to this is that the regret is felt by the decision maker, not those who die, so the use of this argument as a justification is a kind of selfishness. In addition, the difference in regret might not be inevitable; it might disappear with a change in the person's view of omissions and commissions, so a general change in this view might still be justified. Spranca et al. (1990) found that many subjects considered commissions that caused harm to be morally worse than omissions that caused harm, even with intention held constant. For example, active deception was considered morally worse than withholding the truth, even when the actor's intention to deceive was judged to be the same in the two cases. Spranca et al.
also asked subjects to evaluate two options from a decision maker's point of view: a treatment that would cure a disease but cause death with a .15 probability, or no treatment, with the disease itself causing death with a .20 probability. In 13% of the cases, subjects chose no treatment because (they said) they did not want to be responsible for causing deaths through their decision. (When the probabilities were reversed, subjects preferred the treatment in only 2% of the cases, and, in general, when subjects rated the desirability of both options, the relative desirability of the lower death rate was higher when that was associated with inaction than when it was associated with action.) This result was equally strong whether the decision was made from the point of view of a physician, a patient (deciding for himself), or a public-health official deciding for many patients. In the present study, we extend this result, using different examples and a different method. This omission bias is related to other phenomena. Kahneman and Miller (1986) point out that commissions lead to greater regret than omissions when a fortuitous bad outcome occurs. Demonstrations of the status-quo bias and related biases (Knetsch, Thaler, & Kahneman, 1988; Samuelson & Zeckhauser, 1988; Viscusi, Huber, & Magat, 1987) usually confound the status quo with an omission. For example, when the willingness to pay to remove a risk is less than the willingness to accept payment to bear the risk, changing the status quo requires an action (accepting or paying) in both cases. Ritov and Baron (1990) have unconfounded the status-quo effect from omission bias in both of these contexts by asking subjects whether they would act in order to prevent a change from the status quo or whether they would feel worse when a bad outcome resulted from failure to take such action (vs. acting to maintain the status quo). 
In both of these situations, we found that the omission-commission distinction is the critical one, not the preservation of the status quo. We do not mean to suggest that people are always biased toward omissions. Under some conditions, for example, when the decision maker is in a position of responsibility, people show the opposite bias (Ritov, Hodes, and Baron, 1989). Most subjects in the studies of Spranca et al., and in the studies reported here, show no bias. A substantial minority, however, can influence public policy (e.g., on active vs. passive euthanasia) or can affect overall rates of cooperation, as in a vaccination program. The bias toward omissions does not seem to have a single explanation (Spranca et al., 1990). Many subjects justify the distinction by arguing that omissions are not causes (despite the fact that they affect the probability of outcomes relative to the alternative option). Some of these subjects do not hold themselves responsible for outcomes that would have occurred if they were absent or ignorant, despite the fact that they were not absent and not ignorant. The use of omissions as a reference point also seems to play a small role, so that harms caused by omissions are seen as foregone gains, which are less aversive than pure losses caused by commission (as is consistent with norm theory, Kahneman & Miller, 1986, and the loss aversion assumption of prospect theory, Kahneman & Tversky, 1984). Consider next the effect of salient missing information. Frisch and Baron (1988) have argued that the effects of `ambiguity' on decision making can be described in terms of the salience of missing information. For example, in a situation first described by Ellsberg (1961), people told that they will win a prize if a red ball is drawn will prefer to draw from an urn with 50 red balls and 50 blue balls, rather than an urn with an unknown proportion of red and blue balls. 
Here, the proportion of red balls is a salient piece of missing information in the second case. Subjects do not think about other missing information that would be just as useful, for example, information about the proportion of red balls in the region of the first urn from which the ball will be drawn. The perception of missing information can incline people toward inaction because they feel a desire to seek the information before doing anything else. When the information is not available, however, this desire must be left unsatisfied. Frisch and Baron argue that the tendency to withhold action when information is missing can account for other effects of ambiguity, such as the effects of conflict among experts who estimate probabilities (Kunreuther and Hogarth, 1989). Brun and Teigen (1990) recently provided some evidence consistent with this view: subjects prefer guessing the outcome of an uncertain event before it has occurred to guessing after it occurred but before they know it. In the present experiments, we test the effects of missing information directly by holding constant the probability of the outcome, a vaccine-related injury. We simply call attention to one factor that can influence the probability of such an injury, membership in a `risk group' for the injury. `Ambiguity' is therefore manipulated even though the probability of the outcome in question remains exactly known. Previous studies of the effect of ambiguity on decision making have often failed to inform subjects explicitly that the probability of the outcome was unaffected by the ambiguity manipulation. Frisch (1988) has found that subjects in experiments such as Ellsberg's often do not know that the expected probability is constant across the conditions being compared.
We therefore test the role of ambiguity itself, unconfounded by subjects' beliefs about the effects of ambiguity on probability. Both the omission-commission distinction and missing information are involved in public policy. A classic case in which the bias toward omissions affected policy was the argument that not seeding a hurricane could be justified, even though seeding would lead to less damage, because the damage would be felt by different people, to whom the decision makers would be `responsible' (Howard, Matheson, & North, 1972). Our legal system, as well, honors the distinction even when it seems irrelevant: we hold manufacturers strictly liable for damages that result from their decisions to make certain products, but we do not hold them liable at all for decisions not to produce the products (e.g., new vaccines). Similarly, we seem to put more effort into reducing risk when the risk is not well known but small (e.g., products of genetic engineering) than when the risk is well known and large (e.g., radon). In all the experiments reported here, the basic task was the following: subjects were presented with a hypothetical situation in which they had to make one of two decisions, either whether to vaccinate their child against an epidemic disease or whether to support a law requiring that all children be vaccinated. Naturally, the vaccine itself carries some risk. The vaccination problems we present are modeled on the real case of DPT vaccine (diphtheria, pertussis, tetanus), which causes a serious, permanent neurological injury in 1 dose out of 310,000, far less than the damage formerly caused by pertussis (whooping cough) alone in infants.
In 1987, the only manufacturer of DPT vaccine in the United States (Lederle) set aside 70% of the price of the vaccine as a reserve against tort claims (Inglehart, 1987). Likewise, the Sabin vaccine occasionally causes polio, although it is on the whole safer than the Salk vaccine, which sometimes fails to prevent polio. The producer of the Sabin vaccine has been held liable for such cases (Inglehart, 1987), although no suits have been brought against the producer of the Salk vaccine. More generally, manufacturers are liable for harmful effects of their actions but not for harmful effects of their inactions (Huber, 1988). Ambiguity about possible side effects further reduces the willingness of companies and their insurers to move forward with new products (Kunreuther and Hogarth, 1989).
Suppose it were discovered that the children who are susceptible to death from flu are the same ones who are susceptible to death from the side effects of the vaccine. Thus, ... the `net decrease' would represent actual lives saved, children who would have died from flu if they had not been given the vaccine. There would be no children who would die from the vaccine who would not have died anyway.

Two other items were identical except that the cost was $10 instead of $2 per child, with corresponding increases in the cost per life. Additional items addressed other issues that are not relevant to this paper.
In the state you live in, there have been several epidemics of a certain kind of flu, which can be fatal to children under 3. The probability of each child getting the flu is 1 in 10, but only 1 in 100 children who get the flu will die from it. This means that 10 out of 10,000 children will die. A vaccine for this kind of flu has been developed and tested. The vaccine eliminates the probability of getting the flu. The vaccine, however, might cause side effects that are also sometimes fatal.

In the personal decision, subjects were instructed: `Imagine that you are married and have one child, a one-year-old. You wonder whether you should vaccinate your child. Your child will have a 10 in 10,000 chance of dying from the flu without the vaccination.' In the policy decision, they were instructed: `Suppose that the state government is thinking of passing a law to require vaccination for all children, unless a physician thinks it is dangerous to the child's health. (Such laws exist in Pennsylvania and other states.) The question now is whether you would support such a law. If the law is not passed, the vaccine will not be offered, and 10 out of 10,000 children will die from the flu.' For each case, subjects were asked to indicate the maximum overall death rate for vaccinated children at which they would be willing to vaccinate their child (or to support the law), using the following scale:
Would you vaccinate your child [support a law requiring vaccination] if the overall death rate for vaccinated children were (check those in which you would vaccinate [support the law]):

0 in 10,000
1 in 10,000
2 in 10,000
3 in 10,000
4 in 10,000
5 in 10,000
6 in 10,000
7 in 10,000
8 in 10,000
9 in 10,000
10 in 10,000

Eight different versions of the vaccination problem were used (with titles included):

1. Basic case. `The children who die from the side effects of the vaccination are not necessarily the same ones who would die from the flu.'

2. Risk group for flu. `Suppose it were discovered that 100 out of every 10,000 children were susceptible to death from the flu. Children who were not susceptible do not experience any adverse effects. The test to determine who is susceptible is not generally available and cannot be given. The overall probability of death from the flu is still 10 out of 10,000 (with all 10 included in the 100 who are susceptible).'

3. Risk group for vaccine. `Suppose it were discovered that 100 out of every 10,000 children were susceptible to death from the side effects of the vaccine (if any such deaths occur). Children who were not susceptible do not experience any adverse effects. The test to determine who is susceptible is not generally available and cannot be given. The overall probability of death from the flu is still 10 out of 10,000.'

4. Same risk group for flu and vaccine. `Suppose it were discovered that 100 out of every 10,000 children were susceptible to death from the flu, and the same children were susceptible to death from side effects (if any such deaths occur). Children who were not susceptible do not experience any adverse effects. The test to determine who is susceptible is not generally available and cannot be given. This, of course, does not mean that a child who died from side effects of the vaccination would have died anyway, since he may not have contracted the flu at all. The overall probability of death from the flu is still 10 out of 10,000 (with all 10 included in the 100 who are susceptible).'

5. Different risk groups for flu and vaccine. `Suppose it were discovered that 100 out of every 10,000 children were susceptible to death from the flu, and 100 out of every 10,000 children were susceptible to death from side effects, but they were not necessarily the same children. Children who were not susceptible to death from the flu never die from the flu, and children who are not susceptible to death from side effects never die from side effects. The tests to determine susceptibility are not generally available and cannot be given. The overall probability of death from the flu is still 10 out of 10,000 (with all 10 included in the 100 who are susceptible to death from the flu).'

6. Chemical risk group for vaccine. `Suppose it were discovered that death from the vaccine is caused by the interaction of the vaccine with a certain chemical normally produced by the body. The interaction can occur when the level of this chemical goes above a certain point. 100 out of every 10,000 children have a level of this chemical above this point. These children are at risk of death from the vaccine. No other children are at risk. The test for the chemical is not generally available and cannot be given. The overall probability of death from the flu is still 10 out of 10,000.'

7. Vaccine might cause flu. `Suppose that the vaccine consists of a certain dose of weakened bacteria, which would encourage the body to produce specific antibodies to combat the flu. The dosage given in the vaccine is the minimum that would activate the production mechanism of the antibodies. What is considered death from "side effects" in item A is actually death from the flu, caused by the vaccine. The overall probability of death from the flu is still 10 out of 10,000.'

8. Vaccine failure. `Suppose that an alternative vaccine were developed. In this case, the vaccine causes no deaths, but it could be less effective. Of children who are given this vaccine, some could die from the flu, because the vaccine will fail. The death rate from the flu is still 10 out of 10,000, as in case A. In this case, of course, there are no side effects, but vaccinated children can die from the flu if the vaccine fails.'
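The arithmetic that the scenario and its risk-group versions hold fixed can be checked directly. The sketch below is a minimal re-derivation of the stated figures (it assumes only the numbers given in the basic case and in the risk-group versions), confirming that the risk-group framing leaves the overall death rate unchanged:

```python
from fractions import Fraction

# Basic case: 1 in 10 children get the flu; 1 in 100 of those die.
p_flu = Fraction(1, 10)
p_die_given_flu = Fraction(1, 100)
overall_base = p_flu * p_die_given_flu
assert overall_base == Fraction(10, 10_000)  # the stated 10 in 10,000

# Risk-group versions: 100 of every 10,000 children are susceptible,
# all 10 deaths fall within that group, and non-susceptible children
# never die.
p_susceptible = Fraction(100, 10_000)
p_die_given_susceptible = Fraction(10, 100)
overall_risk_group = (p_susceptible * p_die_given_susceptible
                      + (1 - p_susceptible) * Fraction(0))
assert overall_risk_group == overall_base  # the framing changes, the probability does not
```

An expected-value decision maker should therefore vaccinate at any overall death rate for vaccinated children below 10 in 10,000, in every version of the problem.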
Table 1. Mean maximal death rate (out of 10,000) for vaccinated children (N=20)

Case                                 Personal decision   Support of law
1 Basic case                         5.458               5.750
2 Risk group for flu                 5.458               6.250
3 Risk group for vaccine             4.350               4.625
4 Same risk group                    4.000               3.917
5 Different risk groups              3.667               3.500
6 Chemical risk group for vaccine    3.208               3.125
7 Vaccine might cause flu            5.167               5.542
8 Vaccine failure                    7.125               6.833

The results for policy decisions were the same as those for personal decisions. We found no significant difference in overall level of vaccination for policy vs. personal decisions (t=.431, p=.67). The ordering of the cases in the personal decisions did not differ from their ordering in the policy decisions either: a Friedman analysis of variance of the differences in ranking of cases for the two types of decisions did not yield a significant result (Friedman test statistic=6.698, p=.461). We therefore averaged the results for the two types of decisions. To test the hypothesis that the presence of a risk group for vaccinated children affects the decision, we compared the cases with a risk group for the vaccine (cases 3 and 5) to the otherwise equivalent cases without a risk group for the vaccine (cases 1 and 2). Indeed, the presence of a risk group for the vaccine significantly reduced subjects' willingness to vaccinate: 12 subjects were less willing to vaccinate in case 3 than in case 1, and only 2 subjects went the other way (p < .02); 15 subjects were less willing to vaccinate in case 5 than in case 2, and none went the other way (p < .001). Our description of the risk group might have aroused suspicion: subjects might have thought that sufficient effort would yield the missing information. To test this possibility, we ran an additional experiment in which we made clear why the information was unavailable.
Twenty-two subjects were given the basic case plus a new vaccine risk-group version, identical to version 3 except that subjects were told: `A test to determine who is susceptible is available now, but it must be done when the child is born, and it was not available when your child was born.' Results were essentially unchanged. Eight subjects were less willing to vaccinate in the risk-group condition than in the basic case versus one who went the other way (p < .02), six gave the maximum rating (9 or 10) to both versions, and seven gave equal ratings, less than 9, to the two versions. Returning to the main experiment, the decisions in the cases with a risk group for the flu (cases 2 and 5) did not differ significantly from the decisions in the matched cases without it (cases 1 and 3). Four subjects were more willing to vaccinate in case 2 than in case 1, while 4 other subjects were less willing to vaccinate in this case (p=1); 2 subjects were more willing to vaccinate in case 5 than in case 3, and 9 subjects went the other way (p=.065, but here the risk group increases rather than decreases willingness to vaccinate). In sum, a risk group for the vaccine makes subjects reluctant to vaccinate, but a risk group for the flu has no significant effect. The difference between case 4 (same risk group for flu and vaccine) and case 5 (different risk groups) is not significant: 7 subjects were less inclined to vaccinate in case 5 than in case 4, and 2 subjects went the other way (p=.180). This is consistent with the general lack of effect of the risk group for flu. Making the missing information more salient caused subjects to be still less willing to vaccinate. Comparing case 3 (risk group for the vaccine) with case 6 (chemical risk group for the vaccine), we find that the additional information given in case 6 caused 10 subjects to give a lower answer in this case than in case 3. None of the subjects went the other way (p < .01).
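The subject counts in these comparisons appear to be evaluated with a two-tailed sign test (the text does not name the test; the binomial computation below is our reconstruction, and it reproduces the reported p-values for the main comparisons):

```python
from math import comb

def sign_test_p(n_down: int, n_up: int) -> float:
    """Two-tailed sign test: probability of a split at least this lopsided
    if each subject who changed were equally likely to change either way."""
    n = n_down + n_up
    k = max(n_down, n_up)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Case 3 vs. case 1: 12 subjects less willing to vaccinate, 2 more willing.
assert sign_test_p(12, 2) < .02      # about .013
# Case 5 vs. case 2: 15 less willing, none more willing.
assert sign_test_p(15, 0) < .001     # about .00006
# Case 5 vs. case 3: a 9-to-2 split gives the reported marginal value.
assert round(sign_test_p(9, 2), 3) == .065
```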
Information concerning the mechanism by which the vaccine causes death does not seem to matter (7 subjects gave a higher response in case 1 than in case 7, and 3 subjects went the other way, p=.344). Finally, subjects are willing to accept a higher death rate of vaccinated children when the death is a result of vaccine failure rather than vaccine side effects (10 subjects were more willing to vaccinate in case 8 than in case 1, and only one subject went the other way, p < .02). When death results from vaccine failure, the decision to vaccinate does not cause death: this condition is analogous to the `same children' version in Experiment 1. This finding is consistent with the finding of Spranca et al. (1990) that omission bias is determined by the belief that the commission itself causes a bad outcome, which would not have occurred if the decision maker were unaware of the possibility of making a decision.
Table 2. Number of subjects affected by presence of risk groups in Experiment 3 (N=30). Comparisons are relative to the basic case.

Risk group   Response mode                       Less likely     More likely
                                                 to vaccinate    to vaccinate
Vaccine      Death rate from flu (Case 2)        19              5
             Death rate from vaccine (Case 6)    21              5
Flu          Death rate from flu (Case 3)        12              13
             Death rate from vaccine (Case 5)    13              13

Note first that knowledge of the risk group had nearly identical effects in the two response modes. This was true for both the risk group for the vaccine and the risk group for the flu. To examine the effect of the risk group, we averaged the numerical responses across the two cases with the risk group and the two cases without it. For the vaccine risk group (Cases 2 and 6), we find that 18 subjects were less likely to vaccinate when they knew of a risk group for the vaccine (relative to the basic case, without mention of a risk group), and only 3 subjects were more likely to vaccinate (p < .001). The presence of a risk group for the flu did not have a systematic effect on subjects' decisions: 8 subjects were less inclined to vaccinate when they knew of a risk group for the flu (Cases 3 and 5), and 10 subjects were more inclined to vaccinate (p=.815). We have therefore replicated the result of Experiment 2 and eliminated an alternative explanation of this result. Subjects are less inclined to vaccinate their child when they know of a risk group for vaccinated children but are not similarly affected by the existence of a risk group for children who get the flu. This finding does not seem to result from having the information concerning death rates from the vaccine made more salient by the response mode.
Table 3. Mean minimal death rates from flu for giving the vaccine (N=40)

       Risk group size    Vaccine deaths     Mean minimal flu death rate
Case   (out of 100,000)   (out of 100,000)   (out of 100,000)
1      ---                10                 2,584
2      ---                100                4,316
3      ---                1,000              10,693
4      1,000              10                 8,139
5      1,000              100                7,739
6      10,000             100                7,287
7      10,000             1,000              14,435

For the risk-group versions we computed the log of the ratio between the response to each version and the response to the corresponding control version. (Positive logs indicate less willingness to vaccinate with the risk group.) The means of those logs, across all subjects, are reported in Table 4. Each of the cells in Table 4 is significantly larger than zero. Thus, we replicated here the `vaccine risk group' effect found in earlier experiments: the presence of a risk group for the vaccine results in an augmentation of the omission bias. Other things being equal, subjects are even less inclined to vaccinate when they know the risk is `unevenly' spread across the population.
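The within-subject measure just described can be sketched as follows. The responses below are invented illustrative values, not the study's data, and the base of the logarithm is not stated in the text (any base gives the same sign):

```python
from math import log

# Each pair is one (hypothetical) subject's minimal acceptable flu death
# rate in a risk-group case and in its matched control case.
pairs = [(8000, 2500), (4000, 2000), (6000, 1500)]  # (risk group, control)

# Per-subject log of the ratio, then the mean across subjects.
log_ratios = [log(risk / control) for risk, control in pairs]
mean_log_ratio = sum(log_ratios) / len(log_ratios)

# A positive mean indicates less willingness to vaccinate when the risk
# group is present: a higher flu death rate is required before acting.
assert mean_log_ratio > 0
```

Averaging logs of ratios, rather than the ratios themselves, keeps a doubling and a halving of the required death rate symmetric around zero.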
Table 4. Mean log of the ratio: response to risk-group case over response to corresponding control case (N=40)

       Risk group size    Vaccine deaths     Control   Log of
Case   (out of 100,000)   (out of 100,000)   case      ratio
4      1,000              10                 1         1.465
5      1,000              100                2         0.848
6      10,000             100                2         1.073
7      10,000             1,000              3         0.702

A multivariate test of the differences between the cells yielded a significant result (F(4,36)=4.61, p < .01). However, the only significant pairwise comparison is that between the two extreme cases, Case 4 and Case 7 (F(1,39)=11.34, p < .01): willingness to vaccinate was affected more by a small risk group with low risk than by a large risk group with high risk. It is likely that this difference is due to the difference in overall death rates rather than to an independent effect of risk-group size or of the death rate within the group. Indeed, a comparison of the large-risk-group, low-risk case with the small-risk-group, high-risk case did not show a significant difference (F(1,39)=.25, p=.61). These results suggest that as the overall risk gets higher, subjects are less sensitive to the presence of a risk group. The results do not support the `worst case' hypothesis, which holds that subjects who do not know whether they belong to the risk group tend to assume that they do.