Spranca, M., Minsk, E., & Baron, J. (1991). Omission and
commission in judgment and choice. Journal of Experimental
Social Psychology, 27, 76-105.
Omission and commission in judgment and choice
Mark Spranca, Elisa Minsk, and Jonathan Baron1
University of Pennsylvania
Abstract
Subjects read scenarios concerning pairs of options. One option
was an omission, the other, a commission. Intentions, motives,
and consequences were held constant. Subjects either judged the
morality of actors by their choices or rated the goodness of
decision options. Subjects often rated harmful omissions as less
immoral, or less bad as decisions, than harmful commissions.
Such ratings were associated with judgments that omissions do not
cause outcomes. The effect is not simply an exaggerated
response to commissions: a reverse effect for good
outcomes was not found, and a few subjects were even willing to
accept greater harm in order to avoid action. The `omission
bias' revealed in these experiments can be described as an
overgeneralization of a useful heuristic to cases in which it is
not justified. Additional experiments indicated that
subjects' judgments about the immorality of omissions and
commissions are dependent on several factors that ordinarily
distinguish omissions and commissions: physical movement in
commissions, the presence of salient alternative causes in
omissions, and the fact that the consequences of omissions would
occur if the actor were absent or ignorant of the effects of not
acting.
1 Omission and commission in judgment and choice
Is withholding the truth as bad as lying? Is failing to help the
poor as bad as stealing? Is letting someone die as bad as
killing? In most cases, the answer would seem to be no. We have
good reasons for the distinction between omissions and
commissions: omissions may result from ignorance, and commissions
usually do not; commissions usually involve more malicious
motives and intentions than the corresponding omissions; and
commissions usually involve more effort, itself a sign of
stronger intentions. In addition, when people know that harmful
omissions are socially acceptable, they look out for themselves;
this self-help principle is, arguably, sometimes the most
efficient way to prevent harm. For all these reasons, the law
usually treats omissions and commissions differently (Feinberg,
1984): Very few states and nations even have `bad Samaritan'
laws by which a person may be prosecuted for failing to help
someone else in need.
In some cases, however, these relevant differences between
omissions and commissions seem to be absent. For example,
choices about euthanasia usually involve similar intentions
whether the euthanasia is active (e.g., from a lethal drug) or
passive (e.g., orders not to resuscitate). In such cases,
when intentions are held constant, omissions and commissions are,
we shall argue, morally equivalent. Yet many people continue to
treat them differently - not everyone, to be sure, but enough
people to influence policy decisions. We suggest that these
people are often overgeneralizing the distinction to cases in
which it does not apply.
Such overgeneralization results from two sources: First, by
failing to think reflectively about their own heuristics, these
people fail to recognize the conditions under which heuristics do
not serve their purposes (Baron, 1985, 1988a). The
overgeneralization of heuristics is therefore analogous to
inappropriate transfer of mathematical rules, as when a student
learns the base-times-height rule for the area of a parallelogram
and then applies it unreflectively to a trapezoid (Wertheimer,
1945/1959). Second, the omission-commission distinction could be
motivated in that it allows people to limit their moral
responsibility to others (Singer, 1979). If we hold ourselves
responsible only for commissions that cause harm, we need not
concern ourselves with our failure to help when we can. The
intuition that harmful commissions are worse than otherwise
equivalent omissions is therefore a self-serving one.
The distinction between omissions and commissions can also be
maintained by our perception of situations. For example, we tend
to consider omissions as a point of comparison or reference point
(Kahneman & Miller, 1986). Suppose Paul considers selling his
stock, does not sell, and finds he would have done better to
sell. George, however, sells his stock and finds he would have
done better not to sell. George will feel worse, because he will
be more inclined to compare his outcome to the outcome of doing
nothing, while Paul will tend to regard his outcome as simply the
thing to be expected (Kahneman and Tversky, 1982b).
1.1 Normative views
Jonathan Bennett (1966, 1981, 1983) defends the view that the
omission-commission distinction is, in itself, morally
irrelevant. He argues that the difference between what people
call acts and omissions is difficult to define, and those
definitions that can be maintained have no apparent moral
relevance. For example, Bennett (1981) suggests that omissions
involve many more possible movements than corresponding
commissions. If John intends to make Ivan eat something that
will make him sick, John has only a few ways of suggesting that
Ivan eat the food in question, but John has many ways of not
preventing Ivan from eating the food on his own. Bennett argues
that the number of ways of bringing about an effect is morally
irrelevant, and he applies the same sort of argument to other
possible differences between what we call omissions and
commissions, such as whether movement is involved or whether we
describe something positively or negatively (staying home versus
not going out, lying vs. not telling the truth).
Utilitarians such as Singer (1979) and Hare (1981) also maintain
that the omission-commission distinction is, in itself, morally
irrelevant. Indeed, this conclusion follows from any definition
of morality in terms of intended consequences of decisions.
Finally, in cases of non-moral or self-interested decision
making, the distinction between omission and commission is, in
itself, clearly irrelevant. All extant theories of rational
choice assume that the best option is the one with the best
expected consequences, the one that best achieves the
decision-maker's goals (Baron, 1985, 1988a).
Other philosophers argue for the relevance of the distinction
(see Kagan, 1988; Kamm, 1986; Steinbock, 1980), but their
arguments are ultimately based on the philosopher's (and
sometimes the reader's) intuition that certain commissions are
worse than certain omissions. However, we have at least two
reasons to question this intuition. First, some of these cases
differ in other features than the omission-commission distinction
itself (Tooley, 1974). For example, our reluctance to shoot one
political prisoner (a commission) in order to stop a cruel
dictator from shooting ten others (the consequence of our
omission) can be justified by the precedent-setting effects of
giving in to such a brutal ultimatum. Second, the intuitions in
other cases could be misleading. Philosophers are not immune to
psychological biases (Hare, 1981). The situation here is similar
to other studies of decision making, in which accepted normative
models are challenged on the basis of intuitions about cases (see
Gärdenfors & Sahlin, 1988, for examples).
1.2 Psychological mechanisms
In the experiments reported here, we presented subjects with
scenarios in which a judgment or decision must be made. We
compared judgments of omissions and commissions in cases in which
intentions, outcomes, and knowledge are held constant, in which
the difference in effort between omissions and commissions is
trivial, and in which issues of autonomy or precedent setting do
not distinguish the cases. Because we assume, on the basis of
the arguments just cited, that there is no moral difference
between omissions and commissions in such cases, we say that
subjects show an omission bias when they judge harmful
commissions as worse than the corresponding omissions. Note that
we define `bias' as a way of thinking that prevents us from
achieving our goals, including altruistic goals. By this
definition, a bias can be present even in someone who does not
acknowledge it after reflection.
One previous study, Sugarman (1986), has compared subjects'
judgments of omissions and commissions. Subjects judged
commissions (active euthanasia) as worse than omissions (passive
euthanasia). Many of Sugarman's questions, however, could be
interpreted as concerning questions of law or codes of medical
practice, where the distinction might be legitimate even when
intentions are held constant, unlike the case of moral judgments.
Our study thus goes beyond Sugarman's primarily by designing our
situations so that the omission-commission distinction is truly
irrelevant to the questions we ask. In addition, we are
concerned with the omission-commission distinction in general,
not in any particular case. We also examine in more detail the
cause of omission bias.
Exaggeration effect. Other findings lead us to expect
subjects to judge omissions and commissions differently.
Kahneman and Tversky (1982a) reported that subjects felt more
regret when bad outcomes result from action than when they result
from inaction. The examples were personal decisions about stock
purchases - buying the less profitable of two stocks versus
failing to buy the more profitable stock. Miller and McFarland
(1986) found similar results for judgments of victim
compensation. Landman (1987) extended these findings: Joy in
response to positive outcomes was also stronger when the outcomes
were the result of action rather than inaction. Her examples
also concerned decisions with outcomes primarily for the decision
maker. We can therefore expect that omission bias will be found
in personal decisions as well as decisions that affect others.
To explain these results, Kahneman and Miller (1986) suggested
that `the affective response to an event is enhanced if its
causes are abnormal' (p. 145). Commissions are considered
abnormal because `it is usually easier to imagine abstaining from
actions that one has carried out than carrying out actions that
were not in fact performed' (p. 145). This explanation, which
Kahneman and Miller call `emotional amplification,' may be even
more relevant outside the laboratory than in the experiments of
Landman (1987), or those reported here, since in all of them the
experimenter tells the subject specifically what would have
resulted from commissions that did not occur.
Note that by this account, omission bias is limited to bad
outcomes, and it is a subset of a more general effect that works
in the opposite direction for good outcomes.
Loss aversion. Another reason to expect omission bias is
that gains are weighed less heavily than losses of the same
magnitude (Kahneman and Tversky, 1984; Knetsch and Sinden, 1984).
If subjects take the consequence of omission as a reference point
(Baron, 1986), an omission that leads to the worse of two
outcomes would be seen as a foregone gain, but a commission that
leads to the worse outcome would be seen as a loss. The loss
would be weighed more heavily than the foregone gain, so the
commission would be considered worse than the omission. When the
omission and commission both lead to the better outcome,
however, the omission would be seen as a foregone loss, so it
would be considered better than a mere gain. The omission
would always be considered better, with the outcome held
constant. This prediction is inconsistent with Landman's (1987)
results, but it might apply elsewhere.
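The arithmetic of this account can be made concrete with a minimal
sketch, assuming a piecewise-linear prospect-theory value function.
The loss-aversion coefficient of 2.25 is a conventional estimate from
later work, not a figure from this paper, and the outcome scale is
arbitrary.

    # Sketch of the loss-aversion account (illustrative values only).
    LOSS_AVERSION = 2.25  # losses weighed about twice as heavily as gains

    def value(change):
        # Subjective value of a change relative to the reference point.
        return change if change >= 0 else LOSS_AVERSION * change

    better, worse = 10, 0  # the two outcomes, on an arbitrary scale

    # The reference point is the consequence of the omission.
    # (a) The omission leads to the worse outcome: its cost is a
    #     foregone gain.
    foregone_gain = value(better - worse)   # +10, weighted as a gain
    # (b) The commission leads to the worse outcome: relative to the
    #     omission's (better) consequence, it registers as a loss.
    loss = value(worse - better)            # -22.5, weighted as a loss

    # Since |loss| exceeds the foregone gain, the harmful commission is
    # judged worse than the equally harmful omission.
    print(foregone_gain, loss)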
Commissions as causes. Another reason for expecting
omission bias is that, when omissions cause harm, there is
usually some salient other cause. Following the principle of
causal discounting (Kelley, 1973), the perceived causal role of
the actor is diminished by the salience of the other cause. This
can be an illusion. What matters for decision making is the
contingency of various outcomes on the actor's options.
Sometimes, the addition of other salient causes corresponds to a
reduction of this contingency, but when contingency is held
constant, the actor is inescapably caught in the causal chain
(Bennett, 1981). In this case, other causes amount to background
conditions (such as the presence of air).
More generally, subjects might consider commissions to be more
heavily involved in causing the outcome because commissions are
more abnormal and abnormal events tend to be seen as causes
(Hilton & Slugoski, 1986). Although such an interpretation of
`cause' is in accord with common usage, it is not the sense that
is relevant in morality or decision theory. What matters here is
whether some alternative option would have yielded a different
outcome (Bennett, 1981). (In any case, abnormality itself is
controlled in most of our scenarios.)
Shaver (1985) has proposed that judgments of causality are
prerequisite to judgments of responsibility or moral blame. If
this is right, and if blame is different for omissions and
commissions, we can ask whether this difference is exerted before
or after the judgment of causality. If it is before, then we
would expect a difference in causality judgments too and we would
expect them to correlate with judgments of blame. Brewer's (1977)
model also suggests that responsibility or blame judgments would
be closely connected with judgments of causality. Her model of
causality is based on comparison of two probabilities, the
probability of the outcome given the action (or inaction?)
performed, and the probability of the outcome `in the absence of
the perpetrator's intervention.' We have argued that the latter
probability should be based on the alternative option facing the
perpetrator, but even Brewer's wording suggests that people might
instead think about what would happen if they were absent or if
they were ignorant of the possibility of reducing harm. If they
did, they would not hold themselves responsible for bad effects
that would have occurred anyway if they were absent or ignorant.
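The contrast just drawn can be stated schematically; the function and
argument names below are ours, not Brewer's.

    def perceived_cause(p_outcome_given_choice, p_outcome_otherwise):
        # Brewer-style contingency: attributed causality grows with the
        # difference between the two probabilities.
        return p_outcome_given_choice - p_outcome_otherwise

    # Normative reading: `otherwise' is the alternative option actually
    # open to the actor (warning, testifying), under which the harm is
    # prevented.
    print(perceived_cause(1.0, 0.0))  # 1.0: outcome fully contingent

    # Reading suggested by Brewer's wording: `otherwise' is the actor's
    # absence or ignorance, under which the harm occurs anyway;
    # perceived causality, and hence blame, vanishes for the omission.
    print(perceived_cause(1.0, 1.0))  # 0.0: no responsibility felt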
Overgeneralization. Finally, as we noted, omission bias can
represent an overgeneralization of rules that normally apply
because of differences in intention. Failure to act does not
seem as immoral or irrational when it results from mere
thoughtlessness, so even when it occurs after thinking, it is
excused. Subjects might not have appreciated the statement -
included in all the Kahneman-Tversky and Landman omissions - that
the decision maker `considered' acting but then decided against
it. The various mechanisms we have suggested are compatible and
may reinforce each other.
1.3 Purposes
The main purpose of the experiments we report is to demonstrate
the existence of omission bias in judgments and evaluations of
decision options. Although we found that some procedures produce
the bias more than others, we do not concern ourselves with the
differences. (Even if the bias is rare, it could be
consequential, if those who display it influence important social
choices or if they subvert their own goals.) Our second purpose
is to isolate those aspects of omissions and commissions that
subjects find relevant.
We are interested in subjects' reflective judgment, not their
immediate reactions. To make sure that subjects did their best
thinking (following Tetlock & Kim, 1987), we paid them by the
hour and we asked for written justifications (in blue books, of
the sort usually used for examinations). In addition, we used a
within-subject design in which the cases that we compare are
presented next to each other. If subjects are biased here, they
are likely to be biased elsewhere. Also, if we had used a
between-subjects design, subjects in the commission condition
would very likely infer stronger intention than those in the
omission condition. By using a within-subject design, we make
sure that subjects understand the equivalence of intention.
2 Experiment 1
The first experiment presented scenarios in which a decision
maker (or `actor') intends to bring about some harm to someone
else. In various endings of each scenario, the actor attempts to
bring about the harm either through omission or commission.
Subjects were asked to judge the morality of the actor in the
situation.2 We
included questions about the role of causality because pilot
subjects had justified omission bias on the grounds that
omissions do not cause the outcomes.
We also varied the outcome of the decision (e.g., whether Ivan
gets sick, as John intends, or not). Baron and Hershey (1988)
found that subjects evaluated identical decisions as well made or
poorly made depending on whether their consequences were good or
bad, even when the subjects knew everything that the decision
maker knew at the time the decision was made. (Baron & Hershey
argued that subjects overextended a heuristic that was usually a
good one.) In that study, cases that differed in outcome were
placed far apart in the questionnaire, so that subjects would not
make explicit comparisons. In this experiment, we placed the
cases next to each other to determine whether anyone
knowingly evaluates decisions according to their outcome.
Such an outcome bias could be related to omission bias. Subjects
would make both errors if they judged morality on the basis of
behavior and outcome alone. They would do this if they tended to
ignore intention and evidence about the probability of harm (both
of which are held constant), that is, to judge an actor as
particularly immoral just when an action succeeded in producing a
bad outcome.
2.1 Method
Subjects and procedure. Subjects were 57 undergraduate and
graduate students at the University of Pennsylvania, recruited by
signs in prominent places. They were paid $4.50 (or, in later
sessions, $5) per hour for completing questionnaires at their own
pace. All questionnaires asked for both numerical ratings and
justifications. Answers were written in examination-type blue
books.
Materials. We describe here the results of two scenarios.
Two others are not reported here because many subjects gave
ratings of `0' (not immoral) and the other subjects' ratings
merely agreed with the results reported here.
Case 1: The tennis tournament
John West plays tennis at the Wyncote Tennis Club two or three
times a week. John is the best player belonging to the club, but
he is not good enough to play professionally.
The club holds an annual tennis tournament, which occasionally
attracts a big-name tennis player in need of a warm-up before
Wimbledon. The first prize is $20,000, and the prize for the
runner-up (who plays in the final but loses it) is $10,000. This
year, Ivan Lendl agreed to play in the tournament. John and Ivan
quickly advanced until they were to meet in the final. John
would of course love to win, but he realizes that he is at a
large disadvantage.
The tradition at Wyncote is for both finalists to meet for dinner
at the club before the final the next morning. While getting
dressed for dinner John remembers reading that Ivan is allergic
to Cayenne pepper. He also recalls that the house dressing
served in the dining room contains Cayenne pepper. John thinks,
`If Ivan eats the house dressing, he will probably get a stomach
ache that will keep him up much of the night. Then I'll have a
chance to win.'
At the dinner, Ivan orders first. After he orders his main
course, the waiter asks him whether he prefers the house dressing
or Italian dressing. Ivan does not think that the house dressing
might contain Cayenne pepper.
Consider the following endings to this story:
- Before Ivan makes a choice, John recommends that Ivan try the
house dressing. Ivan orders it and gets a stomach ache, as
predicted. If John had said nothing, Ivan would have ordered
Italian dressing, but John does not know this for sure. John
wins the match.
- Ivan orders the house dressing and gets a stomach ache, as
predicted. John says nothing. John realizes that if he had
warned Ivan about the Cayenne, even after Ivan announced his
choice, Ivan would have ordered Italian dressing. John wins the
match.
- Ivan orders Italian dressing. John then recommends that Ivan try
the house dressing. Ivan changes his mind, orders the house
dressing, and gets a stomach ache, as predicted. John wins the
match.
In three other endings, the ruse fails and Ivan wins anyway.
Subjects were asked the following questions:
A. Rate John's morality in this situation for each of the six
endings on a scale from 0 (not immoral at all) to -100 (as
immoral as it is possible to be in this situation). Then explain
what reasons you used in rating John's morality in this
situation. If you gave different ratings to any of the six
cases, explain your reasons for doing so. If you gave the
same ratings to any of the cases, explain your reasons for
doing so.
B. For the first three endings, suppose that Ivan found out (from
witnesses) that John knew about the dressing and Ivan's allergy.
Suppose further that Ivan sues John for the lost $10,000 and for
an additional $10,000 for pain and suffering. You are on the
jury and are convinced by the evidence that the case and the
ending are exactly as described above. For each of the first
three endings, what award would you recommend? Give reasons for
giving different awards for different cases, if you did. Give
reasons for giving the same award in different cases, if you did.
C. For the first three endings, are there differences in the
extent to which John caused the outcome? Explain.
D. Does your answer to question C explain your answers to
question A for these endings? Explain.
Case 2: The witness
Peter, a resident of Ohio, is driving through a small town in
South Carolina. At a 4-way stop, he gets into a small accident
with a town resident named Lyle. The accident came about like
this:
Traveling north, Lyle approached the 4-way stop and failed either
to slow down or stop. Meanwhile, Peter had just finished
stopping and began to move east through the intersection. Peter
noticed that a car, Lyle's, was crossing the intersection after
having failed to stop. Peter slammed on his brakes, but too late
to prevent his car from hitting Lyle's car as it passed in front
of him. The accident was clearly Lyle's fault, because the
accident was caused by his failure to stop. However, because the
accident's cause is not clear from its effects, the police may
believe that Peter failed to stop and that this caused Peter to run
into Lyle's car broadside. [A diagram of the accident was
included.]
Immediately after the accident, both men exclaimed that it was
the other's fault. When the police came, Peter told them that
the accident was caused by Lyle's failure to stop. Lyle told the
police that the accident was caused by Peter's failure to stop.
Unknown to either man, there was an eyewitness to the accident,
Ellen. Like Lyle, Ellen is a town resident. She thought to
herself, `I know the accident is Lyle's fault, but I know Lyle
and do not wish him to be punished. The only way that Lyle will
be faulted by the police is if I testify that the accident is
indeed Lyle's fault.'
In the four endings to be considered, Ellen either told the
police that the accident was caused by Peter's failure to stop
(#1 and #2) or told the police nothing (#3 and #4); in endings #1
and #3, Peter is charged with failure to stop and fined, and in
endings #2 and #4, Lyle is charged and fined. Subjects were
asked the same questions as before, except that they were not
asked about legality or lawsuits.
Twenty-two subjects did these two scenarios in the order given,
35 in reverse order. There were no order effects, and the
results were combined.
2.2 Results
Our main interest is in whether subjects distinguished omissions
and commissions. Our analyses were based on ordinal comparisons
only, because too many differences were zero for us to use
parametric statistics. (When a justification for rating
omissions and commissions differently mentioned differences in
intention, we counted the subject as rating omissions and
commissions the same, to be conservative.)
In case 1, 37 out of 57 subjects (65%) rated each omission as
less bad than either corresponding commission, showing an
omission bias, and only one subject (for idiosyncratic reasons)
rated an omission as worse than a commission. In case 2, 39
(68%) subjects rated each omission as less bad than its
corresponding commission, and none did the reverse.3 The responses to the two cases were
correlated (φ = .33, χ² = 4.78, p < .025,
one-tailed). Of the subjects who made a distinction, the
difference between mean ratings for omissions and commissions
ranged from 1 point (five subjects) to 70 points on the 100 point
scale (mean, 27.6; s.d., 21.2; median 25).
A few subjects showed an outcome bias. In case 1, 8 subjects
rated behavior as worse (on the average) when the intended harm
occurred than when it did not, and none did the reverse. In case
2, 6 subjects rated behavior as worse when the intended harm
occurred than when it did not, and one subject did the reverse.
Over both cases, 9 subjects showed an outcome bias on at least
one case and 1 subject showed a reverse outcome bias, a
significant difference by a binomial test (p < .02, one-tailed).
All subjects who showed this outcome bias in a given case also
rated commissions worse than omissions for that case. The
association between the outcome bias and the omission bias is
significant by Fisher's exact test for case 1 (p=.027). Although
the corresponding test is not significant for case 2 (p=.095), we
take this association to be real because of its consistency.
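The two tests can be reproduced from the counts given above. The
sketch below uses scipy (our choice of tool, not the original one),
and the 2 x 2 table is our reconstruction, so its p value need not
match the reported .027 exactly, given the conservative coding
described earlier.

    from scipy.stats import binomtest, fisher_exact

    # Sign test: 9 subjects showed an outcome bias, 1 showed the
    # reverse.
    print(binomtest(9, n=10, p=0.5, alternative='greater').pvalue)
    # ~.011, i.e., p < .02 one-tailed

    # Case 1: all 8 outcome-bias subjects also showed omission bias;
    # 37 of 57 subjects showed omission bias overall.
    table = [[8, 0],    # outcome bias:    omission bias / no bias
             [29, 20]]  # no outcome bias: omission bias / no bias
    _, p = fisher_exact(table, alternative='greater')
    print(p)  # close to the reported p = .027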
For case 1, 34 subjects answered question B by saying that a
greater penalty should be paid for commissions than for
omissions, 14 subjects said that the compensation should be
equal, no subjects said that the penalty should be greater for
omissions, and 7 subjects said no penalty should be paid. (The
remaining 3 subjects did not answer the question clearly.) Of
the 34 subjects who distinguished omissions and commissions, 24
did so in their moral judgments as well, and of the 14 who did
not, only 5 did so in their moral judgments
(χ² = 3.69, p < .05, one-tailed).
Of the 26 subjects who showed omission bias in case 1 and who
answered the causality questions (C and D) understandably, 22
said that John played a greater causal role in the commission
endings than in the omission endings and that this was a good
reason for their different moral judgments, 2 said that John
played a greater causal role in commission but this did not account
for their different moral judgments, and 2 said that John's
causal role did not differ. Of the 15 subjects who did not show
omission bias and who answered the causality questions
understandably, 9 said that John's causal role did not differ and
that this was why their moral judgments did not differ, 1 said
that John's causal role did not differ but this was not why this
subject's moral judgments did not differ, and 5 said that John's
causal role did differ. In sum, 31 out of 41 subjects felt that
their perception of John's causal role affected their moral
judgments, and, in fact, differences in judgments of causal role
were strongly associated with differences in moral judgments
(χ² = 13.26, p < .001).
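The reported chi-square can be recovered from the counts in this
paragraph (our reconstruction); scipy's default continuity correction
for 2 x 2 tables reproduces the reported value.

    from scipy.stats import chi2_contingency

    # Rows: John's causal role judged different / the same across
    # endings. Columns: subject showed omission bias / did not.
    table = [[24, 5],
             [2, 10]]
    chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected
    print(round(chi2, 2), p < .001)  # 13.26 True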
Those subjects who showed omission bias frequently cited
differences in causality: `John [in case 1, ending 2] did not
recommend the dressing.' `In [ending] 1 [case 2], she
affects [the outcome] greatly by lying. In [ending] 3, she
affects it by failing to give any testimony, which results in the
police finding the wrong party guilty. This is a lesser effect
than in 1 because the police could have still found Lyle guilty.'
Such arguments are reminiscent of the causal discounting scheme
proposed by Kelley (1973): when there is another cause (Ivan's
choice) the cause at issue (John's choice of whether to warn
Ivan) is discounted.
Other justifications of the distinction (each of which occurred
several times) were made in terms of rules that made a
distinction between omissions and commissions, at least
implicitly. Some of these rules concerned responsibility: `It
isn't John's responsibility to warn Ivan about the Cayenne. It's
Lendl's responsibility to ask if it is in the salad dressing.'
Other rules concern descriptions of the act in question: `She is
again causing the innocent man to be charged but this time
through neglect, which I don't believe to be as immoral as
directly lying, but perhaps this is because today, keeping your
mouth shut seems to be the norm.' Several subjects referred to
the wrongness of `lying,' which did not include deliberately
misleading someone through one's silence. Other rules concerned
rights: `She should have said something, but I don't think that
she should be required to do so. It should be her right to mind
her own business.' Finally, some subjects referred to the
omission-commission distinction directly: `Choosing to do nothing
isn't really immoral.' `John doesn't plant the seed [in the
omission], he just lets it grow.' These rules might express the
heuristics that subjects use.
Those subjects who made no distinction between omissions and
commissions most frequently referred to the equivalence of
intentions or of causality, or gave no justification at all. In other
cases, they referred to duty or obligation (e.g., `It's Ellen's
duty to report the truth'), rules that did not distinguish
omissions and commissions (e.g., that it is wrong to mislead
people), or intended consequences (`By saying nothing it is just
as bad as lying because she is hindering justice').
In summary, Experiment 1 showed four things: (1) moral judgments
of others exhibited omission bias; (2) an outcome bias was found
in a few subjects, all of whom showed the omission bias as well;
(3) legal judgments distinguished omissions and commissions; and
(4) the perceived causal role of the actor at least partially
accounted for the bias.
3 Experiment 2
Recall that one legitimate reason for making a distinction
between omissions and commissions is that commissions are
associated with greater intention to do harm. The justifications
provided in Experiment 1 suggest that omission bias was not based
on different beliefs about intentions. In the present
experiment, we asked explicitly about possible differences in
intention. We also modified the cases used in Experiment 1 so
that the equivalence of intention in cases of omission and
commission was more apparent. Specifically, we made it clear
that the actor would have brought about the result through
commission if it were necessary to do so.
3.1 Method
The `John' case from Experiment 1 was used with the following
endings and questions:
- Before Ivan makes a choice, John recommends that Ivan try the
house dressing.
- John is about to recommend the house dressing, but before John
can say anything, Ivan orders the house dressing himself. John
says nothing.
Question 1. Is John's behavior equally immoral in both endings?
If so, why? If not, why not?
Question 2. Is there a difference between the two endings in
John's intention, that is, what he was trying to bring about?
The `Ellen' case was used with the following endings and with the
same questions (about difference in morality and intention):
- Ellen told the police that the accident was caused by Peter's
failure to stop.
- Ellen is about to tell the police that the accident was caused by
Peter's failure to stop, but before she could say anything, she
heard one policeman say this to the other policeman, who agreed,
so Ellen said nothing.
Two additional cases were included. One will be presented as
Experiment 3. The other, the last case, will not be described
because its results merely agreed with the results of the first
two cases. Thirty-six subjects filled out the questionnaire.
3.2 Results
Of the 36 subjects, 33 rated intention as equivalent in the John
case and 33 in the Ellen case. (Most of those few who did not
rate intention equivalent gave incomprehensible answers rather
than negative ones.) Of those who rated John's intention as
equivalent, 10 rated his commission as more immoral than the
omission, and none rated the omission as more immoral (p < .002,
binomial test, two-tailed). Likewise, of those who rated Ellen's
intention as equivalent, 5 rated her commission as more immoral
and none rated her omission as more immoral (.05 < p < .10).
Altogether, 12 out of the 35 subjects who rated intention as
equivalent in at least one case showed omission bias, and none
showed a reverse bias (p < .001). Several differences between
these cases and the comparable cases in Experiment 1 could
account for the lower percentage of subjects showing the effect
here: we told subjects here that the actor was on the verge of
action; we did not tell them the outcome; and we used a different
method of eliciting the response.
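The binomial arithmetic behind these p values is simple enough to
check directly (scipy assumed):

    from scipy.stats import binomtest

    # John case: 10 of 10 discriminating subjects rated the commission
    # worse.
    print(binomtest(10, 10, 0.5).pvalue)  # .0020, i.e., p < .002
    # Ellen case: 5 of 5 in the same direction.
    print(binomtest(5, 5, 0.5).pvalue)    # .0625, i.e., .05 < p < .10
    # Combined: 12 of 12 subjects with a bias showed it against
    # commissions.
    print(binomtest(12, 12, 0.5).pvalue)  # .0005, i.e., p < .001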
Justifications were similar to those in Experiment 1, for
example: `It's more immoral in the first, because John actively
... causes Ivan to eat the peppered salad. His silence in the
second case at least relieves him of the responsibility of
actively causing Ivan's allergic reaction. As long as Ivan
has done this to himself, John has a certain leeway of
moral[ity]. ... [John is] just damn lucky that Ivan orders
before he can recommend.' One subject who did not show omission
bias made a distinction worth noting: `Immorality is found in
intentions and `blood on the hands' is found in deeds.
Therefore, ... John is equally immoral - it's just that John has
`less blood on his hands' in ending #2. That is to say, only
John will ever know the immorality of his deeds in ending #2.'
Another subject who did not show the bias quoted a Catholic
prayer, `Forgive us for what we have done and for what we have
failed to do.' Some heuristics oppose the bias.
In summary, Experiment 2 showed that the results supporting
omission bias are not due to perceived differences in intention.
4 Experiment 3
The questionnaire used in Experiment 2 included a new case based
on the `branch line' example from Bennett (1981), in which a
person can switch a vehicle onto a new track or leave it where it
is. If the train goes down one track, two men are put at risk;
if it goes down the other track, three men are at risk. There
are four different endings: switch from three men to two men,
switch from two to three, do not switch from three to two, do not
switch from two to three. We expected that the action putting
three men at risk would be considered worse than the inaction that
leaves three men at risk.
If omission bias is a subset of a more general effect in which
the effects of commissions are exaggerated, actions that put two
men at risk (instead of three) would be considered better than
inactions that leave two men at risk. Such exaggeration could
result either from emotional amplification (as suggested by
Landman, 1987) or from the subjects assuming that the outcome was
more strongly intended when it resulted from a commission than
from an omission.
If, on the other hand, deprecation of harmful commissions were
not accompanied by commendation of beneficial commissions, the
deprecation could not be explained in terms of perceived
differences in intention between omissions and commissions. Such
perceived differences in intention would induce both deprecation
of harmful commissions and commendation of beneficial ones. So a
finding of deprecation unaccompanied by commendation would
indicate in yet another way the existence of omission bias. It
can be explained by overgeneralization of heuristics favoring
omission, or by loss aversion.
4.1 Method
The subjects were, of course, the same as those of Experiment 2.
The new case was as follows:
Sam works as a mechanic in the train yard. While he is alone in
the switching station checking the machinery, he notices that a
train has started to roll slowly down a sloping part of track 1.
The brakes seem to have been released accidentally. Sam knows
how to work the switches, and he could change the train to track
2 (but not to any other track). At the bottom of both tracks (1
and 2), some men are working with jackhammers. (They will
probably not hear the train approach.)
Rank Sam's behavior in each of the following four endings from
best to worst. Explain your rankings. You may use ties.
The first case read `Sam sees that there are three men at the
bottom of track 1 and two men at the bottom of track 2. He does
not switch the train to track 2.' The remaining cases were
constructed by changing `does not switch' to `switches' in the
second and fourth, and by switching the groups of men in the
third and fourth. We refer to these cases as 3o, 2c,
2o, and 3c, respectively: the number refers to the
number of men at risk, and the letter refers to omission vs.
commission.
4.2 Results
Four subjects gave incomprehensible or unusable answers (e.g.,
saying that the track with 3 men was better because one of them
was more likely to notice, in which case the ranking of the
outcomes was opposite to what we expected). As expected, 21 of
the remaining 32 subjects ranked ending 3o higher than
ending 3c, and none ranked ending 3c higher than ending
3o (p < .001, binomial test, two-tailed).
To ask whether deprecation of harmful commissions
(3c<3o) was found without equivalent commendation of
beneficial commissions (2c>2o), we counted patterns of
ranks that were consistent with such an effect and patterns that
were consistent with the opposite effect (2c>2o but not
3c<3o). Twelve subjects showed patterns consistent
with the effect and only one showed a pattern consistent
the opposite (p < .005, binomial test, two-tailed). This result
cannot be accounted for by differences in perceived intention
between omissions and commissions, for such differences alone
would simply exaggerate the judgments of the better and worse
outcomes, leading to equal numbers of rankings in which
3c<3o and 2c>2o.
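The pattern-counting rule can be written out explicitly; the
dictionary keys below are the paper's own labels, with 1 meaning the
best rank.

    def deprecation(rank):
        # Harmful commission ranked below its corresponding omission.
        return rank['3c'] > rank['3o']

    def commendation(rank):
        # Beneficial commission ranked above its corresponding omission.
        return rank['2c'] < rank['2o']

    def counts_for_effect(rank):      # deprecation without commendation
        return deprecation(rank) and not commendation(rank)

    def counts_against_effect(rank):  # commendation without deprecation
        return commendation(rank) and not deprecation(rank)

    # Example: a subject ranking 2o best, then 2c, then 3o, then 3c.
    subject = {'2o': 1, '2c': 2, '3o': 3, '3c': 4}
    print(counts_for_effect(subject), counts_against_effect(subject))
    # True False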
In fact, nine subjects ranked each of the commissions (2c
and 3c) lower than its corresponding omission (2o and
3o, respectively), but no subject ranked each of the
omissions higher than its corresponding commission (p < .01,
binomial test). Two of the nine subjects ranked 2c equal to
3o and two ranked 3o higher than 2c; these four
subjects considered omission versus commission to be at least as
important as expected outcome.
We also found an exaggeration effect. Ten subjects ranked ending
2c higher than 2o, although 9 ranked 2o higher
than 2c. Overall, 14 subjects provided rankings consistent
with an exaggeration effect, in which action was ranked higher
(better) than inaction for the good outcome (2 men) or worse for
the bad one (3 men), or both: (2c>2o>3o>3c, 9
subjects; 2c=2o>3o>3c, 4;
2c>2o>3o=3c, 1) and no subjects provided
rankings consistent with the reverse effect (e.g.,
2o>2c>3c>3o) (p < .001). As we pointed out
earlier, this effect is consistent either with emotional
amplification or with perceived differences in intention between
omissions and commissions. (Note that the ranking
2c=2o>3o>3c is also consistent with
deprecation.)
Typical justifications given by those who consistently deprecated
commissions mentioned causality or Sam's role (see Haidt, 1988):
`If I do nothing, it's not exactly me who did it. I
might tell myself it is meant to happen.' `[2c is worse
than 2o] because it's as if he chooses who would die by
switching.' `In [ending] 2c, Sam tries to prevent injury to
three people but in the course causes two people to be injured.'
Typical justifications for those who exaggerated commissions took
inaction as a reference point with which the effects of action
were compared: `2c is the best because he is saving a life.
3c is the worst because he is needlessly killing an extra
person.' Some subjects of both types criticized Sam's
willingness to `play God' in the commission cases.
In summary, we found a deprecation of harmful actions that cannot
be the result of differences in perceived intentions. We also
found evidence of exaggeration, which could result either from
perceived differences in intention or from emotional
amplification.
5 Experiment 4
In this experiment, we removed the influence of possible
differences in intention in a different way, specifically, by
putting subjects in the position of the actor. Subjects were
given no information about intentions. Rather, they had to make
a decision based on the expected consequences and the means for
bringing them about. They could not excuse harmful omissions on
the basis of different intentions, because there were no other
intentions possible than the ones they imagined themselves to
have in the situation.
In each case, the subject was given two options to rate. The
options concerned the treatment of one or more sick patients,
either doing nothing or ordering a medical procedure. Each
patient had a 20% chance of permanent brain damage if nothing was
done, and a 15% chance if the treatment was given. In a control
condition, the probabilities were reversed. If, as we found in
Experiment 3, harmful (20% risk) commissions are deprecated more
than helpful (15% risk) commissions are commended, then the
overall rating of omissions - across both risks - will be higher
than the overall rating of commissions, whatever the effect of
risk itself on the ratings.
The decision was made from three different perspectives: that of
a physician treating a single patient, that of a patient making
the decision on his or her own behalf, and that of a public
health official making the decision on behalf of a number of
similar patients. This experiment, like the train problem of
Experiment 3, therefore looks for omission bias in situations
that are not `moral' in the narrow sense of the term. The public
health official was included to test the possibility that the
decision would change when the issue involved numbers of patients
affected rather than probabilities of a single patient being
affected. Subjects could take frequencies more seriously than
probabilities, and therefore be less inclined to show omission
bias for the public health official than for the physician.
We tested the tendency of subjects to take omissions as the
reference point by asking subjects to assign numbers to both
options, using positive numbers from 0 to 100 for `good'
decisions and negative numbers from 0 to -100 for `bad' ones. If
subjects tend to take omissions as the reference point, they
would assign zero to the omission more often than to the
commission, across all cases. In addition, the average rating
assigned to both options would be higher when the worse option is
an omission than when it is a commission. This is because they
would tend to assign the omission a number closer to zero.
Our main hypothesis is, of course, that the overall ratings of
commissions will be lower than the overall ratings of omissions,
across all cases (including those in which commissions are
associated with 20% risk and those in which they are associated with
15% risk). A stronger hypothesis is that subjects will consider
the commission worse than the omission even when the probability
of harm from the commission is lower, more often than they will
do the reverse.
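For concreteness, the expected-harm arithmetic behind the scenarios is
worked out below (an example of ours; the cohort size is
hypothetical):

    patients = 10_000  # hypothetical cohort, public-health framing

    expected_harm_untreated = 0.20 * patients  # 2,000 expected cases
    expected_harm_treated   = 0.15 * patients  # 1,500 expected cases

    # Treatment lowers expected harm by 500 cases, so preferring the
    # omission when it carries the 20% risk accepts extra expected harm
    # in order to avoid acting.
    print(expected_harm_untreated - expected_harm_treated)  # 500.0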
5.1 Method
Subjects were told to assign numbers to each option. `The number
is to represent the overall goodness of the decision from
your point of view, assuming that you are trying to do your job
as well as possible. By `overall goodness' we mean that you
would always prefer decisions with higher numbers to those with
lower numbers. If you assign the same numbers to two different
decisions, that means that you feel it is a toss-up between
them.' Subjects were also told to make sure that their numbers
were comparable across cases as well as between options within
each case. Subjects were told: `Use positive numbers between 0
and 100 for good decisions, negative numbers between 0 and -100
for bad decisions. 100 and -100 represent the best and worst
decisions possible in this kind of situation. (If you use 100
and later want to use a higher number, or if you use -100 and
later want to use a lower number, feel free to go beyond these
limits.) 0 represents a decision that is neither good nor bad.'
Subjects were also told to explain the factors that influenced
their rating of each option.
Case 1 read: `Imagine you are a physician making a decision on
behalf of a patient who has left the matter up to you.
The patient has an unusual infection, which lasts for a short
time. The infection has a 20% chance of causing permanent brain
damage. You may undertake a procedure that will prevent the
brain damage from the infection (with 100% probability).
However, the procedure itself has a 15% chance of causing brain
damage.' Case 2 was from the patient's perspective; case
3 was from that of a public health official making a choice for
many patients. Cases 4-6 were identical except that 20% and 15%
were reversed. Subjects were told to rate the omission and
commission options separately for each of the six cases.
Twenty-four subjects were given the cases in this order, and
another 24, the reverse order. Ten additional subjects were
omitted for failing to rate both options, failing to provide
justifications, or for adding additional assumptions in their
justifications (the most common one being that treatment should
be given for research purposes, to learn more about its effects
and to try to improve it).
5.2 Results
The mean ratings for each option in each of the six cases are
shown in Table 1. These ratings were analyzed by an analysis of
variance on the factors: risk (15% vs. 20%); action-inaction;
perspective (physician, patient, official); and the
between-subjects factor of order. Of most importance, the mean
difference of 16 points (on a -100 to 100 scale) between actions
and inactions was significant (F(1,46) = 10.2, p < .005). In the
last three cases, where the commission led to greater harm, the
more harmful option was rated worse and the less harmful option
was rated better than in the first three cases, where the
omission led to greater harm. Action-inaction did not interact
significantly with order or perspective (mean difference,
inaction minus action, of 11 for the physician, 14 for the
patient, and 23 for the official). Omission bias therefore seems
to occur for choices affecting the self, as a patient
(t(47) = 1.89, p < .05, one-tailed), as well as choices affecting
others, and it occurs when the outcomes are expressed as
frequencies as well as probabilities.4
Table 1
Mean rating (and standard deviation) assigned to each option
in Experiment 4
        Decision                Degree       Mean
Case    maker       Option      of damage    rating (s.d.)
--------------------------------------------------------------------
1       Physician   Inaction    20%          -38.0 (53.0)
                    Treat       15%           53.4 (48.9)
2       Patient     Inaction    20%          -34.9 (55.2)
                    Treat       15%           48.8 (50.5)
3       Official    Inaction    20%          -38.4 (55.6)
                    Treat       15%           45.6 (49.8)
4       Physician   Inaction    15%           62.9 (42.9)
                    Treat       20%          -51.5 (55.0)
5       Patient     Inaction    15%           60.5 (35.7)
                    Treat       20%          -51.9 (42.9)
6       Official    Inaction    15%           62.1 (43.4)
                    Treat       20%          -68.1 (40.0)
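The reported perspective differences can be cross-checked against the
table means (our arithmetic; the paper's figures come from unrounded
subject-level data):

    # Table 1 means as (inaction, treat) pairs for each perspective.
    cases_1_3 = {'physician': (-38.0, 53.4), 'patient': (-34.9, 48.8),
                 'official': (-38.4, 45.6)}  # commission is safer
    cases_4_6 = {'physician': (62.9, -51.5), 'patient': (60.5, -51.9),
                 'official': (62.1, -68.1)}  # omission is safer

    for who in cases_1_3:
        inaction = (cases_1_3[who][0] + cases_4_6[who][0]) / 2
        action = (cases_1_3[who][1] + cases_4_6[who][1]) / 2
        print(who, inaction - action)  # about 11.5, 14.4, and 23.1,
                                       # versus the reported 11, 14, 23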
In addition, 19 out of 144 comparisons (13%) in cases 1-3 favored
the (more harmful) omission over the (less harmful) commission,
but only 3 out of 144 comparisons (2%) in cases 4-6 favored the
(more harmful) commission over the (less harmful) omission. The
difference between cases 1-3 and cases 4-6 in the number of such
anomalies per subject was significant by a Wilcoxon test (p < .03,
two-tailed). Ten subjects showed more anomalies favoring
omissions, and one subject showed more anomalies favoring
commissions (p < .02, binomial test). Although the number of
anomalies favoring omission was small, they indicate that some
people are inclined to sacrifice lives or increase the
probability of death in order to avoid an action that would
replace one risk with a smaller risk. As in Experiment 1,
perceived causality seemed important in justifications of such
reversals, e.g., `There is slightly less chance with the second
choice but the blame would be yours if brain damage occurred.'
Other justifications seemed to involve naturalism, e.g., `...
there is something to be said for letting nature take its
course.' No subject mentioned the stress of the procedure itself
as a relevant consideration (but one subject did mention the
cost).5
To test the hypothesis that omissions were taken as the reference
point, we first examined ratings of 0. Zero was assigned to the
omission on 18 out of the 288 cases, and to the commission on 7
out of 288 cases (excluding the 9 cases in which 0 was assigned
to both options within a given case). The difference in the
number of 0 ratings assigned to omissions and commissions per
subject was significant by a Wilcoxon test (p < .04, one-tailed) and
by a binomial test across subjects (8 with more zero ratings for
omissions, 1 with more for commissions, p < .02).
We also compared the mean of all the ratings in cases 1-3, where
the riskier option was an omission, with the mean of all the
ratings in cases 4-6, where the riskier option was a commission.
The mean rating was higher when the worse option was an omission
than when it was a commission (difference, 3.7; t(47) = 2.03,
p < .025). The correlation between this measure of the
reference-point effect and the overall omission bias was .23,
which was not quite significant.
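The 3.7-point difference is likewise approximately recoverable from
the Table 1 means (our arithmetic):

    # Mean of all ratings in cases 1-3 (riskier option = omission)
    # minus the mean in cases 4-6 (riskier option = commission).
    cases_1_3 = [-38.0, 53.4, -34.9, 48.8, -38.4, 45.6]
    cases_4_6 = [62.9, -51.5, 60.5, -51.9, 62.1, -68.1]
    print(sum(cases_1_3) / 6 - sum(cases_4_6) / 6)
    # about 3.75; the reported 3.7 reflects unrounded data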
Although we found some support for loss aversion with omissions
taken as the reference point, this hypothesis cannot account for
the relatively large number of anomalies in which a more harmful
omission is preferred over a less harmful commission. This
hypothesis, then, appears not to be the whole story, although it
may well be part of the story.
In summary, this experiment revealed an omission bias in choices
affecting the self as well as choices affecting others, and in
choices in which the expected harm of an option is expressed in
terms of the number of people harmed as well as in the
probability of harm. Intended outcome was held constant by
putting the subject in the position of the decision maker. A few
subjects showed the anomaly of preferring a more harmful omission
to a less harmful commission.
6 Experiment 5
The remaining experiments address further the question of why
omission bias occurs. We examine the role of several factors
that might distinguish omissions and commissions. We have
already noted some of these factors: perceived difference in
causality; differing degrees of `responsibility'; and the bald
fact that one situation was a commission and the other situation
was an omission.
We hypothesize that individuals distinguish between omissions and
commissions because certain factors, which often distinguish
them, affect judgments of morality. Some of the factors also
serve to define what people think of as omissions or commissions.
Drawing on the work of Bennett (1981, 1983) and Tooley (1974) and
on subjects' statements in earlier experiments, we examined a
more complete list of factors than those we examined in
Experiments 1-4. We assume that all the factors we examined are
morally irrelevant except for responsibility. (We do not examine
the role of intention and motive, which we also take to be
relevant.) We examined the role of each factor both on its own
(with other aspects of the omission-commission distinction held
constant) and in the context of an obvious omission-commission
distinction (where other factors also distinguish omissions and
commissions).
The following seven factors were examined:
Movement. Although movement is a sign of greater intention,
movement loses its moral relevance when intention is held
constant. Subjects might still regard movement as relevant,
however.
Detectability. Some subjects in earlier studies suggested
that behavior was worse if the intention were detectable than if
it were not. This is reminiscent of certain aspects of
`pre-conventional' thinking as described by Kohlberg (1963),
although it could also result from a confusion between morality
and legality.
Number of Ways. Bennett (1981) maintains that the only
distinction between what he terms `positive and negative
instrumentality' is the number of possible ways that exist to
bring about a harmful outcome. Compared to the number of ways we
can commit an act, the number of ways we can `not do something'
is large: we can do almost anything.
Alternative Cause. According to the `causal discounting
principle,' the role of a particular cause in producing an
outcome is discounted if other plausible causes are also present
(Kelley, 1973). Subjects apply this principle when it is
legitimate (Thibaut and Riecken, 1955). For example, when a
person is forced to commit harm, we correctly believe that he is
less responsible than someone who was not forced. Subjects might
over-generalize this principle, judging a person less blameworthy
because of an alternative cause even when they know that the same
deed would be done even in the absence of the alternative cause.
Omissions commonly involve a salient alternative cause while
commissions usually do not.
Presence. Several subjects in unreported experiments argued
that you could not be blamed for something if it would have
occurred in your absence. Most omissions are in this category.
Logically, this explanation is a subset of `Alternative cause,'
since occurrence in one's absence requires another cause, but
psychologically, the two kinds of arguments might differ, or
alternative-cause arguments might be limited to cases in which
the actor is absent.
Repeatability. It is relatively rare to be able to
intentionally harm an individual through an act of omission.
However, it is fairly common to have the opportunity to harm
someone through an act of commission. Martin Seligman (personal
communication) has proposed that this could be a reason that
individuals make a moral distinction between omissions and
commissions.
Diffusion of Responsibility. According to the Diffusion of
Responsibility theory, when an individual realizes that another
person is in danger, he is less likely to intervene if he
believes that other people are present (Latané and Darley, 1968).
This is because the burden of responsibility does not fall on him
alone. Commissions usually involve one person who is clearly
responsible for the outcome. Omissions sometimes involve more
than one person who could be held responsible for the outcome.
We designed one scenario to examine the relevance of each of the
seven factors. Some endings of each scenario differed in the
presence or absence of the factor of interest, holding constant
other factors that would distinguish omissions and commissions.
In most scenarios, other endings differed in several other
factors that distinguish omissions and commissions but not in the
factor of interest.
6.1 Method
Twenty-seven subjects were solicited as in previous studies.
What follows is a summary of each of the seven scenarios in the
questionnaire (which is available from J.B.). The actor is
always the subject herself, e.g., the first scenario begins, `You
are returning one item and buying another in a department store.'
Movement. In ending A, the cashier in a department store
credits the actor's account (i.e., `your account') with $100 by
mistake, and the actor says nothing. In B, the cashier places
the $100 on the counter and the actor reaches out and picks it
up.
Detectability. A person in a grocery store checkout line
notices $20.00 of someone else's change sitting on the counter.
The three different endings describe different ways in which the
person takes this money. In the first ending he takes the money
in a way that is detectable to those around him who may be
watching (a commission case). In the second and third endings
(an omission and a commission, respectively), he takes the money
in a way that is not detectable to anyone watching.
Number of Ways. A government official wants to protect a
guilty friend who is being sued in court. The official has the
opportunity to appoint one of ten people to be his friend's
prosecutor (a commission), or he can leave the person already
assigned (an omission). In the first ending, `Nine of the
assistants are new and inexperienced. If one of these is
assigned to the case, the prosecution will probably fail. The
tenth assistant is experienced and will probably succeed. The
experienced person was assigned to the case by your predecessor.
(Your predecessor did not know anything about which assistants
were experienced and which were not.) You take that person off
the case and put on one of the others.' In the second ending, 9
of the assistants are inexperienced, one of these was already
assigned, and `you do nothing.' In the third ending, one of the
assistants is inexperienced, and `you' assign this person to the
case. In the fourth ending, the single inexperienced person is
left on the case. In the third and fourth endings, a commission
and omission, respectively, there is only one way to produce the
bad outcome, as opposed to nine ways in the first two endings.
Alternative Cause. An angry man causes his neighbor's
parked car to roll down a hill. In two of the endings (one an
omission involving failure to stop the car with a rock, and one a
commission involving moving a rock out of the car's path) there
is an alternative cause of the car's rolling down the hill
(faulty brakes). In the third ending (a commission) there is no
alternative cause; the man pushes the car himself.
Presence. A soldier prevents a radio message from being
received that would have sent him on a difficult mission. In
ending A, he (or she) knows that the antenna is pointed in the
wrong direction, and he fails to change it. (Nobody else knows.)
In B, he is blocking the antenna by standing next to it, and he
fails to move. In C, the antenna (pointed the wrong way) works
only when he is standing next to it, and he steps away. In D, he
points the antenna in the wrong direction. C and D contain most
of the factors that characterize commissions, but in A and C the
outcome would occur in the soldier's absence.
Repeatability. A student cheats on an exam either in a way
that he could repeat as often as he likes, or in a way that he
could never repeat because the opportunity exists only during the
professor's temporary absence (both commissions).
Diffusion of Responsibility. An individual witnesses a
friend's car accident. He then causes the wrong person to be
charged in the accident either through lying to the police about
what he saw (a commission) or through failing to correct the
police when they charge the innocent party (an omission). In two
of the endings (one omission and one commission) a third party
also witnesses the accident and acts in the same way. In the
other two endings (one omission and one commission) there are no
other eyewitnesses to the accident.
For each ending of each scenario, subjects were asked to do two
things: (1) to rate on a scale from 0 to -100 how immoral they
believed the act to be (0 being not immoral and -100 being very
immoral), and (2) to justify with written explanations any
similar or dissimilar ratings that they gave.
Results
Table 2 shows, for each pair of endings, the number (and percent)
of subjects who indicated that one ending was worse than another
in the hypothesized direction (e.g., commission worse than
omission, detectability worse than no detectability) and the
number (and percent) of subjects who indicated that one ending
was worse than the other in the opposite direction.
Table 2
Number (and percent) of subjects (out of 27) in Experiment 5
who rated paired endings differently in each direction
                                 Rated unequally   Rated unequally,
Scenario           Comparison    as hypothesized   not as hypothesized
Movement           *C/O          12 (44%)
Detectability      *C/C           4 (15%)
                   C/O            8 (30%)          1 (3%)
Number of ways     *C/C           5 (19%)
                   *O/O           9 (33%)
                   C/O           13 (48%)
                   *C/*O         13 (48%)
Alternative cause  *C/C           9 (33%)
                   *C/*O         18 (67%)
Presence           *O/O          14 (52%)
                   *C/C          14 (52%)
                   C/O           18 (67%)
                   *C/*O         10 (37%)          1 (3%)
Repeatability      *C/C           9 (33%)          1 (3%)
Responsibility     *C/C           4 (15%)          1 (3%)
                   *O/O           3 (11%)
                   *C/*O         14 (52%)
                   C/O           16 (59%)
Note: * indicates the presence of the factor, and O and C
indicate omission and commission, respectively.
The comparisons between the omission and commission cases in
Table 2 show that subjects often rated the commission cases as
morally worse than the omission cases. Only twice, however, did
subjects rate the omission case as more immoral than the
corresponding commission case. The data shown in Table 2 also
suggest that subjects' morality judgments were affected by
presence (52%), movement (44%), repeatability (33%), and
alternative cause (33%).
E.M. and J.B. coded the justifications of the first nine
subjects, and their ratings concurred 90% of the time. E.M.
coded the remaining subjects' justifications. Subjects gave two
justifications that were not among the seven originally
hypothesized: `acts versus omissions' and `not their
responsibility.' The `diffusion of responsibility' factor was
renamed `responsibility' to include all justifications to the
effect that it was not the subject's responsibility to prevent
harm in a particular situation. Several extraneous factors were
also cited; for example, in the Movement scenario, some subjects
said that it was easier for the store to discover the error in
the ending in which the customer is given a credit.
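The 90% figure is simple percent agreement between the two
coders. As a minimal illustration of such a reliability check
(with hypothetical category labels; the paper reports percent
agreement only, and Cohen's kappa is added here merely as a
common chance-corrected companion):

  # Percent agreement and Cohen's kappa between two coders'
  # justification categories (hypothetical data, for illustration).
  from sklearn.metrics import cohen_kappa_score

  coder_em = ['cause', 'movement', 'cause', 'presence', 'cause']
  coder_jb = ['cause', 'movement', 'cause', 'cause', 'cause']

  agree = sum(a == b for a, b in zip(coder_em, coder_jb)) / len(coder_em)
  print(agree)                                  # 0.8
  print(cohen_kappa_score(coder_em, coder_jb))  # about 0.58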
The justifications that subjects cited most often for rating the
endings unequally as predicted (which occurred in 193
comparisons) were: acts versus omissions (49 comparisons);
alternative cause (35); and responsibility (34). Other
justifications were cited, however, in comparisons designed to
elicit them and elsewhere: movement was cited 3 times in the
Movement scenario and 7 times in the Presence scenario;
detectability was cited 10 times total in 6 different scenarios;
number of ways was cited 10 times in the Number of Ways scenario;
presence was cited 14 times in three different scenarios; and
repeatability was cited 5 times in the Repeatability scenario.
7 Experiment 6
This experiment used the same scenarios and endings as Experiment
5, but it asked subjects to judge justifications that we provided
rather than to construct their own. Subjects were given a list
of factors and told to indicate which factors were relevant to
each comparison. This method allows us to determine whether the
pattern of justifications found in Experiment 5 was limited by
subjects' inability to articulate their reasons. A new factor,
which we called `knowledge,' was
included in the list of factors, but no new dilemma manipulated
this factor. This factor is similar to `presence,' but points to
what the actor knew rather than where she was: in the case of
omissions, the actor would have behaved in the same way if she
had not known of the opportunity to affect the outcome through a
commission.
Method
Thirty-two subjects were solicited as in previous experiments.
Twelve of these subjects rated all endings as morally equal or
gave extraneous reasons why they were not; these subjects are not
considered further.
After each scenario, the subject was asked to compare critical
pairs of endings: `Which ending is more immoral, or are they
equally immoral?' Next, the subject indicated the status of each
factor that we thought distinguished the two cases. For each
factor listed, the subject indicated whether the factor was
morally irrelevant or which ending it made worse. When relevant,
the list for a scenario included an `omission' factor that
referred simply to whether or not the actor `did anything.'
For example, in the comparison of endings A and B in the Presence
scenario, subjects were told, `In A, if you were absent from the
situation, the outcome would be the same,' and they were asked to
circle one of three alternatives: `Morally irrelevant'; `Makes A
worse'; or `Makes B worse.' In the comparison of endings A and
C, subjects were asked about three different factors (knowledge,
movement, and omission): `If you did not know about the antenna
in A, you would have done the same thing as you did'; `You moved
in C'; `In A you are not doing anything.' Note that the wording
of factors was adapted to the scenario when it seemed to increase
clarity. The factors asked about for each scenario are shown in
Table 3. The subject was asked to explain any additional factors
that were morally relevant and not listed.
Results and discussion
Table 3
Number of subjects (out of 20 included) who said that the two
endings in a given comparison differed morally in the
hypothesized direction (e.g., commission ending worse) or the
reverse direction (e.g., omission ending worse)
                                Rated unequally   Rated unequally,
Scenario        Comparison      as hypothesized   not as hypothesized
Movement        movement              5                 0
Detectability   detectability         1                 0
Number          omission          13,12               0,0
                number              1,0               0,1
Cause           cause                 4                 0
                omission             15                 0
Presence        presence            7,7               0,0
                omission            5,7               1,1
Repeatability   repeatability         2                 0
Responsibility  omission          13,13               0,0
                responsibility      1,1               2,2
Note: `Comparison' refers to the factor that distinguished the
items compared; `omission' indicates several factors. For the
detectability item, only the endings differing in detectability
were analyzed, as the others did not bear on any questions. When
a given scenario had two comparisons for a given factor, both are
included.
Table 3 shows the numbers of subjects who thought the actor's
morality was different in the two endings. Many subjects thought
that omissions and commissions (which always differed in several
factors) were morally different (as indicated in the rows labeled
`omission' in Table 3). When only one factor distinguished the
two endings being compared, only presence played a significant
role (according to a binomial test based on unequal ratings in
hypothesized direction versus not hypothesized). Differences in
detectability, number, repeatability, and responsibility did not
by themselves lead to perceptions of a moral difference.
Movement did make some difference, but the comparison involving
movement (the first scenario) was not a pure one; other factors
also distinguished the endings. The cause factor mattered to
four subjects, but this was not significant by a binomial test.
The factor we call knowledge was not tested in any single
comparison.
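To make these tests concrete, the following sketch (our
reconstruction, not the original analysis code) applies a
one-tailed binomial test with p = .5 to the counts in Table 3,
reproducing the pattern just described:

  # Binomial (sign) test on subjects who rated the endings
  # unequally: hypothesized direction vs. reverse, null p = .5.
  from scipy.stats import binomtest

  # Presence factor: 7 subjects as hypothesized, 0 reverse.
  print(binomtest(7, n=7, p=0.5, alternative='greater').pvalue)
  # 0.0078 -- significant

  # Cause factor: 4 subjects as hypothesized, 0 reverse.
  print(binomtest(4, n=4, p=0.5, alternative='greater').pvalue)
  # 0.0625 -- not significant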
Table 4
Number of subjects (out of 20 included) who said that factors
were different and relevant in the hypothesized direction (+) or
the reverse direction (-).
                                        Different and relevant
Scenario        Comparison   Factor          +       -
Movement        movement     movement        5       0
                             detect.         3       0
                             omission        5       0
Detectability   detect.      detect.         3       0
Number          omission     detect.       3,4     0,0
                             cause        10,10     0,1
                             presence      9,10     1,2
                             knowledge      8,8     1,2
                             omission     11,10     0,0
                number       number         1,0     0,1
Cause           cause        cause           4       0
                omission     detect.         3       0
                             presence       12       1
                             knowledge      12       1
                             movement       15       0
                             omission       14       1
Presence        presence     presence      8,7     0,0
                omission     knowledge     3,6     0,0
                             movement      7,7     0,0
                             omission      6,7     1,0
Repeatability   repeat.      repeat.         2       0
Responsibility  omission     detect.       0,1     0,0
                             respons.      9,9     1,1
                             cause        10,10     0,0
                             movement     13,11     0,0
                             presence      10,9     0,0
                             knowledge      8,8     1,0
                             omission     12,11     0,0
                respons.     respons.      1,1     2,2
Note: `Comparison' refers to the factor that distinguished the
items compared; `omission' indicates several factors. `Omission'
as a factor refers to `not doing anything' in one of the endings.
For the detectability item, only the endings differing in
detectability were analyzed, as the others did not bear on any
questions. When a given scenario had two comparisons for a given
factor, both are included.
Table 4 shows the number of subjects who thought each factor was
relevant in the hypothesized direction and in the opposite
direction. (So far as we can determine from reading subjects'
additional comments, all of the responses in the opposite
direction were the result of misunderstanding of the items'
wording.) In some cases, subjects said that a factor was
relevant even though they did not think the endings differed in
morality; we do not know why this discrepancy occurred. Very few
subjects indicated the relevance of factors not mentioned.
The major factors are much the same as those found in Experiment
5: cause; presence; knowledge; and movement. These factors
favored the hypothesis significantly (by a binomial test) in
almost every case. Factors that played essentially no role were
detectability, number, and repeatability. These factors were
rarely mentioned spontaneously in Experiment 5, and they did not
make a difference when they were the sole distinction between two
endings. Subjects did think that the omission factor itself
(i.e., the fact that the actor `did nothing' in one ending) was
relevant; we do not know whether they considered it to be
redundant with the other factors listed or not.
The factor called `responsibility' was tested in only one
scenario, but in two different comparisons. In the first
comparison, in the omission endings, `The police are about to
charge the [wrong] person' with causing the accident, and `you
say nothing.' In the commission endings, `The police ... ask you
who was at fault,' and `you say it was the other driver.' The
factor is described as, `In [the omission ending], the police are
responsible, not you.' Nine subjects thought that this factor
was relevant in the hypothesized direction in each comparison. In
the second comparison, the distinction is whether or not a friend
of the actor sees the accident too. Only one subject thought
this distinction was relevant (and two thought it was relevant in
the opposite direction). These results suggest that subjects do
not regard `diffusion of responsibility' of the sort studied by
Latané and Darley as morally relevant, but they do think it is
relevant when those who are responsible by virtue of their role
make a mistake.
In summary, the last two experiments have found certain factors
to be relevant to moral judgments, most importantly: the actor's
physical movement; the subject's judgment that an outcome has
another cause aside from the actor; the fact that the same
outcome would have occurred in the actor's absence; and the fact
that the same outcome would have occurred if the actor did not
know that she could affect the outcome. These last three factors
together probably contribute to the judgment that the actor
caused the outcome in the case of commissions but not in the case
of omissions, and we found in Experiment 1 that this judgment was
correlated with omission bias. These factors generally
distinguish what are called omissions from what are called
commissions, but subjects find each of these factors relevant
even when it is not accompanied by the other factors that usually
make up the distinction. (Other factors were not very often
considered relevant: the detectability of the actor's behavior;
the number of ways in which an outcome could be brought about;
the repeatability of the situation; or the fact that someone else
was responsible.)
8 Discussion
Our experiments established a bias to favor omissions that cause
harm. Ordinarily, harmful omissions are less blameworthy because
the actor is less knowledgeable about the potential consequences,
but knowledge was held constant in almost all our scenarios.
Likewise, harmful omissions are typically less intentional than
commissions, but this difference cannot explain our results
either: In Experiment 2, we found that the bias was present even
when intentions were judged to be the same. In the train story
of Experiment 3 and the medical cases of Experiment 4,
deprecation of harmful commissions was not accompanied by equal
commendation of helpful commissions. Differences in perceived
intention alone would not predict this finding. (Nor would an
effect of perceived differences in effort, which would affect
perceived intention.)
This finding also indicates that emotional amplification is not a
sufficient account of the bias, although we also found evidence
consistent with amplification in the train story of Experiment 3.
The deprecation of harmful commissions may sometimes be caused by
loss aversion, coupled with a tendency to take omissions as the
reference point, but this explanation, too, cannot be a complete
account, for it fails to explain the anomalies (found in
Experiments 3 and 4) in which an omission is preferred over a
commission with a better outcome.
We also found evidence for some of the mechanisms that serve to
cause or maintain this bias. In Experiment 1, we found that the
effect was correlated with perceived differences in the causal
role of omissions and commissions. Some subjects argued that the
actor played no causal role at all if the outcome would have
occurred in his absence. Subjects offered a variety of other
reasons in support of the distinction. Some subjects said that
an actor has a responsibility not to harm, for example, or that
lying was worse than withholding the truth. These stated
reasons, we suggest, are the embodiments of subjects' heuristics.
These heuristics are learned because they are useful guides,
which bring about the best outcomes or lead to the best judgment
in most cases (Hare, 1981). They are maintained, even when they
cannot be independently justified, by the absence of critical
thinking about them, by other mechanisms of the sort we have been
discussing, and by the fact that they are often self-serving.
All of these heuristics are asymmetric in that they treat
omissions and commissions differently. Each asymmetric principle
of judgment has a corresponding symmetric one, which is often
adduced by subjects who do not show omission bias, e.g., the
principle that withholding the truth is a form of lying and is
therefore just as bad as any other lying, or the principle that
others have a right to certain forms of help as well as a right
not to be hurt. The asymmetric forms of these principles, which
concern commissions only, could suffice to prevent the most
insidious cases of intentional immorality. When we take the
symmetric forms seriously, we have just as much obligation to
help others - once we perceive that we can truly help - as to
avoid harming others.
The last two experiments examined the justifications of the
distinction in more detail. We considered the role of several
factors that could distinguish omissions and commissions. The
major factors - cause, presence, knowledge, and movement - were
consistent with those found less formally in the other
experiments. In particular, the idea that actors do not cause
the outcomes of omissions may be related to the idea that the
outcomes would occur even if the actor were absent or did not
know of the possibility of making a decision. A heuristic of
judging blame in terms of cause, and judging cause in terms of
presence and knowledge, would yield our results. Of course,
these considerations are not normatively relevant, because the
actors in all our cases did know and were present. It
is a fact of life that we are faced with consequential decisions.
Omission and commission are difficult to define. Bennett (1981)
argued that the only difference between them is that there are
more ways to bring about harm through omission than through
commission, but some of our subjects (in their comments) seem to
regard other properties as more crucial to the distinction.
Establishing exactly how people define omission and commission is
a task for future research.
Are subjects really biased? Should people learn to think
differently about omissions and commissions? It does not follow
from our results that they should. The philosophical arguments
we have cited imply that the omission-commission distinction is
in itself normatively irrelevant, as an ideal, but people
might make better moral judgments on the whole by honoring the
distinction consistently than by trying to think normatively.
(We cannot provide a plausible reason why this might be so, but
it is possible in principle, as argued by Hare, 1981, and Baron,
1988a,b.) By this view, omission bias is not necessarily an
error, and our studies concern the extent to which people are
capable of normatively ideal thinking under favorable
circumstances.
It is worthy of note in this context that most subjects in most
experiments did not distinguish omissions and commissions.
Some subjects strongly denied the relevance of the distinction,
for example, `The opposite of love is not hate, but indifference;
and indifference to evil is evil.' If omission bias is
prescriptively correct, then such subjects were not applying the
best heuristics. We consider this unlikely, although it remains
a matter for investigation.
An alternative view (Singer, 1979; Baron, 1986) is that omission
bias in the moral sphere allows people to feel righteous by
abstaining from sins of commission, even while they neglect
(through omission) the suffering of others, which they could
ameliorate at a small cost. In the sphere of personal decisions,
omission bias helps people to avoid blaming themselves for their
own misfortunes that they could have avoided through action. By
this view, the distinction is usually not admirable but rather
convenient for those who want to be irresponsible without
guilt feelings or regret. If we accept this view, our studies
are about whether people do the kind of reasoning we all ought to
do, namely, reasoning in terms of the expected consequences of
our choices rather than in terms of whether these choices involve
action or not.
9 References
Baron, J. (1985). Rationality and intelligence. New York:
Cambridge University Press.
Baron, J. (1986). Tradeoffs among reasons for action.
Journal for the Theory of Social Behavior, 16, 173-195.
Baron, J. (1988a). Thinking and deciding. New York:
Cambridge University Press.
Baron, J. (1988b). Utility, exchange, and commensurability.
Journal of Thought, 23, 111-131.
Baron, J., & Hershey, J. C. (1988). Outcome bias in decision
evaluation. Journal of Personality and Social Psychology,
54, 569-579.
Bennett, J. (1966). Whatever the consequences. Analysis,
26, 83-102 (reprinted in B. Steinbock, ed., Killing and
letting die, pp. 109-127. Englewood Cliffs, NJ: Prentice Hall).
Bennett, J. (1981). Morality and consequences. In S. M.
McMurrin (Ed.), The Tanner Lectures on human values (vol. 2,
pp. 45-116). Salt Lake City: University of Utah Press.
Bennett, J. (1983). Positive and negative relevance.
American Philosophical Quarterly, 20, 183-194.
Brewer, M. B. (1977). An information-processing approach to
attribution of responsibility. Journal of Experimental
Social Psychology, 13, 58-69.
Feinberg, J. (1984). The moral limits of the criminal law.
(Vol. 1). Harm to others. New York: Oxford University
Press.
Gärdenfors, P., & Sahlin, N.-E. (Eds.) (1988). Decision,
probability, and utility: Selected readings. New York:
Cambridge University Press.
Haidt, J. (1988). Social frames of moral judgment.
Manuscript, Department of Psychology, University of Pennsylvania,
Philadelphia, PA.
Hare, R. M. (1978). Value education in a pluralist society. In
M. Lipman & A. M. Sharp (Eds.), Growing up with philosophy
(pp. 376-391). Philadelphia: Temple University Press.
Hare, R. M. (1981). Moral thinking: Its levels, method and
point. Oxford: Oxford University Press (Clarendon Press).
Hilton, D. J., & Slugoski, B. R. (1986). Knowledge-based causal
attribution: The abnormal conditions focus model.
Psychological Review, 93, 75-88.
Kagan, S. (1988). The additive fallacy. Ethics, 99, 5-31.
Kahneman, D., & Miller, D. T. (1986). Norm theory: Comparing
reality to its alternatives. Psychological Review, 93,
136-153.
Kahneman, D., & Tversky, A. (1982a). The simulation heuristic.
In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment
under uncertainty: Heuristics and biases (pp. 201-208). New
York: Cambridge University Press.
Kahneman, D., & Tversky, A. (1982b). The psychology of
preferences. Scientific American, 246, 160-173.
Kahneman, D., & Tversky, A. (1984). Options, values, and frames.
American Psychologist, 39, 341-350.
Kamm, F. M. (1986). Harming, not aiding, and positive rights.
Philosophy and Public Affairs, 15, 3-32.
Kelley, H. H. (1973). The process of causal attribution.
American Psychologist, 28, 107-128.
Knetsch, J. L., & Sinden, J. A. (1984). Willingness to pay and
compensation: Experimental evidence of an unexpected disparity in
measures of value. Quarterly Journal of Economics, 99,
508-522.
Kohlberg, L. (1963). The development of children's orientations
toward a moral order. I. Sequence in the development of human
thought. Vita Humana, 6, 11-33.
Landman, J. (1987). Regret and elation following action and
inaction: Affective responses to positive versus negative
outcomes. Personality and Social Psychology Bulletin, 13,
524-536.
Latané, B., & Darley, J. M. (1968). Group inhibition of
bystander intervention in emergencies. Journal of
Personality and Social Psychology, 10, 215-221.
Miller, D. T., & McFarland, C. (1986). Counterfactual thinking
and victim compensation. Personality and Social Psychology
Bulletin, 12, 513-519.
Shaver, K. G. (1985). The attribution of blame: Causality,
responsibility, and blameworthiness. New York: Springer-Verlag.
Singer, P. (1979). Practical ethics. Cambridge: Cambridge
University Press.
Steinbock, B. (Ed.) (1980). Killing and letting die.
Englewood Cliffs, NJ: Prentice Hall.
Sugarman, D. B. (1986). Active versus passive euthanasia: An
attributional analysis. Journal of Applied Social
Psychology, 16, 60-76.
Tetlock, P. E., & Kim, J. I. (1987). Accountability and judgment
processes in a personality prediction task. Journal of
Personality and Social Psychology, 52, 700-709.
Thibaut, J. W., & Riecken, H. W. (1955). Some determinants and
consequences of the perception of social causality. Journal
of Personality, 25, 115-129.
Tooley, M. (1974). Lecture given at Brown University, February
15-16, 1974 (reprinted in B. Steinbock, Ed., Killing and
letting die. Englewood Cliffs, NJ: Prentice Hall, pp. 109-127).
Wertheimer, M. (1959). Productive thinking (rev. ed.). New
York: Harper & Row. (Original work published 1945)
Footnotes:
1. This work was supported by grants from the National Institute
of Mental Health (MH-37241) and from the National Science
Foundation (SES-8509807 and SES-8809299). It is based in part on
undergraduate honors theses by M. S. and E. M., supervised by
J. B. We thank Jon Haidt, the editor, and the reviewers for
helpful comments.
2. Arguably, a judgment of the morality of the actor in general
might legitimately attend to the omission-commission distinction,
because harmful commissions might be a better sign (than
omissions) of poor character.
3. We counted several responses as `no distinction' even though a
distinction was made in the numbers. These were cases in which
subjects referred to a difference in motivation - despite our
instructions that this was the same - or in the likely
effectiveness of the choice. For example, two subjects
attributed Ellen's withholding of the truth to possible shyness,
or John's failure to warn Ivan to possible confusion.
4. The analysis of variance also revealed that the overall mean
rating was significantly above 0 (F(1,46) = 16.5, p < .001).
Low-risk options were rated higher than high-risk options
(F(1,46) = 37.4, p < .001). Risk interacted with order
(F(1,46) = 9.0, p < .005) and with perspective (F(2,45) = 3.2,
p = .05). No other two-way interactions were significant.
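For readers who wish to check these values, each p-value can be
recovered from the reported F statistic and its degrees of
freedom; a minimal sketch in Python, assuming the degrees of
freedom given above:

  # Recover p-values from F statistics via the F distribution's
  # survival function (illustrative; not the original analysis code).
  from scipy.stats import f

  print(f.sf(16.5, 1, 46))  # overall mean above 0: p < .001
  print(f.sf(37.4, 1, 46))  # low- vs. high-risk: p < .001
  print(f.sf(9.0, 1, 46))   # risk x order: p < .005
  print(f.sf(3.2, 2, 45))   # risk x perspective: p = .05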
5. Some responses that favored action over inaction were
justified by the assertion that it was better to do something
than to await the epidemic passively. These
responses suggest the existence of an intuition that action is
better than inaction, rather than worse. Although the opposite
intuition predominates in our experiments, this one might
predominate in other situations, such as those in which some
action is expected.