Baron, J. (1992). The effect of normative beliefs on anticipated
emotions. Journal of Personality and Social Psychology, 63,
320-330.
1 The effect of normative beliefs on anticipated emotions
Jonathan Baron
Department of Psychology
University of Pennsylvania
2 Acknowledgment
A version of this paper was presented at a conference on the role
of anticipation and regret in decision making, La Jolla, January
7-8, 1991, sponsored by the John D. and Catherine T. MacArthur
Foundation. I thank the participants, David M. Clark, Jon Haidt,
Daniel Kahneman, the editor, and an anonymous reviewer for
comments. The work was supported by a grant from the National
Science Foundation (SES-8809299) and by the MacArthur Foundation.
3 Abstract
In three experiments, subjects were asked how they would or
should make hypothetical decisions, and how they would react
emotionally to the options or outcomes. The choices were those
in which departures from proposed normative models had previously
been found: omission bias; status-quo bias; and the
person-causation effect. These effects were found in all
judgments, including judgments of anticipated emotion. Arguments
against the departures affected judgments of anticipated emotion
as well as decisions, even though the arguments were entirely
directed at the question of what should be done. In all but one
study, effects of these arguments on anticipated emotion were as
strong as their effects on decisions or normative beliefs. Thus,
in many situations, people think that their emotional reactions
will fall into line with their normative beliefs. In other
situations, some people think that their emotional reactions have
a life of their own. It is suggested that both normative beliefs
and anticipated emotions affect decisions.
I take emotions to be states that are subjectively experienced,
that have some hedonic component, and that drive or motivate
certain kinds of behavior specific to the emotion. Examples of
hedonically negative emotions are guilt feelings, regret,
sadness, anger, and fear. Anticipation of emotions can affect
decisions even when the emotion is not an inherent part of the
desired or undesired outcomes. This effect has been widely
recognized by clinical psychologists and social psychologists.
For example, fear of having a panic attack keeps some
agoraphobics from going out in public (Chambless & Gracely,
1989), even when the anticipation of an attack is not related to
reality (Cox, Swinson, Norton, & Kuch, 1991). Fear of
embarrassment could account for a variety of phenomena in which
choices are manipulated in the social-psychology laboratory, such
as the Milgram obedience effect (Sabini, 1992).
Decision theorists have also begun to recognize the role of
anticipated emotions in decision making. Kahneman and Lovallo
(1990) describe the role of fear in decisions under risk, and
Lopes (1987) discusses hope as well. Bell (1982, 1985) and
Loomes and Sugden (1982; Loomes, 1987) proposed that certain
apparent departures from expected-utility theory were the result
of anticipated regret, rejoicing, disappointment, and elation.
We sometimes avoid taking risks because we anticipate feeling
regretful if worst comes to worst. Such anticipated regret could
affect many real decisions, such as those made in medicine
(Hershey & Baron, 1987). Loewenstein (1987) and Elster (1985)
likewise called attention to the role of savoring and other
emotions in intertemporal choice. We sometimes put off pleasant
events so that we can savor our anticipation of them, and we
sometimes get aversive events over with so that we don't have to
put up with dread.
In these cases, what matters for decision making is the
anticipated emotion, not the real one. People might be wrong to
think that they will experience pleasant anticipation rather than
impatience, for example. Kahneman and Snell (1990) have
documented systematic errors in people's predictions of their
hedonic experiences. The present study is concerned with
anticipated emotions only, not real ones. Subjects are asked to
compare how they would feel in certain situations. Anticipated
emotions are important to the extent to which they affect
decisions. Real emotions are important for decision making to
the extent to which people learn to adjust their anticipations to
reality, and real emotions are also important, of course, as
outcomes that affect behavior and experience.
Decisions can be described in terms of options leading to
outcomes, with the outcome depending on the option chosen as well
as on the state of the world, which is often uncertain when the
decision is made (Baron, 1988). These consequences include
emotions, and the anticipation of these emotional consequences
can affect the choice of options.
Emotions are associated with decisions in four ways. The first
three concern emotions as consequences:
1. Emotions can depend directly on the expected consequences
themselves. Skydivers seek the excitement that is part of
skydiving. The rest of us avoid the fear.
2. Emotions can depend on comparison of an outcome to outcomes of
different options. If I decide to buy shares of stock and the
value of the stock goes down, I will experience regret (Bell,
1982; Loomes & Sugden, 1982). Such regret depends on my having
had another option.
3. Emotions can depend on comparison of an outcome to outcomes of
the same option in different states, to what `might have
happened,' regardless of whether a decision is made or not
(Kahneman & Miller, 1986; Bell, 1985; Loomes, 1987). I can be
disappointed that my stock went down even if I do not feel regret
about buying it.
4. Emotions can also affect decisions directly. Sympathy for a
plaintiff in a lawsuit or anger at a defendant can make us choose
to provide the plaintiff with greater compensation for an injury,
for example.
Decisions are affected by normative beliefs, beliefs about what
ought to be done in certain cases or about how decisions ought to
be made. For example, people might believe that hurting others
through acts is worse than causing the same hurt through
omissions (Spranca, Minsk, & Baron, 1991), or that injuries
caused by nature are less deserving of compensation than those
caused by people (Baron, in press a). Normative beliefs include
beliefs about personal virtue as well as about interpersonal
morality.
At issue in this paper is whether these normative beliefs also
affect anticipated emotions. How could such effects occur? In
principle, such beliefs could strengthen, weaken, or reverse all
four of the emotional effects just listed, and decision makers
could anticipate such effects when they make decisions:
1. A belief that smoking is wrong, for example, could reduce the
expectation of pleasure from smoking.
2. A belief that harmful acts are worse than harmful omissions
could cause us to expect to regret acts that cause harm more than
we regret omissions that cause the same harm.
3. A belief that risk-taking is wrong can lead us to expect
greater disappointment from taking a risk and losing.
4. A belief that smoking is wrong could cause us to get angry at
smokers.
If such effects on anticipated emotions occur, then beliefs could
affect decisions in two ways, first, through direct effects on
which option is chosen, and, second, through effects on
anticipated emotion. If these effects agree, decisions are
overdetermined. For example, people could choose an option
either because they believe it is morally right or because they
want to avoid the guilt feelings that would result from rejecting
it. The two effects do not necessarily agree, however. People
could think that an act is right yet anticipate feeling guilty if
they do it. Such a situation can cause difficulty in making
decisions. We might expect people to adopt standard ways of
resolving such conflicts, some choosing to side with their
`head,' others with their (anticipated) `heart.'
What happens when people are persuaded to change their normative
beliefs? Will their tendencies to choose certain options change
accordingly? Or, alternatively, will they still tend to choose
the options they would have chosen before the change?
Persistence of choice could result from an expectation that,
although normative beliefs have changed, emotions will not
change. A person who comes to believe that premarital sex is not
immoral, for example, could still be reluctant to engage in it
because of anticipated guilt feelings.
People who believe that their emotions are autonomous are likely
to be skeptical of rational thought as a means of making
decisions. And people who believe that emotions will fall into
line with beliefs automatically are more likely to allow their
decisions to be governed by reasons, without fear of negative
emotional consequences of doing so. These different beliefs
about the autonomy of emotions might also be self-fulfilling.
The expectation of guilt feelings could lead to guilt feelings
even in the absence of a corresponding moral belief. (`I know
what I did was not wrong, but I still feel guilty about it.')
The experiments reported here attempt to change (at least
briefly) the beliefs of the subjects about how they should
choose. Subjects are then asked how they would feel in various
cases. Assuming that such arguments are effective in changing
beliefs, do they change anticipated emotions too? If so,
subjects believe that their emotions are largely determined by
their beliefs about what they ought to do. If, on the other
hand, otherwise effective arguments do not affect anticipated
emotions, subjects must believe that their emotions are not under
the control of the reasons that they consciously accept. If
these anticipated emotions affect decisions, then decisions, too,
are not entirely under the control of non-emotional reasons.
The experiments address other questions, if only indirectly: Do
anticipated emotions affect norms and choices? Do emotional
reactions maintain beliefs that would otherwise be weakened by
argument? Are people whose emotions and beliefs agree more
resistant to arguments concerning their beliefs?
The examples chosen for these experiments are drawn from the
literature on biases in decision making (e.g., Baron, 1988).
Spranca et al. (1991) have argued that many subjects make an
unjustified distinction between acts and omissions in deciding
what to do or in judging the decisions of others. In one study,
subjects were asked to evaluate medical policies for treating a
serious disease. In one condition, 20% of the patients with the
disease would be brain damaged, and the treatment (which
completely cured the disease) would cause brain damage in 15%; in
another condition, the proportions were reversed. Across both
conditions, subjects rated the omission (no treatment) as a
better policy. Some subjects even rated no treatment with 20%
brain damage as better than treatment with 15%. Ritov and Baron
(1990) likewise found that many subjects prefer not to vaccinate
children when the vaccine can kill the children, even though the
death rate from the vaccination is a mere fraction of the death
rate from the disease it prevents. Subjects often said that they
would feel more guilty if a child died from the vaccine than if
the child died from the disease.
Related phenomena are the endowment effect, in which subjects
require more money to give up something than they are willing to
pay for it (Knetsch & Sinden, 1984; Kahneman, Knetsch, & Thaler,
1990), and the status quo effect, in which subjects prefer the
option they have (e.g., a retirement plan) to one they do not
have, even though they regard the latter as superior when they
have neither (Samuelson & Zeckhauser, 1988). Ritov and Baron
(1992) present evidence that these effects result in part from a
bias toward inaction rather than a bias toward the status-quo
itself.
Another effect considered here is the tendency to compensate
victims more when they are harmed by people than when they are
harmed by nature. We want to compensate a person who suffers
from a severe reaction to a vaccine, although we are less prone
to compensate someone who suffers just as much from a natural
disease (Baron, in press a). Such differences are difficult to
justify (Baron, in press a). They may, however, be related to
such emotions as anger at the injurer in the cases of
human-caused misfortune. In a sense, we compensate the victim to
assuage our own emotional reactions.
The arguments given to subjects here are those that might be used
in experiments on `debiasing' of the sort done by Larrick,
Morgan, and Nisbett (1990). An argument designed to remove a
bias might be ineffective if it does not change anticipated
emotions as well as beliefs. In such a case, the purported bias
might not even be nonnormative (Baron, 1985, pp. 62-63, 1988;
Hershey & Baron, 1987). A strong negative emotion associated
with one option can reduce the expected utility of that option
enough so that it is no longer optimal. (However, we might still
do better by trying to control our emotion instead of giving in
to it. And when we make decisions for others, as in the
vaccination example, we are selfish if we weigh our own emotions
too heavily.) The effect of normative arguments on anticipated
emotions is therefore relevant both to the success of debiasing
in changing decisions and to the normative status of those
decisions.
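The point that a strong anticipated emotion can reduce an
option's expected utility enough to change the optimal choice can
be sketched with a toy calculation. The numbers below are
hypothetical and assumed only for illustration: the death rates
mirror the vaccination example (5 vs. 10 per 10,000), while the
utility scale and guilt cost are invented, not values from any of
the studies.

```python
# Toy expected-utility sketch (hypothetical numbers, not data from the paper).
p_die_act = 5 / 10_000    # death rate if we vaccinate (act)
p_die_omit = 10 / 10_000  # death rate if we do not vaccinate (omit)
U_DEATH = -1_000_000      # disutility of the child's death (arbitrary scale)

def expected_utility(p_death, guilt_cost=0):
    """Guilt is modeled as extra disutility incurred only if the death occurs."""
    return p_death * (U_DEATH - guilt_cost)

# Without anticipated guilt, acting (vaccinating) is optimal:
assert expected_utility(p_die_act) > expected_utility(p_die_omit)

# Sufficiently strong anticipated guilt attached to the act reverses the ranking:
assert expected_utility(p_die_act, guilt_cost=1_500_000) < \
       expected_utility(p_die_omit)
```

On these assumed figures, acting halves the probability of death,
but a guilt cost larger than the disutility of death itself makes
the omission optimal, which is the sense in which the purported
bias might not even be nonnormative.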
4 Experiment 1
In the first experiment, subjects were presented with two cases
involving omission bias. The first case involved a choice of
whether to vaccinate a child against a disease, given that the
vaccine might kill the child, although unvaccinated children were
twice as likely to die as vaccinated children. The second case
involved a choice of whether to kill one innocent prisoner in
order to save two. The two cases are analogous except that death
is certain in the prisoner case and the alternative cause of
death is natural in the vaccination case. Subjects were asked
what they should do in each choice, what they would do,
and which outcome would lead to greater feelings of guilt.
Some of the subjects were then presented with an argument against
the relevance of the act-omission distinction. The argument was
addressed to the intellect, not the emotions. The effect of the
argument on Should, Would, and Guilt responses could then be
examined. The Should response serves as a manipulation check on
the effectiveness of the argument.
The main hypothesis is that the argument will affect anticipated
guilt feelings if it affects normative beliefs, at least in some
subjects. A secondary hypothesis is that some subjects will show
effects only in normative beliefs, and not in anticipated
emotions. In addition, we can examine the extent to which the
Would response is related both to Should and Guilt responses.
I assume that the correct response in both cases is to act. The
argument presented to subjects in the `debiasing' condition
advocated action in both cases. This is a controversial
assumption, especially in the prisoner case (where very few
subjects agreed with it, even after the argument). Good
arguments can be made on the other side of the prisoner case.
For example, in any realistic version of this case it would be
impossible to know that the expected long-run consequences of
shooting are better than those of not shooting. Even from a
consequentialist perspective, then, one might do better to follow
the rule `never shoot anyone,' even in situations in which that
rule appears to lead to worse consequences (Hare, 1981). Or,
even if shooting has better consequences, a person who
refused to shoot might be a better person, a person with the kind
of character that will lead to more good in the long run (Hare,
1981). Finally, a person could regard the guilt feelings from
shooting the prisoner as worse than death, in which case the
consequences of shooting would be clearly worse on the whole than
those of not shooting. For subjects who thought of these
arguments, the persuasive argument given should be ineffective in
changing normative belief.
In assuming that the correct answer is to shoot, I assume that
the case is not realistic, that subjects should accept the case
as stated, that they should consider the choice (as requested)
rather than the character of the decision maker, that decisions
should be judged by their expected consequences, and that
subjects do not regard the guilt feelings from killing as worse
than death. Although these assumptions are questionable, they do
not affect the conclusions that I draw, which do not depend on
the normative status of the argument. (For recent discussion of
the act-omission issue, see: Baron, in press b, ch. 7; Bennett,
1966, 1981; Gorr, 1990; and Kuhse, 1987, ch. 2.) From here on,
then, I shall speak as though the act were normatively correct,
purely for brevity of expression.
4.1 Method
Subjects were solicited by placing a sign on a prominent campus
walkway at the University of Pennsylvania. Most subjects were
undergraduate students there or at the nearby Philadelphia
College of Pharmacy and Science. Subjects were paid $5 per hour
for completing this questionnaire and others. A total of 120
subjects were given the following questions:
1. A kind of flu can be fatal to children under 3. A vaccine for
this kind of flu has been developed and extensively tested. The
vaccine prevents the flu, but it sometimes causes side effects
that can be fatal.
The death rate for unvaccinated children is 10 out of 10,000
children under 3. These children die from the flu.
The death rate for vaccinated children is 5 out of 10,000
children under 3. These children die from the vaccine.
A. If you had a child under 3, should you vaccinate your child?
B. Do you think that you would vaccinate your child?
C. Consider two situations:
1. You vaccinate and your child dies from the vaccine.
2. You do not vaccinate and your child dies from the flu.
In which of these situations would you feel guiltier about your
decision?
2. Imagine that you and three others are held prisoner in the
Middle East. All four of you are innocent. Your captors are
planning to murder two of the other prisoners. You do not know
which two. They will spare these two if you will murder the
single remaining prisoner. (You believe them when they say
this.) Nobody except you and your captors will know about your
decision.
A. Should you kill the one to save the two?
B. Do you think that you would kill the one to save the two?
C. Would you feel guiltier about your decision if you:
1. committed the murder;
2. did not commit the murder.
(Answer 1 or 2.)
Then 63 subjects in a `debiasing' condition were given the
following argument:
The questions you have just answered concern the distinction
between actions and omissions. We would like you now to
reconsider your answers. First, read this page and think about
it. Then answer the questions again on the next page. (Do not
go back and change your original answers.) Feel free to comment
as well.
When we make a decision that affects mainly other people, we
should try to look at it from their point of view. What matters
to them, in these two cases, is whether they live or die.
In the vaccination case, what matters is the probability of
death. If you were the child, and if you could understand the
situation, you would certainly prefer the lower probability of
death. It would not matter to you how the probability came
about.
The case of the prisoners is much the same. Imagine that each
prisoner knows about the choice you must make but does not know
whether he is the one that would be shot by you or whether he is
one of the two that would be shot by your captors. Each of the
other prisoners would certainly want you to do the shooting. If
you shoot, only one of the three will die, so the chance of death
is 1/3. If you refuse to shoot, then two of the three will die,
so the chance is 2/3.
In cases like these, you have the choice, and your choice affects
what happens. It does not matter what would happen if you were
not there. You are there. You must compare the effect of one
option with the effect of the other. Whichever option you
choose, you had a choice of taking the other option.
If the main effect of your choice is on others, shouldn't you
choose the option that is least bad for them?
These subjects were given the original two cases again. 42 of
the 63 were also asked, `Please comment on how the argument
affected, or did not affect, your answers.' (One of these
subjects did not do so.)
An additional 38 subjects in a control condition were asked,
after the first two questions: `Here are the same questions
again. Please think through your answers. Do not simply repeat
what you said before unless, after further thinking, you
still think that you gave the best answers.' After the repeated
questions, these subjects were asked, `Please comment on why you
changed, or did not change, your answers.'
In sum, 120 subjects did the first two questions, 63 of these
were in the debiasing condition and answered the questions again
after reading an argument (with 42 asked to comment), and 38 were
in a control condition that required simply answering the
questions again.
4.2 Results and discussion
Table 1 shows the proportions of subjects who answered each
question favoring the act (vaccinate or shoot) before the
argument (`initial') and after it (`final'). Of primary
interest, all responses in the debiasing condition, including
Guilt, favored acts more strongly after subjects read the
argument (p<.05 for each comparison by a sign test). The
control condition showed no significant improvement, although 15
of the 38 subjects did change at least one answer in one
direction or the other. The debiasing condition showed more
change toward the act option than the control condition (across
both situations) for Should (p=.000, Mann-Whitney U test), Would
(p=.005), and, most importantly, Guilt (p=.014). The hypothesis
that the argument would affect anticipated emotion was therefore
supported.
Table 1.
Number of subjects favoring the act (which was less harmful)
for each question, out of the total number who answered the
relevant question (to the right of each slash). `Change' is the
net change toward favoring the act, out of the number who
disagreed with the argument initially.
              All Ss    Debias condition           Control condition
              Initial   Initial  Final   Change    Initial  Final   Change
Vaccination
  Should      98/120    51/63    59/63    8/12     31/38    32/38    1/7
  Would       88/119    47/63    57/63   10/16     26/37    26/38    0/11
  Guilt       83/118    44/63    51/63    7/19     25/36    26/36    1/11
Prisoners
  Should      29/120    14/63    36/63   22/49      8/38     8/38    0/30
  Would       15/120     9/63    17/63    8/54      2/38     2/37    0/36
  Guilt        8/120     4/63    14/63   10/59      3/38     1/38   -2/35
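The sign test used here is simple enough to compute directly:
under the null hypothesis that changes toward and away from the
act are equally likely, the number of changes toward the act
follows a binomial distribution with p = .5. A minimal sketch;
the counts in the usage line are illustrative only, since Table 1
reports net change rather than the raw counts in each direction.

```python
from math import comb

def sign_test_p(pos, neg):
    """One-tailed sign test: P(X >= pos) for X ~ Binomial(pos + neg, 0.5).
    pos = changes toward the act, neg = changes away; ties are dropped."""
    n = pos + neg
    return sum(comb(n, k) for k in range(pos, n + 1)) / 2 ** n

# Illustrative: 8 subjects change toward the act and none change away.
p = sign_test_p(8, 0)  # 1/256, about .004
```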
The effects of the argument on the three responses did not differ
significantly in the vaccination case. The effect on Guilt was
therefore just as strong as that on Should or Would. In the
prisoner case, however, the number of subjects who changed in
response to the argument was greater for Should than for Would
(p=.017, sign test) or Guilt (p=.007). (These results were
substantially identical when the analysis took into account the
number of opportunities for change.) Importantly, then, the
effect on Guilt is not found in all subjects. Some subjects
change their normative belief (Should) but not their anticipated
emotion in response to the argument. Typical written responses
of subjects who accepted the argument for Should but not for
Would or Guilt were: `... after reading the passage ... I realize
that I should kill the one to spare the 2 lives, but I still feel
that I would be unable to commit a murder.' `I agree with the
argument but could not do it.' `I feel irrational that I would
rather have more people killed ... but my mental blocks against
actively killing things are so strong that I don't think I could
do it, even for the greater good.' `I am aware that this is
actually a selfish decision.'
Table 2 shows the data relevant to questions about contingency
between changes in Should and Guilt in the debiasing condition.
The main result here is that Guilt did not favor the act (vs. the
omission) after the argument unless Should favored the act after
the argument. Subjects who were still unconvinced that they
should act would still feel more guilty from acting. The
argument therefore had no effect on emotion as long as normative
belief favored the omission. When belief initially favored the
act, additional argument might have strengthened the belief
either by reminding the subject of old arguments or presenting
new ones. The additional strength could have led the subject to
anticipate a change in emotion.
Table 2.
Number of subjects in the debiasing condition of Experiment 1
who favored the act or omission before/after the presentation of
the debiasing argument on the Should and Guilt questions. For
example, subjects labeled `omit/act' favored the omission before
the argument but the act after it.
Vaccination
                           Guilt
Should         omit/omit  omit/act  act/omit  act/act
  omit/omit        2          1         0         1
  omit/act         2          2         0         4
  act/omit         0          0         0         0
  act/act          7          5         1        38

Prisoners
                           Guilt
Should         omit/omit  omit/act  act/omit  act/act
  omit/omit       26          0         0         1
  omit/act        16          5         0         1
  act/omit         0          0         0         0
  act/act          7          5         0         2
Responses to the two cases were not significantly correlated.
Table 1 shows that the act was less favored in the prisoner case.
For both cases, Should favored the act more strongly than Would
and Guilt. The Should percentage exceeded that of both Would and
Guilt significantly (p<.001, Wilcoxon test) overall for the
initial answers, but the difference between Would and Guilt was
not significant.
Correlations suggest that anticipated behavior is driven by a
compromise between normative beliefs (Should) and anticipated
emotions (Guilt), which are themselves in the greatest conflict:
questions within a case (before the debiasing argument) were
significantly correlated, with the strongest correlations between
Should and Would (Goodman-Kruskal gamma .97 for vaccination and
.86 for prisoners) and the weakest between Should and Guilt (.66
for vaccination, .55 for prisoners). A log-linear analysis on
the initial responses led to the same conclusion. It found
significant associations (p<.005) between Would and Guilt for
both cases and between Would and Should for the prisoner case
(p<.001; p<.10 for the vaccination case). No separate
association was found between Should and Guilt. (The basic model
was `Should*Would + Should*Guilt + Would*Guilt'; in this
notation, each interaction term includes its main effects. Tests
were carried out by replacing each interaction term with its
corresponding main effects only and then comparing the
likelihood-ratio chi-squares.)
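The Goodman-Kruskal gamma reported above measures ordinal
association as (C - D)/(C + D), where C and D are the numbers of
concordant and discordant pairs of observations. A minimal
sketch for a contingency table given as a list of rows; the
example table is made up for illustration, not the study's data.

```python
def goodman_kruskal_gamma(table):
    """Gamma = (C - D) / (C + D) for an r x c ordinal contingency table.
    C and D count concordant and discordant pairs of observations."""
    concordant = discordant = 0
    rows, cols = len(table), len(table[0])
    for i in range(rows):
        for j in range(cols):
            for k in range(i + 1, rows):    # pair each cell with later rows
                for l in range(cols):
                    if l > j:               # later column too: concordant
                        concordant += table[i][j] * table[k][l]
                    elif l < j:             # earlier column: discordant
                        discordant += table[i][j] * table[k][l]
    return (concordant - discordant) / (concordant + discordant)

# Perfect agreement between two dichotomous judgments gives gamma = 1:
assert goodman_kruskal_gamma([[10, 0], [0, 10]]) == 1.0
```

Unlike a correlation coefficient, gamma ignores tied pairs, which
is why it suits the dichotomous Should/Would/Guilt responses.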
An additional log-linear analysis suggested that the effect of
the argument on Would was associated with its effect on Guilt, in
the debiasing condition. Taking into account the effect of
initial responses, Would2, the Would response after the
argument, was associated with Guilt2 both for vaccination
(p<.05) and for prisoners (p<.001), and it was associated with
Should2 for prisoners (p<.005) but not for vaccination. (The
basic model was `Should2*Should1 + Would2*Would1
+ Guilt2*Guilt1 + Would1*Should1 +
Would1*Guilt1 + Should1*Guilt1 +
Would2*Should2 + Would2*Guilt2.' Each of the
last two interaction terms was replaced with its main effects for
the test.)
To test more directly the role of anticipated guilt in
determining Would and Should responses, an additional experiment
was carried out on 28 subjects from a class in thinking and
decision making (which had been assigned some reading about
utilitarianism). The prisoner story was modified by using the
word `kill' instead of `murder' and by adding, `You are
absolutely convinced that they will do what they say.' For 14
subjects, the number of other prisoners was raised from 3 to 39
(with 38 slated to be killed instead of 2). Subjects could
indicate indifference for each question. The Should item favored
killing for 20 subjects and favored not killing for 5 (the rest
being indifferent). The Guilt item favored killing for only 9
subjects (vs. 10), and Would favored killing for only 9 (vs. 11).
These both differed significantly from Should (sign test, p=.035
and .031, respectively), and Would and Guilt were correlated
(gamma=.78, p<.001). (Should and Guilt favored killing
significantly more when 38 others would be killed than when 2
would be.) Subjects
were also asked, `If you were sure that you would not feel guilty
from killing the one to save the others, do you think that you
would do it?' Here, 22 said they would kill (vs. 3);
significantly more than the original Would response (p=.000).
Subjects were then asked: `Suppose that someone else were in this
position instead of you. This person would feel guiltier killing
the one than not killing [not killing the one than killing the
one]. Should this person kill the one to save the others?' 15
(vs. 10) said that the person who felt guiltier from killing
should kill, and 21 (vs. 4) said that the person who felt
guiltier from not killing should kill. The difference is
significant (p=.039, sign test). Subjects think that anticipated
guilt feelings affect not only action but also what a person
should do.
Log-linear analyses suggested that initial Guilt did not predict
susceptibility to argument for Would or Should, but some
important cells contained few subjects, so these analyses are
inconclusive. (For example, of those whose initial Guilt favored
vaccination, Should favored nonvaccination in only five
subjects.)
In sum, an argument about the irrelevance of the act-omission
distinction affected anticipated emotion as well as moral belief.
It seems that anticipated emotion is driven partly by belief.
The effect on anticipated emotion is not as strong as that on
belief in the prisoner case, however. Changes in expected action
seem to occur in combination with changes in anticipated emotion,
and subjects in an ancillary study believed that both expected
action and the normatively correct response are affected by
anticipated guilt feelings.
5 Experiment 2
This experiment concerned judgments of appropriate compensation
for a misfortune, in particular, blindness. Baron (in press a)
reports evidence that judged compensation is greater when a
misfortune was caused by a person than when it was caused by
nature. Ritov, Hodes, and Baron (1989) also found that people
wanted to have more insurance against misfortunes caused by a
person than against those caused by nature. Experiment 2 seeks
to replicate those results and to examine their relation to
emotions. In particular, judged compensation for misfortune
could be related either to sadness at the person's suffering or
to anger that the person had to suffer. (In principle, anger
could be directed against nature as well as against people.) As
in Experiment 1, we provide an argument against differences in
compensation, and we examine the effect of the argument on both
judgments and emotions.
5.1 Method
Fifty-nine subjects, solicited as in Experiment 1, were given the
following questionnaire (the debiasing condition):
Suppose that two equally serious viral diseases, A and B, begin
to spread around the world. Both diseases are spread through
contact, much like flu. Neither disease can be detected early
enough to quarantine those who carry it. Both diseases cause
permanent blindness in those who get them.
It is expected that 5 out of every 10,000 people (that is, 0.05%
of the population) will get disease A.
It is expected that at least 50 out of every 10,000 people (that
is, 0.5% of the population) would get disease B if nothing is
done. However, a pharmaceutical company developed a vaccine for
disease B, and the government initiates a crash program which
succeeds in getting everyone vaccinated. The vaccine completely
prevents the disease, but it also causes permanent blindness in 5
out of 10,000 people (that is, 0.05%) who are vaccinated, a
reduction of 90% in the amount of blindness. This was absolutely
the best that the company could do in the time available to
develop the vaccine.
1. As part of the program, the government sets aside a special
fund to compensate the victims. How should the fund be used?
Pick one and explain:
a. More money should be spent to compensate those made blind
by disease A.
b. More money should be spent to compensate those made blind
by the vaccine for disease B.
c. Equal amounts should be spent on both kinds of victims.
2. Suppose that no special fund is set aside but insurance is
available for blindness caused in either way. Both kinds of
insurance cost the same. Which would you be more inclined to
buy? Pick one and explain:
a. More insurance against blindness caused by disease A.
b. More insurance against blindness caused by the vaccine for
disease B.
c. Equal amounts of both kinds of insurance.
3. Which would make you feel sadder:
a. Hearing about someone who became blind from disease A.
b. Hearing about someone who became blind from the vaccine for
disease B.
c. Equally sad.
4. Which would make you more angry:
a. Becoming blind from disease A.
b. Becoming blind from the vaccine for disease B.
c. Equally angry.
[New page.] Now consider the following argument before answering
the original questions again:
Blindness is the same regardless of its cause. It does not
matter whether blindness was caused by a natural disease or by a
vaccine.
The main functions of compensation are to pay for extra expenses
(such as seeing-eye dogs, medical care, etc.), to make up for
losses (lost income), and to give people extra money so that they
can try to make up for their loss of vision (e.g., by hiring
help). The need for these things is great, and it is just as
great regardless of the cause of the blindness.
Any gain to one group of victims means that the other group will
be receiving less compensation.
The compensation comes from the same source regardless of the
cause of the blindness. The point of the compensation is not to
punish those who caused the blindness. The government, in any
case, knew that it would have to compensate both kinds of
victims.
Subjects were asked to answer the original questions again, and
they were then asked: `Explain how the argument affected your
responses, or, if it did not, why it did not.'
An additional 58 subjects did a control condition, in which they
were instructed, after answering the first four questions, `Here
are the same questions again. Please think through your answers.
Do not simply repeat what you said before unless, after
further thinking, you still think that you gave the best
answers.' At the end, they were asked to comment on their
reasons for changing or not changing.
5.2 Results
Table 3 shows the number of subjects giving each answer to each
question initially and finally. Initially, as predicted,
Compensation, Sadness, and Anger showed a bias toward the vaccine
(p=.000 by a sign test comparing responses favoring the vaccine
to those favoring the disease), but Insurance did not (20 vs.
14). However, Insurance was significantly correlated with
Compensation (gamma=.619, p=.001), and those subjects who wanted
more compensation for vaccine-caused injury also tended to want
more insurance for this (10 wanting more, one wanting less). So
some subjects show a bias even for insurance. Some subjects may
show the opposite bias, toward the disease.
Table 3.
Number of subjects favoring each response on each question in
Experiment 2.
                          Final
          Debiasing condition (N=59)    Control condition (N=58)
          more for           more for   more for           more for
          vaccine    equal   disease    vaccine    equal   disease
Initial
Compensation
  vaccine      3      14       0           12       5        0
  equal        0      41       0            3      33        1
  disease      0       1       0            0       2        2
Insurance
  vaccine      4       5       1            5       4        1
  equal        0      42       0            6      35        0
  disease      0       4       3            1       2        4
Sadness
  vaccine     10       7       0           18       7        0
  equal        0      39       0            2      25        2
  disease      0       1       2            0       1        3
Anger
  vaccine     30      11       0           40       3        0
  equal        1      17       0            1      12        2
  disease      0       0       0            0       0        0
Just as the two decision items, Compensation and Insurance, were
correlated, the two emotion items, Sadness and Anger, were
correlated as well (gamma=.700, p=.000). The sum of the two
decision items was not correlated significantly with the sum of
the two emotion items, however (gamma=.176).
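The gamma coefficients reported here are Goodman-Kruskal gammas for ordered categories: concordant pairs minus discordant pairs, divided by their sum, with tied pairs ignored. A minimal sketch (the response coding and data below are illustrative, not the study's responses):

```python
from itertools import combinations

def goodman_kruskal_gamma(xs, ys):
    # Gamma = (concordant - discordant) / (concordant + discordant);
    # pairs tied on either variable are ignored.
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        prod = (x1 - x2) * (y1 - y2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical responses coded 1 (favoring the vaccine), 0 (equal),
# and -1 (favoring the disease) on two questions
compensation = [1, 1, 0, 0, -1, -1]
insurance    = [1, 0, 0, 1, -1, 0]
print(goodman_kruskal_gamma(compensation, insurance))  # → 0.75
```

Because ties are dropped, gamma can be large even when most subjects give the modal (equal) response on both questions, as in Table 3.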
The argument seemed to reduce the bias toward the vaccine.
Favoritism toward the vaccine - that is, more compensation or
stronger emotion - decreased in Compensation (p=.001, one-tailed
sign test), Anger (p=.003), and Sadness (p=.035). None of the
corresponding changes in the control condition was significant.
The difference between the two conditions in the reduction of
favoritism was significant only for Compensation (p=.009,
one-tailed Mann-Whitney U test), however.
The argument was more clearly effective in leading subjects to
treat the two causes equally, ignoring whether inequality favored
the vaccine or the disease. In the debiasing condition, the
number of judgments of equality increased significantly for
Compensation (p=.000), Insurance (p=.002), Sadness (p=.004), and
Anger (p=.003). No increases were significant for the control
condition. The difference between the debiasing condition and
the control condition in the change in the number of judgments of
equality was significant for Compensation (p=.009), Insurance
(p=.002), and Anger (p=.009), but not for Sadness.
In the debiasing condition, the effect of the argument on the
number of unequal emotion judgments (Sadness or Anger; 0, 1, or 2
of these could be unequal) was not significantly greater than its
effect on the number of unequal normative judgments (Compensation
or Insurance; p=.239, Wilcoxon test). These two effects were
correlated in the debiasing condition (gamma=.551, p=.003).
Initial Sadness and Anger judgments were completely uncorrelated
with the effectiveness of the argument in changing compensation
judgments.
As in Experiment 1, the argument affected anticipated emotion
(toward equality) both in subjects whose compensation judgments
were already equal and in those who changed from inequality to
equality. Of those whose compensation judgments were already
equal, 12 changed toward equality in emotion judgments, out of 31
who gave initially unequal judgments for at least one emotion.
(One subject in this group changed away from equality in emotion
judgments.) Of those whose compensation judgment changed toward
equality, 6 changed toward equality in emotion judgments, out of
10 who gave initially unequal judgments. Only three subjects
gave unequal compensation judgments after the argument, and the
unequal emotion judgments of all three also remained unequal.
Experiment 2 agrees with Experiment 1 in showing effects of
debiasing on anticipated emotion as well as normative beliefs as
expressed in decisions. The effect on decisions was not
significantly stronger overall than the effect on emotion.
6 Experiment 3
Experiment 3 examines the tradeoff of money and risk. Subjects
are often asked to make judgments of their willingness to pay
(WTP) for the reduction of risk or their willingness to accept
(WTA) money in return for an increase in risk. These kinds of
judgments are used in measuring the value of public goods by
`contingent valuation' (Mitchell & Carson, 1989). In these
studies, WTA is generally higher than WTP for the same risk, as
is consistent with the endowment effect (e.g., Kahneman et al.,
1990).
The WTA-WTP discrepancy could in principle result from the shape
of the utility function for money. The utility gain from
accepting $1,000,000 might equal the utility loss from paying
only $100,000. In this case, it would be normatively correct to
show a large discrepancy. Although WTA-WTP effects for small
amounts of money are not thought to be understandable in terms of
the utility function (Kahneman et al., 1990), they could arise
from such effects applied to limited `mental accounts' (Thaler,
1985). If I have a limited account for tradeoffs of money and
risk (Thaler, 1985), I might think of accepting $100 as
equivalent to losing $50. Such an explanation of the WTA-WTP
difference would not render the effect trivial, but the effect is
usually interpreted as a bias toward the status-quo, not as the
result of the subjective utility function.
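The utility-function account in this paragraph can be made concrete: under any concave utility for money, a fixed utility difference corresponds to a larger dollar amount on the gain side than on the loss side. The log utility, account size, and utility value below are illustrative assumptions, not parameters from any study:

```python
import math

# Log utility over a limited `mental account' of $10,000; both the
# account size and the utility value v of the good are illustrative.
w = 10_000.0
v = 1.0  # utility value of the good being priced (e.g., lower risk)

# WTP: pay b to acquire the good, so log(w) - log(w - b) = v
wtp = w * (1 - math.exp(-v))
# WTA: accept a to give it up, so log(w + a) - log(w) = v
wta = w * (math.exp(v) - 1)
print(round(wtp), round(wta))  # → 6321 17183: WTA far exceeds WTP
```

The same utility change is worth about $6,300 paid out of the account but about $17,200 added to it, mirroring the $100,000-versus-$1,000,000 example above; comparing WTP to WTFL removes this explanation by holding future consequences constant.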
In the present experiment, the debiasing argument needed to be
directed toward a single explanation. Accordingly, this
experiment compared WTP to a different measure, `willingness to
forgo loss' (WTFL). In WTFL, the subject is asked what the
monetary loss would have to be in order to make her indifferent
between the monetary loss and the risk. The only difference is
in the assumed status-quo, which is less risk and more money for
WTP and the reverse for WTFL. The future consequences are
identical. Kahneman et al. (1990, Experiments 6 and 7) found
that WTA was higher than willingness to forgo gains (which
they called `choosing'). This result indicates a pure status-quo
bias, independent of wealth or income effects. (McClelland &
Schulze [1991] make a similar comparison, but without holding
future outcomes constant.) Here, we extend this to WTP and WTFL,
and we examine emotional concomitants. WTP is of practical
interest because it is often impossible to use WTA in risk
evaluation: many subjects say that `no amount of money is enough'
(a reasonable response if their utility for monetary gains is
bounded).
Experiment 3 addresses another issue concerning the finding of
Experiments 1 and 2 that the debiasing argument does not always
affect anticipated emotion even when it affects a decision. One
possible interpretation of this incompleteness is that the
questionnaires measured the wrong emotions. This is unlikely: in
Experiment 1, no subject mentioned any other relevant emotion
aside from guilt feelings; in Experiment 2, the high correlation
between Sadness and Anger suggests that emotion judgments in
these cases are subject to a halo effect in any case. In
Experiment 3, as an additional test, subjects were asked to rate
the strength of their `emotional reaction,' without any specific
mention of the type of emotion.
6.1 Method
In the debiasing condition, 91 subjects were solicited as in
previous experiments. Eleven were eliminated for failing to
follow instructions (e.g., writing only comments when numerical
values were requested), leaving 80 for analysis. Subjects were
given the following questionnaire:
In each question, you will be asked to name a dollar figure and
to indicate the strength of your emotional reaction to a
hypothetical outcome. In rating the strength of your reaction,
use a scale of numbers from 0 to 100, in which 0 represents `not
upset at all' and 100 represents `as upset as one can be about
this sort of situation.' Please try to use consistent numbers
throughout the questionnaire. That is, stronger negative
reactions should get larger numbers. If you must go beyond 100
in order to be consistent with an earlier response, feel free to
do so.
All questions concern nuclear waste. As you may know,
repositories for high-level nuclear waste from nuclear plants are
built according to Federal safety standards in places designated
as geologically safe. The wastes are stored more than 100 feet
below the earth's surface, in specially sealed canisters,
designed to last for thousands of years.
1. Suppose that the Federal government had planned to put a high
level nuclear waste repository 500 miles from your home. The
decision was appealed, and more studies were conducted. The
government recently proposed to locate the repository 50 miles
from your home.
You and others in your area expected your taxes to increase. If
you and others accept the change in the site of the repository
(from 500 miles to 50 miles away), the government will cancel the
increase for as long as you live near the repository. How
much would the planned increase have to be, in dollars, so that
you are indifferent between:
* having the site 50 miles from your home without the increase;
* having it 500 miles from your home with the increase?
Check your answer as follows:
Suppose that the planned tax increase is more than the number you
just wrote. You should prefer having the repository 50 miles
away without the increase.
Suppose that the increase is less than the number you wrote. You
should prefer having the repository 500 miles away with the increase.
If these statements are not true, change your answer so that they
are true. Finally, place a check mark next to your answer to
indicate that you have checked it.
2. Please rate your emotional reactions to the following outcomes
of question 1:
A. Your taxes do not increase, and the repository is located
50 miles from your home.
B. Your taxes increase by the amount you wrote in question 1,
and the repository is located 500 miles from your home.
3. Suppose that the Federal government had planned to put a high
level nuclear waste repository 50 miles from your home. The
decision was appealed, and more studies were conducted. The
government recently proposed to locate the repository 500 miles
from your home.
If you and others accept this change, your taxes will increase
for as long as you live far away from the repository. What tax
increase would you be willing to pay, in dollars, so that you are
indifferent between:
* having the site 50 miles from your home without the increase;
* having it 500 miles from your home with the increase?
[Subjects again checked their answer and, in question 4, rated
their emotional reactions as before.]
[New page.] Now, before answering the original questions again,
consider the following argument:
Questions 1 and 3 concern decisions about the tradeoff between
distance from the repository and money. You can be close to or
far from the repository. You can have less money than you have
now or you can have the same amount. When you answer these
questions, you are saying that you are indifferent between having
less money, by a certain amount, and being far from the
repository.
What matters for decisions like these is the future, not the
past, and not what was `expected.' The future is what you can
affect. It does not matter whether the government initially
planned to locate the repository 50 miles or 500 miles from your
home.
Therefore, the amount of money you indicate should be the same in
the two cases. In both cases, you have the same choice with
respect to the future.
[Subjects answered questions 1-4 again and then commented on how
the argument affected or did not affect their answers.]
In the version just presented, question 1 is WTFL and question 3
is WTP. Forty-two of the analyzed subjects received the
questions in this order. For 38 subjects, the order was
reversed, with WTP as question 1 and WTFL as question 3. The two
orders did not differ significantly in any results, so order will
be ignored.
In the control condition, 61 subjects were given the same
questionnaire with instructions to rethink (and to explain any
changes) as in the control conditions in Experiments 1 and 2.
Fourteen subjects were eliminated for failure to follow
instructions, leaving 47 (26 in the order given above, 21 in the
reverse order).
6.2 Results
Table 4 shows the main results. In the answers to questions 1
and 3, WTP was smaller than WTFL in 53 subjects. These subjects
showed a status-quo bias. Thirteen subjects showed the opposite
effect, and 61 subjects gave equal dollar values. The status-quo
bias (53 vs. 13) was significant by a sign test (p=.000).
Table 4.
Number of subjects who responded in each way for Experiment
3. (Ns for the emotion questions are smaller because some
subjects did not answer them properly.)
                         Final
         Debiasing condition (N=80)    Control condition (N=47)
           WTP            WTFL           WTP            WTFL
          worse   equal   worse         worse   equal   worse
Initial
Monetary question
  WTP worse     2      7      0            0      2      2
  equal         0     40      1            2     12      6
  WTFL worse    2     17     11            1      8     14
Emotion question
  WTP worse     4      4      1            6      2      2
  equal         0     36      1            5     14      3
  WTFL worse    5     15      9            3      3      8
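The sign tests reported throughout compare the two unequal-response counts against a binomial null of one half, with ties excluded. A stdlib sketch of the two-tailed version (the function name is mine, not the paper's):

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    # Two-tailed sign test: probability, under a fair-coin null, of a
    # split at least as extreme as the one observed among untied cases.
    n = n_pos + n_neg
    k = max(n_pos, n_neg)
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# 53 subjects showed the status-quo bias, 13 the reverse; ties excluded
print(sign_test_p(53, 13) < .001)  # → True, reported in the text as p=.000
```

The 61 subjects who gave equal dollar values contribute nothing to the test; only the 66 untied subjects enter the binomial calculation.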
The same bias was found in the emotion questions. Subjects were
more upset by change than by keeping the status-quo. This was
assessed by comparing the sum of the emotion ratings to question
2 with the sum of the ratings to question 4. Forty-three
subjects gave higher ratings for WTP than for WTFL, and 19 gave
the reverse (p=.002). (Five subjects did not give numerical
emotion ratings.) The responses to the emotion questions were
correlated with those to the monetary question (gamma=.337,
p=.01).
The argument did not significantly reduce the status-quo bias,
but it did move subjects toward equality for WTP and WTFL,
regardless of which judgment was initially greater. Twenty-four
subjects in the debiasing condition moved from inequality to
equality for the two monetary values, and only one subject moved
from equality to inequality (p<.001). The control condition had
no effect, and the difference between experimental and control
conditions in the increase in the number of equal judgments was
significant (p=.013, Mann-Whitney U).
The same effect occurred for the emotions questions: in the
debiasing condition, 19 moved from inequality to equality, and 1
moved from equality to inequality (p<.001). Although the
status-quo bias was still present after the argument for the
monetary questions (12 vs. 4), it was essentially absent for the
emotion questions after the argument (11 vs. 9). The difference
between the experimental and control conditions in the increase
in the number of equal judgments was significant (p=.001) for the
emotion question.
The effect of the argument on the number of equality judgments
was essentially as strong for the emotion questions as for the
monetary questions. A Wilcoxon test yielded no significant
difference in the debiasing condition.
In the debiasing condition, once again, the effect on emotion was
limited to those subjects who either changed from inequality to
equality on the monetary question or who indicated equality both
before and after the argument. Of those who changed to equality
on the monetary question, 11 changed to equality on the emotion
question, out of 18 who answered with inequality initially. Of
those who answered with equality on the monetary question before
and after the argument, 7 changed to equality on the emotion
question, out of 10 who answered with inequality initially. (One
subject changed from equality to inequality.) Of those subjects
who answered the monetary question with inequality both times,
only one changed from inequality to equality on the emotion
question, and 9 answered this question with inequality both
times.
Although the correlation between change in the emotion question
and change in the monetary question was high (gamma=.70, p<.01),
it was not perfect. A total of 11 subjects had unequal
anticipated emotions after the argument even though their
responses to the monetary question were equal. (This number can
be compared to the 18 who changed to equality.) Some subjects
changed only their decision without changing their anticipated
emotion. This incompleteness, found in Experiments 1 and 2 as
well, cannot here be explained in terms of asking about the wrong
emotion, for Experiment 3 did not specify the emotion.
No apparent relationship was found in the debiasing condition
between change in the monetary question and the initial answer to
the emotion question.
In sum, the normative argument affected emotion just as strongly
as it affected judgment, but the two effects were imperfectly
correlated. Again, the effect on emotion is found only in
subjects who ultimately agree with the argument, if only because
they agreed with it at the outset.
7 Discussion
Normative arguments affected anticipated emotions essentially as
strongly as they affected normative beliefs or hypothetical
decisions, except for the prisoner scenario in Experiment 1. In
many subjects whose beliefs or decisions agreed with the
normative argument initially, arguments affected anticipated
emotions, bringing emotions into accord with beliefs or
decisions. In other subjects, the arguments affected beliefs or
decisions without affecting emotion.
Normative beliefs could affect anticipated emotions and decisions
in three ways. First, subjects could believe that normative
beliefs determine emotions. The change in anticipated emotion,
in turn, could allow subjects to change their decisions. In this
case, anticipated emotions are a determinant of decisions. The
ancillary experiment following Experiment 1 suggests that some
subjects do believe that decisions are affected by anticipated
emotions, thereby supporting this mechanism (but not impugning
the others). Second, a change in normative beliefs could affect
the decision, and subjects could then believe that they could
bring their own emotions into line with their decision. (See
Ainslie, 1992, for discussion of the control of emotions.)
Third, the normative beliefs could have independent effects on
decisions and beliefs about future emotions. We cannot rule out
any of these possibilities. At first blush it might seem that a
correlation between emotion change and decision change would rule
out the third account, but it is possible that the correlation is
induced by differential effectiveness of the argument in changing
normative belief. Further investigation is needed.
In most of the situations studied here, people seem to assume
that their emotions will be justified and reasonable, so that
they can do what they believe to be right without interference
from their emotions. In the prisoner scenario, however, some
people think that their anticipated emotions are unaffected when
normative beliefs change. In situations like this, a change in
normative belief will not necessarily lead to behavior change,
and it could lead to conflict between belief and emotion. This
`emotional inertia' could complicate efforts to change decision
making through purely cognitive means, such as cognitive therapy
(Clark, 1986), debiasing (Larrick et al., 1990), or education
(Baron & Brown, 1991).
Our results point to two different beliefs about the relation
between emotion and normative judgment. By one belief, emotions
fall into line (or can be brought into line) with rational
judgment. The other belief, a sort of Freudian theory, holds
that emotion is not affected by rational argument. When
people hold this theory about their own emotions, they may resist
rational appeals concerning their decisions, for - even if they
are intellectually persuaded - they will be held back by the
belief that their emotions will not follow along. Even if they
are convinced that radon is more dangerous than pesticides, they
will not favor a change in governmental priorities away from
pesticides and toward radon, because of the fear that they will
worry more if the change is made.
No evidence, however, indicated that this sort of belief in the
non-malleability of emotion causes resistance to the intellectual
argument itself. We might have expected such resistance if
people believe that emotions and normative beliefs should be
consistent and that emotions would not change. The role of
folk-psychological theories in belief change is another topic for
further investigation (see Baron, 1991).
The danger of holding the view that emotions will coincide with
normative beliefs is that the Freudian view could be correct. A
young man or woman could be convinced that premarital sex is not
immoral and then engage in it, thinking that no guilt feelings
will follow, but then experience the guilt feelings anyway.
Does this sort of thing happen? It's hard to tell. In the real
world, belief change itself is rarely complete and stable. I am
impressed, however, with the findings of cognitive therapists
such as Salkovskis, Clark, and Hackmann (1991), who found that
panic attacks can be controlled by pure change in belief about
their origin. Patients who believe that incipient attacks
represent an immediate medical crisis, when convinced otherwise,
cease having full-blown attacks. The fear that leads to the
full-blown attack is, in this case, controlled by a belief that
the fear is unjustified.
8 References
Ainslie, G. (1992). Picoeconomics: The interaction of
successive motivational states within the individual. New York:
Cambridge University Press.
Baron, J. (1985). Rationality and intelligence. New York:
Cambridge University Press.
Baron, J. (1988). Thinking and deciding. New York:
Cambridge University Press.
Baron, J. (1991). Beliefs about thinking. In J. F. Voss, D. N.
Perkins, & J. W. Segal (Eds.), Informal reasoning and
education, pp. 169-186. Hillsdale, NJ: Erlbaum.
Baron, J. (in press a). Heuristics and biases in equity
judgments: a utilitarian approach. In B. A. Mellers and J. Baron
(Eds.), Psychological perspectives on justice: Theory and
applications. New York: Cambridge University Press.
Baron, J. (in press b). Morality and rational choice.
Dordrecht: Kluwer.
Baron, J. & Brown, R. V. (Eds.) (1991). Teaching decision
making to adolescents. Hillsdale, NJ: Erlbaum.
Bell, D. E. (1982). Regret in decision making under uncertainty.
Operations Research, 30, 961-981.
Bell, D. E. (1985). Disappointment in decision making under
uncertainty. Operations Research, 33, 1-27.
Bennett, J. (1966). Whatever the consequences. Analysis,
26, 83-102 (reprinted in B. Steinbock, ed., Killing and
letting die, pp. 109-127. Englewood Cliffs, NJ: Prentice Hall).
Bennett, J. (1981). Morality and consequences. In S. M.
McMurrin (Ed.), The Tanner Lectures on human values (vol. 2,
pp. 45-116). Salt Lake City: University of Utah Press.
Chambless, D. L., & Gracely, E. J. (1989). Fear of fear and the
anxiety disorders. Cognitive Therapy & Research, 13, 9-20.
Clark, D. M. (1986). A cognitive approach to panic.
Behavior Research and Therapy, 24, 461-470.
Cox, B. J., Swinson, R. P., Norton, G. R., & Kuch, K. (1991).
Anticipatory anxiety and avoidance in panic disorder with
agoraphobia. Behaviour Research & Therapy, 29, 363-365.
Elster, J. (1985). Weakness of will and the free-rider problem.
Economics and Philosophy, 1, 231-265.
Gorr, M. (1990). Thomson and the trolley problem.
Philosophical Studies, 59, 91-100.
Hare, R. M. (1981). Moral thinking: Its levels, method and
point. Oxford: Oxford University Press (Clarendon Press).
Hershey, J. C., & Baron, J. (1987). Clinical reasoning and
cognitive processes. Medical Decision Making, 7, 203-211.
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990).
Experimental tests of the endowment effect and the Coase theorem.
Journal of Political Economy, 98, 1325-1348.
Kahneman, D., & Lovallo, D. (1990). Timid decisions and bold
forecasts: A cognitive perspective on risk taking. Manuscript,
Department of Psychology, University of California, Berkeley.
Kahneman, D. & Miller, D. T. (1986). Norm theory: Comparing
reality to its alternatives. Psychological Review, 93,
136-153.
Kahneman, D., & Snell, J. (1990). Predicting utility. In R.
Hogarth (Ed.), Insights in decision making. Chicago: University
of Chicago Press.
Knetsch, J. L., & Sinden, J. A. (1984). Willingness to pay and
compensation: Experimental evidence of an unexpected disparity in
measures of value. Quarterly Journal of Economics, 99,
508-522.
Kuhse, H. (1987). The sanctity of life doctrine in medicine:
A critique. Oxford: Oxford University Press.
Larrick, R. P., Morgan, J. N., & Nisbett, R. E. (1990). Teaching
the use of cost-benefit reasoning in everyday life.
Psychological Science, 1, 362-370.
Loewenstein, G. (1987). Anticipation and the value of delayed
consumption. Economic Journal, 97, 666-684.
Loomes, G. (1987). Testing for regret and disappointment in
choice under uncertainty. Economic Journal, 97, 118-129.
Loomes, G., & Sugden, R. (1982). Regret theory: An alternative
theory of rational choice under uncertainty. Economic
Journal, 92, 805-824.
Lopes, L. L. (1987). Between hope and fear: The psychology of
risk. In L. Berkowitz (Ed.), Advances in experimental social
psychology (Vol. 20, pp. 255-295). New York: Academic Press.
McClelland, G. H., & Schulze, W. D. (1991). The disparity
between willingness-to-pay versus willingness-to-accept as a
framing effect. In D. R. Brown & J. E. K. Smith (Eds.),
Frontiers of mathematical psychology: Essays in honor of
Clyde Coombs, pp. 166-192. New York: Springer-Verlag.
Mitchell, R. C., & Carson, R. T. (1989). Using surveys to
value public goods: The contingent valuation method.
Washington: Resources for the Future.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission
bias and ambiguity. Journal of Behavioral Decision Making,
3, 263-277.
Ritov, I., & Baron, J. (1992). Status-quo and omission bias.
Journal of Risk and Uncertainty, 5, 49-61.
Ritov, I., Hodes, J., & Baron, J. (1989). Biases in
decisions about compensation for misfortune. Manuscript,
Department of Psychology, University of Pennsylvania.
Sabini, J. (1992). Social Psychology. New York: Norton.
Salkovskis, P. M., Clark, D. M., & Hackmann, A. (1991).
Treatment of panic attacks using cognitive therapy without
exposure or breathing retraining. Behaviour Research and
Therapy, 29, 161-166.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in
decision making. Journal of Risk and Uncertainty, 1, 7-59.
Spranca, M., Minsk, E., & Baron, J. (1991a). Omission and
commission in judgment and choice. Journal of Experimental
Social Psychology, 27, 76-105.
Thaler, R. (1985). Mental accounting and consumer choice.
Marketing Science, 4, 199-214.