Baron, J. (1993). Heuristics and biases in equity judgments: a utilitarian approach. In B. A. Mellers and J. Baron (Eds.), Psychological perspectives on justice: Theory and applications, pp. 109-137. New York: Cambridge University Press.

Heuristics and biases in equity judgments: a utilitarian approach

Jonathan Baron1
Department of Psychology
University of Pennsylvania

Human judgments and decisions have been compared to normative models that specify how judgments and decisions should be made. Such comparisons often find discrepancies between people's goals and the decisions meant to achieve those goals. These discrepancies are often called `biases,' and the informal ways of thinking that lead to them are called `heuristics.' Evidence of biases is useful because we can often find ways of teaching people better ways of thinking, better heuristics, or we can learn when we need to work around the biases by using more formal methods of analysis. In these ways, the discovery of human error leads to ways to improve the human condition.
In this chapter, I shall apply this comparative approach to the study of equity judgments. Of course, even the earliest studies of equity judgments were implicitly concerned with criticizing and improving them. However, by making the interest in criticism more explicit than previous writers, I am forced also to be more explicit about the normative theory to which the judgments are compared. The normative theory I shall defend is utilitarianism, the view that the best decision is the one that maximizes expected utility over all who are affected. I take `utility' to be the extent to which goals are achieved in fact. Utility in this sense need not be the same as utility as expressed in decisions or judgments, which usually involve implicit predictions of utility in my sense (Kahneman & Snell, 1990).
Utilitarianism has been criticized over the last two centuries for leading to conclusions that seem unjust. For example, it can permit hurting people who are already unfortunate if the benefit to those who are more fortunate is sufficiently great. Most modern philosophers (e.g., MacIntyre, 1984; Sen & Williams, 1982; Williams, 1985) think that these kinds of criticisms have stuck.
On the other hand, economists and other social scientists (e.g., Landes & Posner, 1987; Shavell, 1987) often accept some form of utilitarianism. Some philosophers (e.g., Hare, 1981; Singer, 1979) think that recent, more carefully developed, forms of utilitarianism can answer the criticisms, so that the theory now represents the most defensible and complete normative approach to questions of policy as well as individual moral decision making. In general, these writers deal with apparent counterexamples by either reinterpreting them as consistent with a more thorough utilitarian analysis (e.g., Singer, 1977) or arguing that a generally good intuitive principle (e.g., `do no harm') is being overgeneralized. The latter reply challenges the critics to provide some other justification than their intuitive judgment about what is just. So far, the critics have not met this challenge and have, instead, retained a degree of faith in human intuition that recent psychological findings (reviewed by Baron, 1988a) would lead us to question.
If utilitarianism is the correct normative theory, the exercise of asking where our judgments differ from utilitarianism tells us where our judgments are in need of improvement (if improvement is possible). If, on the other hand, the critics are correct, then this exercise tells us why our judgments - correct or not - could fail to bring about the best consequences in the utilitarian sense. This sort of knowledge will at least help utilitarians understand our current situation.
In this chapter, I shall outline the implications of utilitarianism for equity judgments. Although I shall sketch some arguments that might be made for it, I cannot provide a complete defense here. (See Baron, in press, and Hare, 1981, for that.) Then, I shall discuss some apparent departures of decisions and judgment from these implications. These are the purported biases. At the end, I shall discuss the implications of these biases for public policy.
Utilitarianism is a normative theory - a standard for evaluating our decisions - not necessarily a prescriptive theory - a set of practical guidelines (Baron, 1985). It need not apply to everyday judgments such as how to treat a student who asks for an extension on an assignment. Such judgments are typically made on the basis of intuitions - moral heuristics, as it were - or fixed rules, such as, `No extensions, because they are unfair to those who need them and don't request them.' In general, we feel that these intuitions and rules are morally right, and we tend to feel guilty when we go against them, even when we know that they are normatively wrong. These intuitions and rules constitute our `naive theories' of morality (in the sense of McCloskey, 1983, and others).
We might maximize utility better by using our naive theories even if they disagree with the results of our best effort to maximize utility in a given case (Hare, 1981). Certainly, if I considered every student who asks for an extension in great detail, I would find a few who seem to deserve it. But I could be wrong: for example, I could misestimate probabilities. I might make fewer mistakes in the long run to say, `no extensions, period,' than to try to pick out those deserving cases. To paraphrase the late Hillel Einhorn, I must accept error to avoid more error. In some cases, then, following strict rules is a better way to maximize utility than trying to maximize utility. In other cases, though, our naive theories might not maximize utility. We might do better by trying, or by using different naive theories or heuristics.
Equity judgments are often based on these kinds of intuitions or naive theories. In showing that these judgments depart from the normative standard, we must not jump to the conclusion that they should not be used. Instead, we are led to ask whether any alternative set of intuitions or ways of making judgments can bring us closer to the normative model. Often, the answer to this question will be affirmative, but we cannot assume that it is in all cases. In this sense, showing that something is non-normative makes only a prima facie case that it is irrational. The rest of the case involves showing that a better way can be found.

Utilitarianism as normative theory

Utilitarianism is about choosing best options. It allows us to evaluate one option against another. It ignores whether one is an `action' and the other an `inaction.' It holds that decisions should be made on the basis of future differences between outcomes of different options. The past is relevant only if it affects the future. For example, the existence of an agreement (a past event) sets up a situation in which violation of the agreement sets a precedent (in the future) for other violations.
Utilitarianism often conflicts with our intuitive beliefs about what is morally right. This is, indeed, the main phenomenon I shall discuss. Other moral theories are often defended on the basis of appeal to intuitions (or the systematization of intuitions, as in Rawls, 1971). Most utilitarians cannot accept this approach, because it puts us in danger of begging the question by confirming our present moral beliefs. To think about moral theory, we must put aside our present moral intuitions so that we can question them. We must find some other way to justify moral conclusions aside from appeal to our own prior intuitions.
My own approach to justification (Baron, in press) argues that the best moral principles are those that we would each have reason to endorse for others to follow for the sake of the achievement of our nonmoral goals. (The limitation to nonmoral goals avoids begging the question.) These principles must be impartial across people, so that we can endorse them convincingly for others. The goals that we already have give us our only reasons to endorse these principles. We do not need the intuitive moral beliefs that we have put aside. The best principle for us to endorse consistently, in order to achieve our goals as well as possible, is that everyone should act so as to maximize the achievement of everyone's goals, i.e., maximize total utility. By endorsing this principle, we are acting to insure the achievement of our own goals as well as the goals of others.

Interpersonal comparison

An important difference between utilitarianism and other theories is its reliance on interpersonal comparison of utility. Utilitarianism assumes that consequences can be evaluated and that differences in consequences can be compared, ideally if not in practice. In some simple cases, we have two options. One option is better for one person and the other option is better for another person. For example, a couple is trying to decide whether to go to a movie or a play. The husband wants the movie, and the wife wants the play. To make an interpersonal comparison, we must compare the utility differences for each person. Hare (1981) suggests that we do this by imagining that the decision were being made by a single person who had all the goals of both the husband and wife. If the wife prefers the play strongly - so that the difference is large for her - and the husband prefers the movie only weakly - so that the difference is small for him - then the couple does better on the whole to go to the play. The utility loss to the husband (relative to the movie) is smaller than the utility gain to the wife (relative to the movie).
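Hare's combined-person test can be made concrete with a toy calculation. The utility numbers below are invented for illustration; only the direction of the comparison matters:

```python
# Treat the couple as a single decision maker who holds both sets of
# goals: sum each option's utilities across the two people and choose
# the option with the larger total. All utility numbers are hypothetical.
utilities = {
    "movie": {"husband": 8, "wife": 2},  # husband's preference is weak
    "play":  {"husband": 6, "wife": 9},  # wife's preference is strong
}

def total_utility(option):
    return sum(utilities[option].values())

best = max(utilities, key=total_utility)
print(best)  # play: the wife's gain (9 - 2) exceeds the husband's loss (8 - 6)
```

The choice turns entirely on comparing the two utility differences, which is just the interpersonal comparison described above.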
The ability to make such comparisons requires understanding of the goals of other people and willingness to put aside one's own goals for the purpose of achieving such understanding. These conditions are often absent in adversarial situations such as simple bargaining (without mediation or arbitration).
More generally, utilitarianism assumes that we can make judgments of whether a loss (or forgone gain) for one or more people is compensated by a gain (or forgone loss) for one or more others. If we judge that a loss is compensated, then we can justify the loss by pointing to this judgment: `Yes, by choosing this option, I hurt you, but I would hurt others more if I chose the alternative.' (Of course, the judgment can be challenged.) This kind of comparison of losses and gains can be used to define the utility scale itself (Hare, 1981, ch. 7.3), thus avoiding the problems of inferring utility from preferences among gambles faced by individuals (Weymark, 1991).
Importantly, any theory that leads to different conclusions cannot always appeal to this justification in terms of comparison of relative harm. It must sometimes countenance harm to some (relative to other options) without compensating gain for others. (Examples are given later.) Various principles are often invoked for this purpose, such as rights, fairness, retribution, honor, and so on. But these principles cannot derive their authority from any considerations of consequences for the achievement of people's goals. Compared to the utilitarian decision, any other theory yields decisions that achieve people's goals less well.
Several competing theories try to do without interpersonal comparisons. For example, some conceptions of economic efficiency rely on Pareto optimality. By this criterion, a situation is optimal if it is impossible to improve matters for one person without making matters worse for someone else. Notice that, in our example, both the movie and the play could be Pareto optimal.
Sometimes this principle is used in a way that puts the burden of proof on the side against the status quo. If the couple had `planned' to go to the movie, then, by this rule, it would be wrong to change the plan because the change would make matters worse for the husband. This rule creates a bias toward inaction, even when action could increase total utility. Here, the past, the prior plan, is affecting the outcome in ways that need not be relevant to future effects on goal achievement. Nozick (1974) seems to endorse such a principle.
Another competing theory, proposed by Kaldor (1939) and Hicks (1939), holds that a change is optimal if the winners could compensate the losers so that nobody loses. This rule does not lead to a bias toward inaction. But it does not require the compensation to be provided, so it could make things worse. If the wife very much wants something that the husband could easily provide (agreement to go to dinner at a certain restaurant that the husband likes too), so that she might accept it as compensation for attending the movie, then, by this rule, the couple should go to the movie, even if the husband does not in fact provide the compensation.
These competing theories often start from the assumption that interpersonal comparison is impossible or impractical. Although it is indeed impractical in many cases, its theoretical possibility has been defended (Hare, 1981; Griffin, 1986; Baron, 1988b, 1991). What matters is not whether it is easy to do, but rather whether it makes sense for us to try to do it. When we try to compare one person's gain with another's, does it make sense to say that we are accurate or inaccurate in this judgment?
In many cases, it clearly does make sense. For example, certain medical policies, such as vaccination, will hurt a few (who get serious side effects) to help many others. In such cases, we can think of the desires of the `typical person' affected by such a policy. If some of the people affected happen to be `utility monsters,' with utilities 100 times as sensitive to different outcomes as those of other people, we do not know who these monsters are. If each person has an equal chance of being such a monster, then the conclusion that we should base our judgment on the typical person is unaffected (on the basis of expected utility).
In other cases, we must consider differences in tastes. Although this is more difficult, we do know something about the development of tastes, and the errors in our knowledge are as likely to be in one direction as in another (if we are unbiased), so we are better off trying to apply what we know than ignoring it. In the extreme, if identical twins have identical experiences, then we can be sure that their utility functions are the same. (If we can't be sure of this, then we are slipping into a kind of skepticism that would make all normative inquiry impossible.) If we know something about the effects of experience on desires, we can adjust for differences in experiences. Perhaps genetic differences do make people differentially sensitive to certain experiences. In the absence of understanding of such effects, however, the earlier argument about utility monsters applies, and we can neglect genetic differences on the grounds that they are as likely to go one way as another, and we can take a probability-weighted average of all their possible effects.
Interpersonal comparison is not a proof procedure designed to beat down alternative theories. We can usually imagine a set of utility functions that yields decisions compatible with any alternative theory. These functions need not be the ones we would arrive at, however, if we focus our attention on the question of what achieves the goals of those affected. Utilitarianism tells us what to attend to when trying to make difficult judgments as best we can, not how to justify judgments we have already made.

Issues in the application of utilitarianism

Although the basic principle of utilitarianism can easily be stated while standing on one foot, applications require a number of intermediate devices. This section reviews a few of the more common ones.

Declining marginal utility vs. incentive

In general, the utility of one additional unit of a good, e.g., a dollar or an apple, becomes smaller the more units one has already. We say that `marginal utility' declines. This is because goods are essentially means to the achievement of more fundamental goals. We use money, for example, to buy food, and we use food to nourish ourselves. Money is a flexible good, in that we can use it to satisfy many different goals. If we are poor, we use money to satisfy only the goals that can be most easily satisfied with money, such as food and shelter. If we are rich, additional spending to achieve these goals has little effect, and we try to find other ways to spend money to achieve our goals, but these are bound to be less efficient uses of money, for we have already done the things that money can do most easily.
Declining marginal utility is an important utilitarian justification for provision of compensation for injury, whether the compensation comes from personal insurance, social insurance, or liability law. If you suffer a loss that can be made up with money (such as a house fire), then you can obtain greater utility from money than you could before the loss. (You can rebuild.) If utility is marginally declining and insurance is `actuarially fair' (i.e., the insurer makes no long-run profit), then you maximize utility by insuring yourself fully against the loss.
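The full-insurance conclusion can be checked numerically. The sketch below assumes logarithmic utility and invented figures for wealth, loss size, and loss probability:

```python
import math

# Sketch of the full-insurance result: with declining marginal utility
# (log utility here) and an actuarially fair premium, expected utility
# is highest when the whole loss is covered. All figures are invented.
wealth, loss, p = 100_000.0, 60_000.0, 0.01

def expected_utility(coverage):
    premium = p * coverage * loss  # fair premium: insurer's expected profit is zero
    no_loss = wealth - premium
    with_loss = wealth - premium - loss + coverage * loss
    return (1 - p) * math.log(no_loss) + p * math.log(with_loss)

coverages = [i / 100 for i in range(101)]  # insure 0% .. 100% of the loss
best = max(coverages, key=expected_utility)
print(best)  # full coverage (1.0) maximizes expected utility
```

Because the fair premium leaves expected wealth unchanged at every level of coverage, a concave utility function always favors the certain outcome that full coverage provides.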
This principle does not necessarily apply when losses cannot be replaced with money or goods. Unless the death of one's child increases one's marginal utility for money, compensation for such a death is not justified (Friedman, 1982). Penalties for death caused by negligence are justified by the need to deter negligence, but the penalty need not, in principle, be paid to the victim or the victim's parents. This is nonintuitive, as are many conclusions derived from utilitarianism. But, in fact, people do not generally buy life insurance on their children.
Because the marginal utility of money (and other goods) is usually declining, we can generally increase total utility by taking from the rich and giving the same amount to the poor, other things equal. The poor can make use of money to achieve their goals more easily. Of course, other things are not equal. But this argument pushes us toward equal division of goods.
One utilitarian argument against equal division is that goods can be used as rewards and punishments. The market rewards those who produce goods and services that others want. The availability of this reward causes people to try to provide goods and services that others want. Similarly, we can penalize people for behavior that we want to discourage. I mean `reward' and `punishment' in the general sense of anything that affects the frequencies of certain behavior, if only through expressing consistency with avowed social norms.
The principle of declining marginal utility and the principle of incentive are the major utilitarian considerations in allocating goods (including money), e.g., through wages or taxation. If taxation is not progressive enough, then the poor will suffer too much; matters would be better on the whole if we took more from the rich and less from the poor. If, on the other hand, taxation is too progressive (so that, for example, everyone received the same after-tax income), incentive to produce would be reduced too much; matters would be better on the whole to allow some inequality. Some optimal balance can usually be found, but the correct distribution of goods need not follow any simple rule. Research is required to study both the utility of money for different groups and the incentive effects of extra income. From a utilitarian perspective, rules such as `to each according to her contribution' or `to each the same' are at best crude approximations to an optimum. When the relevant data are not worth collecting, however, use of such rules may be the best we can do.
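The balance between the two principles can be seen in a toy model. The wages, the log utility function, and the effort response below are all invented assumptions, not estimates:

```python
import math

# Toy model of the balance described above: a flat tax is redistributed
# equally; a higher rate equalizes incomes (good, by declining marginal
# utility) but weakens the incentive to produce. Parameters are invented.
wages = [10.0, 40.0]  # a low-wage and a high-wage worker

def net_incomes(tax):
    effort = (1.0 - tax) ** 0.3  # crude incentive effect: effort falls as the tax rises
    gross = [w * effort for w in wages]
    transfer = tax * sum(gross) / len(gross)  # revenue shared equally
    return [(1.0 - tax) * g + transfer for g in gross]

def total_utility(tax):
    return sum(math.log(income) for income in net_incomes(tax))

rates = [i / 100 for i in range(100)]  # 0.00 .. 0.99
best = max(rates, key=total_utility)
print(best)  # an interior optimum: more progressive than 0, far short of 1
```

The optimum depends entirely on the assumed utility and incentive functions, which is exactly why the text says that research, not a simple rule, is needed to locate it.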
The deterrence principle is the major utilitarian justification of punishment, including criminal law and tort law. Punishment is a harm, and it therefore must be justified by compensating gains. For utilitarians, two wrongs do not make a right, unless the second wrong prevents even greater wrongs in the future. In some cases, deterrence is optimal if an injurer is required to compensate the victim fully for a loss (Landes & Posner, 1987; Shavell, 1987). In other cases, additional `punitive' penalties are justified (e.g., if the offense is difficult to detect). It might make sense to make someone who kills a child negligently pay a penalty - as deterrence - even if it does not make sense to give that penalty to the child's parents. For utilitarians, compensation and deterrence are not necessarily linked, although for practical purposes it might make sense to link them.2
Competing theories (e.g., Rawls, 1971; Cohen, 1989) often make some sort of distinction between different kinds of goods. The idea is that equalization should apply mainly to certain goods, such as adequate nutrition, medical care, educational opportunity, opportunity to compete for positions, and opportunity to participate politically. Other goods - such as expensive cars, vacation homes, pornographic movies, or, more generally, the things that people spend their money on once they have satisfied their basic `needs' - should be left to the incentive system, if anywhere.
Utilitarianism does not begin with any distinction of this sort, but a similar distinction can be derived from the competing principles of incentive and declining marginal utility. Both sorts of goods achieve people's goals. The principle of declining marginal utility says that, other things equal, we should try to eliminate discrepancies in all of these things. A progressive income tax could still be justified even in a world in which everyone's basic needs were completely satisfied and money was spent only on luxuries. But in the real world, this principle alone gives us no particular reason to equate the distribution of basic needs rather than the money that people could use to satisfy them or to achieve any other goals.
But goods in the first category - the basic needs - allow people to take part most effectively in a competition based on incentive. Efforts to equate the distribution of these goods - beyond simply giving people the money to pay for them - increase the general effectiveness of an incentive system. (I assume that the effectiveness of money spent on these goods at making people sensitive to incentive is also marginally declining.) People who are poorly educated, in poor health, or malnourished cannot be induced to contribute much even by fairly heavy incentives. Their production is therefore lost to the rest of us. So we have additional reason to equate these basic goods, aside from the fact that people want them very badly.
Moreover, some of the `luxuries' (e.g., pornography, or buying gas guzzlers) that people pursue with wealth derive from goals that people should be discouraged from developing, because they impair the achievement of other people's goals more than most goals do. We might tolerate some of these goals by allowing them as part of an incentive system, but we surely do not want to encourage them by making them part of a system of basic entitlements.
Although utilitarianism presupposes equal consideration of everyone, it is, in another sense, not a theory of justice at all, for it tries to subsume all other considerations of justice under other headings. In doing this, it can account for some of the intuitions about justice that inspire competing theories.


Individual tastes

A second utilitarian argument against equal division of all goods is the existence of individual tastes. If I like apples and you like oranges, it is better for me to get more of the apples and you to get more of the oranges. The market, as an institution, allows us to satisfy our individual tastes, insofar as we can predict them.

Envy and comparison

Certain emotions are connected with distributions, particularly envy, a desire that those who are perceived as coming out ahead unjustly should suffer (Sabini & Silver, 1982; Elster, 1989). Envy is most likely to arise when the comparison between self and others is clear, and this is most likely to happen when people are near each other, working together, in the same family, and so on.
Envy can be an unpleasant consequence of decisions about distribution. It must therefore be included in the evaluation of options. One way to avoid envy is to use simple rules of equal division, or any rules that everyone agrees should be used. Another way is to teach people not to be envious, as traditional Christianity has tried to do.
We often find decision makers paying careful attention to equity within their groups, ignoring gross inequity between members of different groups. Reduction of envy is a possible justification of such concerns.
Another possible justification of such concern is that many goods are evaluated through comparison to the goods possessed by others (Sen, 1987). Clothing, automobiles, and houses are, for many people, valued to the extent to which they are up to (or above) the level of the owner's reference group. Lawyers must dress like other lawyers, not like professors, so even a public-interest lawyer needs a collection of suits. People might be able to overcome such comparative evaluation, just as they can overcome envy, but, until they do, it is a relevant concern.
Many have argued that certain goals, such as those resulting from envy, should be ignored in a utilitarian calculus. I shall not assume this, for these goals are real.3 However, ignoring envy might be one way to teach people not to be envious. If it is, then we might well be justified in ignoring envy in our decisions.


Expectations

In the second grade, my son was given a story of the `fair bears,' who went out to collect berries. The baby collected the most berries, the (large) father the next most, and the mother the least. All three worked equally hard. The children were asked how the bears should divide up the berries.
Clearly, this is one of several episodes of the same type. The bears have probably worked out a system, and, in the long run, a great variety of systems could be approximately optimal from a utilitarian point of view. My own answer is `not enough information given.' (If it were people and not bears, I might be able to make some reasonable guesses about what had happened before.) If the bears expected to keep the berries that each of them collected, then it would seem unfair to divide the berries equally. Likewise, if they expected to divide the berries equally, it would seem unfair to keep what was found.
Expectations have a role in utilitarian theory. They coordinate social interaction, and their violation weakens the general trust that people have in them, forcing people to take precautions against their violation by others. It does not matter much whether we drive on the left or right side of the road, provided that we all drive on the same side. If even a few people start driving on the other side, then everyone must be extra cautious, and this is costly. Likewise, people make plans based on expectations of how goods will be distributed. They suffer if they cannot make or carry out these plans. A non-optimal system of distribution that fits everyone's expectations can be better than an otherwise optimal system that is introduced unexpectedly, without giving people adequate time to modify their plans.


Rights

Here is a story based on Foot (1978): Five people are in a hospital, dying. One can be saved only by a kidney transplant, another by a heart transplant, another by a brain transplant, etc. They are all young and will lead full lives if they are saved. But no donors are available. Then, one day, Harry wanders into the emergency room to ask directions...
So the question for a utilitarian is: why not sacrifice Harry and use his organs to save the five? The answer could be that this is one of those cases in which our intuitions are wrong, so that Harry really ought to be sacrificed, although a good person would not do it.
Another answer is that Harry, and all of us, have a right not to be sacrificed in this way. But what is a right and where does it come from? A possible utilitarian answer is that a right is a social rule that saves people certain costs of worry and protective behavior. If Harry were sacrificed, we would all have to take precautions against being sacrificed for the benefit of others. We would also worry about it. In the end, the sacrifice might not be justified in utilitarian terms.
More generally, many rights can be seen as social institutions or norms that are reliably enforced, so that people can depend on certain things not happening to them. If rights are violated, people have to change their plans - taking steps to protect themselves against things they thought they were protected against - and they have something new to worry about. These effects, although perhaps small, are spread over many people, and they might therefore outweigh a substantial net good that would be done otherwise.
In a utilitarian analysis, rights are never absolute. They can always be outweighed. As a practical matter, though, our judgments are prone to error, and we are properly suspicious of those who lightly take it upon themselves to violate someone's rights for someone else's imagined good. On the other hand, we should also be suspicious of those who raise the banner of rights on behalf of questionable practices. As Mill (1859) argued, rights are worth enforcing because they serve a utilitarian purpose. Some practices put forward as rights might not be justifiable in terms of their consequences for goal achievement.

The need for adequate information

To test for anti-utilitarian biases, we need to provide subjects with sufficient information so that utilitarianism provides an answer to the question we ask. Many experiments in equity theory are like the fair-bears story described above. Too little information is given, and many different responses are normatively reasonable. These experiments cannot tell us whether anti-utilitarian biases exist. (Some of the experiments I discuss can be faulted on these grounds too, but I include them because I think that more information would not change the results.) For example, Bar-Hillel and Yaari (1987) asked subjects to divide a shipment of 12 grapefruits and 12 avocados between Jones and Smith. Jones derives 100 mg of `vitamin F' from each grapefruit, and Smith derives 50 mg from each grapefruit and from each avocado. Smith and Jones are interested only in vitamin F.
A utilitarian would need to make some explicit assumptions about each person's utility function for vitamin F. If utility were linear with vitamin F, then Jones should get all the grapefruit and Smith, the avocados. But if the minimum requirement for staying alive were 800 mg from this shipment, or if envy were a strong consideration, then Smith should get four of the grapefruit too. (This solution, which also equated the amount of vitamin F for Smith and Jones, was preferred by most subjects.) If utility functions were marginally declining and similar for both people, the optimal solution would be somewhere in this interval. The utilitarian solution is even less clear in several variants of the basic cases, e.g., those in which the subject is told only Smith's and Jones's beliefs about their ability to extract vitamin F, with their true ability unstated.
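The arithmetic of the example can be verified directly; the function below is simply bookkeeping for the figures given in the problem:

```python
# Bookkeeping for the Bar-Hillel and Yaari example: 12 grapefruits and
# 12 avocados; Jones extracts 100 mg of vitamin F per grapefruit (none
# from avocados), Smith extracts 50 mg from either fruit.
def vitamin_f(grapefruits_to_jones):
    jones = 100 * grapefruits_to_jones
    smith = 50 * (12 - grapefruits_to_jones) + 50 * 12  # remaining grapefruits plus all avocados
    return jones, smith

print(vitamin_f(12))  # (1200, 600): linear utility gives Jones all the grapefruits
print(vitamin_f(8))   # (800, 800): the split most subjects chose equates vitamin F
```

The total under the equal-vitamin split (1600 mg) is less than under the linear-utility split (1800 mg), which is why the choice between them hinges on the unstated shape of the utility functions.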
When relevant information about utility functions is withheld, we can learn about the heuristics that subjects use in the absence of such information. We cannot tell whether subjects regard these heuristics as sufficient even when the relevant information is provided. True, subjects rarely complain about the lack of information, but they are not usually asked whether they think the information is adequate, and they have come to expect psychologists to require judgments to be based on minimal cues.
In the rest of the paper, I shall argue that many people have nonutilitarian intuitions about equity. Moreover, their heuristics have, in many cases, solidified into moral intuitions that are resistant to counterargument. People are not at all monolithic in these intuitions. Many people do bring utilitarian intuitions to bear on the same cases. All this makes for lively disputes in debates about public policy.

Probability, ex-ante and ex-post

When no incentive effects are present, it seems equitable to divide costs or risks equally among similar individuals, and this is justified by declining marginal utility. But what seems equitable before a risk is resolved (ex-ante) may not be equitable afterwards (ex-post) (Keller & Sarin, 1988; Ulph, 1982). If I give each of my two nephews a lottery ticket, they are both being treated equally. If one wins, the situation is then unequal. But suppose that my only choices are to give ten tickets to one nephew or one ticket to each. To get envy out of the picture while they are waiting for the results of the draw, suppose that neither nephew will ever know that I have given them the tickets, and that, if one wins, he will simply be told that someone gave him the winning ticket. Many people might still think that it is wrong to give more tickets to one nephew. The intuition that ex-ante equity is important in its own right has inspired some (e.g., Sarin, 1985) to develop nonutilitarian normative models of equity. (Note that ex-post equality is supported by declining marginal utility. Utilitarianism conflicts only with the intuition that ex-ante equity is justified beyond its effect on ex-post equity.)
In the end, though, only one of them can win, and giving one of them ten tickets makes such an event more likely. The expected utility is greater for the unequal solution. The expected achievement of my nephews' goals - and mine insofar as I care about theirs - is greater with the unequal division. If I were to choose one ticket for each, I would have to deprive one nephew of an additional chance to win, and I could not justify this by saying that I had given the other a compensating gain.
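The ticket arithmetic can be sketched as follows; the per-ticket win probability is an assumption of mine, chosen only for illustration.

```python
# Sketch (assumed numbers): each ticket wins independently with probability p.
p = 0.001  # assumed per-ticket win probability

# Chance that some nephew wins under each division:
p_unequal = 1 - (1 - p) ** 10   # ten tickets to one nephew
p_equal = 1 - (1 - p) ** 2      # one ticket to each nephew

# The unequal division gives a strictly higher chance of a win, so
# expected goal achievement is higher whenever winning is what matters.
print(p_unequal > p_equal)  # True
```

The comparison holds for any p between 0 and 1: ten chances dominate two, whoever holds them.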
One possible utilitarian justification of ex-ante equality is that equal division follows a good general rule, and breaking this rule - even when doing so seems to maximize utility in a given case - would weaken support for the rule, so that, in the long run, the consequences would be worse. Notice that this argument presupposes that people will not distinguish between uses of the equality rule that do and do not maximize utility in the specific case.
Another possible utilitarian justification is that arbitrary ex-ante inequality - not justified by incentive - weakens or dilutes the use of inequality of distribution for incentive. If distribution of anything, risks included, is seen as arbitrary, then people will not work so hard to gain benefits or avoid risks.
It is also possible, though, that the equal-chance principle is sometimes an overgeneralization, a true error. Chances to win are not the same as winnings. An equal-division rule for winnings is justified by the declining marginal utility of winnings themselves. But the utility of chances to win is not marginally declining. People still apply the equal division rule because they do not know its justification.
Keller & Sarin (1988) gave subjects hypothetical options like the following:
Option 1: Person 1 dies. Person 2 lives.
Option 2: 50% chance: Person 1 dies, Person 2 lives.
50% chance: Person 2 dies, Person 1 lives.
Option 3: 50% chance: Person 1 dies, Person 2 dies.
50% chance: Person 1 lives, Person 2 lives.
Options 1 and 2 differ in ex ante equity, that is, equity determined before the uncertainty is resolved. Option 2 differs from option 3 in ex post equity, determined after the uncertainty is resolved. Subjects preferred more equal distributions in both kinds of situations, that is, Option 3 is preferred to Option 2, and Option 2 is preferred to Option 1.
According to a simple utilitarian analysis, all three options are equivalent. Emotional considerations could account for the pattern of choices that subjects make, however: Option 2, compared to Option 1, leaves both potential victims with some hope until the uncertainty is resolved. (On the other hand, a 50% chance of death may provoke more than half of the anxiety of certain death, in which case Option 1 would be better.) Option 2 might seem worse than Option 3 because the person who lived might grieve for the person who died in Option 2, and this could not happen in Option 3.
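The simple utilitarian equivalence of the three options can be checked in expected lives lost, setting aside the emotional considerations just discussed; this sketch is mine, not part of Keller and Sarin's analysis.

```python
# Expected deaths under each option (ignoring anxiety and grief):
ev_option1 = 1.0                  # Person 1 dies for certain, Person 2 lives
ev_option2 = 0.5 * 1 + 0.5 * 1   # exactly one person dies either way
ev_option3 = 0.5 * 2 + 0.5 * 0   # both die or both live

# All three options carry one expected death.
print(ev_option1, ev_option2, ev_option3)  # 1.0 1.0 1.0
```

Any preference among the options must therefore come from considerations beyond expected deaths, such as hope, anxiety, or grief.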
For many cases of dispersed risk, however, these considerations are irrelevant, or they work in the opposite direction. In the case of small ex ante environmental risks from chemicals, for example, doubling the risk level and halving the number of people exposed would probably have little effect on the amount of anxiety in each exposed person, so that it would, on the whole, decrease anxiety (rather than increase it) by reducing the number of people exposed. Likewise, most decisions about risk have no effect on the amount of grief per death. This issue would arise only if the risk were such as to annihilate a substantial portion of some community.
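The dispersed-risk point can be sketched numerically; the exposure figures and the flat-anxiety assumption are mine, for illustration only.

```python
# Assumed baseline: 10,000 people each exposed to a 1-in-a-million risk,
# with roughly constant anxiety per exposed person at such small risks.
n_exposed, risk_per_million, anxiety_each = 10_000, 1, 1.0

# Doubling the risk level and halving the number of people exposed:
deaths_before = n_exposed * risk_per_million / 1e6
deaths_after = (n_exposed // 2) * (2 * risk_per_million) / 1e6
anxiety_before = n_exposed * anxiety_each
anxiety_after = (n_exposed // 2) * anxiety_each

# Expected deaths are unchanged, but total anxiety falls by half.
print(deaths_before == deaths_after, anxiety_after < anxiety_before)
```

Under these assumptions, concentrating a small risk on fewer people decreases total anxiety without changing expected deaths, contrary to the ex-ante-equity intuition.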
Issues of equity in risk of discrete events such as death are somewhat separate from equity issues involving money. The utility of money is marginally declining. The utility of probability of death is, normatively, linear with probability (putting aside such issues as the effects of deaths on communities, and assuming constant grief per death). Our intuitions about equity in risk bearing - despite their strength - are probably unjustifiable. If we must give up something in order to follow them, then the intuitions themselves harm the achievement of other goals. This may be happening in our judgments about equality of ex-ante risk.

Punishment, deterrence, and compensation

To compare people's judgments to utilitarian theories of punishment and compensation, Baron and Ritov (in preparation) devised a questionnaire concerning liability law for medical products. The questionnaire addressed, among other issues, respondents' understanding of deterrence as a rationale for penalties, and their understanding of the justification of compensation. Respondents were told:
`Imagine that, a few years from now, the United States has a new law concerning medical misfortunes, such as injuries or diseases. According to this law, anyone who suffers such a misfortune can request compensation from the government. This compensation is in addition to medical expenses, which are paid out of universal medical insurance.
`If the misfortune might be caused by a medical product made by a company, the person who suffered the misfortune can file a complaint. For each complaint, two questions will be decided separately, each by a different panel:
* One panel will decide whether the company will be fined, and, if so, how much. All fines go to the government, not the injured person. The panel that decides the fines considers only the justice of imposing the fines. It ignores the needs of the government for money, and it ignores how the money will be spent.
* The second panel will decide how much the injured person will be compensated. If any compensation is paid, the government pays it, not the company. This panel takes into account only the situation of the injured person. It ignores the cost to the government, and it ignores the responsibility of the government, if any, for causing or preventing the misfortune.
`Compensation can be provided even if the company pays nothing, and the company can be fined even if no compensation is provided. The government does not have to break even in the long run.
`The panel that decides on compensation to the victim does not know how much the company has been fined, if anything, and the panel that decides on fines does not know how much compensation has been given to the injured person.
`If the misfortune was not caused by a product, the person who suffered the misfortune can still ask for compensation. Only the second panel will hear the case.'
This situation allowed us to examine the determinants of compensation and penalties separately.4
Two cases were then presented. In the first, a woman becomes sterile as a result of taking a new (but well-tested) birth-control pill. In the second, a child dies from a vaccine against a disease that is far more likely to kill the child than the vaccine is (based on real cases - see Inglehart, 1987). Each case was followed by several questions, which were then compared to each other. Questions were directed at both penalties and compensation for the victim (the parents, in the case of the child). Each question asked for a justification as well as a response.
We gave the questionnaire to members of Judicate, a group of arbitrators who are mostly retired judges; a group of environmental activists; a group of members of the American Economic Association; and a group of undergraduate students and a few law students: 93 respondents in all. In general, the groups did not differ greatly in their responses, and I shall not discuss the group differences here.
Do people see deterrence or incentive - future consequences - as a reason for increasing or decreasing penalties?   One test for this was the comparison of penalty judgments in questions in which the penalty would bring about improved behavior (making a safer product) vs. questions in which the penalty would bring about a less desirable state (no product). For example, one question stated, `The company knew how to make an even safer pill but had decided against producing it because the company was not sure that the safer pill would be profitable. If the company were to stop making the pill that the woman took, it would make the safer pill.' A matched question stated, `If the company were to stop making the pill that the woman took, it would cease making pills altogether.' Analyses are restricted to those respondents who said that the company should pay some penalty in the first question.
If respondents said that the company should be punished less in the second question, then they were sensitive to the future effects of the penalties. Out of 74 respondents who would fine the company in one case or the other and who answered these questions, 31% did think that the company should be punished less in the second question in at least one of the two cases, and 4% thought the company should be punished more.5 Most of the subjects who would punish less in the second question explicitly mentioned incentive in their justifications, but only one of the other subjects mentioned it (arguing that incentive was optimal). Most respondents did not seem to notice the incentive issue.
A second test for incentive was the question in which the penalty would have no future effect at all because the penalty was secret (and those who would know were retiring) and, in any case, the company was insured by a long-term policy with fixed rates. `These two facts together mean that decisions about payment to the government could have no effect on future decisions by this company or other companies about which pills to produce.' This stipulation ruled out any more general effect of fines on the deterrence of future behavior, if subjects accepted it (and none explicitly denied it as part of a deterrence rationale). Out of 72 respondents who penalized the company in one case or the other and who answered this question, 24% did penalize the company less in this question, and 10% penalized the company more. Again, most respondents were not sensitive to incentive effects here.
Does compensation depend on the cause of the injury?   In each case, the particular injury (sterility, death of a child) was held constant across the questions. Differences in the need for penalties could not serve as a reason for differences in compensation, because these two decisions were independent. We examined three different factors that could affect compensation in the absence of differences in the victim's need for compensation: whether the injury was caused by an omission or a commission; whether it was caused by nature or a company; and whether the company that caused it was negligent. In the negligence case, the company did not follow regulations in producing the product, but the negligence itself did not lead to the injury. Most analyses are restricted to those respondents who thought that some compensation should be provided in the first question.
In the omission questions, the company did not produce the product in question, and the injury would not have occurred if the company had produced it: the woman became sterile because she took a different, more risky, pill; or the child died from the flu. Out of the 63 respondents who compensated the victim in the first question in at least one case and who answered this question, 32% provided less compensation (in at least one case) when the harm was caused by an omission and 2% provided more compensation. The effect was found in both cases. (Arguably, the victim should not have taken the risky pill, but the parents had no choice.) People seem to feel that compensation should be greater when injury is caused by an act than when it is caused by an omission or by nature.
In other questions, the injury was simply caused by nature and could not have been prevented. Out of 69 respondents who provided compensation in at least one case and who answered the questions about compensation for natural injuries, 58% provided less compensation for natural injuries than for those caused by a company (in at least one case), 42% provided equal compensation in both cases, and none provided more compensation for natural injuries. People seem to feel that more compensation should be provided for injuries caused by people than for those caused by nature. This issue is discussed further below.
Finally, out of 76 respondents who provided compensation in at least one case and who answered the questions about negligence, 13% provided more compensation when the company was negligent and 1% provided less compensation. Note, however, that the provision of extra compensation did not help to punish the company, for that was handled in the penalty question. In sum, people seem to be influenced by nonutilitarian considerations having to do with establishing some sort of balance between compensation and penalty, as if the two could not be separated.
Does direct compensation have special status?   In our cases, penalties and compensation were determined separately and did not have to be equal. Typically, however, injurers pay victims directly. We thought that people might have a very basic intuition about the need to `undo' a harm that would lead to greater payment when the compensation was paid directly. In other words, people might see direct payment as something more than the mere combination of a penalty and a compensation award. We tested this by asking how much compensation should be provided to the victim if the injurer pays directly, in the case in which the penalty was secret and the injurer was insured.
Out of 83 respondents who answered the relevant question at least once, 29% provided more compensation when the company paid the victim directly and 5% provided less compensation. Moreover, 50% of the 82 respondents who answered the relevant questions provided more compensation here than when the injury was naturally caused. Out of 79 subjects who answered all the relevant questions, 28% showed both of these effects together and only 1% showed the reverse effects (more compensation from nature and less compensation with direct payment).
This pattern of responses cannot be justified in terms of compensation or incentive. The need for compensation does not change as a function of the direct payment. Incentive is absent because of the insurance and the secrecy. We conclude, then, that a substantial proportion of respondents are inclined to ask injurers to pay more and victims to receive more when the payment is direct, as it is in most cases in the real world. Such a pattern of responding would lead to excessive use of the tort system, compared to what could be justified by the functions of compensation and deterrence.

More on personal vs. natural causation

In another study, not reported elsewhere, I examined a potential alternative explanation of the finding that compensation was greater when the injury was caused by people rather than by nature. Specifically, victims might feel more emotionally upset when their misfortune was caused by a person, thus needing extra compensation.
To test this, I asked subjects to decide on appropriate compensation for victims of injuries. Subjects were told that the victims never knew the cause of their accident and that the injurers never knew the effect of their carelessness on the victim (so they did not know about the compensation either). The latter stipulation ruled out any possible deterrent effect of the compensation provided. Thirty-two student subjects (paid for their time) completed the questionnaire.
Subjects were told, `Imagine that you are the executor of the estate of an eccentric multi-millionaire, whose estate is to be used to compensate people who have suffered some misfortune. It is your task to decide how much to compensate each person. The highest award can be $100,000, but you should feel free to give less if you want to save the money for more deserving cases.
`Please compare the cases in each group to each other. We are interested in why you give different amounts to different cases in each group, or why you give the same amounts, so please provide brief explanations. We are not interested in the absolute levels of compensation, so if you prefer simply to rank the cases (including ties, if any), that is fine.
`When we do not provide details (such as the age of the people), assume that you cannot take these details into account, or that they are the same for all the cases within a group.
`Finally, please imagine that all these events occur in a country in which lawsuits are prohibited. The compensation that you award is therefore the only compensation that people can get for their suffering, even when, in our country, they might be able to sue. (They are, however, insured for their medical expenses.)'
In the first scenario, `The people in this group have all suffered a permanent back injury that causes severe pain when they are engaged in strenuous activity. During the course of a normal day they experience some pain in the back. In all cases, the injury resulted from tripping over a rock lying on the sidewalk.' In Case 1, the rock `rolled onto the sidewalk from a nearby hill, as a result of a rain storm.' In Cases 2 and 3, it `rolled onto the sidewalk because a construction crew, which was working on a nearby building, had violated safety rules for the use of rocks in construction.' Cases 2 and 3 were distinguished by whether those responsible for the violation were caught and punished (Case 2) or not (Case 3). Greater compensation in Case 3 than in Case 2 would suggest that subjects applied an intuition about retribution even when there was in fact no effect of the compensation on the perpetrator.
The second scenario involved blindness from an infection caused either by a mosquito bite, in Case 1, or by violation of sanitary rules by the kitchen staff in a restaurant in Cases 2 and 3, which were again distinguished by whether those responsible were caught and punished. The third injury was loss of a job caused either by normal business competition (Case 1) or by the unfair and illegal practices of another business firm (Cases 2 and 3, distinguished as before).
In each scenario, subjects were told that the victim did not know the cause of his injury (e.g., `The injured person never learned how the rock got where it was'), and in each case with a perpetrator they were reminded that the perpetrator did not know that the injury had occurred (e.g., `Those responsible ... did not know about the injury').
Table 1 shows the ranking patterns of the three cases within each injury. Out of 32 subjects, 16 provided equal compensation for all cases within each injury. Fifteen of the remaining 16 provided more compensation in Case 2 or Case 3, where carelessness was to blame, than in Case 1, where nature was to blame, in a majority of comparisons (p<.001 by a Wilcoxon test on the number of scenarios per subject, for each comparison). Of the subjects who compensated Cases 1 and 2 differently, 10 out of 12, 8 out of 10, and 11 out of 11 were in the predicted direction for the three injuries, respectively. For Cases 1 and 3, the analogous results were 11 out of 12, 8 out of 10, and 11 out of 11.
Table 1
Number of subjects who showed each pattern of ranking in the experiment on compensation.
               Back        Blindness   Job loss
3 > 2 > 1       2           0           2
2 > 3 > 1       1           1           1
1 > 3 > 2       0           1           0
3 > 1 > 2       1           0           0
2 = 3 > 1       7           7           8
3 > 1 = 2       1           1           0
1 > 2 = 3       1           1           0
1 = 2 = 3      19          21          21

Note: In case 1, the misfortune was caused by nature; in cases 2
and 3, it was caused by negligence.  In case 2, those who were
negligent were caught and punished; in case 3, they were not. 
Higher ranks reflect greater compensation.

There was no significant difference in the compensation provided to Cases 2 and 3 (punished vs. unpunished negligence). This result conflicts, inexplicably, with the result of the last study, in which compensation was greater when the company was negligent. Apparently, that phenomenon is not robust.
Typical justifications of lower compensation in Case 1 were: `In Case 1, no compensation should be awarded because no one is at fault. ... it was a freak of nature and no one is to blame.' `Case 1 - lowest amount -> uncontrollable Natural Act - unpreventable unless person themselves hadn't tripped.' `Both people in cases 2 & 3 are victims of people's carelessness so we decide to give [them] more than case 1.' `Case 1 should be awarded $10,000 only, because the rock came loose because of nature and it was inevitable that it would come apart.' `Cases 2 & 3: A person eating in a restaurant should be able to assume that the food will not make them blind, so [I] awarded these two more than Case 1 where the blindness was from natural cause.' One subject referred to victim incentive in the business scenario: `Case 1 should get less money because he could have done something to contribute to the company going out of business...'
This experiment indicates that the heuristic of providing less compensation for misfortunes caused by nature than for those caused by people is not dependent on subjects' beliefs about victims' or injurers' knowledge or emotions. This person-causation bias could lead to inequity in the provision of compensation. We are more inclined to compensate those who are injured by others than those who are victims of natural accidents such as the unfortunate circumstances of their birth. Such inequity is nonoptimal for maximizing utility because it leaves some people uncompensated who could benefit greatly from a relatively small amount of compensation, while those who are injured by people sometimes get compensation that does them less good.

Inequity and change in public policy

Other equity biases often seem to inhibit reform. Most reforms help some people and hurt others. For example, a higher tax on gasoline in the U.S. will help the whole world by reducing CO2 emissions, and it will help most Americans by reducing traffic, pollution, and the budget deficit. But it will hurt those few Americans who are highly dependent on gasoline, even when we take the other benefits into account. Some of those who would be hurt are poor, and would suffer considerably. Congress might try to craft some sort of compensation for those who are hurt, but it is difficult to target them accurately.
The problem of uncompensated harm arises in practically any sort of reform in energy and environmental policy. The argument is basically in the form of a heuristic rule or intuition, `Don't hurt people.' This argument has been made against increased gasoline taxes. Very likely, the same argument will be raised when other needed environmental reforms - such as those occasioned by global warming or the population explosion - are considered seriously. It is even made when the ultimate effect is to reduce inequity: I once almost convinced someone that the U.S. ought to abolish its sugar quotas in order to help the impoverished families who work on sugar plantations in Jamaica and the Philippines, until I mentioned that a much smaller number of sugar workers in the U.S. would lose their jobs as a result of such a move.
Most positions in public policy can be supported by some rational argument, so we cannot simply write off the opponents of gasoline taxes or supporters of sugar quotas as irrational. (For example, it is possible that abolition of sugar quotas will not really help the workers, although I suspect that those who say this have no particular reason to believe it.) It is possible, however, that part of the basis of their position is unjustifiable and that, if this part were removed, many opponents of truly beneficial reforms would not hold their position, or they would not fight for it so strongly.
Opposition to reform on grounds of uncompensated harm alone is a bias. Uncompensated harm is also caused by failure to make reforms. If we do not raise the gasoline tax, then - compared to the alternative at issue - more people will get emphysema, more people will live in poverty as a result of economic stagnation, etc. And if I am right that abolishing sugar quotas will help many families in foreign countries, then failure to abolish these quotas, relative to the alternative, is hurting these people. We must decide which hurt is smaller. What matters is the future achievement of goals, for that is what our decision can affect. The status quo is normatively irrelevant, as is the distinction between action and inaction.
In several studies (Spranca, Minsk, & Baron, 1991; Ritov & Baron, 1990, in press) my colleagues and I have demonstrated a bias toward inaction, especially in cases in which both action and inaction can cause some harm. I am suggesting here that this bias tends to inhibit change in public policy because we attend to changes rather than ultimate results, and we attend more to harms than to benefits. When some people are hurt in order to help others, we view the change as unfair, even if we would prefer the result to the status quo if given a straightforward choice between the two (with neither functioning as the status quo).

Resistance to voting for coerced reform

Some evidence for such an effect comes from a study of Baron and Jurney (1990). We presented subjects with six proposed reforms, each involving some public coercion that would force people to behave cooperatively, that is, in a way that would be best for all if everyone behaved that way. The situations involved abolition of TV advertising in political campaigns, compulsory vaccination for a highly contagious flu, compulsory treatment for a contagious bacterial disease, no-fault auto insurance (which eliminates the right to sue), elimination of lawsuits against obstetricians, and a uniform 100% tax on gasoline (to reduce global warming).
Most subjects thought that things would be better on the whole if the reforms, as we described them, were put into effect, but many of these subjects said that they would not vote for the reforms. (A much smaller group of subjects made the opposite kind of `reversal,' in which they said that they would vote for a proposal that they thought would make things worse.) A status-quo effect was also found: subjects who thought that the reforms were beneficial were less likely to vote to repeal the reforms, once they were in effect, than comparable subjects were to vote against the reforms at the outset.
Subjects who voted against proposals that they saw as improvements cited several reasons (among a list of reasons that we gave them). The following three reasons (shown in the form used here in one of the studies) played a major role in such resistance to reform, as indicated both by correlations with resistance (among subjects who saw the proposals as improvements) and by subjects indicating that these were reasons for their votes:
Harm. `The law [or rule] would make some group worse off than they were before the law.'
Rights. `The law would take away a choice that people ought to be able to make.'
Fairness. `The law would unfairly distribute the costs of the change. That is, some people would suffer more than they should, relative to other people.'
For example, in one study, 39% of subjects said they would vote for a 100% tax on gasoline, but 48% of the non-voters thought that the tax would do more good than harm on the whole. Of those subjects who would vote against the tax despite thinking that it would do more good than harm, 85% cited the unfairness of the tax as a reason for voting against it, 75% cited the fact that the tax would harm some people, and 35% cited the tax taking away a choice that people should be able to make.
Unfairness was generally the most consistent of these reasons for resistance, across different experiments and different cases. This result provides some evidence for the role of equity judgments in resistance to reform. Note that, in this study, we deal with subjects who saw reform as an improvement in utilitarian terms. We essentially asked subjects this directly.

Resistance to inequality: Experiment 1

To examine the role of fairness and harm in policy evaluation more quantitatively, I carried out two experiments using hypothetical situations. The first experiment examined tradeoffs between, on the one hand, the total gain or loss in income of two groups and, on the other hand, increases or decreases in the inequality of outcome or in the change from the status quo.
Twenty-four paid subjects (students) were given a questionnaire, which began, `Imagine that you are the president of a small island republic. You have the power to make treaties by yourself. You are engaged in trade negotiations with your sole trading partner, a much larger nation. Your entire economy is dependent on agricultural exports. Crops are grown by small farmers who own their own farms. Half are bean growers, and half are wheat growers. Bean growers and wheat growers are equally needy and equally deserving.
`Each of the following cases represents a final offer made by your trading partner. For each offer, you are shown the present average annual income of each group, and the income that would result from accepting the offer. If you decline an offer, the current situation will stay in effect for at least two years.
`Please indicate whether you would accept each offer (`yes') or not (`no'). Feel free to comment on the reasons for your policy in each part.'
Part A, with 21 cases, began with the following case:
1.          Income of       Income of
            bean growers    wheat growers
Decline     $30,000         $30,000
Accept      $40,000         $20,000
The income of the wheat growers for `Accept' was incremented by $1,000 per case up to Case 21, where it reached $40,000. Case 2 is therefore the first case in which a net improvement is possible. Case 11 is the first case in which no harm is done to the wheat growers by accepting. The critical measure here is the minimum income of wheat growers at which the subject accepts the offer, minus $20,000. We can think of this net gain (roughly) as the minimum that the subject is willing to accept to give up equity (WTA).
In Part B, with 11 cases, the first case was:
1.          Income of       Income of
            bean growers    wheat growers
Decline     $40,000         $20,000
Accept      $30,000         $30,000
The income of wheat growers for `Decline' was incremented in $1,000 steps until it reached $30,000 in Case 11. Case 1 entails no net loss. The remaining cases entail greater net loss for removing less initial inequality. Notice that Part B is identical to Cases 1-11 in Part A, except that `Decline' and `Accept' are reversed. At issue here is the highest income of wheat growers, minus $20,000, at which the subject accepts the offer. We can think of this net loss as the maximum that the subject is willing to pay for equity (WTP).
In Parts A and B, one of the options is always equality of income. The value of such equality is therefore confounded with the value of equality or inequality in the change from the status-quo that results from accepting an offer. Parts C and D were identical to Parts A and B, respectively, except that the income of the wheat growers was everywhere shifted upward by $5,000, and Part C had 16 cases instead of the 21 in Part A. Hence, the gains or losses of each group from the status-quo were the same as in Parts A and B. In Part C, however, initial equality never occurred, and final equality occurred in Case 16 (the last case) rather than Case 21. In Part D, neither initial nor final equality occurred. Comparison of Parts C and D to Parts A and B therefore tells us about the role of inequality of results (initial or final) as opposed to inequality of changes (which are the same in both conditions).
Table 2 shows the statistics for WTA and WTP for the four parts. In general, about half the subjects gave the indicated modal response in each condition and the remaining responses were spread out over the range. The modal (and median) response in Parts A and C represent the first case in which the wheat growers do not lose at all in order to help the bean growers gain. (In Part C, this case is also the one at which the overall group difference of $5,000 is not increased.) Apparently, most subjects are unwilling to hurt one group at all in order to help the other, even if the hurt is as small as 10% of the gain.6 The same heuristic accounts for the modal unwillingness to accept any offers in Parts B and D.
Table 2
Statistics for four conditions of island experiment, in thousands of dollars. The WTA measure is the increase in income of the wheat growers required to accept the change. The WTP measure is the greatest acceptable loss in order to increase equality. For WTP, a value of -1 indicates that the subject declined all offers, including the first (which had no net loss). `N at mode' indicates the number of subjects, out of 24, who gave the modal response.
Part                 A            B            C            D
 bean decline        30           40           30           40
 wheat decline       30           20-30        35           25-35
 bean accept         40           30           40           30
 wheat accept        20-40        30           25-40        35
Measure              WTA          WTP          WTA          WTP
Mean                  9.2          2.0          8.1          1.0
S.D.                  5.4          3.9          4.1          3.0
Median               10            0           10            0
Mode                 10           -1           10           -1
N at mode            10           11           11           11

Note that these responses lead to a large discrepancy between WTA and WTP as measures of the value of equality (t=6.68, p<.001, combining Parts A and C and Parts B and D). The removal of equality from one of the options (Parts C and D vs. Parts A and B) reduces the value of equity somewhat (t=2.60, p=.016). This reduction is not significantly different for Part A vs. C and B vs. D. Together, these results indicate that subjects are most concerned about changes in the status quo, although there is some value placed on a state of equality as well. The unwillingness to hurt one group in order to help another group leads to large failures to maximize net benefit.

Resistance to inequality: Experiment 2

The last experiment provided evidence for the `harm' heuristic: that it is wrong to hurt some people for the benefit of others. The present experiment looks for additional evidence for the `fairness' heuristic (equal distribution) applied to changes, as well as initial and final states.
Subjects (students) were given a questionnaire beginning: `You are the principal of an elementary school with 200 children, 100 boys and 100 girls, in a small town. An epidemic of flu is approaching your town. The aches and fever typically last for about a week.
`In the following cases, you are given statistics on the number of boys and the number of girls expected to get the flu if a vaccination program is undertaken in your school and if it is not. What is the greatest amount of money that you would spend from the school budget in each case on a vaccination program?'
Subjects were told to use any units - dollars, percent, or dollars per child - but to do so consistently, in terms of the total or average amount spent for both boys and girls.
The first case read:
1.             Boys  Girls
Do nothing     15    15
Vaccinate      10    10

Twenty-two cases differed so as to manipulate the following variables: BEGINDIFF, the beginning difference (without vaccination) between the groups (ranging from 0 to 10); CHANGEDIFF, the difference in the change resulting from the vaccination (0 to 20); ENDDIFF, the end difference if the vaccinations are given (0 to 10); TOTALCHANGE, total reduction in disease resulting from the vaccine (10 to 20), and END, the final overall disease rate with vaccination (30 to 20). (The cases also varied from 30 to 50 in BEGIN, the initial number without vaccination, but BEGIN could be deduced from END and TOTALCHANGE.) Each case in which boys or girls were treated differently was repeated with the sexes reversed. Beginning levels varied from 15 to 25 for one sex or the other, and ending levels varied from 15 to 5. In six cases, the vaccine affected only one sex. In six cases, it affected both sexes equally.
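The five design variables can be derived from the four cell entries of each case. The sketch below is illustrative (the helper name is hypothetical, and absolute differences are assumed for the DIFF variables):

```python
# Sketch (assumed coding, not the original materials): deriving the
# design variables for one case from the expected flu counts with and
# without vaccination. BEGIN is deducible from END and TOTALCHANGE, as
# noted in the text.

def design_vars(boys0, girls0, boys1, girls1):
    """boys0/girls0: expected cases if nothing is done;
    boys1/girls1: expected cases if the school vaccinates."""
    return {
        "BEGIN": boys0 + girls0,                 # initial number without vaccination
        "END": boys1 + girls1,                   # final overall rate with vaccination
        "BEGINDIFF": abs(boys0 - girls0),        # beginning difference between groups
        "ENDDIFF": abs(boys1 - girls1),          # end difference if vaccinations given
        "CHANGEDIFF": abs((boys0 - boys1) - (girls0 - girls1)),  # difference in change
        "TOTALCHANGE": (boys0 + girls0) - (boys1 + girls1),      # total reduction
    }
```

For the first case (15/15 reduced to 10/10), all three DIFF variables are 0 and TOTALCHANGE is 10; a case in which only one sex benefits makes CHANGEDIFF positive.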
An additional 22 cases were made up in which `the program was already planned' and `you could save money for your school by canceling the program.' Subjects were asked, `What is the smallest amount of savings you would have to receive by canceling the program in each case?' These cases were identical to the initial 22 cases except that `Do nothing' was replaced with `Cancel,' and `Vaccinate' was replaced with `Do not cancel.' The two groups of 22 cases were distinguished by an additional dichotomous variable, SQ (for `status-quo').
Thirty-three subjects received this form of the questionnaire, and 35 were given the same questionnaire with the two SQ conditions reversed. Thirteen subjects in the original condition and 14 in the reverse condition were eliminated from analysis because they did not follow instructions (9 subjects - e.g., they simply said whether they would vaccinate or not), responded only to TOTALCHANGE (14), or gave the same response to all cases in a group (2).
Data were analyzed by regressing each subject's responses on BEGIN, END, BEGINDIFF, CHANGEDIFF, and ENDDIFF, separately for each SQ condition. The effect of inequality was assessed from the standardized coefficients for BEGINDIFF, CHANGEDIFF, and ENDDIFF. Across all subjects, the means of the BEGINDIFF and ENDDIFF coefficients were not significantly different from zero, nor were they different as a function of SQ. The CHANGEDIFF coefficient was significant in the hypothesized direction, with greater inequality in the change leading to lower value (mean=0.100, t=2.46, p=.009, one-tailed, for the mean coefficient of both SQ conditions; the result was the same in both SQ conditions, with means of 0.103 and 0.096).
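The per-subject analysis can be sketched as follows. This is illustrative code, not the original analysis script; it assumes that standardized (beta) coefficients are obtained by z-scoring the predictors and the response before fitting ordinary least squares:

```python
# Sketch (assumed procedure): per-subject regression with standardized
# coefficients. X is a cases-by-predictors matrix whose columns might
# be BEGIN, END, BEGINDIFF, CHANGEDIFF, and ENDDIFF; y is one subject's
# responses. Assumes no predictor is constant across the cases.
import numpy as np

def standardized_betas(X, y):
    """Return standardized regression coefficients (betas) for y on X."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score each predictor
    yz = (y - y.mean()) / y.std()              # z-score the response
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas
```

A positive beta for CHANGEDIFF under this coding would mean greater inequality in the change predicts a higher response, so the hypothesized effect (inequality lowering the value of the program) shows up as the sign appropriate to how the response is scored.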
Examination of individual regression coefficients confirmed the overall results. Sixteen (out of 43) subjects had at least one coefficient for CHANGEDIFF significant at p<.025. Four subjects had one CHANGEDIFF coefficient significant in the wrong direction. Only five subjects showed any other significant coefficients.
There was no status-quo effect overall, as determined from the constant terms in the regressions. This contrasts sharply with the last experiment, in which the status-quo effect was extremely large. In that experiment, however, changes from the status-quo involved hurting one group in order to help another. In the present experiment, all groups were helped by the vaccinations. Still, a status-quo effect is often found (Samuelson & Zeckhauser, 1988), and it remains a puzzle why it was not found here.
In sum, the present experiment revealed that some subjects were concerned about inequality in the distribution of the benefits of a vaccination program. Inequality reduced the value they placed on the program. Once again, use of this heuristic could lead to failure to maximize the total benefits of a program.
These experiments are preliminary because I have not ruled out a possible alternative explanation of the results. Specifically, subjects might take into account not only the direct costs and benefits of options but also the indirect costs and benefits that result from the emotional response of those affected. Wheat growers might resent having their income cut, and this resentment might increase the disutility of the cut itself. It is more difficult to make this argument in the case of the children, for they would be unlikely to know how much the vaccine helped each of the sexes. But anticipated resentment could again have some role.
On the basis of the experiment reported in the last section, in which I have tried to explicitly remove the possibility of such emotions, I doubt that this sort of explanation accounts for the results. More likely, I think, subjects are applying heuristics without explicitly evaluating the consequences of the options. Further research must be done, but, in the meantime, I believe that the present results make more plausible the claim that many people do not evaluate options in terms of their expected utilities, but, rather, in terms of heuristics that yield the best solution only some of the time.

Conclusion


In this paper, I have summarized the utilitarian approach to distribution, and I have described several studies suggesting that people's intuitions often contradict utilitarian theory. Many people seem to think that punishments or penalties are inherently deserved, so that they should be applied even when deterrence is absent. People also tend to think that compensation should be greater when harm is caused by people than when it is caused by nature, especially (in one study) when injurers are negligent. Greater compensation is also provided when the injurer compensates the victim directly; this seems to fit into a schema for compensation. Also in contrast to utilitarian theory, people are reluctant to harm one person in order to help another, and they are reluctant to initiate reforms when the benefits of reform are unequally distributed, even when the reforms are beneficial on the whole.
In all studies, these biases (departures from the theory) were by no means universal. People differ in the extent to which they subscribe to utilitarian theory in each case. Many subjects follow the theory. (The number who follow the theory depends heavily on the question, and very likely also on the wording of the question.) It does not seem to be beyond our cognitive power to follow it. Most people seem to have utilitarian intuitions alongside of others. The findings of Larrick, Nisbett, & Morgan (1990) suggest that subjects who follow this sort of theory are at least no worse off personally than those who do not.
My own view (Baron, 1990) is that people should be taught to understand the utilitarian approach. If instruction in the kind of principles I have outlined were widespread, then I think that people would have a common language for discussing their differences about matters of social policy, and we would be able to reduce many of our differences to empirical questions (or to judgments about the answers to these questions, when we must decide in the absence of data). The need for a common language is particularly important in the years immediately ahead, when the world must collectively decide (if only by default) how to respond (or not respond) to an interrelated set of issues concerning population growth, environmental degradation, and global warming. The question of how the burden of global warming will be distributed among different people and people of different nations will raise very serious questions of equity (Baron & Schulkin, in press).
If utilitarianism were better understood, those who reject it would do so on the basis of understanding, not on the basis of ignorance. These critics would be encouraged to work out alternative theories more thoroughly, so that they provided solutions to real problems just as well as utilitarian theory does.

References


Bar-Hillel, M., & Yaari, M. (1987). Judgments of justice. Manuscript, The Hebrew University, Jerusalem.
Baron, J. (1985). Rationality and intelligence. New York: Cambridge University Press.
Baron, J. (1988a). Thinking and deciding. New York: Cambridge University Press.
Baron, J. (1988b). Utility, exchange, and commensurability. Journal of Thought, 23, 111-131.
Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77-87.
Baron, J. (in press). Morality and rational choice. Dordrecht: Kluwer.
Baron, J., & Jurney, J. (1990). Norms against coerced reform. Manuscript, University of Pennsylvania.
Baron, J., & Ritov, I. (in preparation). Intuitions about punishment and compensation in the context of tort law. Manuscript, University of Pennsylvania.
Baron, J., & Schulkin, J. (in press). Equity, moral judgments, and global warming. Environment.
Cohen, G. A. (1989). On the currency of egalitarian justice. Ethics, 99, 906-944.
Elster, J. (1989). The cement of society. Cambridge: Cambridge University Press.
Foot, P. (1978). The problem of abortion and the doctrine of the double effect. In P. Foot, Virtues and vices and other essays in moral philosophy, pp. 19-32. Berkeley: University of California Press. (Originally published in Oxford Review, no. 5, 1967.)
Friedman, D. (1982). What is `fair compensation' for death or injury? International Review of Law and Economics, 2, 81-93.
Griffin, J. (1986). Well being. Oxford: Oxford University Press (Clarendon Press).
Hare, R. M. (1981). Moral thinking: Its levels, method and point. Oxford: Oxford University Press (Clarendon Press).
Hicks, J. R. (1939). The foundations of welfare economics. Economic Journal, 49, 696-712.
Inglehart, J. K. (1987). Compensating children with vaccine-related injuries. New England Journal of Medicine, 316, 1283-1288.
Kahneman, D., & Snell, J. (1990). Predicting utility. In R. Hogarth (Ed.), Insights in decision making. Chicago: University of Chicago Press.
Kaldor, N. (1939). Welfare propositions of economics and interpersonal comparison of utility. Economic Journal, 49, 549-552.
Keller, L. R., & Sarin, R. K. (1988). Equity in social risk: Some empirical observations. Risk Analysis.
Landes, W. M., & Posner, R. A. (1987). The economic structure of tort law. Cambridge, MA: Harvard University Press.
Larrick, R. P., Nisbett, R. E., & Morgan, J. N. (1990). Who uses cost-benefit reasoning? Manuscript, University of Michigan, Ann Arbor.
MacIntyre, A. (1984). After virtue: A study in moral theory (2nd ed.). Notre Dame, IN: University of Notre Dame Press.
McCloskey, M. (1983). Naive theories of motion. In D. Gentner & A. L. Stevens (Eds.), Mental models (pp. 299-324). Hillsdale, NJ: Erlbaum.
Mill, J. S. (1859). On liberty. London: Parker & Son.
Nozick, R. (1974). Anarchy, state, and utopia. New York: Basic Books.
Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277.
Ritov, I., & Baron, J. (in press). Status-quo and omission bias. Journal of Risk and Uncertainty.
Sabini, J., & Silver, M. (1981). Moralities of everyday life. Oxford: Oxford University Press.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1, 7-59.
Sarin, R. K. (1985). Measuring equity in public risk. Operations Research, 33, 210-217.
Sen, A. (1987). The standard of living. Cambridge: Cambridge University Press.
Sen, A., & Williams, B. (Eds.) (1982). Utilitarianism and beyond. Cambridge: Cambridge University Press.
Shavell, S. (1987). Economic analysis of accident law. Cambridge, MA: Harvard University Press.
Singer, P. (1977). Utility and the survival lottery. Philosophy, 52, 218-222.
Singer, P. (1979). Practical ethics. Cambridge: Cambridge University Press.
Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105.
Ulph, A. (1982). The role of ex ante and ex post decisions in the valuation of life. Journal of Public Economics, 18, 265-276.
Weymark, J. A. (1991). A reconsideration of the Harsanyi-Sen debate on utilitarianism. In J. Elster & J. E. Roemer (Eds.), Interpersonal comparisons of well-being, pp. 255-320. New York: Cambridge University Press.
Williams, B. (1985). Ethics and the limits of philosophy. Cambridge, MA: Harvard University Press.


1The research described here was supported by grant SES-8809299 from the National Science Foundation. I thank Robyn Dawes, Jon Elster, Clark McCauley, Barbara Mellers and Jay Schulkin for comments on earlier drafts.
2We should not assume this, however. New Zealand has been experimenting with a compensation system totally separate from the tort system.
3In saying this, I do not imply that we should invoke such goals to explain every deviation from utilitarian theory. Rather, in making decisions, we should honestly decide whether such goals are relevant. In experimental work, we should try to eliminate them (Baron, 1991).
4The story is not entirely unrealistic. New Zealand now has a system something like this.
5These results are unlikely to result from confusion of compensation and deterrence, for the proportions were essentially identical in subjects who did not think that the victim should be compensated and in those who did think so.
6It is unlikely that subjects believe that the disutility of a $1,000 loss exceeds the utility of a $10,000 gain. Previous studies of the utility of gains vs. losses have not found such large differences, but this was not checked here.
