Baron, J. (2003). Value analysis of political behavior - self-interested : moralistic :: altruistic : moral. University of Pennsylvania Law Review, 151, 1135-1167.

Value analysis of political behavior - self-interested : moralistic :: altruistic : moral

Jonathan Baron1
Department of Psychology
University of Pennsylvania
baron@psych.upenn.edu

Abstract

I distinguish four types of goals: self-interested, altruistic, moralistic, and moral. Moralistic goals are those that people attempt to impose on others, regardless of the others' true interests. These may become prominent in political behavior such as voting because such behavior has relatively little effect on self-interested goals. I argue (sometimes with experimental evidence) that common decision biases concerned with allocation, protected values, and parochialism often take the form of moralistic values. Because moralistic values are often bundled together with other values that are based on false beliefs, they can be reduced through various kinds of reflection or ``de-biasing.''

If your morals make you dreary, depend on it they are wrong. I do not say ``give them up,'' for they may be all you have; but conceal them like a vice ....


Robert Louis Stevenson, A Christmas Sermon (Part II)

Introduction

The quality of government is a major determinant of the quality of people's lives. Underdeveloped countries suffer from misguided economic policies such as price controls and subsidies, over-investment in arms and underinvestment in health and education, excessive regulation, and corruption. Developed countries also suffer from unbalanced national budgets, inefficient subsidies, and other ills, although the developed countries generally have better government. World problems such as depletion of fisheries suffer from the lack of regulation and international institutions.2

In the long run, and sometimes in the short run, government policy depends on the political actions and omissions of citizens. But the individual citizen typically has little influence. Thus, one of the most important determinants of our well-being is controlled almost completely by our collective behavior but is affected very little by our individual behavior. This situation contrasts with that in a free market for goods and services, where our well-being is also affected by our decisions, but individual decisions have a direct effect on individual outcomes. In political behavior - such as voting, responding to polls, trying to influence others, writing letters, or making contributions of time and money - our little actions are pooled together to make one huge decision that affects us all.

This situation opens up the possibility that political behavior is less sensitive to its consequences for the decision maker, and more sensitive to other factors such as emotions and, I shall argue, moral (or moralistic) principles.3 Political behavior might therefore be more subject to decision biases, fallacies and errors than is market-oriented behavior.4

This insensitivity to consequences, if it happens, is interesting for the study of rational decision making. It raises special questions. In this paper, I want to address some of these issues. First, I describe a distinction among types of values, which is the main point of this article. Next I summarize some experimental evidence about how people think about their values.

I assume utilitarianism as a normative theory, that is, a standard against which I compare people's judgments and decisions. This theory is defended elsewhere (Baron, 1993, 1996; Broome, 1991; Hare, 1981; Kaplow & Shavell, 2001). Utilitarianism enjoins us to make decisions that produce the best consequences on the whole, for everyone, balancing off the gains and losses. If we fall short of this standard, then we make decisions that make someone worse off - relative to the standard - without making anyone better off to a sufficient extent to balance the harm. Utilitarianism, then, has at least this advantage: if we want to know why decisions sometimes yield consequences that are less good than they could be, on the whole, one possible answer is that people are making decisions according to principles that are systematically non-utilitarian, and they are getting results that follow their principles rather than results that yield the best outcome.5

Types of goals

Decisions are designed to achieve goals, or (in other words) objectives or values. Goals may be understood as criteria for the evaluation of decisions or their outcomes (Baron, 1996). I use the term ``goals'' very broadly to include (in some senses) everything a person values. Behavior is rational to the extent to which it achieves goals. Rational political behavior will not happen without goals.

Goals, in the sense I describe, are criteria for evaluating states of affairs. They are reflectively endorsed. They are the result of thought, and are, in this sense, ``constructed,'' in much the way that concepts are the result of reflection. Goals are not simply desires, and very young children might properly be said to have no goals at all, in the sense at issue.

Your goals fall into four categories: self-interested, altruistic, moralistic, and moral. These correspond to a two-by-two classification. One dimension of this classification is whether or not your goals depend on the goals of others. The other dimension is whether they concern others' voluntary behavior.

                                For your behavior     For others' behavior
Dependent on others' goals      Altruistic            Moral
Independent of others' goals    Self-interested       Moralistic

The idea of dependence on others' goals assumes that goals are associated with the individuals who have them. Your goals are contingent on your existence. If you were never born, no goals would be yours.6 Your self-interested goals are those that are yours, in this sense. Altruistic (and moral) goals are goals for the achievement of others' goals. Your altruistic goals concerning X are thus a replica in you of X's goals as they are for X. Altruism may be limited to certain people or certain types of goals. But it rises or falls as the goals of others rise and fall.7

We have goals for what other people do voluntarily. The behavior of others is a kind of consequence, which has utility for us. But this type of consequence has a special place in discussion of social issues. It is these goals for others' behavior that justify laws, social norms, and morality itself. When you have goals for the behavior of others, you apply criteria to their behavior rather than your own. But of course these are still your own goals, so you try to do things that affect the behavior of others. ``Behavior'' includes their goals, and all their thinking, as well as their overt behavior (just as self-interested goals may concern your own mental states). And ``behavior'' excludes involuntary or coerced behavior. When we endorse behavior for others, we want them to want to choose it.

What I call moral goals are goals that others behave so as to achieve each others' goals. These are ``moral'' in the utilitarian sense only. In fact, they are the fundamental goals that justify the advocacy of utilitarianism (Baron, 1996). I shall return to these goals.

By contrast, moralistic goals are goals for the behavior of others that are independent of the goals of others. People could want others (and themselves) not to engage in homosexual behavior, or not to desire it.8 Other examples abound in public discourse: antipathy to drug use; enforcement of particular religions against other religions; promotion of certain tastes in fashion, personal appearance, or artistic style against other tastes; and so on.9

Often the public discourse about such things is expressed in the language of consequences. Moralistic goals usually come bundled with beliefs that they correspond to better consequences, a phenomenon that has been called ``belief overkill'' (Jervis, 1976; Baron, 1998). For example, opponents of homosexuality claim that the behavior increases mental disorders, and this argument, if true, is relevant to those who do not find homosexuality inherently repugnant.

The people whose behavior is the concern of moralistic goals could be limited to some group. Your goals for others' behavior could be limited to others who share your religion or nationality, for example. In a sense, altruism can be limited, but truly moral goals cannot. You can have altruistic goals toward certain other people only, not caring about the rest. But if you have goals for others' behavior based only on these altruistic goals, these are actually moralistic goals, not altruistic, because they are independent of the goals of those outside of your sphere of altruism. This is just a consequence of the way I have defined ``moralistic.'' We might want to distinguish moralistic goals that come from limited altruism from those that come from self-interest alone (which I shall discuss shortly).

In sum, unlike moral goals, moralistic goals can go against the goals of others. When moralistic goals play out in politics, they can interfere with people's achievement of their own goals. That is, if we define ``utility'' as a measure of goal achievement, they decrease the utility of others. They need not do this if enough people have the same goals for themselves as others have for them. But the danger is there, especially on the world scale or in local societies with greater diversity of goals.

Moral goals may also involve going against the goals of some in order to achieve the goals of others. But moral goals are those that make this trade-off without bringing in any additional goals of the decision maker about the behavior of others.

Altruism and moralism are difficult to distinguish because of the possibility of paternalistic altruism. A true altruist may still act against your stated preferences, because these preferences may depend on false beliefs and thus be unrelated to your true underlying goals (Baron, 1996). Undoubtedly moralists often believe that they are altruists in just this way. The experiments I review later, however, will suggest that some moralism does not take this form. People think that their values should sometimes be imposed on others even when they (ostensibly) agree that the consequences are worse and that the others involved do not agree with the values being imposed. Moreover, even when people think that they are being paternalistically altruistic, they may be wrong. Although underlying goals are difficult to discover and somewhat labile, their existence is often a matter of fact.

Because other people's moralistic goals - in contrast to their altruistic goals and moral goals - are independent of our own individual goals, each of us has reason to want to replace moralistic goals with altruistic goals. Specifically, you will benefit from altruistic and moral goals of others, and you will also benefit from their moralistic goals if these agree with your own goals. But you will suffer from moralistic goals that conflict with your goals.

It is conceivable that most moralistic goals have greater benefits than costs, relative to their absence and that they could not be replaced with other goals - such as moral goals - that had even greater benefits. But it is also conceivable that moralistic goals arise from the same underlying motivation that yields moral goals and that the difference is a matter of belief, which depends on culture, including child rearing and education. Some of the beliefs in question may be false, and subject to correction. To the extent to which moralistic goals can be replaced with more beneficial moral goals, we have reason to try to make this happen.

A norm-endorsement argument

Norm endorsement is what I call the activity we undertake to try to influence the behavior of others. I have argued that norm endorsement is a fundamental moral activity, in terms of which morality can be defined (Baron, 1996). What should count as moral, by this view, is what we each have reason to endorse for others and ourselves to follow, assuming that we have put aside our current moral views. By this account, moralistic goals are non-moral. But we all have reason to endorse moral and altruistic goals. (Specifically, our reasons come from both self-interest and altruism.)

The basis of this argument is an analytic scheme in which we ask about the purpose of moral norms, or, more generally, norms for decision making. The idea is that the act of endorsing norms is fundamental for a certain view of what the question is. There are surely other questions worth answering, but the question of what norms we should endorse for decision making is of sufficient interest, whether or not it exhausts the meaning of ``moral'' or ``rational.''

The motivation for norm endorsement can be altruistic, moralistic, or moral. If we want to ask about what norms we should endorse for others, we are concerned about their behavior. Hence, we want to put aside the norms that arise from our current moral and moralistic goals, to see whether we can derive these goals from scratch. Again, this isn't the only question to ask, even within this scheme, but it is a useful question. The answer provides a justification of our goals concerning others' behavior, without the circularity of justifying those goals in terms of goals of the same sort.

Our remaining altruistic goals are sufficient to motivate the endorsement of norms for others' decision making. If we successfully encourage others to rationally pursue their self-interest, and to be altruistic toward others, they will achieve their own goals better. This will satisfy our own altruism toward them. Note that our altruistic goals concern the achievement of their goals, but we have now used this to justify moral goals, in which we endorse altruistic goals for others.

The upshot of this argument is that we should have moral goals. That is, we should endorse voluntary altruism and moral behavior. We should want others to be altruistic, to endorse altruism, and to endorse the endorsement of both of these behaviors. This contrasts with moralism because it depends completely on the goals that exist in the absence of any goals for others.

Rational political action

The analysis of goals is relevant to decisions about political action, such as voting. Most political action is low in cost, with little effect on the decision maker, but it is part of a system in which the little actions of a great many people taken together have large effects on everyone. In this regard, it is much like many large-scale public-goods problems (or social dilemmas) in which each person is faced with a choice of cooperation or defection in the support of a public good, e.g., constraining water use when water is short.10

Voting (like cooperation in a social dilemma) is typically not justifiable by the narrow self-interest of the voter (Downs, 1957). The low cost of voting and the lack of significant effect on the voter's interests suggest that voting will typically be motivated by moral beliefs, or by expressive concerns, rather than by rational calculation of self-interest (Brennan & Hamlin, 2000; Brennan & Lomasky, 1993). The social science literature generally supports the conclusion that voting is predicted better from moral beliefs than from narrow self-interest (Brodsky & Thompson, 1993; Sears & Funk, 1991; Shabman & Stephenson, 1994).

Most behavior of interest to legal scholars and economists is primarily self-interested, but most political behavior is not primarily self-interested in the narrow sense. Voting is unlikely to be rational on the basis of purely self-interested goals, as I have defined them. Such self-interested reasons might include the good feeling of participation, insofar as that feeling is not itself dependent on altruistic, moral, or moralistic goals. It is hard to imagine such a feeling.

Voting can be rational for you, however, if you are sufficiently altruistic (Baron, 1997a). For similar reasons, voting and other political behavior can be rational if they achieve moral goals by affecting other people's behavior. In principle, such effects could be larger than the effects on purely altruistic goals. If you are the only altruist in the world, and your vote made N people into altruists just like you, who would then vote altruistically, your influence would be multiplied by N. Of course, this is unlikely.
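One rough way to see why modest altruism can suffice is an expected-value sketch. The following Python fragment is illustrative only; all of its numbers are invented, and it is not a model taken from the cited work.

# Illustrative sketch only; every number below is invented for the example.
n_voters = 100_000_000
p_decisive = 1.0 / n_voters     # rough order of magnitude for the chance of deciding the outcome
benefit_to_self = 1000.0        # value to the voter of the better policy winning (utility units)
benefit_per_other = 1000.0      # assume a comparable benefit for each other citizen
altruism_weight = 0.05          # the voter weighs each other person's benefit at 5% of her own
cost_of_voting = 10.0           # time and trouble of going to the polls

selfish_value = p_decisive * benefit_to_self - cost_of_voting
altruistic_value = (p_decisive
                    * (benefit_to_self + altruism_weight * benefit_per_other * n_voters)
                    - cost_of_voting)

print(f"self-interest only: {selfish_value:+.2f}")    # about -10: voting does not pay
print(f"with altruism:      {altruistic_value:+.2f}")  # about +40: voting pays

Because the probability of being decisive falls roughly as 1/N while the number of beneficiaries grows as N, the altruistic term does not shrink as the electorate grows, whereas the self-interested term does.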

Voting may also be rational for someone with sufficiently strong moralistic goals. Moralistic goals may not be so dependent on the number of people who behave consistently with them, but they can also be more powerful than altruistic goals. Most altruistic goals for each other person are weaker than self-interested goals, usually quite a bit weaker. Moralistic goals are not limited in this way. People who understand - however vaguely - that voting is not justified by self-interest may still feel that their moralistic goals require them to vote, and they are not irrational to feel this way, in terms of their own goals, even when they are immoral from a utilitarian perspective.

In sum, voting is rational when it is supported by moral goals, altruistic goals, and moralistic goals. The trouble is that the moralistic goals are often the strongest. This leads people to impose on us, through their political behavior, policies that sometimes subvert our individual goals.

What can we do when moralistic goals cause trouble?

I argued for putting aside moralistic goals when we think about what norms we would endorse if we did not already have norms. But real people do not put these goals aside. Although you may not like other people's moralistic goals, especially when they conflict with your own goals, they are goals none the less. If you are altruistic toward others, then you want their goals to be achieved, whatever they are. In the extreme, moralistic goals are sadistic. Altruism, hence utilitarianism, must count sadistic goals as well (Hare, 1981).

Sadistic goals, by definition, go against the goals of others, and moralistic goals often do this too. Thus, when we are altruistic toward people who have moralistic and sadistic goals, we should also be altruistic toward those with conflicting goals. Utilitarianism advocates balancing the two considerations so as to maximize total utility in the long run. We cannot simply discount sadistic or moralistic goals because of their nature, but, in any given case, these goals run into resistance from other goals that we must consider.

We do not want people to interfere with our liberties because of their misguided moral opinions. But we also do not want to interfere with what might be a well-targeted but unpopular moral opinion. The problem is exacerbated by the fact that the goals that support political behavior are weak, and, without moralistic goals, fewer people would participate at all.

Consider, as an example, whether the Food and Drug Administration (FDA) should approve a rotavirus vaccine (Kaiser, 2001). The benefits are the prevention of a serious disease (especially in poor countries, whose approval of the vaccine may depend on what the FDA does). The costs include various side effects, some as serious as the disease. But another cost is the fact that citizens are bothered by the idea of putting some at risk in order to help others. Should this source of opposition to the vaccine count as part of the cost-benefit analysis?

The opposition to the vaccine may be decomposed into two parts. One is an opinion about what should be done. This is not properly considered as a component of utility, because it does not involve the goals of those who hold this opinion. We can, in principle, have opinions about what others should do without caring whether they do it (i.e., without having that as a goal of our own decisions), although of course we usually do care.

The other part is in fact a matter of caring, a goal. And it is a moralistic goal because it concerns the behavior of others - such as the FDA, parents, and pediatricians - and it is held without regard to whether these others share the goal. A utilitarian cost-benefit analyst would take this goal into account, because it is part of the utility of those who hold it.

Qualifications: Why we might ignore or suppress moralistic goals

This conclusion is qualified by two considerations (Hare, 1980; Baron, 1996). One is that neglecting certain goals might have future benefits in the form of discouraging people from having those goals. When a person's goals interfere with others' pursuit of their goals, we have reason to suppress the former, if we can.

Some goals may even hurt the person who has them. Many goals are generally desired even when they will not be fully achieved. Most people would, I suspect, rather have a sex drive than not, even if they know it will not be satisfied. The same goes for goals concerning love, friendship, food, drink, the arts, music, and sports. These are goals to be cultivated. (I put aside the interesting question of what makes them this way.) Many moralistic goals fall into a class that we might call primarily negative. Once we have these goals, the failure to satisfy them can make us unhappy, but achieving them is not usually positive. In the absence of such goals, most people would probably not want them. (I am assuming this. It is an empirical question, and I'm not sure of the answer.) In this regard, the discouragement of moralistic goals might even benefit those who would otherwise have them. This is also a possible distinction between self-interested and moralistic goals. Self-interested goals often interfere with the achievement of others' goals, in the same way that moralistic goals do. But most self-interested goals are of the positive type, so there is a greater cost to discouraging their formation and maintenance.

The other qualifying consideration is that utilitarian analysis should (arguably) count only fundamental goals, and it should ignore goals based on false beliefs (Baron, 1996).11 Some moralistic goals might be based on false beliefs (such as the belief that the Golden Rule as stated in the Bible concerns actions but not omissions, if that is indeed false).

To take another example, consider the opposition to homosexuality. Whatever the origins of this in feelings of disgust, the belief that homosexuality is wrong is supported by a host of related beliefs: that it is unnatural, that it is controllable, and that it has harmful effects on other aspects of people's lives. Thus, the opposition to homosexuality is a means to the end of avoiding the unnatural through things we can control, etc. To the extent to which the evidence says otherwise, then the moral opposition is based on false beliefs. Beliefs, once formed, often resist such evidence, but if the evidence had the effect it should have then this belief would be eliminated or reduced in strength.

One issue here is whether these two qualifications apply when moralistic goals and self-interested goals coincide. It could turn out that people's moralistic goals coincide with the self-interested goals of almost everyone. Surely this happens sometimes. For example, it seems that almost everyone thinks that human cloning is yucky and that nobody should do it. (They make up all sorts of consequentialist reasons, but it is clear that these reasons have little to do with the moral view in question, as they all may disappear in time without reducing the opposition to cloning.) Why not ban cloning, if only 1% would ever think of having themselves cloned? It might even be that 90% of people would favor a ban as a self-control device. They may some day want to clone themselves (e.g., if they need a bone marrow donation that a clone could provide in time), and they may want the temptation to do so removed by law. Similarly, in the case of a vaccine, it may be that almost everyone weighs the side effects more than the disease.

In some cases, because of one or both of these two considerations, neglect of moralistic goals might be desirable. For example it might be reasonable to put them aside in some cost-benefit analysis. When utilities are elicited in surveys, we must take care to separate utilities for outcomes from moralistic goals and also from opinions about what others should do. When we ask about vaccines, we should ask about the consequences without saying which are caused by the natural disease and which are caused by the vaccine. And, when we ask about the value of damage to the environment, we should not say whether the damage was caused by people or by nature itself. (See Baron, 1997c, for further discussion.)

It might seem possible to argue that all moralistic goals are based on false beliefs. The argument would take the form of the argument that I sketched for utilitarianism itself. Namely, if we did not already have such moralistic goals, the creation of such goals (itself an act that is somewhat under our control) could not be motivated by any goals that we already have. So they must arise from confusion.

In particular, people may form moralistic goals initially on the basis of their own reaction to some new situation, such as the possibility of cloning. On the basis of this reaction, they decide that some behavior (such as getting cloned) is not for them. They then go further to conclude that anyone who thinks otherwise must be misguided and in need of protection from her own delusions. Thus, out of parentalism (the non-sexist version of ``paternalism''), people endorse the general prohibition of the behavior, even in those who benefit from it. The impulse for moralistic goals is thus the same as the impulse for altruistic and moral goals. But it is based on false belief about the true goals of others. (This is an especially difficult judgment to make, however, because there are surely cases in which such parentalism is justified, as when parents try to inculcate a taste for education or classical music in their children. But difficulty of a judgment does not imply that there is no such thing as a correct judgment in a given case.)

An apparent problem with this argument is that moralistic goals can be motivated by self-interested goals. For example, one self-interested goal is to want other people to be like us, to share our religion, for example. When we live in a community of like-minded others, it is easier for us to pursue our personal commitments. Thus, moralistic goals may rationally arise from self-interested goals. Moralistic goals thus become a kind of rent seeking in the domain of social norms.

At least when we look at moralistic goals in this way, the claim of some people that moralistic goals should trump other people's goals looks more like a tough negotiating stance than a high-minded moral assertion. We must give moralistic goals their due, but simply because they are related to self-interested goals, not because they have any special status, despite the claims made for them. Part of them may well depend on false beliefs about others.

More to the point, people may come to understand and accept the kind of argument I have made about the nature of moralistic goals. To the extent to which they accept such an argument, they would realize that much of the support for their moralistic goals is based on false belief. The false belief in question is that moralistic goals are truly moral, because they are good for those on whom they are imposed. People may accept this proposition unreflectively, without fully putting themselves in the position of those who are affected. The kind of argument made here may help people to identify those cases in which they are rationalizing rather than being truly parental toward others.

Some examples of biases

In the rest of this article, I want to argue that several biases in judgments about public policy take the form of moralistic goals, to some extent. These are biases away from utilitarianism, hence toward worse overall consequences if the decisions have their intended effects. I will illustrate these principles from my research on people's judgments about hypothetical policy decisions. When these principles are moralistic goals, they serve to provide a rational motivation for voting, because voting serves these goals. One aim of my research is to find out their form - in particular, whether they are moralistic and knowingly non-utilitarian - and another is to explore the possibility of modifying judgments through de-biasing. I address three types of principles: allocation principles, protected values, and parochialism.

Of primary interest is whether these represent moralistic values, at least in part. That is, are people willing to impose these principles on others, even when the others disagree with the principles and prefer some other option out of self-interest?

Allocation

One type of principle concerns allocation of goods or bads. These principles may thus involve intuitions about fairness. These intuitions generally support policies that lead to worse consequences for some people, and potentially everyone (Kaplow & Shavell, 2000). Interestingly, people's goals for distributional properties like fairness are moralistic goals. They go against the goals of others, with no compensating advantage in achieving other goals, except for the moralistic goals of others who support them.

Omission bias

Ritov and Baron (1990) examined a set of hypothetical vaccination decisions modeled after real cases (like the rotavirus vaccine, but our actual inspiration was the DPT vaccine). In one experiment, subjects were told to imagine that their child had a 10 out of 10,000 chance of death from a flu epidemic; a vaccine could prevent the flu, but the vaccine itself could kill some number of children. Subjects were asked to indicate the maximum overall death rate for vaccinated children at which they would still be willing to vaccinate their child. Most subjects answered well below 9 per 10,000. Of the subjects who showed this kind of reluctance, the mean tolerable risk was about 5 out of 10,000, half the risk of the illness itself. The same results were found when subjects were asked to take the position of a policy maker deciding for large numbers of children. When subjects were asked for justifications, some said that they would be responsible for any deaths caused by the vaccine, but they would not be (as) responsible for deaths caused by failure to vaccinate.
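A minimal worked comparison, using the numbers from this scenario, shows where the utilitarian threshold lies. The sketch below is my illustration, not part of the original study.

# Expected deaths per 10,000 children: an unvaccinated child faces the
# 10-in-10,000 flu risk; a vaccinated child faces only the vaccine's own risk.
DISEASE_RISK = 10  # deaths per 10,000 among unvaccinated children

for vaccine_risk in (5, 9, 10, 12):  # hypothetical vaccine death rates per 10,000
    choice = "vaccinate" if vaccine_risk < DISEASE_RISK else "do not vaccinate"
    print(f"vaccine risk {vaccine_risk}/10,000 vs. disease risk {DISEASE_RISK}/10,000 -> {choice}")

Judged by expected deaths alone, any vaccine risk below 10 per 10,000 favors vaccinating; the reluctant subjects would instead leave their child facing the full 10-in-10,000 disease risk rather than accept a vaccine risk anywhere between 5 and 10 in 10,000.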

Biases such as this are apparently the basis of philosophical intuitions that are often used as counterexamples to utilitarianism, such as the problem of whether you should kill one person to save five others. Many people feel that they should not kill the one and that, therefore, utilitarianism is incorrect. They assume that their intuitions arise from the moral truth, usually in some obscure way. An alternative view, the view I have defended elsewhere (e.g., Baron, 1993, 2000) is that intuitions arise from learning of simple rules that usually coincide with producing good consequences, but, in the critical cases, do not. In terms of consequences, one dead is better than five. (Of course, once the intuition starts running, it is bolstered by all sorts of imaginary arguments, some of them about consequences.)

Omission bias is related to issues of public controversy, such as whether active euthanasia should be allowed, whether vaccines should be approved or recommended when they have side effects as bad as (but less frequent than) the diseases they prevent, or whether we are morally obligated to help alleviate world poverty (Singer, 1993).

Omission bias is somewhat labile. It can be reduced by instructions to take the point of view of those who are affected (Baron, 1992), e.g., ``If you were the child, and if you could understand the situation, you would certainly prefer the lower probability of death. It would not matter to you how the probability came about.''

Proportionality and zero risk

People worry more about the proportion of risk reduced than about the number of people helped. This may be part of a more general confusion about quantities. Small children confuse length and number, so that, when we ask them to compare rows of objects for length or number, they will answer about length (or number), regardless of the question, even if the shorter row has more objects (Baron, Lawson, and Siegel, 1975). The literature on risk effects of pollutants and pharmaceuticals commonly reports relative risk, the ratio of the risk with the agent to the risk without it, rather than the difference. Yet, the difference between the two risks, not their ratio, is most relevant for decision making: If a baseline risk of injury is 1 in 1,000,000, then twice that risk is still insignificant; but if the risk is 1 in 3, a doubling matters much more.
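The arithmetic can be made explicit with a small illustration of my own (not taken from the studies cited below):

# Absolute versus relative risk: "the agent doubles the risk" means very
# different things depending on the baseline.
for baseline in (1 / 1_000_000, 1 / 3):
    doubled = 2 * baseline
    extra_per_million = (doubled - baseline) * 1_000_000
    print(f"baseline {baseline:.7f}: doubling adds {extra_per_million:,.1f} extra cases per million exposed")

The relative risk is 2 in both cases, but the absolute increase differs by a factor of more than 300,000.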

Several studies have found that people want to give priority to large proportional reductions of small risks, even though the absolute risk reduction is small relative to other options (McDaniels, 1988; Stone, Yates, & Parker, 1994; Fetherstonhaugh, Slovic, Johnson, & Friedrich, 1997; Baron, 1997b). I have suggested that these results were the result of quantitative confusion between relative and absolute risk (Baron, 1997b).

The extreme form of this bias is the preference for zero risk. If we can reduce risks to zero, then we do not have to worry about causing harm. This intuition is embodied in the infamous Delaney clause, part of a U.S. law, which outlawed any food additive that increases the risk of cancer by any amount (repealed after 30 years). Other laws favor complete risk reduction, such as the 1980 Superfund law in the U.S., which concerns the cleanup of hazardous waste that has been left in the ground. Breyer (1993) has argued that most of the cleanup expenses go for ``the last 10%,'' but that, for most purposes, the 90% cleanup is adequate. Cleanup costs are so high that the cleanup is proceeding very slowly. It is very likely that more waste could be cleaned up more quickly if the law and regulations did not encourage perfection.

Subjects in questionnaire studies show the same effect (Viscusi et al., 1987; Baron, Gowda, and Kunreuther, 1993). They are willing to pay more for a smaller risk reduction if the reduction goes to zero. They also prefer a smaller reduction over a larger one if the former reduces some risk to zero.

Ex-ante equity

The ex-ante bias is the finding that people want to equate ex-ante risk within a population even when the ex-post risk is worse (Ubel, DeKay, Baron, & Asch, 1996a). For example, many people would give a screening test to everyone in a group of patients covered by the same Health Maintenance Organization (HMO) if the test would prevent 1000 cancer cases rather than give a test to half of the patients (picked at random) that would prevent 1200 cancer cases.

Ubel, Baron, and Asch (2001) found that this bias was reduced when the group was expanded. When subjects were told that the HMO actually covered two states and that the test could not be given in one of the states, some subjects switched their preference to the test that prevented more cancers. It was as though they reasoned that, since the ``group'' was now larger, they could not give the test to ``everyone'' anyway, so they might as well aim for better consequences, given that ``fairness'' could not be achieved. This result illustrates a potential problem with some non-utilitarian concepts of distributive justice, namely, that the distributions they entail can change as the group definition is changed. If groups are arbitrary, then the idea of fair distribution is also arbitrary.

Are allocation biases moralistic?

Recently I have been exploring the role of these allocation biases in low-cost political behavior such as answering questions in polls. The experiments present subjects with hypothetical decisions, and the subjects say how they think the decision should be made. One series of studies asks whether people are willing to impose their allocation judgments on others, even when the others disagree. That is, do these biases form the basis of moralistic goals? I shall summarize one example of studies of this type. (The general approach applies to all the studies described in this article, which will be reported in detail elsewhere.)

Ninety-one subjects completed a questionnaire on the World Wide Web. They found the questionnaire page through links in other web pages and through search engines (http://www.psych.upenn.edu/~baron/qs.html). Their ages ranged from 18 to 74 (median 38); 29% were male; 14% were students. The questionnaire began with a general description of the scenario, which concerned a health insurance company that had to decide which expensive treatments to cover. Each screen presented a choice of two treatments. An example of a screen (from the omission-bias condition) was:

Treatment A cures 50% of the 100 patients per week who have this condition, and it causes no other conditions.

Treatment B cures 80% of the 100 patients per week who have this condition, but it causes a different (equally serious) condition in 20% of them.

Test question: Which treatment leads to fewer sick people?

Which treatment should the company choose (if it didn't know what its members favored)?

Now suppose that 75% of the members favor Treatment B. They think B leads to fewer sick people and that is what they care about. Which treatment should the company choose?

Now suppose that 100% of the members favor Treatment B. ...

Now suppose that 75% of the members favor Treatment A. ...

Now suppose that 100% of the members favor Treatment A. ...

Each question was followed by a four-point response scale: Certainly A, Probably A, Probably B, Certainly B. The subject had to answer the test question correctly in order for the answers to be recorded. The experiment used the omission, zero, and ex-ante biases. The text of the zero and ex-ante conditions read:

Zero:
Treatment A is for a type of the condition that affects 50 patients per week. It cures 100% of them.
Treatment B is for a type of the condition that affects 100 patients per week. It cures 60% of them.

Ex-ante:
Treatment A can be given to all 100 patients per week who have this condition, and it cures 30% of them.
Treatment B is in short supply. It can be given only to 50 of the 100 patients per week with this condition, picked at random. It cures 80% of these 50.

The experiment included a control condition, like the ex-ante condition, but with no difference in the number of patients who could get the treatment.

The main result was that bias was present even when 100% of the members wanted the optimal treatment covered. For example, in the omission bias condition, the mean bias was 0.19 (on a scale where 0 represents neutrality between the two treatments and 1.5 is the maximum omission bias possible) vs. -0.16 for the control condition when subjects ``think [the more effective treatment] leads to fewer sick people and that is what they care about.'' In sum, some people are willing to impose their allocation judgments on others, even when it is clear that the consequences for others are worse and that the others do not favor the allocations in question.
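The scoring procedure is not reproduced here. One coding consistent with the reported scale (0 for neutrality, 1.5 for the maximum possible omission bias) would assign signed values to the four response categories and average them; the following sketch is only a hypothetical reconstruction of such a measure, not the actual analysis.

# Hypothetical scoring sketch: positive scores indicate choosing the option
# favored by the bias (e.g., the omission option A), negative scores the
# option with the better consequences (B).
SCORES = {"Certainly A": 1.5, "Probably A": 0.5, "Probably B": -0.5, "Certainly B": -1.5}

def mean_bias(responses):
    """Average signed score across responses; 0 = neutral, 1.5 = maximum bias."""
    return sum(SCORES[r] for r in responses) / len(responses)

# Invented responses, for illustration only:
print(mean_bias(["Certainly A", "Probably B", "Probably A", "Probably B"]))  # 0.25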

Debiasing allocation biases

Another series of studies has explored debiasing. In one study (with Andrea Gurmankin), we explored two general methods of debiasing. The ``minimal'' method involved stripping away information and focusing on consequences alone, a minimal description. We tried this method on four different biases: omission bias, zero-harm bias, preference for ex-ante equality, and preference for group equality (even when these made consequences worse). Here is an example for the omission bias:

Treatment A cures 50 people out of 100 who come in with condition X each week, and it leads to no other conditions.

Treatment B cures 80 of the people with condition X, but it leads to condition Y (randomly) in 20 of the 100 patients. X and Y are equally serious.

[The following was added for the experimental condition: minimal debiasing.]

In other words, treatment A leads to 50 people with condition X and nobody with any other condition, and
treatment B leads to 20 people with condition X and 20 people with condition Y (which is equally serious).

In general, this minimal debiasing manipulation reduced bias on several different measures. The effects were small but statistically significant. Most subjects thought that the summary (``In other words ...'') was fair.

A second method, like the one just described, restates the problem, but through expansion: the provision of additional information that might change the bias. A typical item, for the zero risk bias, was:

Zero risk bias

X and Y are two kinds of cancer, equally serious. Each year, 100 people get cancer X and 50 get cancer Y.

Treatment A is given to the 100 people with cancer X. It cures 60 of them.

Treatment B can be given to the 50 people with cancer Y. It cures all 50 of them.

[The following was added in the debiasing condition.]

The total number of cancer cases of all types, including X and Y, is 1000 each year. Treatment A thus cures 60 out of 1000 and treatment B cures 50 out of 1000.

Subjects were asked which option should be covered by a health maintenance organization if it had to choose one. The expansion statement (in one study) led subjects to favor the optimal treatment (A), and most subjects did not think that it was inconsistent with the initial statement.

These results have two implications. First, as a practical matter, biases can be reduced by restating the issue. Second, as a theoretical matter, the biases are subject to framing effects - subject to change with redescription of the same situation - and should thus not be taken as people's last, most considered, view on the issues.

Protected values

People think that some of their values are protected from tradeoffs with other values (Baron & Spranca, 1996; Tetlock et al., 1996). Many of these values concern natural resources such as species and pristine ecosystems. People with protected values (PVs) for these things do not think they should be sacrificed for any compensating benefit, no matter how small the sacrifice or how large the benefit. In an economic sense, when values are protected, the marginal rate at which one good can be substituted for another is infinite. For example, no amount of money can substitute for a type of environmental decline.
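In the usual economic notation, a protected value for a good x (a species, say) against money m amounts to the claim that the marginal rate of substitution is unbounded; this formalization is my gloss, not the cited authors':

\[
  \mathrm{MRS}_{x,m} \;=\; \frac{\partial U / \partial x}{\partial U / \partial m} \;\longrightarrow\; \infty ,
\]

so that no finite amount of m compensates for any loss of x.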

PVs concern rules about action, irrespective of their consequences, rather than consequences themselves. What counts as a type of action (e.g., lying) may be defined partly in terms of its consequence (false belief) or intended consequence, but the badness of the action is not just that of its consequences, so it has value of its own.

Omission bias is greater when PVs are involved (Ritov & Baron, 1999). For example, when people have a PV for species, they are even less willing to cause the destruction of one species in order to save even more species from extinction. Thus, PVs apply to acts primarily, as opposed to omissions.

People think that their PVs should be honored even when their violation has no consequence at all. People who have PVs for forests, for example, say that they should not buy stock in a company that destroys forests, even if their purchase would not affect the share price and would not affect anyone else's behavior with respect to the forests. This is an ``agent relative'' obligation, a rule for the person holding the value that applies to his own choices but not (as much) to his obligations with respect to others' choices. So it is better for him not to buy the stock, even if his not buying it means that someone else will buy it.

PVs are at least somewhat insensitive to quantity. People who hold a PV for forests tend to say that it is just as bad to destroy a large forest as a small one (Ritov & Baron, 1998). They say this more often than they say the same thing for violations of non-protected values (NPVs).

Several researchers have noted that PVs cause problems for quantitative elicitation of values, as is done in cost-benefit analysis or decision analysis (Baron, 1997c; Bazerman et al., 1999). PVs imply infinite values.

Notice that the issue here is not behavior. Surely, people who endorse PVs violate them in their behavior, but these violations do not imply that the values are irrelevant for social policy. People may want public decisions to be based on the values they hold on reflection, whatever they do in their behavior. When people learn that they have violated some value they hold, they may regret their action rather than revising the value.

PVs are moralistic too

It should not be surprising that people think of PVs as rules that are independent of consequences for people. I did an experiment to demonstrate this, using examples like these:

On each screen, the item was presented, followed by a series of questions, with possible answers printed on buttons (listed here as plain text). Here are the important ones. The mnemonic names in brackets did not appear on the screen:

Should the government allow this to be done? [Allow]

  This should be allowed, so long as other laws are followed.
  This should sometimes be allowed, with safeguards against abuse.
  This should never be allowed, no matter how great the need.

If the circumstances were (or are) present so that the consequences of allowing this were better than the consequences of banning it, should it be allowed? [If-Better]

  Yes.
  No.
  I cannot imagine this.

If these circumstances were present, and if almost everyone in a nation thought that the behavior should be allowed, should it be allowed in that nation under these circumstances? [If-Pro]

  Yes, allowed.
  No, banned.
  I cannot imagine this.



The mean proportions of endorsement are shown in the following table.



Question       Response proportions
[Allow]        Allow: 0.40     Sometimes: 0.32    Never: 0.28
[If-Better]    Always: 0.71    Sometimes: 0.20    Never: 0.09
[If-Pro]       Allow: 0.72     Ban: 0.22          Can't imagine: 0.06



Of interest, subjects favored bans 22% of the time, even though, in these cases, they could imagine that the action was better and that the vast majority favored it. In other words, subjects were willing to impose their values on others.

Parentalism

A question that arises here is whether protected values, in their moralistic form, are parentalist. That is, do people who impose their values on others think that they are really going against the values of others? Surely this must happen because of belief overkill. People do not like their beliefs to conflict, so they deceive themselves into agreement. They are thus likely to think that their values are really good for everyone, whether the others know it or not. However, this distortion of beliefs may be incomplete. We might expect some cases in which people know that their moralistic values go against the interests of others.

To test this possibility, I did another experiment, in which I attempted to find actions that were done by a specific person and that benefited that person, e.g.:

  1. A mother gives a drug (with no side effects) to her child that will improve the child's school performance (which is otherwise average).
  2. A single, childless, woman has a child by cloning herself, because she has no prospect for marriage.
  3. A young widow has a child by cloning her husband, using cells taken from him as he was dying.
  4. A man in his 50s has himself cloned to produce an embryo that will yield cells that will prevent him from getting Alzheimer's disease.

PVs were defined roughly as in the last study I described. The main question of interest was:


8. If the person were stopped from doing this, how would it affect [him/her], on the whole (considering all effects together)?



  Stopping it would be good for [him/her] on the whole, and [he/she] would see it as good, immediately.

  Stopping it would be good for [him/her] on the whole, and [he/she] would see it as good, eventually.

  Stopping it would be good for [him/her] on the whole, even if [he/she] thought it was bad.

  Stopping it would be bad for [him/her] on the whole, because it would go against what [he/she] wants.

  Stopping it would be bad for [him/her] on the whole, because it would infringe on [his/her] rights.



The following table shows the main results as proportions for the four cases shown above. (Other results were similar.)



Case           PV      Stop-effect   Stop/PV
1. IQ drug     0.404   0.509         0.196
2. Clone       0.588   0.421         0.104
3. Widow       0.500   0.509         0.158
4. Alzh.       0.404   0.605         0.152



In the table, Stop-effect is the proportion of answers in which stopping the act was acknowledged to be bad for the actor (the last two options of question 8), and Stop/PV is the proportion of PV responses in which stopping the act was acknowledged to be bad. Although Stop/PV is much lower than Stop-effect, as predicted by the belief-overkill hypothesis, it is not zero. In a substantial number of cases the subjects did acknowledge that their imposed values were harmful to the actors.

Challenging PVs

PVs may be unreflective overgeneralizations. People may endorse the statement that ``no benefit is worth the sacrifice of a pristine rainforest'' without thinking much about possible benefits (a cure for cancer or malaria?). Or, when people say that they would never trade off life for money, they may fail to think of extreme cases, such as crossing the street (hence risking loss of life from being hit by a car) to pick up a large check, or failing to increase the health-care budget enough to vaccinate every child or screen everyone for colon cancer.

Such unreflective overgeneralizations provide one possible avenue for challenging PVs in order to make compromise and tradeoffs possible. If PVs are unreflective in this way, then PVs should yield to simple challenges. Baron and Leshner (2000) gave subjects questions about whether they would regard certain outcomes, such as ``electing a politician who has made racist comments,'' as so much against their values that no benefit would be sufficient to justify actions that caused such outcomes. When values were protected in this way, we then challenged the subjects by asking them to think of counterexamples. PVs do sometimes respond to such challenges.

We also found that the effects of counterexamples can transfer to measures of omission bias (Ritov & Baron, 1990), the bias toward harm caused by omissions when that is pitted against harm caused by acts. The last two experiments found that PVs are not honored when the probability and magnitude of harm are low enough.

In a more recent experiment (to be reported elsewhere) I have found that PVs have less effect on decisions when subjects are asked to put themselves in the position of a government decision maker with sole responsibility for the decision, as opposed to putting themselves in the position of a citizen responding to an opinion survey. The latter position is close to what the subjects are actually doing, so it is not hard to imagine. The former may be more difficult, but subjects were significantly more willing to make tradeoffs when they were asked to imagine themselves making the actual decisions. This result provides further evidence for the lability of PVs.

The results suggest that PVs are strong moralistic opinions, weakly held. They are strong in the sense that they express infinite tradeoffs. Holders of these values assert that they are so important that they should not be traded off for anything. This assertion yields to a variety of challenges. After yielding, of course, the value may still be strong in the sense that a large amount of benefit is required before the value will be sacrificed.

Parochialism

The tendency of people to favor a group that includes them, at the expense of outsiders and even at the expense of their own self-interest, has been called parochialism (Schwartz-Shea & Simmons, 1991). We may think of parochialism as an expression of both altruistic and moralistic goals. It is altruistic toward co-members. It may be moralistic in its effects on outsiders. The outsiders are being asked to help achieve the goals of insiders, in effect, whether this is consistent with their own goals or not. (What is not clear is whether they are being asked to do this voluntarily, or whether coerced behavior would suffice, in which case the values are not truly moralistic.) More likely, though, parochialism is moralistic in its application to insiders, who are expected to be loyal to the group.

A prime example is nationalism, a value that goes almost unquestioned in many circles, just as racism and sexism went unquestioned in the past. Nationalists are concerned with their fellow citizens, regardless of the effect on outsiders. Nationalists are willing to harm outsiders, e.g., in war, for the benefit of co-nationals. This sort of nationalism is moralistic to the extent to which nationalists want outsiders to behave willingly in ways that benefit their co-nationals, e.g., cede territory, stop trying to immigrate, allow investment, etc. Nationalists typically want others in the group to be nationalist as well. The idea that one should vote for the good of humanity as a whole, regardless of the effect on one's own nation, would make total sense to a utilitarian (and it would require little self-sacrifice because voting has such a tiny effect on self-interest). But it is considered immoral by the nationalist.

An experiment by Bornstein and Ben-Yossef (1994) illustrates the parochialism effect. Subjects came in groups of 6 and were assigned at random to a red group and a green group, with 3 in each group. Each subject started with 5 Israeli Shekels (IS; about $2). If the subject contributed this endowment, each member of the subject's group would get 3 IS (including the subject). This amounts to a net loss of 2 for the subject but a total gain of 4 for the group. However, the contribution would also cause each member of the other group to lose 3 IS. Thus, taking both groups into account, the gains for one group matched the losses to the other, except that the contributor lost the 5 IS. The effect of this 5 IS loss was simply to move goods from the other group to the subject's group. Still the average rate of contribution was 55%, and this was substantially higher than the rate of contribution in control conditions in which the contribution did not affect the other group (27%). Of course, the control condition was a real social dilemma in which the net benefit of the contribution was truly positive.

Similar results have been found by others (Schwartz-Shea and Simmons, 1990, 1991). Notice that the parochialism effect is found despite the fact that an overall analysis of costs and benefits would point strongly toward the opposite result. Specifically, cooperation is truly beneficial, overall, in the one-group condition, and truly harmful in the two-group condition, because the contribution is lost and there is no net gain for others.
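The payoff structure can be summarized directly from the figures above; the following sketch is my own tabulation of them.

# Payoffs (in IS) from one subject's contribution in the Bornstein & Ben-Yossef
# game described above: 3-person in-group, 3-person out-group, 5 IS endowment.
def contribution_payoffs(two_group_condition):
    # The contributor gives up 5 IS; each of the 3 in-group members (including
    # the contributor) gains 3 IS; in the two-group condition each of the 3
    # out-group members loses 3 IS.
    contributor = -5 + 3
    own_group = 3 * 3 - 5
    other_group = -3 * 3 if two_group_condition else 0
    everyone = own_group + other_group
    return contributor, own_group, other_group, everyone

print(contribution_payoffs(True))   # (-2, 4, -9, -5): a net loss for everyone combined
print(contribution_payoffs(False))  # (-2, 4, 0, 4): a net gain for everyone combined

Contribution is thus a net loss for everyone taken together in the two-group condition and a net gain in the one-group control, which is the opposite of the observed contribution rates (55% versus 27%).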

This kind of experiment might be a model for cases of real-world conflict, in which people sacrifice their own self-interest to help their group at the expense of some other group. We see this in strikes, and in international, ethnic, and religious conflict, when people even put their lives on the line for the sake of their group, and at the expense of another group. We also see it in attempts to influence government policy in favor of one's own group at the expense of other groups, through voting and contributions of time and money. We can look at such behavior from three points of view: the individual, the group, and everyone (the world). Political action in favor of one's group is beneficial for the group but (in these cases) costly to both the individual and the world.

In part, parochialism seems to result from two illusions. In one, people confuse correlation and cause, thinking that they influence others because, ``I am just like others, so if I contribute they will too'' (Quattrone and Tversky, 1984). In the other illusion, people think that self-sacrifice for their own group is in their self-interest, because their contribution ``comes back.'' They do not work through the arithmetic, which would show that what comes back is less than what goes in (Baron, 1997d). People who sacrifice on behalf of others like themselves may be more prone to the self-interest illusion, because they see the benefits as going to people who are like themselves in some salient way. They think, roughly, ``My cooperation helps people who are X. I am X. Therefore it helps me.'' This kind of reasoning is easier to engage in when X represents a particular group than when it represents people in general. Thus, this illusion encourages parochialism (Baron, 2001).

Parochialism is moralistic

If parochialism is a moralistic value, people are willing to impose it on their co-citizens, even when those citizens disagree and even when it goes against the greater good. This tendency might be greater when nationalism can be justified by perceived unfairness. In an experiment to test whether parochialism is moralistic, I manipulated unfairness by telling subjects that ``your nation'' (the subject's nation) had already made a substantial contribution to some public good. Even though it would be best for all if that nation continued to contribute, people might feel that it was the other nation's turn.

The experiment involved two kinds of situations (4 examples of each): one like a prisoner's dilemma, in which each of two nations decided independently on its action, and one in which an external authority could impose a solution. A typical screen began as follows, with alternatives shown in brackets:

This case involves a dispute about contributions to a peacekeeping force, which needs reinforcements. It is best for each nation to maintain its current contribution, whatever the other nation does. But casualties will rise from 1% to 5% per year without reinforcements.

[In the unfairness condition, the following was added:

Your nation has already contributed an additional 50% and has committed somewhat more troops and equipment than the other nation. (Further contributions are based on your nation's current level.)]

Consider the following three proposals for a choice to be made by your government [an international agency].

A: Your nation contributes 40% more. The casualty rate will remain at 1%, even if the other nation does nothing.

B: Your nation contributes 20% more. The casualty rate will rise to 3% if the other nation does nothing, and it will stay at 1% if the other nation also contributes 20%.

C: Your nation does nothing. The casualty rate will rise to 5% if the other nation does nothing, 3% if it contributes 20% more, and 1% if it contributes 50% more.

When the decision was made by the international agency, the three options all kept the casualty rate at 1% (the best outcome) but varied how the contributions were divided. For example:

Your nation contributes 40% more. The casualty rate will stay at 1%.

Both nations contribute 20% more. The casualty rate will stay at 1%.

The other nation contributes 50% more. The casualty rate will stay at 1%.

Five questions followed. The two of greatest interest are shown here, with alternatives in brackets. (The term ``approval'' will be explained shortly.)

4. What should your government [the international agency] choose if almost everyone voted for [approved] B in an advisory referendum and almost nobody voted for [approved] A or C? [Options were A-C.]

5. What should your government [the international agency] choose if almost everyone voted for [approved] A in an advisory referendum and almost nobody voted for [approved] B or C? [Options were A-C.]

Option A was always worst for the subject's nation, but it required less sacrifice from the subject's nation (a 40% increase) than Option C would require from the other nation (a 50% increase).

Even when a majority favored one of the proposals other than the one that favored the subject's own nation (the Self proposal), some subjects still said that the Self proposal should be chosen: 4% of responses in the fair condition when the decision was made at the national level, 6% when it was made by an external authority, 9% when the situation was unfair and the decision national, and 12% when it was unfair and external. The effects of type of decision and fairness were both significant. These results support the view that parochial values are sometimes moralistic.

Debiasing parochialism with approval voting

The experiment also contrasted two types of voting. In approval voting, voters say yes or no to each of several candidates or proposals. The option with the most approvals wins. By contrast, in standard plurality voting, voters vote for one option, and the option with the most votes wins. Approval voting has many well-known advantages over plurality voting (Brams & Fishburn, 1983).
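To make the difference between the two rules concrete, here is a minimal sketch with hypothetical ballots (not data from the experiment), showing how a broadly acceptable compromise can lose under plurality voting but win under approval voting.

    # Minimal sketch: tallying plurality and approval ballots over three
    # proposals, A, B, and C.  The ballots are hypothetical and chosen only
    # to illustrate the mechanics of the two rules.
    from collections import Counter

    # Plurality: each voter names exactly one option.
    plurality_ballots = ["A", "A", "A", "C", "C", "B"]

    # Approval: each voter approves any subset of the options.
    approval_ballots = [
        {"A", "B"}, {"A", "B"}, {"A", "B"},   # A supporters who also accept B
        {"C", "B"}, {"C", "B"},               # C supporters who also accept B
        {"B"},                                # a voter who approves only B
    ]

    def plurality_winner(ballots):
        """The option named by the most voters wins."""
        return Counter(ballots).most_common(1)[0]

    def approval_winner(ballots):
        """The option approved by the most voters wins."""
        return Counter(opt for ballot in ballots for opt in ballot).most_common(1)[0]

    print(plurality_winner(plurality_ballots))  # ('A', 3)
    print(approval_winner(approval_ballots))    # ('B', 6)

The same electorate elects A under plurality but B under approval, because approval voting lets voters register support for options beyond their narrow favorite.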

Approval voting could reduce parochialism if people could see themselves as members not only of their own group but also of the larger group that includes the affected outsiders; they would then approve proposals consistent with both views. Such voters may be torn between the greater good for all and the demands of the self-interest illusion on behalf of their narrow group (Baron, 1997d). In the present experiment, as in others I have done, approval voting generally yielded more votes (counting approvals) for the most efficient proposal.

An important question is whether the use of approval voting itself reduces moralistic (Self) responses about how a government should respond to the votes of others. The experiment provided some evidence that it does. At least among subjects who showed an effect of approval voting in other questions, the number of Self responses to the two questions of interest was lower with approval voting than without it. In sum, approval voting reduced the expression of parochial moralistic values themselves. It may be that nationalists are more tolerant of others who vote for the greater good when the votes are cast by approval.

Conclusion

The main argument of the present paper is that biases away from utilitarianism may take the form of moralistic values, in which people seek to impose their values on the behavior of others, sometimes explicitly ignoring the nature of others' good (utility). I have presented experiments showing this for allocation biases, protected values, and parochialism.

Moralistic values are often supported by beliefs that make them appear to be altruistic or moral. The beliefs are necessarily incorrect and are thus amenable to correction (to varying degrees, of course). I have discussed some experiments showing that non-consequentialist biases can be reduced.

In the long run, we might be able to debias moralistic values too. Many of these values might arise partly as errors based on false beliefs about the good of others. People may think that others are more like themselves than they really are; in fact, people's good may differ. If people could come to see moralistic values as possible errors, they would be more open to discussion about them, and more open to evidence about the true nature of other people's good.

References

Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63, 320-330.

Baron, J. (1993). Morality and rational choice. Dordrecht: Kluwer.

Baron, J. (1994). Nonconsequentialist decisions (with commentary and reply). Behavioral and Brain Sciences, 17, 1-42.

Baron, J. (1996). Norm-endorsement utilitarianism and the nature of utility. Economics and Philosophy, 12, 165-182.

Baron, J. (1997a). Political action vs. voluntarism in social dilemmas and aid for the needy. Rationality and Society, 9, 307-326.

Baron, J. (1997b). Confusion of relative and absolute risk in valuation. Journal of Risk and Uncertainty, 14, 301-309.

Baron, J. (1997c). Biases in the quantitative measurement of values for public decisions. Psychological Bulletin, 122, 72-88.

Baron, J. (1997d). The illusion of morality as self-interest: A reason to cooperate in social dilemmas. Psychological Science, 8, 330-335.

Baron, J. (1998). Judgment misguided: Intuition and error in public decision making. New York: Oxford University Press. http://www.sas.upenn.edu/~baron/vbook.htm

Baron, J. (2000). Thinking and deciding (3d ed.). New York: Cambridge University Press.

Baron, J., Gowda, R., & Kunreuther, H. C. (1993). Attitudes toward managing hazardous waste: What should be cleaned up and who should pay for it? Risk Analysis, 13, 183-192.

Baron, J., Lawson, G., & Siegel, L. S. (1975). Effects of training and set size on children's judgments of number and length. Developmental Psychology, 11, 583-588.

Baron, J., & Leshner, S. (2000). How serious are expressions of protected values? Journal of Experimental Psychology: Applied, 6, 183-194.

Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1-16.

Bazerman, M. H., Baron, J., & Shonk, K. (2001). You can't enlarge the pie: The psychology of ineffective government. New York: Basic Books.

Bazerman, M. H., Moore, D. A., & Gillespie, J. J. (1999). The human mind as a barrier to wiser environmental agreements. American Behavioral Scientist, 42, 1277-1300.

Beattie, J., & Loomes, G. (1997). The impact of incentives upon risky choice experiments. Journal of Risk and Uncertainty, 14, 155-168.

Bornstein, G., & Ben-Yossef, M. (1994). Cooperation in intergroup and single-group social dilemmas. Journal of Experimental Social Psychology, 30, 52-67.

Brams, S. J., & Fishburn, P. C. (1983). Approval voting. Boston: Birkhäuser.

Brennan, G., & Hamlin, A. (2000). Democratic devices and desires. Cambridge: Cambridge University Press.

Brennan, G., & Lomasky, L. (1993). Democracy and decision: The pure theory of electoral politics. Cambridge: Cambridge University Press.

Breyer, S. (1993). Breaking the vicious circle: Toward effective risk regulation. Cambridge, Mass.: Harvard University Press.

Brodsky, D. M., & Thompson, E. (1993). Ethos, public choice, and referendum voting. Social Science Quarterly, 74, 286-299.

Broome, J. (1991). Weighing goods: Equality, uncertainty and time. Oxford: Basil Blackwell.

Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty, 19, 7-42.

Camerer, C. F. (1987). Do biases in probability judgment matter in markets? Experimental evidence. American Economic Review, 77, 981-997.

Downs, A. (1957). An economic theory of democracy. New York: Harper and Row.

Fetherstonhaugh, D., Slovic, P., Johnson, S., & Friedrich, J. (1997). Insensitivity to the value of human life: A study of psychophysical numbing. Journal of Risk and Uncertainty, 14, 283-300.

Hare, R. M. (1981). Moral thinking: Its levels, method and point. Oxford: Oxford University Press (Clarendon Press).

Jervis, R. (1976). Perception and misperception in international politics. Princeton: Princeton University Press.

Kaiser, J. (2001). Rethinking a vaccine's risk. Science, 293, 1576-1577.

Kaplow, L., & Shavell, S. (2000). Principles of fairness versus human welfare: On the evaluation of legal policy. Discussion Paper No. 277, Center for Law, Economics, and Business, Harvard Law School. http://www.law.harvard.edu/programs/olin

Keeney, R. L. (1992). Value-focused thinking: A path to creative decisionmaking. Cambridge, MA: Harvard University Press.

McDaniels, T. L. (1988). Comparing expressed and revealed preferences for risk reduction: Different hazards and question frames. Risk Analysis, 8, 593-604.

Quattrone, G. A., & Tversky, A. (1984). Causal versus diagnostic contingencies: On self-deception and the voter's illusion. Journal of Personality and Social Psychology, 46, 237-248.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: Omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277.

Ritov, I., & Baron, J. (1999). Protected values and omission bias. Organizational Behavior and Human Decision Processes, 79, 79-94.

Sears, D. O., & Funk, C. L. (1991). The role of self-interest in social and political attitudes. Advances in Experimental Social Psychology, 24, 1-91.

Schwartz-Shea, P., & Simmons, R. T. (1990). The layered prisoners' dilemma: Ingroup vs. macro-efficiency. Public Choice, 65, 61-83.

Schwartz-Shea, P., & Simmons, R. T. (1991). Egoism, parochialism, and universalism. Rationality and Society, 3, 106-132.

Shabman, L., & Stephenson, K. (1994). A critique of the self-interested voter model: the case of a local single issue referendum. Journal of Economic Issues, 28, 1173-1186.

Singer, P. (1993). Practical ethics (2nd ed.). Cambridge: Cambridge University Press.

Stone, E. R., Yates, J. F., & Parker, A. M. (1994). Risk communication: Absolute versus relative expressions of low-probability risks. Organizational Behavior and Human Decision Processes, 60, 387-408.

Tetlock, P. E., Lerner, J., & Peterson, R. (1996). Revising the value pluralism model: Incorporating social content and context postulates. In C. Seligman, J. Olson, & M. Zanna (Eds.), The psychology of values: The Ontario symposium, Volume 8. Hillsdale, NJ: Erlbaum.

Ubel, P. A., Baron, J., & Asch, D. A. (2001). Preference for equity as a framing effect. Medical Decision Making, 21, 180-189.

Ubel, P. A., De Kay, M. L., Baron, J., & Asch, D. A. (1996a). Cost effectiveness analysis in a setting of budget constraints: Is it equitable? New England Journal of Medicine, 334, 1174-1177.

Viscusi, W. K., Magat, W. A., & Huber, J. (1987). An investigation of the rationality of consumer valuation of multiple health risks. Rand Journal of Economics, 18, 465-479.


Footnotes:

1Useful comments were provided by Matthew Adler, Steven Shavell, a class at Harvard Law School, participants in the University of Pennsylvania Decision Processes Brown Bag, and attendees of the present symposium. The experiments were supported by grants from the National Science Foundation and the Russell Sage Foundation.

2See Bazerman, Baron, and Shonk (2001) and Baron (1998) for discussion of the problems of government.

3See Brennan and Lomasky (1993) and Brennan and Hamlin (2000) for discussion of the idea that political behavior is ``expressive.''

4Camerer (1987) gives an example of how market discipline can reduce biases in repeated situations. But see Beattie and Loomes (1997) and Camerer and Hogarth (1999) for evidence that incentives sometimes have little effect.

5Aiming directly at utilitarian outcomes is not always the best way to reach them (Hare, 1981). Sometimes it may be better to follow simple rules (such as ``don't kill noncombatants'') even when violating the rules appears to be for the best. For the cases I discuss, however, it may be sufficient to be aware of such situations, so that we knowingly follow rules because we believe they lead to the best outcomes.

6I do not assume that goals cease with death. Indeed, I have argued that they may continue (Baron, 1996).

7We could imagine negative altruism, in which X wants Y's goals to be frustrated in proportion to their strength. Clearly such goals exist and influence political behavior, but I do not discuss them here.

8It is a moralistic goal if we want people who have homosexual desires to override them through conscious effort. But it is not a moralistic goal if we simply want to prevent homosexual behavior through coercion, e.g., through round-the-clock supervision of boys in a boarding school. The point is that the overriding is voluntary, even though the desire is still present.

9Clearly the goals of fanatics who carry out terrorist attacks, and their supporters, are at least partly moralistic. Terrorists do not engage in ordinary political behavior, to be sure, but many of their financial supporters do.

10This analysis may apply to one side of a political divide, but it cannot apply to both sides. If one side is the side of defection, its adherents will think that it is actually cooperation.

11More generally, we should ignore what Keeney (1992) calls means values, that is, goals that are subgoals, or means to the achievement of other goals. These goals exist because they are connected to more fundamental goals by beliefs, which may or may not be correct.

