Below is the unedited preprint (not a quotable final draft) of:
Baron, J. (1994). Nonconsequentialist decisions. Behavioral and Brain Sciences 17(1): 1-10.
The final published draft of the target article, commentaries and Author's Response are currently available only in paper.
For information about subscribing or purchasing offprints of the published version, with commentaries and author's response, write to: journals_subscriptions@cup.org (North America) or journals_marketing@cup.cam.ac.uk (All other countries).

Nonconsequentialist decisions

Jonathan Baron
Department of Psychology
University of Pennsylvania
3815 Walnut St.
Philadelphia, PA 19104-6196
E-mail: baron@cattell.psych.upenn.edu

Keywords

bias, consequentialism, decision, goals, irrationality, judgement, normative models, overgeneralization, preference, utility

Abstract

Consequentialism, in a simple form, holds that we should make decisions according to our judgments of their consequences for achievement of our goals. Our goals give each of us reason to endorse consequentialism as a standard of decision making. Alternative standards are bound to frustrate goal achievement by leading to consequences that are less good in this sense. In fact, however, some people knowingly follow decision rules that violate consequentialism. For example, they prefer harmful omissions to less harmful acts, they favor the status quo over alternatives that they would otherwise judge to be better, they provide third-party compensation on the basis of the cause of an injury rather than the benefit from the compensation, they ignore deterrent effects in decisions about punishment, and they resist coercive reforms that they judge to be beneficial. I suggest that nonconsequentialist principles arise from overgeneralization of rules that are consistent with consequentialism in a limited set of cases. Commitment to such rules is detached from their original purposes. The existence of such nonconsequentialist decision biases has implications for philosophical and experimental methodology, the relation between psychology and public policy, and education.

Introduction

Socrates, Aristotle, and Plato started an ongoing tradition of evaluating human reasoning according to standards that applied to the reasoning itself rather than to its conclusions. We now maintain such standards through schools, child rearing, and public and private discourse. The best known standards apply to logic, but standards have also been applied to practical and moral reasoning. To accuse someone of being "illogical" or "unreasonable" is to express such standards, even when the accuser is motivated by a dislike of the conclusion rather than the means of reaching it.

A long tradition in psychology concerns the evaluation of human reasoning with respect to standards of this sort. Evaluation of reasoning was explicit in the study of logic (Woodworth & Schlosberg, 1954; Evans, 1989); problem solving (Wertheimer, 1959); cognitive development (e.g., Sharp, Cole, & Lave, 1979; Kohlberg, 1970); and the social psychology of stereotyping, obedience to illegitimate authority, excessive conformity, and self-serving judgments (Nisbett & Ross, 1980; Sabini, 1992).

In the 1950s, this tradition was extended to the study of judgments and decisions, using statistics, probability theory, and decision theory as standards. Early results (Peterson & Beach, 1967; Tversky, 1967) indicated that people were at least sensitive to the right variables. For example, people are more confident in numerical estimates based on larger samples, although, not surprisingly, the effect of sample size is not exactly what the formula says it should be. Beginning about 1970, however, Kahneman and Tversky (e.g., 1972) began to show qualitative departures from these standards; for example, judgments are sometimes completely insensitive to sample size.

Allais (1953) and Ellsberg (1961) noted departures from the expected-utility theory of decision making fairly early. At the time, that theory was understood as both a normative model, specifying how people should make decisions ideally, and as a descriptive model, predicting and explaining how decisions are actually made. Economists were not bothered by the idea that a single model could do both jobs, for they had done well by assuming that people are approximately rational. Demonstrations that people sometimes violated expected-utility theory were at first taken to imply that the model was incorrect both descriptively and normatively (Allais, 1953; Ellsberg, 1961).

Kahneman and Tversky (1979) suggested that a distinction between normative and descriptive models was warranted in decision making as well as elsewhere. Expected-utility theory could be normatively correct but descriptively incorrect. They proposed Prospect Theory as an alternative descriptive model.

The suggestion that people were behaving nonnormatively was defended most clearly by reference to framing effects (Kahneman & Tversky, 1984): people make different decisions in (what they would easily recognize as) the same situations described differently. For example, most people prefer a .20 chance of $45 over a .25 chance of $30, but, if they have a .75 chance to win nothing and a .25 chance of a choice between $30 for sure and a .80 chance of $45, they say they will pick the $30 if they get the choice. The latter choice was thought to result from an exaggerated weight given to outcomes that are certain, a certainty effect. Presentation of the gamble as a two-stage lottery induced subjects to see the $30 as certain (if they got the chance to choose it). Because such choices are, in a sense, contradictory, they cannot be normatively correct. Hence, separate normative and descriptive models may be needed, and certain rules of decision making (such as the certainty effect) may be seen as errors, in the sense that they depart from a normative model. Although Kahneman & Tversky did not defend expected-utility theory itself, it seems that the only way to avoid all the framing effects they have found is to follow that theory (e.g., Hammond, 1988).
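
To make the arithmetic behind this framing effect concrete, here is a minimal sketch (my own illustration, not part of the original article) showing that the two presentations reduce to the same pair of gambles:

    # Hedged illustration: the two framings of the problem described above
    # reduce to identical outcome probabilities.

    # Framing 1: direct choice between two gambles.
    p_win_45_direct = 0.20            # .20 chance of $45
    p_win_30_direct = 0.25            # .25 chance of $30

    # Framing 2: two-stage lottery; the second stage is reached with p = .25.
    p_reach_stage2 = 0.25
    p_win_45_two_stage = p_reach_stage2 * 0.80   # = 0.20, same as framing 1
    p_win_30_two_stage = p_reach_stage2 * 1.00   # = 0.25, same as framing 1

    print(p_win_45_two_stage, p_win_30_two_stage)   # 0.2 0.25

Because the probabilities are identical, preferring the $45 gamble in one framing and the sure $30 in the other is exactly the inconsistency at issue.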

I want to defend an approach to the study of errors in decision making based on comparison of decisions to normative models. I shall argue that this approach is a natural extension of the psychology of reasoning errors and that it has some practical importance. Decisions (and judgments) have consequences. By improving decisions, we might, on the average, improve the consequences. Many of the consequences that people frequently lament are the result of human decisions, such as those made by political leaders (and, implicitly, by those who elect them). If we can improve decision making, we can improve our lives.

Psychologists can of course make errors themselves. But we can make errors of omission as well as errors of commission. If we fail to point out some error that should be corrected, the error continues to be made. Of course, errors of commission have an extra cost. If we mistakenly try to change some pattern of reasoning that is not really erroneous, we not only risk making reasoning worse but also reduce our credibility. Arguably, this has happened for many "pop psychologists." Still, we cannot wait for perfect confidence, or we will never act. Academic caution is not the only virtue.

The basic approach I shall take here (Baron, 1985, 1988) is to consider three types of accounts or "models" of decision making. Normative accounts are those that specify a standard by which decision making will be evaluated. Descriptive accounts tell us how decision making proceeds in fact. Of particular interest are aspects of decision making that seem nonnormative. If we find such phenomena, we know that there is room for improvement. Prescriptive accounts are designs for improving decision making. They can take the form of very practical advice for everyday decision making (Baron et al., 1991), or formal schemes of analysis (Bell, Raiffa, & Tversky, 1988). A systematic departure from a normative model can be called an error or a bias, but calling it this is of no use if the error cannot be corrected. Thus, the ultimate standards are prescriptive. The prescriptive standards that we should try to find will represent the best rules to follow for making decisions, taking into account human limitations in following any standard absolutely. Normative standards are theoretical, to be appealed to in the evaluation of prescriptive rules or individual decisions.

In the rest of this article, I shall outline a normative model of decision making. I shall then summarize some departures from that model, mostly based on my own work and that of my colleagues. Most of these departures involve following rules or norms that agree with the normative model much of the time. Departures based on more pernicious rules doubtless occur too, but less often. I shall discuss the methodological implications of these departures for philosophy and psychology, and their prescriptive implications for public policy and education.

Consequentialism as a normative model

Here, I shall briefly defend a simple normative model, which holds that the best decisions are those that yield the best consequences for the achievement of people's goals. Goals are criteria by which people evaluate states of affairs, e.g., rank them as better or worse. Examples of goals are financial security, maintenance of personal relationships, social good, or more immediate goals such as satisfying a thirst.

Various forms of consequentialism have been developed, including expected-utility theory and utilitarianism. I have defended these particular versions elsewhere (Baron, in press a). For present purposes, consequentialism holds simply that decisions should be determined by all-things-considered judgments of overall expected goal-achievement in the states to which those decisions immediately lead. In this simple form of consequentialism, the states are assumed to be evaluated holistically, but these holistic evaluations take into account probabilities of subsequent states and the distribution of goal achievement across individuals. Suppose, for example, that the choice is between government programs A and B, each affecting many people in uncertain ways. If I judge that the state of affairs resulting from A is, on the whole, a better state than that resulting from B for the achievement of goals, then consequentialism dictates that I should choose A. It is irrelevant that program B may, through luck, turn out to have been better. In sum, judgments of expected consequences should determine decisions.
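
As a bare illustration of this simple model (my own sketch; the option names and judged values are hypothetical), the rule amounts to choosing whichever option has the highest all-things-considered judged expected goal achievement:

    # Illustrative sketch of the simple consequentialist rule stated above.
    # The programs and the judged values are hypothetical.

    def choose(judged_value):
        """Return the option whose judged expected goal achievement is highest."""
        return max(judged_value, key=judged_value.get)

    # Holistic, all-things-considered judgments of expected goal achievement
    # (on any convenient scale), already reflecting probabilities and the
    # distribution of goal achievement across individuals:
    judged_value = {"Program A": 0.7, "Program B": 0.5}

    print(choose(judged_value))   # Program A

The point of the sketch is only that the inputs are judgments of expected consequences; nothing about how an option comes about (act versus omission, status quo versus change) enters the rule.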

To argue for this kind of consequentialism, I must ask where normative models come from, what their justification could be. I take the idea of a normative model to be an abstraction from various forms of behavior that I described at the outset, specifically, those in which we express our endorsement of norms (in roughly the sense of Gibbard, 1990), i.e., standards, of reasoning. The basic function of such expression is to induce others to conform to these norms. What reasons could we have for endorsing norms?

Self-interest and altruism give us such reasons. Self-interest gives us reason to endorse norms, such as the Golden Rule, with which we exhort others to help us or refrain from hurting us. Altruism also motivates the same norms: we tell people to be nice to other people. And altruism gives us reason to endorse norms for the pursuit of self-interest. We care about others, so we want to teach them how to get what they want. Indirectly, advocacy of such norms helps the advocate to follow them, so we also have a self-interest reason to endorse them.

It might be argued that norms themselves can provide reasons for their own endorsement. For example, those who think active euthanasia should be legal, or illegal, want others to agree with them. But, importantly, in any inquiry about what norms we should endorse, we need to put aside the norms we already have, lest we beg the question. In thinking about my own normative standards, I put aside the goals that derive from those standards, although I must treat other people's goals as given when I think about what is best for THEM.

If goal achievement gives us reasons to endorse norms, then, other things equal, we should endorse norms that help us achieve our goals (collectively, since our reasons are both altruistic and selfish). Other things ARE equal, I suggest, since we have no other reasons for endorsing norms. Goals are, by definition, the motives we have for doing anything. We need not decide here on the appropriate balance of goals of self vs. others. This issue does not arise in the examples I shall discuss.

For example, consider two possible norms concerning acts and omissions that affect others. One norm opposes harmful acts. Another opposes both harmful acts and omissions, without regard to the distinction. The second norm requires people to help other people when the judged total harm from not helping exceeds that from helping. Harm includes everything relevant to goal achievement, including effort and potential regret. Which norm should I want others to follow? If others follow the first, more limited, norm, my goals will not be achieved as well, since I would lose the benefit of people helping me when the total benefits exceed the costs. I thus have reason to endorse a norm that does not distinguish acts and omissions, and I have no reason to distinguish acts and omissions as such in the norms I endorse. Once I endorse this norm, those who accept it will want me to follow it too, but, if I hold back my endorsement for this reason, I will lose credibility (Baron, in press a).

Suppose that I have a goal opposing harmful action but not opposing harmful omission. This goal is not derived from my moral intuitions or commitments, which we have put aside. In this case, I would have reason to endorse the limited norm. The more people who have such a (nonmoral) goal, the more reason we all have to endorse this norm (out of altruism, at least). But consequentialism would not be violated, for adherence to the norm would in fact achieve people's nonmoral goals. Although this argument leads to a consequentialist justification for a norm distinguishing acts and omissions, the argument is contingent on a (dubious) assumption about human desires.

Consider the case of active vs. passive euthanasia. Suppose we believe that there are conditions under which most people would want life-sustaining treatment withheld but would not want to be actively killed. Then, if we don't know what a patient wants, this belief would justify a distinction. However, if we know that the patient has no goals concerning the distinction, we have no reason to make it on the patient's behalf. (Likewise, the slippery-slope argument that active euthanasia will lead to reduced respect for life depends on a contingent fact that could be taken into account if it were true. The slippery slope could also go the other way: refraining from active euthanasia could lead to errors of misallocation of resources and the consequent neglect of suffering.) Our decision would depend on whether death itself was to be preferred, not on the way in which death comes about (assuming that the means of death have no other relevant consequences of their own for people's goals). In sum, we do not necessarily have any reasons to want each other to honor a principle distinguishing acts and omissions.

Consider another example. Suppose that a girl has a 10/10,000 chance of death from a disease. A vaccine will prevent the disease, but it has a 5/10,000 chance of death from its side effects. The girl should endorse a norm that tells you to give her the vaccine, assuming that this helps her to achieve her goal of living. We would each want each other to act in this way, so we all have reason to endorse this norm. Following Hare (1963), I shall call this kind of argument a Golden Rule argument.
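
In expected-death terms, the comparison in this example is simple arithmetic (a sketch using the hypothetical numbers above):

    # The vaccination example above, expressed as risk of death per child.
    risk_if_not_vaccinated = 10 / 10000   # death from the disease
    risk_if_vaccinated = 5 / 10000        # death from vaccine side effects

    # The Golden Rule / consequentialist comparison: vaccinate whenever the
    # vaccine's overall risk is lower than the disease's risk.
    if risk_if_vaccinated < risk_if_not_vaccinated:
        print("vaccinate")          # this branch is taken: 0.0005 < 0.0010
    else:
        print("do not vaccinate")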

If you have a goal of not putting others at risk through acts in particular and if it inhibits you from vaccinating the girl, she is hurt. She has reason to discourage you from holding such a goal. It is in her interest for your goals concerning your decisions to be concerned only with their consequences. Of course, altruistically, she might be concerned with your own goals concerning your decisions, so she might conclude that it is on the whole better for you not to vaccinate her. But she has no general reason - apart from what she knows about you in particular - to endorse a norm for you that prescribes nonvaccination. The norms we have selfish reason to endorse are those concerned with consequences only, because these are what affect us.

Even when we endorse norms out of altruism, we have no general reason to endorse a norm treating acts and omissions differently. You might have a goal of not causing harm to YOURSELF through acts, so you might not vaccinate yourself. Such a goal would make it more harmful for me to force you to vaccinate yourself, for I would go against that goal. But I have no reason, altruistic or selfish, to endorse a norm that leads you to have such a goal if you do not already have it, for it will not help you achieve your other goals, or mine.

This kind of argument concerning the act-omission distinction differs from other approaches to this issue. (See Kuhse, 1987, for an enlightening review.) Most of these are based on intuitions about cases as data to be accounted for (e.g., the articles in Fischer & Ravizza, 1992). Yet, it is just these intuitions that are at issue. I suggest that many of them arise from overgeneralizations, to which people - even those who become moral philosophers - become committed in the course of their development. In this case, for example, harmful acts are usually more intentional than harmful omissions, hence more blameworthy, but intention is not different in the cases just discussed. Yet, people continue to distinguish acts and omissions, even when the feature that typically makes them different is absent.

This same kind of argument will apply to the other kinds of norms I shall discuss. In general, the function of normative models for decision making (as described earlier) is not served by any norms other than those that specify achieving the best consequences, in terms of goal achievement. And those norms should not encourage people to have any other goals for the decision making except for achieving the best consequences. We might be able to go farther than this, specifying norms for the analysis of decisions into utilities and probabilities, for example, but the examples discussed here do not require such analysis. (Although the vaccination case involved probabilities, I simply assumed that anyone would judge a lower risk of death to be a better state of affairs, and no trading off of probability and utility was required.)

The upshot of this argument is that we have reason to be disturbed, prima facie, when we find other people making decisions that violate consequentialism. On further investigation, we might find that no better prescriptive norms are possible. But, unless this is true, these norms will lead to decisions that prevent us from achieving our goals as well as other decisions might. Our goals themselves, including our altruistic goals, therefore give us reason to be concerned about nonconsequentialist decisions. What we do about this disturbance is another question, to which we might apply consequentialist norms.

Departures from consequentialism

I shall now present a few examples of possible violations of consequentialism. I hope these examples make it plausible that nonconsequentialist thinking exists and matters, even if each example is subject to one quibble or another.

Omission and status-quo bias. Ritov and Baron (1990) examined a set of hypothetical vaccination decisions like that just described. We compared omission and commission as options within the same choice. In one experiment, subjects were told to imagine that their child had a 10 out of 10,000 chance of death from a flu epidemic; a vaccine could prevent the flu, but the vaccine itself could kill some number of children. Subjects were asked to indicate the maximum overall death rate for vaccinated children for which they would be willing to vaccinate their child. Most subjects answered well below 9 per 10,000. Of the subjects who showed this kind of reluctance, the mean tolerable risk was about 5 out of 10,000, half the risk of the illness itself. The results were also found when subjects were asked to take the position of a policy maker deciding for large numbers of children. When subjects were asked for justifications, some said that they would be responsible for any deaths caused by the vaccine, but they would not be (as) responsible for deaths caused by failure to vaccinate. When a Golden Rule argument was presented (Baron, 1992), the bias was largely eliminated. Asch et al. (1993) and Meszaros et al. (1992) have found that the existence of this bias correlates with mothers' resistance to DPT vaccination (which may produce death or permanent damage in a very few children).

Other studies (Ritov & Baron, 1992; Spranca, Minsk, & Baron, 1991) indicate a general bias toward omissions over acts that produce the same outcome. In one case from Spranca et al. (1991), for example, subjects were told about John, a tennis player who thought he could beat Ivan Lendl only if Lendl were ill. John knew that Ivan was allergic to cayenne pepper, so, when John and Ivan went out to the customary dinner before their match, John planned to recommend to Ivan the house salad dressing, which contained cayenne pepper. Subjects were asked to compare John's morality in different endings to the story. In one ending, John recommended the dressing. In another ending, John was about to recommend the dressing when Ivan chose it for himself, and John, of course, said nothing. Ten out of 33 subjects thought that John's behavior was worse in the commission ending, and no subject thought that the omission was worse. Other studies (Baron & Ritov, 1992; Ritov & Baron, 1992; Spranca et al., 1991, Experiment 4) show that the bias toward omissions is not limited to cases in which harm (or risk) is the result, although the effect is greater when the decision leads to the worse of two possible outcomes (Baron & Ritov, 1992).

Inaction is often confounded with keeping the status quo, and several studies have shown an apparent bias toward the status-quo/omission option. People require more money to give up a good than they are willing to pay for the same good (Knetsch & Sinden, 1984; Samuelson & Zeckhauser, 1988; and, for public goods, Mitchell & Carson, 1989). Kahneman, Knetsch, & Thaler (1990) showed that these effects were not the result of wealth effects or other artifacts. They are, at least in part, true biases. Although Ritov and Baron (1992) found that this status-quo bias was largely a consequence of omission bias, Schweitzer (in press) found both omission bias without a status-quo option and status-quo bias without an omission option. Baron (1992) and Kahneman et al. (1990) also found a pure status-quo bias. Status-quo bias, like omission bias, can result from overgeneralization of rules that are often useful, such as "If it ain't broke, don't fix it."

It is clear that omission and status-quo bias can cause failures to achieve the best consequences, as we would judge them in the absence of a decision. Possible examples in real life are the pain and waste of resources resulting from the prohibition of active euthanasia (when passive euthanasia is welcomed), the failure to consider aiding the world's poor as an obligation on a par with not hurting them (Singer, 1979), and the lives of leisure (or withdrawal from worldly pursuits) led by some who are capable of contributing to the good of others.

Compensation. Compensation for misfortunes is often provided by insurance (including social insurance) or by the tort system. The consequentialist justification of compensation is complex (Calabresi, 1970; Calfee & Rubin, in press; Friedman, 1982), but, in the cases considered here, compensation should depend on the nature of the injury (including psychological aspects) and not otherwise on its cause or on counterfactual alternatives to it. (The compensation in these cases can help the victim, but it cannot punish the injurer or provide incentive for the victim to complain.) Any departure from this consequentialist standard implies that some victims will be overcompensated or others undercompensated, or both.

Miller and McFarland (1986) asked subjects to make judgments of compensation. When a misfortune was almost avoided, more compensation was provided than when it was hard to imagine how it could have been avoided. A possible justification for this difference is that victims were more emotionally upset in the former case than in the latter. Ritov and Baron (in press), however, found the same sort of result when subjects understood that the victim did not know the cause of the injury or the alternatives to it. In all cases a train accident occurred when a fallen tree was blocking the tracks. Subjects judged that more compensation should be provided (by a special fund) when the train's unexpected failure to stop caused the injury than when the suddenness of the stop was the cause. The results were found whether the failure was that of an automatic stopping device or of a human engineer.

These results can be partially explained in terms of norm theory (Kahneman & Miller, 1986), which holds that we evaluate outcomes by comparing them to easily imagined counterfactual alternatives. When it is easy to imagine how things could have turned out better, we regard the outcome as worse. When subjects were told that the outcome would have been worse if the train had stopped (when it did not stop), or if the train had not stopped (when it did), they provided less compensation, as norm theory predicts. Likewise, they provided more compensation if the counterfactual outcome would have been better. But this information about counterfactuals did not eliminate the effect of the cause of the outcome. Hence, norm theory, while supported, is not sufficient to explain all the results. Another source could be overgeneralization of principles that would be applied to cases in which an injurer must pay the victim. The injurer is more likely to be at fault when a device fails or when the engineer fails to stop.

A similar sort of overgeneralization might be at work in another phenomenon, the person-causation bias. Here, subjects judge that more compensation should be provided by a third party when an injury is caused by human beings than when it is caused by nature (Baron, 1992; Ritov & Baron, in press). This result is found, again, even when both the injurer (if any) and victim are unaware of the cause of the injury or of the amount of compensation (Baron, in press b), so that even psychological punishment is impossible. For example, subjects provided more compensation to a person who lost a job from unfair and illegal practices of another business than to one who lost a job from normal business competition. (Neither victim knew the cause.) The same result was found for blindness caused by a restaurant's violation of sanitary rules vs. blindness caused by a mosquito.

This effect might be an overgeneralization of the desire to punish someone. Ordinarily, punishment and compensation are correlated, because the injurer is punished by having to compensate the victim (or possibly even by the shame of seeing that others must compensate the victim). But when this correlation is broken, subjects seem to continue to use the same heuristic rule. This sort of reasoning might account in part for the general lack of concern about the discrepancy between victims of natural disease, who are rarely compensated (beyond their medical expenses), and victims of human activity, who are often compensated a great deal, even when little specific deterrence results because the compensation is paid by liability insurance.

Punishment. Notoriously, consequentialist views of punishment hold that two wrongs do not make a right, so punishment is justified largely on the grounds of deterrence. Deterrence can be defined generally so as to include education, support for social norms, etc., but punishment must ultimately prevent more harm than it inflicts. Again, I leave aside the question of how to add up harms across people and time. The simple consequentialist model put forward here implies that, normatively, our judgment of whether a punishment should be inflicted should depend entirely on our judgment of whether doing so will bring about net benefit (compared to the best alternative), whether or not the judgment of benefit is made by adding up benefits and costs in some way. (We might want to include here the benefits of emotional satisfaction to those who desire to see punishment inflicted. But we would certainly want to include deterrent effects as well.)
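
Put as arithmetic (a hedged sketch of my own; the quantities are hypothetical judgments, not data from the studies below), the consequentialist criterion is simply that expected harm prevented must exceed expected harm inflicted:

    # Hypothetical judged quantities, on a common scale of harm.
    harm_inflicted_by_penalty = 40        # harm the punishment itself causes
    harm_prevented_by_deterrence = 60     # future harm the punishment prevents

    # Punish only if the punishment prevents more harm than it inflicts.
    if harm_prevented_by_deterrence > harm_inflicted_by_penalty:
        print("punishment is justified")      # this branch is taken
    else:
        print("punishment is not justified")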

People often ignore deterrence in making decisions about punishment or penalties. Baron and Ritov (in press) asked subjects to assess penalties and compensation separately for victims of birth-control pills and vaccines (in cases involving no clear negligence). In one case, subjects were told that a higher penalty would make the company and others like it try harder to make safer products. In an adjacent case, a higher penalty would make the company more likely to stop making the product, leaving only less safe products on the market. Most subjects, including a group of judges, assigned the same penalties in both of these cases. In another test of the same principle, subjects assigned penalties to the company even when the penalty was secret, the company was insured, and the company was going out of business, so that (subjects were told) the amount of the penalty would have no effect on anyone's future behavior. Baron, Gowda, & Kunreuther (1993) likewise found that subjects, including judges and legislators, typically did not penalize companies differently for dumping of hazardous waste as a function of whether the penalty would make companies try harder to avoid waste or whether it would induce them to cease making a beneficial product. It has been suggested (e.g., Inglehart, 1987) that companies have in fact stopped making beneficial products, such as vaccines, exactly because of such penalties.

Such a tendency toward retribution could result from overgeneralization of a deterrence rule. It may be easier for people - in the course of development - to understand punishment in terms of rules of retribution than in terms of deterrence. Those who do understand the deterrence rationale generally make the same judgments - because deterrence and retribution principles usually agree - so opportunities for social learning are limited. Other possible sources of a retribution rule may be a perception of balance or equity (Walster et al., 1978) and a generalization from the emotional response of anger, which may operate in terms of retribution (although it may also be subject to modulation by moral beliefs; see Baron, 1992).

A second bias in judgments of punishment is that people seem to want to make injurers undo the harm they did, even when some other penalty would benefit others more. Baron and Ritov (in press) found that both compensation and penalties tended to be greater when the pharmaceutical company paid the victim directly than when penalties were paid to the government and compensation was paid by the government (in the secret-settlement case described earlier). Baron et al. (1992) found (unsurprisingly) that subjects preferred to have companies clean up their own waste, even if the waste threatened no one, rather than spend the same amount of money cleaning up the much more dangerous waste of a defunct company. Ordinarily, it is easiest for people to undo their own harm, but this principle may be overgeneralized.

Both of these biases can lead to worse consequences in some cases, although much of the time the heuristics that lead to them probably generate the best consequences. These results, then, might also be the result of overgeneralization of otherwise useful heuristics.

Resistance to coerced reform. Reforms are social rules that improve matters on the whole. Some reforms require coercion. In a social dilemma, each person is faced with a conflict between options, one of which is better for the individual and one of which is better for all of the members of the group in question. Social dilemmas can be, and have been, solved by agreements to penalize defectors (Hardin, 1968). Coercion may also be required to resolve negotiations, even though almost any agreement would be better for both sides than no agreement, or to bring about an improvement for many at the expense of a few, as when taxes are raised for the wealthy. It is in the interest of most people to support beneficial but coercive reforms, and some social norms encourage such support, but other social norms may oppose reforms (Elster, 1989).

To look for such norms, Baron and Jurney (1993) presented subjects with six proposed reforms, each involving some public coercion that would force people to behave cooperatively, that is, in a way that would be best for all if everyone behaved that way. The situations involved abolition of TV advertising in political campaigns, compulsory vaccination for a highly contagious flu, compulsory treatment for a contagious bacterial disease, no-fault auto insurance (which eliminates the right to sue), elimination of lawsuits against obstetricians, and a uniform 100% tax on gasoline (to reduce global warming).

Most subjects thought that things would be better on the whole if the reforms, as described, were put into effect, but many of THESE subjects said that they would not vote for the reforms. Subjects who voted against proposals that they saw as improvements cited several reasons. Three reasons played a major role in such resistance to reform, as indicated both by correlations with resistance (among subjects who saw the proposals as improvements) and by subjects indicating (both in yes-no and free-response formats) that these were reasons for their votes: fairness, harm, and rights.

Fairness concerns the distribution of the benefits or costs of reform. People may reject a generally beneficial reform, such as an agreement between management and labor, on the grounds that it distributes benefits or costs in a way that violates some standard of distribution.

Harm refers to a norm prohibiting helping one person through harming another, even if the benefit outweighs the harm and even if unfairness is otherwise not at issue (e.g., when those to be harmed are determined randomly). While opposition to reform on grounds of fairness compares reform to a reference point defined by the ideally fair result, opposition on grounds of harm compares it to the status quo. One trouble with most reforms is that they help some people and hurt others. For example, an increased tax on gasoline in the U.S. may help the world by reducing CO2 emissions, and it will help most Americans by reducing the budget deficit. But it will hurt those few Americans who are highly dependent on gasoline, despite the other benefits for them. The norm against harm is related to omission bias, because failing to help (by not passing the reform) is not seen as equivalent to the harm resulting from action.

A RIGHT, in this context, is an option to defect. The removal of this right might be seen as a harm, even if, on other grounds, the person in question is clearly better off when the option to defect is removed (because it is also removed for everyone else).

Subjects cited all of these reasons for voting against coercive reforms (both in yes-no and open-ended response formats). For example, in one study, 39% of subjects said they would vote for a 100% tax on gasoline, but 48% of those who would vote against the tax thought that it would do more good than harm on the whole. Subjects thus admitted to making nonconsequentialist decisions, both through their own judgment of consequences and through the justifications they gave. Of those subjects who would vote against the tax despite judging that it would do more good than harm, for example, 85% cited the unfairness of the tax as a reason for voting against it, 75% cited the fact that the tax would harm some people, and 35% cited the fact that the tax would take away a choice that people should be able to make. (In other cases, rights were more prominent.) Of course, removal of liberty by any means may set a precedent for other restrictions of freedom (Mill, 1859), so a consequentialist argument could be made against coercion even when a simple analysis suggests that coercion is justified. But no subject made this kind of argument. The appeals to the principles listed were in all cases direct and written as though they were sufficient.

Baron (in press b) obtained further evidence for the "do no harm" heuristic. Subjects were asked to put themselves in the position of a benevolent dictator of a small island consisting of equal numbers of bean growers and wheat growers. The decision was whether to accept or decline the final offer of the island's only trading partner, as a function of its effect on the incomes of the two groups. Most subjects would not accept any offer that reduced the income of one group in order to increase the income of the other, even if the reduction was a small fraction of the gain, and even if the group bearing the loss had a higher income at the outset. (It remains to be determined whether subjects think that the subjective effect of the loss is greater than that of the gain.) The idea of Pareto efficiency may have the same intuitive origin.
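
A consequentialist dictator in this task would simply compare the total income gain with the total loss; the following sketch uses hypothetical numbers of my own (the study's offers varied) and sets aside the open question, noted above, about the subjective weight of losses:

    # Hypothetical final offer: wheat growers' income rises by 100 units,
    # bean growers' income falls by 10 units.
    gain_to_wheat_growers = 100
    loss_to_bean_growers = 10

    # Accept whenever total income on the island goes up; many subjects
    # instead declined any offer that imposed a loss on either group.
    if gain_to_wheat_growers > loss_to_bean_growers:
        print("accept the offer")       # this branch is taken
    else:
        print("decline the offer")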

Additional evidence for the role of fairness comes from a number of studies in which subjects refuse to accept beneficial offers because the benefits seem to be unfairly distributed (Camerer & Loewenstein, in press; Thaler, 1988).

In all of these studies of departures from consequentialism, it might be possible for someone who had made a "nonconsequentialist" decision to find a consequentialist justification for it, e.g., by imagining goals or subtle precedent-setting effects. Yet, in all these studies, justifications are typically not of this form. Moreover, the question at issue is not whether a conceivable consequentialist justification can be found but, rather, whether subjects faced with a description of the consequences, divorced from the decisions that led to them, would judge the consequences for goal achievement in a way that was consistent with their decisions. It seems unlikely that they would do so in all of these studies.

The sources of intuition

I have given several examples of possible nonconsequentialist thinking. My goal has been to make plausible the claim that nonconsequentialist decision rules exist and that they affect real outcomes. Before discussing what should be done about these norms, if anything, we should consider whether they are really as problematical as I have suggested. One argument against my suggestion is that these norms have evolved through biological and cultural evolution over a long period of time, so they are very likely the best that we can achieve, or close to the best.

Like Singer (1981), I am skeptical. Although several evolutionary accounts can explain the emergence of various forms of altruism as well as various moral emotions such as anger (Frank, 1985), I know of no such accounts of the specific norms I have cited. (I could imagine such an account for the norm of retribution, however, which might arise from a tendency to counterattack.) Any account that favors altruism would also seem to favor consequentialist behavior, and this would be inconsistent with nonconsequentialist norms. Even an account of anger as a way of making threats credible (Frank, 1985) need not distinguish between anger at acts and omissions. Indeed, we are often angry at people for what they failed to do.

Even if norms have an evolutionary basis, we still do not need to endorse them. As Singer (1981) points out in other terms, evolution is trying to solve a different problem than that of determining the best morality to endorse. A rule might engender its own survival without meeting the above criteria for being worthy of endorsement. For example, chauvinism might lead to its own perpetuation by causing nations that encourage it to triumph over those that do not, the victors then spreading their norms to the vanquished. Likewise, ideologies that encourage open-mindedness might suffer defections at higher rates than those that do not, leading to the perpetuation of a doctrine that closed-minded thinking is good (Baron, 1991). Such mechanisms of evolution do not give us reason to endorse the rules they promote. As Singer (1981) points out, an evolutionary explanation of a norm can even undercut our attachment to it, because we then have an alternative to the hypothesis that we endorsed it because it was right.

Some have compared decision biases to optical illusions, which are a necessary side effect of an efficient design or adaptation (Funder, 1987). Without a plausible account of how this adaptation works, however, acceptance of this argument would require blind faith in the status quo. But more can be said. Unlike optical illusions (I assume), nonconsequentialist decision rules are not always used. In all the research I described, many or most subjects did not show the biases in question. Moreover, Larrick, Nisbett, & Morgan (in press) found that those who do not display such biases are at least no worse off in terms of success or wealth than those who do. (Whether they are morally worse was not examined.) Nonconsequentialist decision making is not a fixed characteristic of our condition.

To understand where nonconsequentialist rules (norms) come from, we need to understand where ANY decision rules come from. I know of no deep theory about this. Some rules may result from observation of our biological behavioral tendencies, through the naturalistic fallacy. We observe that men are stronger than women and sometimes push women around, so we conclude that men ought to be dominant. Rules are also discovered by individuals (as Piagetian theorists have emphasized); they are explicitly taught (as social-learning theorists have emphasized) by parents, teachers, and the clergy; and they are maintained through gossip and other kinds of social interaction (Sabini & Silver, 1982).

If people evolved to be docile, as proposed by Simon (1990), then we become attached to the rules that we are taught by others. These rules need have no justification for this attachment mechanism to work. Arbitrary rules can acquire just as much loyalty as well-justified rules. And, indeed, people sometimes seem just as attached to rules of dress or custom that vary extensively across cultures as they are to fundamental moral rules that seem to be universal (Haidt, 1992).

Why would anyone ever invent a rule, or modify a rule, and teach it to someone else? One reason is that the teachers benefit directly from the "students" following the rule, as when parents teach their children to tell the truth, help with the housework, or control their temper. In some cases, these rules are expressed in a general form ("don't bother people," "pitch in and do your share"), perhaps because parents understand that children will be liked by others if they follow those rules. So parents teach their children to be good in part out of a natural concern with the children's long-run interests. Parents also may take advantage of certain opportunities for such instruction: a moral lesson may be more likely after a harmful act than after a failure to help (unless the help was specifically requested).

Often, such rules are made up to deal with specific cases, e.g., "don't hurt people," invented in response to a child's beating up a little brother. We can think of such rules as hypotheses, as attempts to capture what is wrong with the cases in question. It is of course useful to express the rules in a more general form, rather than referring to the specific case alone ("don't twist your brother's arm"). But such general rules are not crafted after deep thought. They are spur-of-the-moment inventions, although they do help control behavior for the better.

If the rule is badly stated, one corrective mechanism is critical thought about the rule itself (Singer, 1981). In order to criticize a rule, we need to have a standard, a goal, such as the test suggested earlier: is this a member of the set of rules that we benefit most from endorsing? We also need to have arguments about why the rule fails to achieve that standard as well as it could, such as examples where the rule leads to general harm (like those I gave earlier). And we need alternative rules, although these can come after the criticism rather than before it.

In the absence of such critical thought, rules may attain a life of their own. They become overgeneralized (or the rules that might replace them in specific cases are undergeneralized, even if they are used elsewhere). Because of our docility, perhaps, and because of the social importance of moral rules, our commitments to these rules are especially tenacious. The retributive rule of punishment, "an eye for an eye," was originally a reform (Hommers, 1986), an improvement over the kind of moral system that led to escalating feuds. But, when applied intuitively by a court to the case of a child killed by a vaccine with no negligence, it is overgeneralized to a case where the rule itself probably does harm (Baron & Ritov, in press; Oswald, 1986, makes a similar suggestion).

Critical thought about moral rules undoubtedly occurs. It may be what Piaget and his followers take to be the major mechanism of moral development. The sorts of experience that promote such thought may work because they provide counterexamples to rules that have been used so far. But critical thought is not universal. A principle such as "do no harm" may be developed as an admonition in cases of harm through actions. This principle may then be applied to cases in which harm to some is outweighed by much greater good to others, such as compulsory vaccination laws, fuel taxes, or free-trade agreements. The application may be unreflective. The principle has become a fundamental intuition, beyond question.

Such overgeneralization is well known in the study of learning. For example, Wertheimer (1959) noted that students who learn the base-times-height rule for the area of a parallelogram often apply the same rule inappropriately to roughly similar figures and fail to apply the rule when it should be applied, e.g., to a long parallelogram turned on its side. Wertheimer attributed such over- and undergeneralization to learning without understanding. I have suggested (Baron, 1988) that the crucial element in understanding is keeping the justification of the formula in mind, in terms of the purpose served and the arguments for why the formula serves that purpose. In the case of the base-times-height rule, the justification involves the goal of making the parallelogram into a rectangle, which cannot be done in the same way with, for example, a trapezoid.

Overgeneralization in mathematics is easily corrected. In morality and decision making, however, the rules that people learn arise from important social interactions. People become committed to these rules in ways that do not usually happen in schoolchildren learning mathematics. In this respect, overgeneralization also differs from mechanisms that have been proposed as causes of types of biases other than those discussed here, mechanisms such as the costs of more complex strategies, associative structures, and basic psychophysical principles (Arkes, 1991).

A defense of overgeneralization is that preventing it is costly. Crude rules might be good enough, given the time and effort it would take to improve them. Moreover, the effort to improve them might go awry. People who reflect on their decision rules might simply dig themselves deeper into whatever hole they are in, rather than improving those rules. We might also be subject to self-serving biases when we ask whether a given case is an exception to a generally good rule (Hare, 1981), e.g., in deciding whether an extramarital affair is really for the best (despite a belief that most are not).

These defenses should be taken seriously, but their implications are limited. They imply that we should be wary of trying to teach everyone to be a moral philosopher. They suggest that prescriptive systems of rules might differ from normative systems (although they do not prove this - see Baron, 1990). But they do not imply that simpler rules are more adequate as normative standards than full consequentialist analyses.

Moreover, some decisions are so important that the cost of thorough thinking and discussion pales by comparison to the cost of erroneous choices. I have in mind issues such as global environmental policy, fairness toward the world's poor, trade policy, and medical policy. In these matters, the thinking is often done by groups of people engaged in serious debate, not by individuals. Thus, there is more protection from error, and the effort is more likely to pay off. Many have suggested that utilitarianism, and consequentialism, are fully consistent with common sense or everyday moral intuition (e.g., Sidgwick, 1907), but this may be more true in interpersonal relations than in thinking about major social decisions.

Finally, the cost of thinking (or the cost of learning) may be a good reason not to learn a more adequate rule. But it is not a good reason to have high confidence in the inadequate rules that are used instead. Yet many examples of the use of nonconsequentialist rules are characterized by exactly such confidence, to the point of resisting counterarguments (e.g., Ritov & Baron, in press). In some cases, it might be best not to replace the nonconsequentialist rules with more carefully crafted rules but, rather, to be less confident of them and more willing to examine the situation from scratch. The carefully crafted rules might be too difficult to learn. In such cases, we might say that overgeneralization is a matter of excessive rigidity in the application of good general rules rather than in the use of excessively general rules.

In this section, I have tried to give a plausible account of how erroneous intuitions arise in the development of individuals and cultures. Direct evidence on such development is needed. In the rest of this article, I explore some implications of my view for research and application. These implications depend in different ways on the probability that this view is correct. Some require only that it is possible.

Intuition as a philosophical method

If intuitions about decision rules result from overgeneralization, then (as also argued by Hare, 1981, and Singer, 1981) these intuitions are suspect as the basic data for philosophical inquiry. Philosophers who argue that the act-omission distinction is relevant (e.g., Kamm, 1986; Malm, 1989) typically appeal directly to their own intuitions about cases. Unless it can be shown that intuitions are trustworthy, these philosophers are simply begging the question.

Rawls (1971) admits that single intuitions can be suspect, but he argues for a reflective equilibrium based on an attempt to systematize intuitions into a coherent theory. Such systematization, however, need not solve the problem. For example, it might (although it does not for Rawls) lead to a moral system in which the act-omission distinction is central, a system in which morality consists mainly of prohibitions, and in which positive duties play a limited role if any (as suggested by Baron, 1986).

Rawls's argument depends to some extent on an analogy between moral inquiry and fields such as modern linguistics, where systematization of intuition has been a powerful and successful method. Arguably, the same method underlies logic and mathematics (Popper, 1962, ch. 9). I cannot fully refute this analogical argument, but it is not decisive, only suggestive. I have suggested (along with Singer, 1981) that morality and decision rules have an external purpose through which they may be understood, and this criterion, rather than intuition, can be used as the basis of justification. Perhaps this idea, by analogy, can be extended to language, logic, and mathematics, but that is not my task here.

Experimental methodology

Experiments on decision biases often use between-subject designs (each condition given to different subjects) or other means to make sure that subjects do not directly compare the cases that the experimenter will compare (such as separating the cases within a long series). The assumption behind such between-subject designs is that subjects would not show a bias if they knew what cases were being compared. All of the biases I have described above are within-subject. Subjects show the omission bias knowingly, for example, even when the act and omission versions are adjacent.

In between-subject designs, subjects may display biases that they themselves would judge to be biases. Such inconsistency seems to dispense with the need for a normative theory, such as the consequentialist theory I have proposed, or expected utility theory. But, without an independent check, we do not know that subjects would consider their responses to be inconsistent. Frisch (in press) took the trouble to ask subjects whether they regarded the two critical situations as equivalent - such as buying vs. selling as ways of evaluating a good - and she found that they often did not. In other words, between-subject designs are not necessary to find many of the classic biases (including the status-quo effect described earlier), and subjects often do not regard as equivalent situations that experimenters regard as equivalent.

When we use within-subject designs, however, we cannot simply claim that subjects are making mistakes because they are violating rules that they endorse. When asked for justifications for their judgments, subjects in all the experiments I described earlier will endorse a variety of rules that are consistent with their responses. We therefore need a normative theory, such as the consequentialist theory I have outlined, if we are to evaluate subjects' responses.

Much the same normative theory seems to be implicit in most of the literature on framing effects and inconsistencies. Whether a factor is relevant or not to making a decision is a normative question, to which alternative answers can be given. For example, Schick (1991) argues that the way decision makers describe situations to themselves IS normatively relevant to the decision that they ought to make (even, presumably, if these descriptions do not affect consequences), because descriptions affect their "understanding" of the situation, and understandings are necessarily part of any account of decision making. Hence, framing effects do not imply error. If consequentialism is correct, though, Schick is incorrect: consequentialism implies that understandings themselves can be erroneous (we might say). The attempt to bring in a consequentialist standard through the back door while ostensibly talking about inconsistency (Dawes, 1988) and framing effects will not work. The standard should be brought in explicitly, as I have tried to do.

In sum, although between-subject designs are useful for studying heuristics, we may also use within-subject designs to evaluate subjects' rules against a normative standard, such as consequentialism.

Policy implications

The examples I used to illustrate my argument are of some relevance to issues of public concern, as noted. I am tempted to offer evidence of psychological biases as ammunition in various battles over public policy. Tetlock and Mitchell (in press), however, correctly warn us against using psychological research as a club for beating down our political opponents. It is too easy, and it can usually be done by both sides.

On the other hand, a major reason for studying biases is to discover where decisions need improvement, and if we researchers are going to forswear all application to public or personal decision making, we might as well fold up our tents and move on to more useful activities. How should we draw the line between making too many claims and too few?

I do not propose to answer this question fully, but part of an answer concerns the way in which we make claims concerning public policy debates. I suggest that claims of biased reasoning be directed at particular ARGUMENTS made by one side or the other, not at positions, and certainly not at individuals.

For example, one of the arguments against free-trade agreements is that it is wrong to hurt some people (e.g., those on both sides who will lose their jobs to foreign competition) in order to help others (e.g., those who will be prevented from losing their jobs because their products will be exported). This could be an example of omission bias, or the do-no-harm heuristic, which operates in much clearer cases and which, I have argued, is indeed an error in these clear cases. Now, real trade negotiations are extremely complex, and they involve other issues characteristic of any negotiation, such as trying to get the best deal for everyone. So at most we could conclude that a particular part of the argument against free trade is a fallacy that has been found elsewhere in the laboratory. This does not imply that the other arguments opposing free trade are wrong, that they are not decisive, or that the people who oppose free trade are any more subject to error than those who favor it.

With this kind of caution at least, the general program of research can be extended more broadly to other matters of policy that I have not discussed here, such as fairness in testing and selection for academic and employment opportunities, abortion, euthanasia, nationalism, the morality of sex, and so on. In all of these kinds of cases, arguments for two (or more) sides are complex, but some of the arguments are very likely erroneous. Psychology has a role to play in discovering fallacious arguments and pointing them out.

One discipline has concerned itself with exactly this kind of inquiry, the study of informal logic (e.g., Arnauld, 1964; Johnson & Blair, 1983; Walton, 1989). This field, however, has not incorporated many of the advances in normative theory that have occurred since the time of Aristotle, and it has paid no attention to psychological evidence - sparse as it is - about which fallacies actually occur in real life.

Educational implications

If discoveries about nonconsequentialist decision making are themselves to have any consequences, they must influence the thinking that people do. As I suggested at the outset, norms for thinking are enforced throughout society, so new research can influence these norms in many ways. Arguably, psychological research on racial and ethnic stereotyping (and authoritarianism, ethnocentrism, dogmatism, etc.) has influenced Western cultures in a great variety of ways. The claim that an argument is prejudiced now refers to a well-established body of psychological literature that most people in these societies have heard of. (In using this as an example, I am not assuming that the research in question is flawless. Indeed, Tetlock and Mitchell point out that much of this research is itself politically biased, and the example may stand as a warning against excess as much as a model of influence.)

Much of the influence that this sort of scholarship exerts is very general. Scholars write articles, which are read by students, who acquire beliefs that they later teach to their own children; scholars also teach their students directly. Trade books and magazines convey ideas to the general public. But education provides a special channel for transmitting standards of thinking and decision making, because the transmission of such standards has long been a self-conscious goal of education.

A natural question, then, is how to conduct education in consequentialist decision making, assuming that we acquire increasing confidence in our assessment of what errors need to be corrected. Can we teach students that there is a right way of making decisions and that it is consequentialism? Here are two sides:

On the pro side, it is difficult to convey a standard that you don't apply yourself. If, for example, we want to teach students to write grammatically, or to avoid logical errors, we must try to practice what we preach and to apply the standards when we evaluate students, each other, and ourselves. Thus, when students hand in papers with dangling participles or ad hominem arguments, we must point this out with red pencils. Likewise, when students provide nonconsequentialist arguments, we must point this out too, if we want students to become aware of this issue. And we must give students opportunities to think about decisions in places where the standards of decision making can be discussed and brought to bear (Baron & Brown, 1991).

On the con side, decision making is not like grammar. People have deep commitments to their decision rules, and they OWN these commitments. They may regard attempts to change their views as a kind of theft; at the very least, such attempts hurt. Moreover, no matter how confident we are in the normative theory, we can never be fully confident about prescriptive theory. Perhaps it is better that some people not try to understand consequentialist theory, even assuming its correctness. And we must remember that normative theory itself will evolve over time, so that we can give people at most a best guess (and very likely one that is nowhere near universally accepted). These considerations argue for a different kind of instruction, in which we present arguments as arguments rather than as standards - not in red pencil but in black pencil, as if to say: "This is just an argument on the other side that you should consider." This view is compatible with most approaches to moral education, in which the emphasis is on discussion and argumentation rather than on learning.

A moderate position between these two (Baron, 1990) is to present consequentialist theory as something that students should know but do not have to accept or follow. Thus, students should be able to apply the theory when asked to do so, although they need not endorse what they write as their own opinions. Alternative theories could be taught as well, and discussion like that suggested in the last paragraph is also compatible with this kind of instruction.

One relevant finding of Baron and Ritov (in press b) is that a large fraction of our college-student subjects said that they had never heard of the argument that punishment is justified by deterrence rather than retribution. Of those who said that they had heard it for the first time in our study, some accepted it and some rejected it, and a similar division was found among those who recognized the argument as one they already knew. I suggest that not recognizing this argument is evidence of an educational gap, and that this particular gap is worse than the one shown by the copy editor who asked for Aristotle's first name.

References

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-546.

Arkes, H. R. (1991). Costs and benefits of judgment errors: Implications for debiasing. Psychological Bulletin, 110, 486-498.

Arnauld, A. (1964). The art of thinking (Port Royal Logic) (J. Dickoff & P. James, Trans.). Indianapolis: Bobbs-Merrill. (Original work published 1662)

Asch, D., Baron, J., Hershey, J. C., Kunreuther, H., Meszaros, J., Ritov, I., & Spranca, M. (1993). Determinants of resistance to pertussis vaccination. Manuscript, Department of Psychology, University of Pennsylvania.

Baron, J. (1985). Rationality and intelligence. New York: Cambridge University Press.

Baron, J. (1986). Tradeoffs among reasons for action. Journal for the Theory of Social Behavior, 16, 173-195.

Baron, J. (1988). Thinking and deciding. New York: Cambridge University Press.

Baron, J. (1990). Thinking about consequences. Journal of Moral Education, 19, 77-87.

Baron, J. (1991). Beliefs about thinking. In J. F. Voss, D. N. Perkins, & J. W. Segal (Eds.), Informal reasoning and education, pp. 169-186. Hillsdale, NJ: Erlbaum.

Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63, 320-330.

Baron, J. (in press a). Morality and rational choice. Dordrecht: Kluwer.

Baron, J. (in press b). Heuristics and biases in equity judgments: a utilitarian approach. In B. A. Mellers and J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.

Baron, J., Baron, J. H., Barber, J. P., & Nolen-Hoeksema, S. (1991). Rational thinking as a goal of therapy. Journal of Cognitive Psychotherapy (special issue), 4, 293-302.

Baron, J. & Brown, R. V. (Eds.) (1991). Teaching decision making to adolescents. Hillsdale, NJ: Erlbaum.

Baron, J. & Jurney, J. (1993). Norms against voting for coerced reform. Journal of Personality and Social Psychology.

Baron, J. & Ritov, I. (in press a). Reference points and omission bias. Organizational Behavior and Human Decision Processes.

Baron, J. & Ritov, I. (in press b). Intuitions about penalties and compensation in the context of tort law. Journal of Risk and Uncertainty.

Baron, J., Gowda, R., & Kunreuther, H. (1993). Attitudes toward managing hazardous waste: What should be cleaned up and who should pay for it? Risk Analysis.

Bell, D. E., Raiffa, H., & Tversky, A. (Eds.) (1988). Decision making: Descriptive, normative, and prescriptive interactions. New York: Cambridge University Press.

Calabresi, G. (1970). The costs of accidents: A legal and economic analysis. New Haven: Yale University Press.

Calfee, J. E., & Rubin, P. H. (in press). Some implications of damage payments for nonpecuniary losses. Journal of Legal Studies.

Camerer, C., & Loewenstein, G. (in press). Information, fairness, and efficiency in bargaining. In B. A. Mellers and J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.

Dawes, R. M. (1988). Rational choice in an uncertain world. San Diego: Harcourt Brace Jovanovich.

Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643-669.

Elster, J. (1989). The cement of society. Cambridge: Cambridge University Press.

Evans, J. St. B. T. (1989). Bias in human reasoning: Causes and consequences. Hillsdale, NJ: Erlbaum.

Fischer, J. M. & Ravizza, M. (1992). Ethics: Problems and principles. New York: Holt, Rinehart, & Winston.

Frank, R. F. (1985). Passions within reason: The strategic role of the emotions. New York: Norton.

Friedman, D. (1982). What is 'fair compensation' for death or injury? International Review of Law and Economics, 2, 81-93.

Frisch, D. (in press). Reasons for framing effects. Organizational Behavior and Human Decision Processes.

Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin, 101, 75-90.

Gibbard, A. (1990). Wise choices, apt feelings. Cambridge, MA: Harvard University Press.

Haidt, J. (1992). Moral judgment, affect, and culture, or, is it wrong to eat your dog? Doctoral dissertation, Department of Psychology, University of Pennsylvania.

Hammond, P. J. (1988). Consequentialist foundations for expected utility. Theory and Decision, 25, 25-78.

Hardin, G. R. (1968). The tragedy of the commons. Science, 162, 1243-1248.

Hare, R. M. (1963). Freedom and reason. Oxford: Oxford University Press (Clarendon Press).

Hare, R. M. (1981). Moral thinking: Its levels, method and point. Oxford: Oxford University Press (Clarendon Press).

Hommers, W. (1986). Ist "Voller Ersatz" immer "Adäquater Ersatz"? Zu einer Diskrepanz zwischen Regelungen des Gesetzbuches im EXODUS und der Adäquatheits-These der Equity-Theorie. Psychologische Beiträge, 28, 164-179.

Inglehart, J. K. (1987). Compensating children with vaccine-related injuries. New England Journal of Medicine, 316, 1283-1288.

Johnson, R. H., & Blair, J. A. (1983). Logical self-defense (2nd ed.). Toronto: McGraw-Hill Ryerson.

Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98, 1325-1348.

Kahneman, D. & Miller, D. T. (1986). Norm theory: Comparing reality to its alternatives. Psychological Review, 93, 136-153.

Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3, 430-454.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decisions under risk. Econometrica, 47, 263-291.

Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.

Kamm, F. M. (1986). Harming, not aiding, and positive rights. Philosophy and Public Affairs, 15, 3-32.

Knetsch, J. L., & Sinden, J. A. (1984). Willingness to pay and compensation: Experimental evidence of an unexpected disparity in measures of value. Quarterly Journal of Economics, 99, 508-522.

Kohlberg, L. (1970). Stages of moral development as a basis for moral education. In C. Beck & E. Sullivan (Eds.), Moral education (pp. 23-92). University of Toronto Press.

Kuhse, H. (1987). The sanctity of life doctrine in medicine: A critique. Oxford: Oxford University Press.

Larrick, R. P., Nisbett, R. E., & Morgan, J. N. (in press). Who uses the cost-benefit rules of choice? Implications for the normative status of economic theory. Organizational Behavior and Human Decision Processes.

Malm, H. M. (1989). Killing, letting die, and simple conflicts. Philosophy and Public Affairs, 18, 238-258.

Meszaros, J. R., Asch, D. A., Baron, J., Hershey, J. C., Kunreuther, H., & Schwartz, J. (1992). Cognitive influences on parents' decisions to forgo pertussis vaccination for their children. Manuscript, Center for Risk Management and Decision Processes, University of Pennsylvania.

Mill, J. S. (1859). On liberty. London.

Miller, D. T., & McFarland, C. (1986). Counterfactual thinking and victim compensation. Personality and Social Psychology Bulletin, 12, 513-519.

Mitchell, R. C., & Carson, R. T. (1989). Using surveys to value public goods: The contingent valuation method. Washington: Resources for the Future.

Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

Oswald, M. (1989). Schadenshöhe, Strafe und Verantwortungsattribution. Zeitschrift für Sozialpsychologie, 20, 200-210.

Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive statistician. Psychological Bulletin, 68, 29-46.

Popper, K. R. (1962). Conjectures and refutations: The growth of scientific knowledge. New York: Basic Books.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277.

Ritov, I., & Baron, J. (1992). Status-quo and omission bias. Journal of Risk and Uncertainty, 5, 49-61.

Ritov, I., & Baron, J. (in press). Judgments of compensation for misfortune: the role of expectation. European Journal of Social Psychology.

Sabini, J. (1992). Social Psychology. New York: Norton.

Sabini, J., & Silver, M. (1981). Moralities of everyday life. Oxford: Oxford University Press.

Samuelson, W., & Zeckhauser, R. (1988). Status-quo bias in decision making. Journal of Risk and Uncertainty, 1, 7-59.

Schick, F. (1991). Understanding action: An essay on reasons. New York: Cambridge University Press.

Schweitzer, M. E. (in press). Untangling the status-quo and omission effects: An experimental analysis. Organizational Behavior and Human Decision Processes.

Sharp, D., Cole, M., & Lave, C. (1979). Education and cognitive development: The evidence from experimental research. Monographs of the Society for Research in Child Development, 44 (1-2, Serial No. 178).

Sidgwick, H. (1907). The methods of ethics (7th ed.). London: Macmillan.

Simon, H. A. (1990). A mechanism for social selection and successful altruism. Science, 250, 1665-1668.

Singer, P. (1979). Practical ethics. Cambridge University Press.

Singer, P. (1981). The expanding circle: Ethics and sociobiology. New York: Farrar, Straus & Giroux.

Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105.

Tetlock, P. E., & Mitchell, G. (in press). Liberal and conservative approaches to justice: Conflicting psychological portraits. In B. A. Mellers and J. Baron (Eds.), Psychological perspectives on justice: Theory and applications. New York: Cambridge University Press.

Thaler, R. H. (1988). The ultimatum game. Journal of Economic Perspectives, 2, 195-206.

Tversky, A. (1967). Additivity, utility, and subjective probability. Journal of Mathematical Psychology, 4, 175-202.

Walster, E., Walster, G. W., & Berscheid, E. (1978). Equity: Theory and research. Boston: Allyn & Bacon.

Walton, D. N. (1989). Informal logic: A handbook for critical argumentation. Cambridge: Cambridge University Press.

Wertheimer, M. (1959). Productive thinking (rev. ed.). New York: Harper & Row. (Original work published 1945)

Woodworth, R. S., & Schlosberg, H. (1954). Experimental psychology. New York: Holt.