Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1-16.

Protected values

Jonathan Baron
University of Pennsylvania

Mark Spranca
RAND

Original text of Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1-16.

1  Abstract

Protected values are those that resist tradeoffs with other values, particularly economic values. We propose that such values arise from deontological rules concerning action. People are concerned about their participation in transactions rather than just with the consequences that result. This proposal implies that protected values, defined as those that display tradeoff resistance, will also tend to display quantity insensitivity, agent relativity, and moral obligation. People will also tend to experience anger at the thought of making tradeoffs, and to engage in denial of the need for tradeoffs through wishful thinking. These five properties were correlated with tradeoff resistance (across different values, within subjects) in five studies in which subjects answered several questions about each of several values, or in which they indicated their willingness to pay to prevent some harmful action. These correlations were found even when the subjects could not tell the experimenters which values they were responding to, so they cannot be ascribed entirely to subjects' desire to express commitment. We discuss implications for value measurement and public policy.

2  Introduction

Some theories of rational decision making require tradeoffs among values, including moral values. According to these theories, if we value human life and other goods as well, we will rationally spend some amount of money to reduce risks of death, but not an infinite amount. Some risks are just too small and too costly to reduce. The same goes for all other values, such as those for the protection of nature or individual freedom from interference.
Such willingness to make tradeoffs is especially reasonable when we consider what it would mean to be completely committed to some value. It would mean that we could not take any risk of sacrificing this value, through our actions or our failures to act. We would thus be obliged to spend our lives looking for actions that could reduce small risks of sacrificing this value. If we had more than one such value, we would be in a serious quandary.
Although the need to make tradeoffs is a fact of life, it is not one that everyone is happy with. Some people say that human lives - or human rights, or natural resources - are infinitely more important than other economic goods. These people hold what we call protected values. Some of their values, as they conceive them, are protected against being traded off for other values. People who hold protected values may behaviorally trade them off for other things - by risking lives or by sacrificing nature or human rights - but they are not happy with themselves for doing so, if they are aware of what they are doing. They are caught in binds that force them to violate some important value, but the value is no less important to them because of this behavioral violation.

2.1  Why protected values cause problems

We have noted that protected values are typically impossible for individuals to satisfy. Protected values also cause difficulties for institutions, such as government agencies, that try to satisfy the values of many people. If everyone has values that can be traded off, then, in principle, it might be possible to measure the values and arrive at a utilitarian decision that maximizes total value satisfaction, i.e., total utility. At such an optimum, we cannot increase one person's utility without reducing someone else's utility by at least the same amount. Government agencies attempt this sort of optimization when they assess the value of human life in order to determine whether environmental regulations, safety programs, or medical treatments are cost-effective.
Protected values cause trouble for such efforts because they imply that one value is infinitely more important than others. If the value of forests is infinite for some people, we will simply not cut them, and we will have to find substitutes for wood and paper. Even if only a few people place such an infinite value on forests, their values will trump everyone else's values, and everyone else will spend more money and make do with plastic. This is still (theoretically) a utilitarian optimum. But a social decision not to cut any forests because a few people have infinite values for them seems to give excessive weight to those values.
Other problems arise when protected values conflict. If some people have protected values for yew trees while others have protected values for the rights of cancer patients to the drug that is produced from them, no solution seems possible. Of course, a solution is possible. We could honor one side or the other, ignoring the rights of patients or trees. But the choice of the solution would be unaffected by the number of those who favored patients vs. trees.
Such a situation violates apparent normative principles of decision making. For example, it is reasonable to think that, for two options L and T, we either prefer L, prefer T, or we are indifferent. In the situation just described, we would be indifferent, since either solution is "optimal" in the sense that any improvement for one person will make someone else worse off by at least the same amount. Yet, a doubling of the number of people who favored L or T would not change the decision. This seems to violate a principle of dominance, which could be stated roughly as, "If we are indifferent between L and T and then get additional reason for L (or T), we should then favor L (or T)." The same problems arise within an individual who holds conflicting protected values. An additional argument for one option or another will not swing the decision.
To avoid problems of this sort, most normative theories of decision making assume that values can be traded off. That is, for any pair of values, a sufficiently small change in the satisfaction of one value can be compensated by a change in some other value.2 The values may be held by the same person or by different people. We call values compensatory when they are part of such a pair. Economic theory speaks of tradeoff functions. Utility theory assumes that each value takes the form of a utility function relating individual utility to the amount of a good or some attribute of the good.
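The distinction can be put formally, in our own notation rather than the authors' (a sketch only, assuming a two-attribute setting in which x is the amount of the value in question and y the amount of some other good, such as money):

    % Our notation, not the authors'.  \succeq denotes weak preference, \prec strict
    % preference; assumes only a two-attribute option (x, y).
    \[
      \text{Compensatory: } \quad \forall\, \epsilon > 0 \;\; \exists\, \delta > 0 : \quad
      (x - \epsilon,\; y + \delta) \;\succeq\; (x,\; y)
    \]
    \[
      \text{Protected (lexicographic in } x\text{): } \quad
      x' < x \;\Longrightarrow\; (x',\, y') \;\prec\; (x,\, y) \quad \text{for all } y,\, y'
    \]

On this reading, no finite gain on the other attribute can compensate any loss on the protected one.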
Protected values thus create problems for utilitarian analysis, such as violations of the dominance principle. Economic analysis seems to avoid some of these problems by converting all values to money before comparing them, rather than using utility as that common coin. If we try to assess people's willingness to pay (WTP) to avoid violations of protected values, we often find that it is finite. Realistically, people can pay only so much, so WTP is finite. However, the appropriate measure is sometimes willingness to accept (WTA), e.g., when individuals have rights to the goods in question. Moreover, in a cost-benefit analysis, the total WTA of all those with rights is the relevant value. If a few people have infinite WTA for a forest, then the forest has infinite economic value. It cannot be cut. When rights conflict, so that we have no choice but to violate one right or another, and when some people have protected values for each of the conflicting rights, we are back to the same problem of dominance violations. We can still make decisions, for example, by voting. But voting need not honor protected values. For example, voting on the siting of hazardous facilities will allow them to be put almost anywhere, even over the ancient burial grounds of native peoples.

2.2  The existence of protected values

The seriousness of these problems, and the possibility of solutions to them, may depend on the nature of protected values themselves. This article proposes a theory of protected values and presents some preliminary tests of it. In essence, we propose that protected values derive from rules that prohibit certain actions, rather than values for potential outcomes of these actions. If this is true, then part of the solution to the problem may involve separate measurement of values for actions and outcomes, if it is possible to do this.
People do claim to hold protected values. These values appear in survey responses. In the method of contingent valuation (CV), respondents are asked how much they would pay for some good, such as protection of a wilderness area, or how much they would accept to give up the good. Some respondents refuse to answer such questions sensibly (Mitchell & Carson, 1989). They say "zero" or "no amount" because they think that "we shouldn't put a price on nature." These responses may reflect people's true values, even if the same people are inconsistent with these values in their behavior. If some people say that trees have infinite value and you point out to them that they have just sent a fax when they could have used electronic mail, they may admit that they do not really place an infinite value on trees, but they may, instead, feel guilty at realizing that they have violated one of their values. If we are trying to do what is best for people, we may sometimes do better to try to satisfy the values they hold rather than the values they reveal in their behavior, for people may sometimes regret their own behavior. Their behavior may be inconsistent with their values. Even when people hold conflicting values that are impossible to satisfy jointly in this world, they may wish they lived in a different world. We cannot dismiss these statements with a charge of hypocrisy.
Some philosophers and social theorists defend these refusals to make tradeoffs. These defenses provide additional evidence of the reality of protected values, at least in the theorists themselves. Schwartz (1986) argues that certain practices should be inviolable, not compromised by tradeoffs with anything else. For example, academic standards for giving grades should not be distorted by the desire to pass the quarterback of the football team, regardless of how important it is for the team to win or how few additional examination points are needed. Anderson (1993) argues that economists and utility theorists have a distorted view of the nature of human values. Anderson argues that values cannot be measured quantitatively for the purpose of trading them off. Social decisions, she says, must be reached by a process of discussion. (She does not say how this discussion is to proceed without at least some implicit discussion of the strengths of competing values.)

2.3  The purpose of the present theory

Our purpose is to explain the nature of protected values. What are their general properties? How do they participate in judgments?
We do not attempt to settle the philosophical questions about the sense in which protected values are subject to criticism or not (see Baron, 1988, for discussion), although, if we can answer the question of what they are as they commonly occur, that discussion may be able to focus more accurately on its topic.
We also do not concern ourselves with the empirical question of how people resolve conflicts involving protected values. However, our findings may bear on the question of how such formal procedures as cost-benefit analysis could take these values into account without violating the underlying utilitarian theory. The psychological nature of values is central to the question of how we should deal with them. Of course, all we can do is examine some of the most common kinds of protected values in some cultures. We may miss the discovery of values with a different psychological nature, to which philosophical arguments may be relevant that are not relevant to the values we find.
Finally, we do not attempt to answer the question of which values are protected or absolute. Others have attempted this, and we have drawn on their work in designing tests of our theory, without necessarily accepting all their conclusions. Andre (1992) provides a taxonomy of "blocked exchanges," cases in which it is either impossible or immoral to sell something; we focus here on cases in which it is thought to be immoral although possible. Fiske and Tetlock (1997) attempt another analysis in terms of modes of social interaction (see also Tetlock, Lerner, & Peterson, 1996, pp. 36-39).

2.4  Protected values as deontological rules

Our purpose here is to examine the nature of such absolute values as they commonly occur. We present a theory about these values, and some preliminary results. We conclude with a discussion of the implications of these values for utilitarian decision making. We call the values in question "protected" to emphasize the fact that their defining property is the reluctance of their holders to trade them off with other values. They are at least partially protected from tradeoffs. This is what makes them troublesome for utilitarian analysis of decisions. As we pointed out, protected values exist in judgment, but cannot fully exist in action.
We propose that these values express absolute deontological rules, rules that apply to certain behavior "whatever the consequences." An example of such a rule is "Do not destroy natural processes irreversibly." Such a rule prohibits the holder from destroying species, even if, for example, the destruction in question would have the effect of saving more species in total. Utilitarianism, utility theory, and other forms of consequentialism define right or optimal action in terms of some evaluation of expected consequences. By contrast, deontological rules specify that certain actions should be taken or not taken as a function of a description of the action itself. The description may refer to the way an action is performed, its motives, its antecedent conditions, and even its immediate consequences, e.g., a direct causal link between the action and extinction of a species. But, if the description includes all the consequences and nothing else, then the rule becomes effectively consequentialist.
When people who try to follow such rules are asked about their values, they are reminded of the rules. So, for example, a person asked about WTA for species destruction will interpret acceptance of the money as complicity in the destruction and will refuse to accept any amount. The use of hypothetical questions does not prevent this interpretation: hypothetical questions are simulations of real questions, and subjects might think that even their answers to hypothetical questions will be known to the experimenter and perhaps reported, so their answers are still real in the sense that they may influence others, just as an opinion poll might do so.
Deontological rules are typically agent-relative as opposed to agent-neutral (e.g., Nagel, 1986). Agent-relative rules are those that concern the involvement of a particular person. A rule that parents ought to care for their children is an agent-relative rule, because it concerns particular people. It is not the same as a rule that the children should somehow be cared for, or a rule that we should regard parents caring for their own children as a good consequence. A truly agent-relative rule would hold that X should care about X's child's welfare and Y should care about Y's child's welfare, but X need have no concern either with Y's child or with insuring that Y look out for the welfare of Y's own child.
Deontological rules typically prohibit harmful actions (e.g., destroying species) rather than harmful omissions (e.g., doing nothing to stop destruction by others), although they may also prohibit omissions under specific and limited conditions (e.g., neglecting one's child or one's job - see Baron, 1996). In general, people think of acts as those that cause relevant outcomes through a chain of causality that involves predictable physical or psychological principles at each step (Baron, 1993). If we fail to prevent some harm because we are out playing tennis at the time it happens, no such link can be made between our behavior and the harm, although, in another sense, we cause it.
Rules that are agent-relative and that concern harmful actions (or specifically limited harmful omissions) create limited obligations. As a result, deontological rules are easier to think of as absolute. Consequentialist principles, by contrast, can create unlimited obligations unless they can be traded off with other obligations. Consider a consequentialist rule that prohibited tradeoffs, such as, "the destruction of species is infinitely bad." Such a rule would have to be honored before any other decision criteria, for omissions as well as commissions. People who took this rule seriously would have to design their lives so that they did as much as possible to preserve species, and to induce others to do the same. Only when they had satisfied this criterion could they apply other criteria. A rule based on consequences does not make a distinction between acts and omissions or between self and others, so the injunction to act to preserve species and to induce others to do so would be as strong as the injunction not to destroy them. A person who took this kind of rule seriously would be a fanatic. Perhaps some fanatics do indeed think this way. Because of the practical difficulty of living this way, however, fanatics are rare.
Deontological rules are not necessarily protected against tradeoffs. Indeed, philosophers typically regard them as prima facie constraints that can be overridden by other constraints, or by consequentialist considerations. Our suggestion is thus that essentially all protected values are deontological, not that all deontological rules are protected.
Of course, people do have rules based on consequences, but almost all of these rules trade off with other considerations, so that they do not lead to this problem. Moreover, people who hold protected values for some things also hold compensatory values for other things, and these compensatory values trade off in the usual ways. A person who holds a protected value for species may still buy a car by thinking about the tradeoffs among price, safety, efficiency, etc. Protected values are thus a function of both the value and the person.

2.5  Implications

The defining property of protected values is absoluteness. Our proposal that these values arise from deontological rules implies directly that three other properties should be present in most cases in which absoluteness is present: quantity insensitivity, agent relativity, and moral obligation. These properties need not be perfectly correlated with absoluteness, for other sorts of values may exist that have some of these properties but not all of them.
Absoluteness expresses itself in resistance to tradeoffs. People resist trading off protected values with compensatory values, such as their value for money. Typically, people want protected values to trump any decision involving a conflict between a protected and a compensatory value (Baron, 1986). In this sense, protected values are absolute. The resistance to making tradeoffs can also express itself in refusals to answer questions about tradeoffs. Thus, those with a rule against destroying species may refuse to accept any amount of money in return for allowing such destruction, or they may refuse to say how much they would accept. When asked how much they are willing to pay, they may again try to avoid answering. Potentially, such a question creates a conflict with another protected value that people are not so willing to acknowledge, that for their own life. Someone who pays everything to save a species would die from inability to afford the necessities of life. The important implication here is the avoidance of tradeoffs of the usual sort. Of course, people with protected values may still answer tradeoff questions, with difficulty, in order to oblige the researcher.
Notice that when protected values lead to lexicographic rules - rules that eliminate options by applying one value at a time - these rules are not mere heuristics of the sort found in studies of consumer choices and others without moral components (e.g., Payne, Bettman, & Johnson, 1993). It may be reasonable for people to use a strategy of eliminating apartments if the rent is above a cutoff, even though people know that they might be willing to pay more if everything else were absolutely perfect. This is a heuristic because it is knowingly adopted to save time and effort. Protected values are different. They are treated like commitments.
If absolute values arise from deontological prohibitions, they will tend to have the following properties.
1. Quantity insensitivity. Quantity of consequences is irrelevant for protected values. Destroying one species through a single act is as bad as destroying a hundred through a single act. The protected value applies to the act, not the result (although a compensatory value may apply to the result as well). One form of quantity insensitivity is insensitivity to probability of occurrence.
Some opponents of abortion seem to ignore quantity when they oppose spending government money on international family planning programs that carry out abortions, even if the money does not pay for the abortions and even if other expenditures actually reduce the number of abortions performed. It is not the number of abortions they care about. Another example was the attitude of some abortion opponents to the use of fetal tissue in medical research, which they felt might encourage some women to have abortions: "In our view, if just one additional fetus were lost because of the allure of directly benefiting another life by the donation of fetal tissue, our department [Health and Human Services] would still be against federal funding. ... The issue is about whether or not the federal government should administer a policy that encourages induced abortions. However few or many more abortions result from this type of research cannot be erased or outweighed by the potential benefit of the research" (Mason, 1990).
2. Agent relativity. Protected values are agent relative, as opposed to being agent general. This means that participation of the decision maker is important, as opposed to the consequences themselves. This follows from the assumption that protected values arise as rules about action.
For present purposes (following our earlier discussion), agent relativity includes concern with action rather than omission (and related distinctions such as changing vs. not changing the status-quo, or causing an outcome vs. letting it happen: see Ritov & Baron, 1992; Spranca et al., 1991). Consider again the example of giving aid to family planning programs that carry out abortions. If the aid is withheld, arguably, the number of abortions will increase. However, those who withhold the aid would not feel responsible for these abortions if they think that they are not responsible for the results of their inaction. If protected values were agent general, people would have infinite responsibility for preventing violation of those values wherever it occurred. The combination of agent-general obligations with quantity-insensitivity for probability would generate obligations to take any action that might do some good, however improbably. A distinction between acts and omissions is therefore compelled by absoluteness, for practical reasons, even if people might otherwise see these values as agent-general.
3. Moral obligation. The actions required or prohibited by protected values are seen as moral obligations in the sense of Turiel (1983). Moral obligations are not just conventions or personal preferences. They are seen as universal and independent of what people think. They are also seen as objective obligations: people should try to carry them out even if they do not think they should. This is not to say that compensatory values are always nonmoral. Many are moral too. People who endorse deontological principles, however, may think of objectivity and universality as required in order to prevent tradeoffs. If someone thought of a principle as something that did not apply to people in certain situations or to people who did not endorse it, then she would be more free to conclude that she herself was in a situation where it did not apply or that she was no longer bound by it because she no longer endorsed it, and these conclusions would permit her to trade it off with other values.
These three properties follow from the idea of rules concerning actions. However, variants are possible. For example, one variant keeps the action-based aspect while giving up absoluteness. By this variant, we should allocate resources in proportion to the rightness of making allocations of various kinds rather than the goodness of the results.3 Thus, people may believe that the best method of allocating resources is according to the importance of the kind of action paid for by each expenditure rather than according to the effects of the allocation on solving the problem or even according to the size of the problem. Unlike an absolute rule, this rule allows us to allocate some resources to less important actions, but without regard to the consequences.
Two other properties follow from those just listed, along with other assumptions:
4. Denial of tradeoffs by wishful thinking. People may resist the idea that anything must be sacrificed at all for the sake of their value. People generally tend to deny the existence of tradeoffs (Jervis, 1976, pp. 128-142; Montgomery, 1984), and this tendency may be particularly strong when one of the values involved is not supposed to trade off with anything. People may desire to believe that their values do no harm. Thus, opponents of family planning assistance are prone to deny that cutting aid will increase the abortion rate, or have any other undesired effects.
5. Anger. People may become angry at the thought of violation of a protected value. This is a consequence of its being a moral violation. Tetlock et al. (1996) have described both this property and the denial of the need for tradeoffs in preliminary data on reluctance to make tradeoffs, which anticipates the present work in these respects.
We hypothesize that these five properties will be correlated with absoluteness, across different values within each subject. These correlations need not be perfect. Each of the other properties could have other causes aside from absoluteness. However, the correlations should be substantial to the extent to which our proposal is helpful in understanding values in general.

2.6  Posturing

When people say that their values are absolute, they may sometimes be simply taking a strong negotiating stance, making "nonnegotiable demands." We call this "posturing." Environmentalists do not want to be drawn into a debate of how much money a pristine forest is worth. They would rather say that it should simply be preserved, whatever the cost. Still, the fact that philosophical writers defend such absolute values suggests that this is not just a bargaining ploy. Our studies address posturing in a couple of ways, which we shall discuss.

3  Experiments

We report five experiments. The first three were general surveys of several different values of the sort found to be protected in pilot studies (not reported). The values we examined concerned activities or actions, such as abortion or destruction of natural resources, that some people regard as morally prohibited despite benefits that cause people to engage in them. We hypothesize that opposition to these actions involves protected values in many people. We define a value as protected for a subject when the subject says that the value should not be traded off, i.e., it is absolute. We ask whether such protected values have the other properties we have listed. In particular, we examine correlations across items within a subject between absoluteness and each of the other five properties. The first study also compared conditions in which subjects either did or did not indicate to the experimenter what actions they were rating.
The last two studies concerned sensitivity to quantity, each in the case of a single kind of value: Experiment 4 concerned the prohibition of unnaturally raising IQ through genetic engineering; Experiment 5 concerned endangered species. We are particularly interested in the subjects - however few there are - who are willing to make tradeoffs. We asked whether these subjects were less sensitive to quantity when protected values were involved. Quantity was the number of children in Experiment 4 and the probability of saving a species in Experiment 5.

3.1  Experiment 1

We presented subjects with 14 different actions and asked 12 questions about each one. The questions corresponded to the hypothesized properties of protected values. The actions were chosen on the basis of pilot studies and prior literature. We tried to select actions so that each action would be prohibited by a protected value for some subjects.
We were also concerned about the effects of posturing. Subjects who felt very strongly about some values might say that they would not trade these values off with anything, etc., as a way of impressing the experimenter or others with the strength of their commitment. They may approach the experiment as if it were an opinion poll of sorts. This could make them exaggerate their views for the purpose of influencing others' opinions about the issues or about themselves. To look for such posturing, we compared two conditions. In the "public" condition, subjects reported the number of each action they were answering about, so that we could tabulate the responses by action. In the "private" condition, subjects omitted the numbers of the actions, so that we could not tabulate responses. Nobody could know what issue the subject was responding to.

3.1.1  Method

Subjects were 72 students from the University of Pennsylvania and the Philadelphia College of Pharmacy and Science, solicited by advertising and paid $6/hour for completing this questionnaire and others.
Subjects answered 12 questions about each of 14 actions. The number of each action was printed on a card. The subject had the cards, an answer sheet with a table for the answers, and a list of the actions and the questions. In the private condition, subjects wrote answers without identifying the actions to which they were responding. The instructions for that condition read as follows:
"You have a list of actions that some people oppose, numbered 1-14. Some of these are happening now, and others are not. For each action, suppose that those in favor of it were willing to pay a great deal of money. Please answer questions A-L for each action by writing YES, NO, or ? (not sure) in the blank on the answer sheet. Use one row for each action.
"We are interested in the relations between one answer and another answer. We want you to give your honest opinion about each question. We think you can do that best if there is no possibility that anyone will know which action you are talking about. We would like you to determine the order of the 14 items by shuffling cards with the numbers 1-14 and then answering the items in the order you get. Please DO NOT write the number of the item you are answering. This way, we cannot tell which action you are responding to."
In the "public" condition, the last paragraph read: "In order to randomize the order, we would like you to determine the order of the 14 items by shuffling cards with the numbers 1-14 and then answering the items in the order you get. Please write the number of the action you are answering in the leftmost column, so we can tell which action you are responding to."

The actions were:

1. Destruction of natural forests by human activity, resulting in the extinction of plant and animal species forever.
2. Raising the IQ of normal children by giving them (completely safe) drugs.
3. Using genetic engineering to make people more intelligent.
4. Performing abortions of normal fetuses in the early stages of pregnancy.
5. Performing abortions of normal fetuses in the second trimester of pregnancy.
6. Fishing in a way that leads to the painful death of dolphins.
7. Forcing women to be sterilized because they are retarded.
8. Forcing women to have abortions when they have had too many children, for the purpose of population control.
9. Putting people in jail for expressing nonviolent political views.
10. Letting people sell their organs (for example, a kidney or an eye) for whatever price they can command.
11. Refusing to treat someone who needs a kidney transplant because he or she cannot afford it.
12. Letting a doctor assist in the suicide of a consenting terminally ill patient.
13. Letting a family sell their daughter in a bride auction (that is, the daughter becomes the bride of the highest bidder).
14. Punishing people for expressing nonviolent political opinions.

The questions were:

A. I do not oppose this.
B. This should be prohibited no matter how great the benefits from allowing it.
C. If this is happening now, no more should be allowed no matter how great the benefits from allowing it.
D. My own role in this matters. If my own government allows this, I have more of an obligation to try to stop it than if some other government does, even if I have equal influence over both governments.
E. In public discussions of this issue, it is most effective to exaggerate the strength of our opposition to this.
F. In public discussions of this issue, it is morally right to exaggerate the strength of our opposition to this.
G. It is impossible for me to think about how much benefit we should demand in order to allow this to happen.
H. It is equally wrong to allow some of this to happen and to allow twice as much to happen. The amount doesn't matter.
I. It is worse to allow twice as much to happen than to allow some.
J. This would be wrong even in a country where everyone thought it was not wrong.
K. People have an obligation to try to stop this even if they think they do not.
L. In the real world, there is nothing we can gain by allowing this to happen.
Questions B and C (and possibly G) assessed absoluteness; D assessed agent relativity (an issue examined more in subsequent experiments); E and F assessed posturing, the willingness to overstate for strategic purposes; H and I were supposed to assess quantity insensitivity; J and K assessed moral obligation; and L assessed denial of tradeoffs. Each question was coded as 1 for yes and 0 for no.

3.1.2  Results

Properties of protected values.   Subjects generally endorsed hypothesized properties of protected values - quantity insensitivity, denial, moral obligation, and agent relativity - more often for absolute values than for other values. We made these comparisons within each subject and then averaged the results across subjects.
Table 1 shows the percent of subjects giving positive answers to each question for each of the 14 actions for the public condition only. It is apparent that many subjects endorsed the answers characteristic of protected values.
Table 1.
Percent of subjects endorsing each question for each action (public condition). The last row shows the mean for the private condition, in which actions were not identified.
Question
Action A B C D E F G H I J K L
12 Assist suicide 73 28 24 30 50 20 32 47 16 29 28 25
4 Early abortion 53 34 37 54 51 37 40 47 43 44 38 41
2 IQ with drugs 42 56 62 27 50 25 48 56 27 53 49 32
10 Sell organs 37 48 47 50 59 32 45 53 52 45 45 42
3 IQ genetic 37 52 52 26 58 28 38 47 34 59 46 30
7 Sterilize 35 54 52 52 54 37 48 54 37 46 44 29
5 Late abortion 24 54 54 62 64 48 59 66 38 66 53 50
8 Force abortion 15 67 66 60 71 47 54 71 44 66 54 47
9 Free speech 14 74 79 67 67 45 71 76 63 75 69 73
13 Sell daughter 12 73 79 58 77 58 64 79 50 73 63 75
14 Free speech 11 73 82 62 72 55 72 75 53 81 73 64
1 End species 09 83 82 66 73 58 69 54 56 83 63 59
6 Kill dolphins 06 69 72 57 70 54 52 75 58 71 67 64
11 Refuse kidney 06 73 82 73 71 56 64 78 51 94 75 58
-- Private condition 31 59 59 58 56 46 50 61 53 59 47 42
To evaluate differences among types of values within each subject, we divided each subject's values into those that the subject did not oppose (answered "yes" to question A), those that the subject opposed but did not consider Absolute ("no" to A, B, and C), and those that were Absolute ("no" to A, "yes" to B and C).4
Table 2 shows the mean proportions of hypothesized properties, averaged across subjects, as a function of this categorization. For example, a subject who considered five values to be "absolute" and answered "yes" to question L for four of these would get a proportion of 80% for the "Denial" column of the "Absolute" row. The average across subjects for this cell of Table 2 used one such proportion from each subject. Our hypotheses concern the difference between the properties of values considered "Absolute" and values that are merely "Opposed," but we tested the difference between Not-opposed and Opposed as well.
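A minimal sketch of this within-subject tabulation, assuming the responses are stored as a subjects x actions x questions array of 0/1 codes (the array, the random placeholder data, and the index mapping A=0, B=1, ..., L=11 are our own illustrative assumptions, not the authors' code):

    # Sketch of the categorization and cell means behind Table 2 (our variable names).
    import numpy as np

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(72, 14, 12))   # placeholder 0/1 answers

    def categorize(row):
        """Classify one action for one subject from questions A (0), B (1), C (2)."""
        A, B, C = row[0], row[1], row[2]
        if A == 1:
            return "not_opposed"          # "yes" to A
        if B == 0 and C == 0:
            return "opposed"              # "no" to A, B, and C
        if B == 1 and C == 1:
            return "absolute"             # "no" to A, "yes" to B and C
        return None                       # mixed B/C answers: not categorized

    def cell_mean(prop_index, category):
        """Mean across subjects of each subject's proportion of 'yes' answers to the
        property question (e.g., index 11 for L, Denial) among values in the category."""
        per_subject = []
        for subj in responses:            # subj has shape (actions, questions)
            vals = [row[prop_index] for row in subj if categorize(row) == category]
            if vals:                      # skip subjects with no values in this category
                per_subject.append(np.mean(vals))
        return 100 * np.mean(per_subject)

    print(cell_mean(11, "absolute"))      # e.g., the Denial column of the Absolute row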


Table 2.
Percent of positive responses to each question (or pair of questions) as a function of the answer to the questions about opposition to the action in question and absoluteness, Experiments 1-3. An asterisk indicates that the number listed was not significantly greater (p < .05) than the number directly above it from the same Experiment.


Quantity Denial Moral Agent Posture Bother Anger
Experiment 1
Not opposed 24 18 11 26 25
Opposed 41* 23* 41 57 46
Absolute 80 64 79 72* 69
Experiment 2
Not opposed 53 7 9 19 20 19 15
Opposed 44* 15* 35 40* 48 45 37
Absolute 74 60 81 70 65* 77 56
Experiment 3
Not opposed 75 7 9 12 18
Opposed 46* 19* 60 62 56
Absolute 52* 55 73* 80 68*
Note: Significance tests in the table are based on tests across subjects of within-subject differences. In an alternative analysis, correlations (tau) were computed for each item across subjects and then tested across the 14 items used in each study. All differences shown as significant in the table (those without asterisks) were significant at the same level or better by this alternative.
Some of the properties were averages of two questions. Posture was the average of questions E and F (which correlated highly, mean g = .74), and Moral was the average of J and K (mean g = .86). Other properties were responses to single questions: Agent for question D; Denial for L; and Quantity for H. (H did not correlate negatively with I as expected.) We name the variables in this way to facilitate comparison across experiments.
All comparisons between each proportion and the one above it were significant (p < .01) except those marked with asterisks (which were not significant at p < .05). In essence, our hypotheses were supported except for the Agent property (agent relativity). Specifically, the proportions of endorsement of each property (Quantity, Denial, and Moral) were higher for Absolute values than for Opposed. For Agent, however, subjects felt obliged to stop something even when they were just opposed to it, no matter where it was. Endorsement of Posture was greater for Absolute values than Opposed, but the difference was not significant here.
Public vs. private.   We found the results just described in both public and private conditions. To compare public and private conditions - a between-subject manipulation - we averaged across issues for each subject. To compare the extent to which values were protected in the two conditions, public and private, we defined a new index for each subject, Protect, as the average of all the items making up Absolute, Quantity, Denial, and Moral, plus item G, which correlated with the others. We also computed each subject's mean value of Posture (items E and F, as before) across items.
Condition (public, coded 1, vs. private, coded 0) did not correlate significantly with Protect (r=.15) or Posture (r=.05). The first correlation is in the direction hypothesized - more properties of protected values for the public condition - but it is small. In addition, the association (measured as a g coefficient within each subject) between each property (Quantity, Denial, Moral, and Agent) and Absolute did not correlate with condition, and the pattern of significant differences among the three value categories was the same for the private group alone as for the combined group (except for Posture, where the difference between Opposed and Not opposed was no longer significant).
Protect was also uncorrelated with Posture across subjects (r=.10). This suggests that protected values are not just the result of a tendency to posture, even though items that evoke protected values over all subjects also evoke posturing, as described earlier. The fact that Posture is uncorrelated with Protect and the fact that evidence of protected values is still found in the private condition both indicate that protected values are not simply a matter of posturing.
We found no sex differences in Protect.
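As a concrete illustration of how the Protect and Posture composites and the between-subject correlations reported above might be computed, here is a sketch under our own assumptions about data layout (question letters mapped to indices A=0, ..., L=11; the placeholder data exist only so the snippet runs, and none of this is the authors' code):

    # Sketch of the composites used in the public/private analysis (our variable names).
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(72, 14, 12))   # placeholder 0/1 answers
    condition = rng.integers(0, 2, size=72)             # 1 = public, 0 = private

    PROTECT_ITEMS = [1, 2, 7, 11, 9, 10, 6]   # B, C, H, L, J, K, plus G, as in the text
    POSTURE_ITEMS = [4, 5]                    # E, F

    def composite(items):
        # mean over the listed questions and over all 14 actions, for each subject
        return responses[:, :, items].mean(axis=(1, 2))

    protect = composite(PROTECT_ITEMS)
    posture = composite(POSTURE_ITEMS)

    print(pearsonr(condition, protect))   # reported in the text as r = .15, n.s.
    print(pearsonr(condition, posture))   # reported as r = .05
    print(pearsonr(protect, posture))     # reported as r = .10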

3.2  Experiment 2

This experiment added questions about emotion, to test the hypothesis that emotion, particularly anger, is related to other properties of protected values (suggested by Tetlock et al., 1996). It also asked whether protected values could be manipulated by presenting items in increasing or decreasing order of tendency to oppose them. Objectionable actions might induce a feeling of anger that would carry over to other items if these came first.

3.2.1  Method

Fifty-five subjects were solicited as in Experiment 1. The actions were the same as those used in Experiment 1, except that item 14 was replaced with "using condoms to prevent the birth of unwanted children in marriage." Half of the subjects read the new item first and then the remaining items in the order shown in Table 1; the other half read the items in the reverse order.
Table 3.
Mean percent of subjects endorsing each question for each action, Experiment 3. (For Item H, this is the percent who thought quantity mattered.)


Question
Action A B C D E F G H I J L M
2 Coma unpermitted 08 76 56 58 64 56 69 45 78 68 79 74
9 Kill species 10 71 47 59 78 54 66 56 86 73 65 76
10 Kill dolphins 13 76 50 50 76 51 65 50 74 54 66 79
6 Abortion 3rd 22 50 31 62 71 53 50 29 69 53 78 70
12 Products risk 29 68 49 51 68 58 62 40 69 58 68 71
8 Transplant unpermitted 29 73 22 63 68 55 64 51 66 51 78 69
7 Raise IQ 32 54 28 47 53 43 43 52 49 37 72 49
5 Abortion 2nd 40 42 33 57 68 47 39 23 61 46 70 56
4 Abortion 1st 51 29 26 40 56 31 33 21 49 39 66 42
14 Strike breakers 62 21 19 34 31 22 30 23 24 24 54 37
11 Products forced 68 26 20 40 40 32 19 37 37 23 49 34
3 Assist suicide 71 26 29 44 36 35 27 30 26 26 73 36
13 Non union 81 10 13 26 29 14 17 34 18 13 54 18
1 Coma permitted 82 16 06 26 29 25 23 10 21 19 56 29
The questions were identical to those used in Experiment 1, except that two questions about emotion were added:
M. Thinking about this bothers me.
N. I get angry when I think about this.
Also question I was reworded: "The amount matters. It is more wrong to allow twice as much to happen than to allow some to happen."

3.2.2  Results

The order of the items did affect the tendency to oppose the actions (question A: 29% opposition with the increasing-opposition order, 41% with the decreasing order, t=2.56, p=.013), but it did not affect Protect, a composite based on all questions except E and F (Posture). This result suggests that some opposition is not based on protected values. Opposition was increased by presenting the most objectionable actions first, but protectedness was unaffected.
Unlike Experiment 1, females were higher in Protect than males (t=2.52, p=.015) and also higher in their tendency to oppose actions (t=3.87, p=.000). However, in a logistic regression of sex on these two measures, only the latter was significant. Hence, women are simply more opposed to this set of actions, but given that they are opposed, their values are no more likely to be protected.
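The check described above can be sketched as follows (our variable names; the data are simulated placeholders, and the logistic regression shown is simply the standard one, not the authors' code):

    # Sketch: does Protect predict sex once opposition is controlled for?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 55
    protect = rng.random(n)           # placeholder per-subject Protect composites
    opposed = rng.random(n)           # placeholder proportion of actions opposed
    sex = rng.integers(0, 2, n)       # 1 = female, 0 = male

    X = sm.add_constant(np.column_stack([protect, opposed]))
    fit = sm.Logit(sex, X).fit(disp=0)
    print(fit.pvalues)                # in the reported data, only opposition was significant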
The main results of Experiment 1 were replicated, as shown in Table 2. The new question I correlated negatively with H, so it was reversed and combined with H to form the Quantity score. All differences in the original items between Absolute and Opposed were significant (p < .02 one tailed), including Agent, which did not differ significantly in Experiment 1. Questions M (bother) and N (anger), as well, significantly differed between Opposed and Absolute (p < .0005 for Bother; p=.039, for Anger, one tailed). We conclude (along with Tetlock et al., 1996) that being angry about an action and bothered by thinking about it are properties of protected values, along with the other properties.

3.3  Experiment 3

Experiment 3 used a different set of actions to examine the robustness of some previous findings, particularly those concerned with agent-relativity.

3.3.1  Method

Thirty-nine subjects, solicited as in Experiment 1, completed a questionnaire in which they answered 10 questions about each of 14 actions, in a table. The questionnaire began, "Below are some actions that could be paid for by your nation's government, with money collected from your taxes. They could also be carried out by private corporations. For each action, please answer questions A-N by writing Yes, No, or ? (not sure) in the blank on the table. ...."
The questions were the same as Experiment 1 except as follows:
C. There are no benefits from allowing it, in fact.
H. You have two choices:
1. This will happen 100 times.
2. This will happen 200 times.
Which choice is worse, or are they equally bad? (Answer 1, 2, or =.)
I. [Same as J in Experiment 1, Moral.]
J. [Same as K, Moral.]
[K was inadvertently misworded and is omitted from this report.]
L. The government should not pay for this from tax money of those who disapprove of it.
M. You have an option to buy stock in a company that does this. Another buyer will buy the stock if you don't. This is the last share of a special offer, so your decision does not affect the price of the stock. Is it wrong for you to buy the stock?
The actions were:
1. Doctors causing the death of comatose patients who will never recover, with permission of the patient's family.
2. Doctors causing the death of comatose patients who will never recover, against the wishes of the patient's family.
3. Doctors assisting in the suicide of a consenting terminally ill patient.
4. Aborting normal fetuses in the first three months of pregnancy.
5. Aborting normal fetuses in the second three months of pregnancy.
6. Aborting normal fetuses in the last three months of pregnancy.
7. Raising the IQ of normal children by giving them (completely safe) drugs.
8. Taking organs from people who have just died, for transplantation into other people, against the wishes of the dead person's family.
9. Cutting down forests for wood in a way that results in the extinction of plant and animal species forever.
10. Fishing in a way that leads to the painful death of dolphins.
11. Selling products for profit made by the forced labor of prisoners.
12. Selling products for profit made by workers exposed to hazardous chemicals that increase their risk of cancer.
13. Selling products for profit made by non-union labor.
14. Selling products for profit made by strike breakers.

3.3.2  Results

Table 3 shows the mean scores of items, and Table 2 shows the differences among value categories. Of primary interest were three new questions, described here according to their position in Table 2.
Quantity. Question H assessed quantity sensitivity more directly than previous questions. The new Quantity score (coded 1 when quantity did not matter, 0 when 200 times was worse than 100, and missing otherwise) did not distinguish Absolute and Opposed significantly. It seems as though making the idea of quantity insensitivity explicit by using numbers, in contrast to the items used in Experiments 1 and 2, reduced the correlations with other properties of protected values. Subjects were, however, slightly more willing to ignore quantity for Absolute values than for Opposed values, and they were significantly (p=.010, two tailed) more sensitive to quantity for Opposed values than for Not-opposed values, presumably because quantity really is irrelevant when no opposition is present. Possibly, the weak results for this item resulted from ambiguity about its meaning.
Denial. Question C assessed denial of tradeoffs. It clearly distinguished Absolute from Opposed, as did previous Denial items.
Moral. The Moral items were endorsed more often for Absolute than for Opposed, but the difference was not significant here, as it had been for the same items in Experiments 1 and 2.
Agent. Questions L and M assessed agent-relativity. Table 2 (Agent) shows the results for M, the stock-purchase item, because this is the clearest item for distinguishing personal involvement from consequences. The consequences are the same because the share will be bought in any case. Item M (about stock purchase) distinguished Absolute and Opposed (p=.007) as did item L (about paying from tax money of opponents, p=.037), although item D (the original item about obligation to stop one's own government) did not. The results for item M clearly support the relation between absolute values and agent relativity.

3.4  Experiment 4

Experiment 4 took a closer look at insensitivity to quantity. In Experiments 1-3, many subjects said that quantity would not matter, but we wanted to find out whether subjects would really show complete insensitivity. We used a procedure based loosely on contingent valuation, with a single value from Experiment 1, the use of genetic engineering to raise IQ. (See Agar, 1995, for discussion of this action.) Subjects indicated their attitude toward raising IQ by indicating whether they would accept a reduction in medical costs in order to allow it and whether they would even pay an increase in order to see it done. We call these WTA (willingness to accept) and WTP (willingness to pay), respectively.
In order to manipulate protectedness, we compared two conditions differing in deviation from normality. In a Low condition, IQ was raised from 75 to 100. This could be seen as a treatment for retardation. In the High condition, IQ was raised from 100 to 125. The Low condition creates or restores normality and the High condition takes the person away from normality, hence, interferes more with nature, perhaps violating a protected value against interference with nature (Spranca, 1992). Alternatively, this factor may be understood in terms of egalitarianism: help those worse off before helping those better off. This principle could also be protected.
The Low-High manipulation was crossed with a manipulation of why the IQ was to be raised. In the Human condition, IQ had been made 25 points lower as a result of exposure to pollution caused by humans. In the Nature condition, IQ differences were simply the result of natural variation.

3.4.1  Method

Subjects were 93 students solicited and paid as in Experiment 1 for completing a questionnaire.
The questionnaire began, "U.S. residents all pay for each other's medical care, both through insurance payments and through taxes (which fund Medicare, Medicaid, and other government programs). Suppose that the average person in the U.S. pays $3,000 per year for medical care, through all sources. (If you are not a U.S. resident, imagine that you are.) The following cases are made up, but some day they could be real." One form began with the Low Natural condition, which read as follows:
"1. Certain natural genetic defects that cause mental retardation can be detected by tests performed early in pregnancy. If found, an artificially produced gene can be inserted into the fetal tissue through a surgical procedure that is relatively easy and safe. The gene increases average IQ from 75 (retarded) to 100 (normal).
"A. Suppose that 10 out of 10,000 people could be helped in this way.
Would you be willing to pay extra in order to make this procedure available to all who wanted it? (Circle one.)     YES     NO
If YES, what is the most you would be willing to pay? $
Would you be willing to allow this to be done if you and others saved money for health costs?     YES     NO
If you would be willing to allow it, what is the least that you would have to save per year in order to allow it? $
B. Suppose that 1 out of 10,000 people could be helped in this way.
[Same questions.]"
The High Natural condition, which came next, began, "Children expected to have normal IQ can have their IQ increased. A test for normal IQ is done early in pregnancy. If the test is positive, an artificially produced gene can be inserted into the fetal tissue through a surgical procedure that is relatively easy and safe. The gene increases average IQ from 100 (normal) to 125 (superior)."
The Low Human condition began, "Certain genetic defects are found to result from exposure to pollution. The pollution is no longer produced, but what was produced before will remain in the environment for centuries and cannot be cleaned up. These defects can be detected by tests performed early in pregnancy. [The rest was identical to the Low Natural condition.]"
The High Human condition began, "Pollution is found to cause certain genetic defects that lower the IQ of those who would be well above average. The pollution is no longer produced, but what was produced before will remain in the environment for centuries and cannot be cleaned up. A test for these defects can be done early in pregnancy. [The rest was identical to the High Natural condition.]"
Half of the subjects did the four conditions in the opposite order, and, for these subjects, the order of the WTP and WTA questions ("pay" and "allow," respectively) was reversed as well.

3.4.2  Results

As expected, many subjects would not accept any amount in the High conditions, especially in the High Nature condition. Conversely, most subjects were willing to pay something in the Low conditions, and the cause of the low IQ did not matter. In those subjects who answered numerically, insensitivity to quantity was more prevalent in the High conditions. (Order of presentation had no effect.)
Many subjects answered both WTA and WTP questions affirmatively. We interpreted this to mean that they were willing to pay something, but, of course, they would also be willing to accept something if it were offered. Thus, we counted WTP in these cases and ignored WTA. We coded the responses in terms of WTP, using negative values for what the subject was willing to accept when she was unwilling to pay anything. If subjects answered "no" to both questions, we coded that as an extreme negative number for purposes of ranking responses. This represents unwillingness to trade off the violation of a value with monetary gain.
Table 4 classifies responses by condition, based on the response to the high-quantity condition (10 out of 10,000). In the Low conditions, most subjects were willing to pay something. A Wilcoxon test comparing Human vs. Nature in these conditions (using the responses coded as described above) was not significant. Thus, raising low IQ is generally acceptable whatever its cause. All other differences among conditions were significant at p=.000 by Wilcoxon tests. Thus, raising IQ above normal is unacceptable to many subjects, and few subjects are willing to pay for it. However, this is especially true when the IQ is naturally normal; when IQ is reduced by human pollution, then more people are willing to pay to raise it.
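A sketch of this coding, with the kind of paired Wilcoxon comparison described above (variable names, the sentinel value, and the placeholder data are our own; the sentinel is arbitrary and serves only to rank refusals below any stated amount):

    # Sketch of the WTP/WTA coding for Experiment 4 (our names).
    import numpy as np
    from scipy.stats import wilcoxon

    REFUSED_BOTH = -1e9               # arbitrary extreme negative code for "no" to both

    def code_response(wtp_yes, wtp_amount, wta_yes, wta_amount):
        """Put a subject's answers for one condition on a single WTP scale."""
        if wtp_yes:
            return wtp_amount         # willing to pay: count WTP, ignore WTA
        if wta_yes:
            return -wta_amount        # unwilling to pay: negative of required compensation
        return REFUSED_BOTH           # unwilling to accept any amount

    # Paired comparison between two conditions (e.g., Low Human vs. Low Nature),
    # one coded value per subject in each condition; placeholder data only.
    rng = np.random.default_rng(0)
    low_nature = rng.normal(50, 20, 93)
    low_human = rng.normal(55, 20, 93)
    print(wilcoxon(low_nature, low_human))   # Wilcoxon signed-rank test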


Table 4.
WTA and WTP for genetic repair, Experiment 4.


No amount   WTA > 0   WTP = 0   WTP > 0
Low Nature 8.7% 13.0% 4.3% 73.9%
High Nature 47.8% 15.2% 5.4% 30.4%
Low Human 7.6% 9.8% 3.3% 79.3%
High Human 21.7% 24.0% 5.4% 48.9%
To assess quantity effects, we classified each subject as showing an effect or not in the direction of higher WTP for more children helped (10 vs. 1 out of 10,000). Subjects who were unwilling to accept anything in both quantity conditions (10 and 1) were counted as missing data.
Table 5 shows the number of subjects showing an effect, no effect, or missing, in each of the four conditions. Because of the large number of missing data in High Nature, we tested the hypothesis by comparing the proportion of High cases showing sensitivity to quantity (excluding missing data) to the proportion of Low cases. (The proportions for each condition were thus 0, .5, or 1.) High showed a significantly smaller proportion of quantity effects than Low (p=.017 one tailed Wilcoxon test). Because we assume that the High conditions involve protected values in many subjects, this result supports the hypothesis that protected values are associated with insensitivity to quantity.
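The quantity-effect classification can be sketched in the same spirit (our names; REFUSED is the same kind of sentinel as above, and the example values are placeholders):

    # Sketch of the per-subject quantity-effect proportions (our names).
    import numpy as np

    REFUSED = -1e9                    # "would not accept anything" at that quantity

    def quantity_effect(wtp_many, wtp_few):
        """1 if WTP is higher for 10 in 10,000 than for 1 in 10,000, 0 if not,
        None (missing) if the subject refused at both quantities."""
        if wtp_many == REFUSED and wtp_few == REFUSED:
            return None
        return int(wtp_many > wtp_few)

    def proportion_with_effect(cases):
        """Proportion of a subject's non-missing cases showing an effect
        (0, .5, or 1 with the two High or the two Low conditions)."""
        effects = [quantity_effect(m, f) for m, f in cases]
        effects = [e for e in effects if e is not None]
        return np.mean(effects) if effects else np.nan

    # One hypothetical subject; across subjects, the High and Low proportions
    # were then compared (the text reports a one-tailed Wilcoxon test).
    print(proportion_with_effect([(REFUSED, REFUSED), (20.0, 10.0)]))   # High conditions
    print(proportion_with_effect([(30.0, 10.0), (40.0, 40.0)]))         # Low conditions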


Table 5.
Numbers of subjects showing a quantity effect or not in each condition, Experiment 4.


               Missing   No effect   Effect
Low Nature         8         30         54
High Nature       44         19         29
Low Human          7         31         54
High Human        18         35         39
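The High vs. Low comparison described above could be sketched as follows, under our own assumptions about how the data might be laid out (Python with scipy; the variable names and the toy data are hypothetical, and the real analysis used the full sample of 92 subjects shown in Table 5).

    # Hypothetical sketch of the quantity-effect comparison (not the authors' code).
    # Each entry maps a condition to 1 (quantity effect), 0 (no effect), or None (missing).
    from scipy.stats import wilcoxon

    subjects = [
        {"Low Nature": 1, "Low Human": 1, "High Nature": None, "High Human": 0},
        {"Low Nature": 1, "Low Human": 0, "High Nature": 0,    "High Human": 0},
        {"Low Nature": 0, "Low Human": 1, "High Nature": 1,    "High Human": None},
        {"Low Nature": 1, "Low Human": 1, "High Nature": 0,    "High Human": 1},
    ]

    def proportion(values):
        observed = [v for v in values if v is not None]
        return sum(observed) / len(observed) if observed else None  # 0, .5, or 1

    high, low = [], []
    for s in subjects:
        hi = proportion([s["High Nature"], s["High Human"]])
        lo = proportion([s["Low Nature"], s["Low Human"]])
        if hi is not None and lo is not None:
            high.append(hi)
            low.append(lo)

    # One-tailed test: do the High conditions show fewer quantity effects than Low?
    stat, p = wilcoxon(high, low, alternative="less")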

3.5  Experiment 5

Experiment 5 examines sensitivity to quantity in another way, specifically, sensitivity to the probability of success of programs to save endangered species. Deontological obligations to save species would not be sensitive to the probability of success.

3.5.1  Method

Fifty-eight subjects, solicited as in Experiment 1, were given a questionnaire, which began, "The Endangered Species Act requires a plan to save each endangered species. These plans often interfere with economic development, so they end up costing money. Imagine you live in a region that will be affected by each of the following plans. For each plan, indicate the most you would be willing to pay in increased prices for goods and services, in percent, for a five-year period. You may use fractions or decimals. Say zero if you would not be willing to pay anything. Answer each one of the four subcases (A-D) as if it were the only possible plan." The four subcases involved, respectively: success probability of 0 without the plan and .25 with it; 0 without and .75 with; .50 without and .75 with; and .75 without and 1.00 with. (Half the subjects did these in the opposite order.)
The six cases were:
1. A species of tree is endangered because too much of it was cut down to make farms. It is useless for wood, but it is unique. No other trees are like it. It cannot be cultivated outside of its natural habitat.
2. A species of tree is endangered in your region because too much of it was cut down to make farms. It is useful for wood and valued as an ornamental tree. Because it is valued, it has already been preserved in many arboretums, and it can be cultivated.
3. A species of squirrel is endangered because too many trees were cut down to make farms.
4. A species of dolphin is endangered because too many dolphins were strangled in nets used to catch tuna.
5. A species of tuna is endangered because it has been overfished. People like to eat it.
6. A species of tuna is endangered because its natural predators have become more numerous. People like to eat it.
The order of these cases was reversed for half of the subjects.
Finally, to measure the extent to which subjects had protected values for each species, each subject answered the following (yes, no, or not sure [treated as missing data]) for each of the six species:
B. We should save this species even if there are no tangible benefits to people.
F. Really no price is too high to save a species like this.
Other questions measuring other aspects of protected values were also included; although they correlated with questions B and F as expected, they were not analyzed further.

3.5.2  Results

Protected values were again correlated with insensitivity to quantity, and most of the questions about protected values correlated with each other. (There was no effect of order or sex.)
Across the six cases, the mean answers to the four subcases were, respectively: 13% increase (for 0 to 25% change in probability); 25% (for 0 to 75%); 18% (50% to 75%); and 20% (75% to 100%). All differences were significant (p < .005 by t test across subjects) except that between the last two conditions. Subjects seemed more concerned with the final probability after the program was put into effect, rather than the change in probability from before to after. They were also somewhat sensitive to the change, however, as indicated by the greater WTP for the 0-75% change.
Sensitivity to quantity was defined for each species as the log of the ratio of WTP for the 0-75 change to the mean WTP of the 0-25 and 50-75 changes, divided by log(3) so that proportional sensitivity would have a value of 1. The mean sensitivity (averaged across items, then across subjects) was 0.58 (s.d., 0.31). A protected-value score for each species was defined as the mean of questions B and F. The mean of this score was 0.60 (s.d., 0.30). The correlation between sensitivity and the protected-value score was computed for each subject across the six items. The mean of this correlation was -.16 (p=.008, t test across subjects). Although significant, this correlation was small; clearly many other factors affect sensitivity to quantity. Still, the present experiment supports the previous experiments in finding a small but significant relationship between insensitivity and protectedness.
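In symbols, the index for a given species is sensitivity = log[WTP(0 to .75) / mean(WTP(0 to .25), WTP(.50 to .75))] / log(3). The sketch below (Python; the function name and the example numbers are ours, purely for illustration) shows why proportional WTP yields a value of 1 and flat WTP yields 0.

    # Illustrative computation of the sensitivity index (our names, not the authors' code).
    import math
    from statistics import mean

    def sensitivity(wtp_0_75, wtp_0_25, wtp_50_75):
        # Ratio of WTP for the 0-.75 change to the mean WTP for the two smaller
        # (.25) changes, logged and scaled by log(3) so that WTP proportional to
        # the change in probability gives exactly 1.
        return math.log(wtp_0_75 / mean([wtp_0_25, wtp_50_75])) / math.log(3)

    # WTP proportional to the size of the change (three times the WTP for a
    # change three times as large):
    print(sensitivity(30.0, 10.0, 10.0))  # 1.0
    # Complete insensitivity to quantity (equal WTP for every change):
    print(sensitivity(10.0, 10.0, 10.0))  # 0.0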

4  General discussion

We hypothesized that five properties would correlate with absoluteness, the defining property of protected values: quantity insensitivity, agent relativity, moral obligation, denial of tradeoffs, and anger. These properties followed from the assumption that protected values derive from deontological prohibitions of action rather than values for consequences. We found these correlations.
Posturing could not account fully for these effects. In particular, they were present even when it was impossible for subjects to communicate which values they were responding to (although subjects could communicate a general tendency toward absolute values). So the tendency to hold protected values appears to exist apart from concerns about self-presentation with respect to particular values.
Experiments 1, 2, 4, and 5 supported the hypothesis that protected values contribute to quantity insensitivity. Value measurements are often insensitive to the quantity of the good being valued (Baron, 1997; Baron & Greene, 1996; Diamond et al., 1993; Jones-Lee et al., 1995; Kahneman & Knetsch, 1992; McFadden, 1994). For example, many people will pay no more to save three wilderness areas than they would pay to save one (McFadden, 1994). The same insensitivity is found in judgments of willingness to accept (Baron & Greene, 1996), so the problem is not just one of budget constraints. We suggest that such insensitivity is more likely when it involves protected values - assuming that people are willing to oblige the researcher by answering the questions - because such values concern the acts involved rather than their consequences. Very likely, however, such values are not the only cause of insensitivity. (Baron & Greene, 1996, suggest others.)
Quantity insensitivity creates problems for value measurement because most social decisions are repeated. Those willing to pay $10 to save one wilderness area might be expected to be willing to do this more than once. So they would be willing to pay about $30 to save three. This would conflict with a stated value of $10 for three. (Baron & Greene, 1996, give other examples of such conflicts.)
The strong relation between Absoluteness and Denial suggests that people want to have their non-utilitarian cake and eat it too. They understand that commitment to protected values could make overall consequences worse in some sense. Rather than simply rejecting their competing, utilitarian intuitions, they deny that this is needed. Perhaps this is true more generally of commitment to deontological rules, absolute or not.
Other properties that we did not examine might also characterize protected values. Irwin (1994) has found that the ratio of willingness to accept to willingness to pay is greater for environmental goods than for consumer goods. She suggested that environmental goods are seen as moral. This difference may result from the belief that taking money to allow immorality is itself immoral. This may be more true of protected values, but it may also be true of moral values in general.
Another implication that we did not test is that protected values should distinguish acts and omissions. Protected values are absolute prohibitions on certain actions. If people tried to follow corresponding prohibitions against omissions that led to the same results, then people would have infinite obligations. Protected values thus depend on the omission-commission distinction. Individuals and situations differ considerably in whether this distinction is relevant to moral judgments or not (Baron, 1992, 1994; Baron & Ritov, 1994; Ritov & Baron, 1990; Spranca et al., 1991). Our theory implies that it will be made more often when protected values are involved.5
We have suggested that protected values are a subset of moral values. This idea may have biased our selection of actions. It may be possible to find nonmoral values that are also protected.

4.1  Origin of protected values

Why do people have protected values? Several reasons come to mind. Some explanations may be true, or partially true, but insufficient. One of these is self-enhancement. Most people will feel better about themselves knowing that they have a few protected values. Having protected values is a source of self-identity (Williams, 1981). This is true when a culture endorses the idea of "integrity" as a matter of sticking up for certain values. But where do members of a culture get the idea that integrity is a matter of simple adherence to one value at the expense of others? That is why this explanation, while possibly true, is insufficient.
The same can be said of impression management. For the same reason that having protected values enhances one's image in one's own eyes, it enhances one's image in the eyes of others. Politicians are keenly aware of this. To treat a protected value like a compensatory value is political suicide (Tetlock et al., 1996). Yet, if holding protected values makes a good impression, some people must already think of them as admirable.
Holding protected values may increase persuasive power. Many activists believe that they are more likely to achieve their activist goals if they take a hard negotiating stance. Using the rhetoric of protected values makes it easier to justify using hard bargaining strategies. Politicians are aware of this too. Part of this effect is related to impression management. Another part is simply that statements of protected values are the hardest bargaining position that one can take. Our results suggest that the effort to be persuasive is both real and separate from other determinants of protected values. It may be part of their source, but, once started, they seem to take on a life of their own.
Ultimately, the explanation of protected values may lie elsewhere. Two aspects must be explained. One is their absoluteness. The other is their emphasis on action. We have already discussed (in the Introduction) the emphasis on action, and why we think that consequentialist values are not absolute. But we have not said why people adopt absolute rules of action in the first place.
One possibility is that protected values are adopted intentionally and knowingly as rigid, inviolable prescriptive rules. Such prescriptive rules - such as "do not lie under oath" - are best to follow in practice even though one might imagine situations in which, if one accepted all the assumptions without question, it would be best to break the rule. People might want such rules to be followed absolutely because they may have good reason to believe that tradeoffs, once allowed, will not be honestly made. For example, experience with past abuses - such as the Nazis' use of eugenic arguments - may lead people to think that it is better never to allow something, such as eugenics, than to try to calculate when the benefits exceed the costs. People may adopt such rules because they mistrust others or themselves. People can imagine some hypothetical situation in which a tradeoff might be allowed, but, as a practical matter, they think that allowing tradeoffs would be too risky and that, if they tried to recognize such situations, they would make too many misses and too many false positives. When people are asked about their values, they may reasonably rely on their practical principle rather than on their imagination, since they may distrust potential users of the information they provide, including themselves.
This view of rules as coldly calculated devices for control of self and others is inconsistent with the emotionality we found to be associated with protected values. It seems more likely that such prescriptive rules - regardless of whether they are rationally justifiable in the way just described - take on a life of their own (Hare, 1981). After all, parents and other moral educators typically teach such practical rules without saying whether they are absolute or not. More generally, even if some people understand the rationale just described, they may fail to convey this rationale when they transmit the rules to others. The rules become detached from their justifications. Even when circumstances change so that the rationale - if it was ever valid - is no longer valid, the rule may still be blindly applied (Baron, 1994). Thus, restoring trust in the ability to make some tradeoff would not immediately change a protected value into a compensatory one.
We note, however, that our studies did not specifically ask subjects if they could imagine situations in which they would be willing to compromise their values. This would be worthy of further research.
Absolute values, whatever their initial origin, may also appeal to a preference for cognitive simplicity in decision making. It is probably easier to make decisions if we have a few protected values to constrain decision making. Of course, protected values held for this purpose are prescriptive heuristics at best. If people come to hold protected values for this reason, they are elevating rules of thumb into absolutes without adequate reason. They may do this because they have acquired from their culture a concept of moral rules as being like laws, that is, constraints on action that should never be violated.
Alternatively, absolute prohibitions may be at first only temporary phenomena that result from a kind of experience in which one of two competing perceptions becomes dominant and prevents the alternative from being noticed, leading to excessive confidence that the alternative is absent (Margolis, 1987). Thus, when faced with a choice between competing harms, one of the harms might become dominant and prevent a person from thinking that the other one is important too. This is especially so when one of the harms results from action. And it is especially so when the options and their results are outside the range of normal experience, within which people will have made many choices that sacrificed each of the two competing values. It is possible that such perceptual dominance occurs when subjects confront valuation questions for the first time, but it is also possible that one side or the other has become habitual as a result of prior repetition.

4.2  Implications

What are the practical implications of our conclusions for value measurement and social decision making? Value measurement is typically done by policy analysts who are concerned with the utility of consequences. They want to know how people value various consequences, so that they can recommend a policy that maximizes utility. When they question respondents about values, they do not yet know what policies they will recommend, what actions they will ask governments or other institutions to take. If we are correct about the nature of protected values, however, the respondents impose on the value measures some imagined means of producing the consequence, and their stated value for the consequence is contaminated by their value for the imagined means of achieving it. Sometimes this value takes the form of an absolute moral prohibition. Protected values are thus a monkey wrench thrown into the works. They are not about consequences, but rather about the participation of respondents in imagined actions. That is not what the policy analysts need to know, for they need to compare different ways of producing similar outcomes. Respondents' hypotheses about how they would participate might be incorrect.
Practical solutions to this problem must await further research. Perhaps one direction to explore is to separate elicited values into those involving "means" and those involving "ends" (Keeney, 1992). This may help respondents, along with further encouragement from analysts, to think about their values for consequences (ends) separately from the values connected with their own participation, since they would have a chance to express those separately.
Another direction is to teach respondents that some prescriptive rules are absolute only because of the practical difficulty of applying compensatory rules. If respondents understand this, then they may be willing to express values as compensatory in hypothetical situations even when they would not be willing to advocate tradeoffs in real situations.
The remaining problem of how societies should respond to protected values concerning means is a serious one. For example, we might think that people who place extreme value on not participating in some activity should be excused from paying taxes to fund that activity. However, such a policy would provide an incentive for people to say that they had such values even if they did not. It may be that societies simply cannot take all such values into account.

4.3  References

Agar, N. (1995). Designing babies: morally permissible ways to modify the human genome. Bioethics, 9, 1-15.
Anderson, E. (1993). Value in ethics and economics. Cambridge, MA: Harvard University Press.
Andre, J. (1992). Blocked exchanges: a taxonomy. Ethics, 103, 29-47.
Andreoni, J. (1990). Impure altruism and donations to public goods: A theory of warm-glow giving. Economic Journal, 100, 464-477.
Baron, J. (1988). Utility, exchange, and commensurability. Journal of Thought, 23, 111-131.
Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63, 320-330.
Baron, J. (1993). Morality and rational choice. Dordrecht: Kluwer.
Baron, J. (1994). Nonconsequentialist decisions (with commentary and reply). Behavioral and Brain Sciences, 17, 1-42.
Baron, J. (1996). Do no harm. In D. M. Messick & A. E. Tenbrunsel (Eds.), Codes of conduct: Behavioral research into business ethics, pp. 197-213. New York: Russell Sage Foundation.
Baron, J. (1997). Biases in the quantitative measurement of values for public decisions. Psychological Bulletin.
Baron, J., & Greene, J. (1996). Determinants of insensitivity to quantity in valuation of public goods: contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2, 107-125.
Baron, J. & Ritov, I. (1994). Reference points and omission bias. Organizational Behavior and Human Decision Processes, 59, 475-498.
Diamond, P. A., Hausman, J. A., Leonard, G. K., & Denning, M. A. (1993). Does contingent valuation measure preferences? Some experimental evidence. In J. A. Hausman (Ed.), Contingent valuation: A critical assessment. Amsterdam: North Holland Press.
Fiske, A. P., & Tetlock, P. E. (1997). Taboo tradeoffs: Reactions to transactions that transgress spheres of justice. Political Psychology.
Hare, R. M. (1981). Moral thinking: Its levels, method and point. Oxford: Oxford University Press (Clarendon Press).
Irwin, J. R. (1994). Buying/selling price preference reversals: Preference for environmental changes in buying versus selling modes. Organizational Behavior and Human Decision Processes, 60, 431-457.
Jervis, R. (1976). Perception and misperception in international politics. Princeton, NJ: Princeton University Press.
Jones-Lee, M. W., Loomes, G., & Philips, P. R. (1995). Valuing the prevention of non-fatal road injuries: Contingent valuation vs. standard gambles. Oxford Economic Papers, 47, 676 ff.
Kahneman, D. & Knetsch, J. L. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22, 57-70.
Keeney, R. L. (1992). Value-focused thinking: A path to creative decisionmaking. Cambridge, MA: Harvard University Press.
Krantz, D. H., Luce, R. D., Suppes, P., & Tversky, A. (1971). Foundations of measurement (Vol. 1). New York: Academic Press.
Margolis, H. (1982). Selfishness, altruism, and rationality: A theory of social choice. New York: Cambridge University Press.
Margolis, H. (1987). Patterns, thinking, and cognition: A theory of judgment. Chicago: University of Chicago Press.
Mason, J. O. (1990). Should the fetal tissue research ban be lifted? Journal of NIH Research, 2, 17-18.
McFadden, D. (1994). Contingent valuation and social choice. American Journal of Agricultural Economics, 76, 689-708.
Mitchell, R. C., & Carson, R. T. (1989). Using surveys to value public goods: The contingent valuation method. Washington: Resources for the Future.
Montgomery, H. (1984). Decision rules and the search for dominance structure: Towards a process model of decision making. In P. C. Humphreys, O. Svenson, & A. Vari (Eds.), Analysing and aiding decision processes. Amsterdam: North Holland.
Nagel, T. (1986). The view from nowhere. New York: Oxford University Press.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York: Cambridge University Press.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263-277.
Ritov, I., & Baron, J. (1992). Status-quo and omission bias. Journal of Risk and Uncertainty, 5, 49-61.
Schwartz, B. (1986). The battle for human nature: Science, morality, and modern life. New York: Norton.
Spranca, M. (1992). The effect of naturalness on desirability and preference in the domain of foods. Unpublished Masters Thesis, Department of Psychology, University of California, Berkeley.
Spranca, M., Minsk, E., & Baron, J. (1991). Omission and commission in judgment and choice. Journal of Experimental Social Psychology, 27, 76-105.
Tetlock, P. E., Lerner, J. & Peterson, R. (1996). Revising the value pluralism model: Incorporating social content and context postulates. In C. Seligman, J. Olson, & M. Zanna (Eds.), The psychology of values: The Ontario symposium, Volume 8. Hillsdale, NJ: Erlbaum.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge University Press.
Williams, B. (1981). Moral luck: Philosophical Papers 1973-1980. Cambridge: Cambridge University Press.

Footnotes:

1This research was supported by N.S.F. grant SBR92-23015. We thank Barbara Gault, Howard Margolis, and Carol Nickerson for comments. Send correspondence to Jonathan Baron, Department of Psychology, University of Pennsylvania, 3815 Walnut St., Philadelphia, PA 19104-6196, or (e-mail) baron@psych.upenn.edu.
2Technically, this amounts to a form of Archimedean axiom (Krantz, Luce, Suppes, & Tversky, 1971).
3This is analogous to, and perhaps a cause of, Andreoni's (1990) "warm glow" (also Margolis's (1982) theory of altruism).
4Three subjects were more likely to say "yes" to B and C when they said "yes" to A than when they said "no." We reversed the answer to A for these subjects. Results were essentially the same with many other analyses that did not depend on this reversal. We also did this for three subjects in Experiment 2 and two in Experiment 3. In addition, in Experiments 1 and 2, we omitted individual items when B and C disagreed.
5In as yet unpublished work, Ritov and Baron have found support for this hypothesis.

