Baron, J. (1996). Norm-endorsement utilitarianism and the
nature of utility. Economics and Philosophy, 12, 165-182.
Norm-endorsement utilitarianism and the nature of utility
Jonathan Baron1
University of Pennsylvania
In this article, I shall suggest an approach to the justification
of normative moral principles which leads, I think, to
utilitarianism. The approach is based on asking what moral norms
we would each endorse if we had no prior moral commitments. I
argue that we would endorse norms that lead to the satisfaction
of all our nonmoral values or goals. The same approach leads to
a view of utility as consisting of those goals that we would want
satisfied. In the second half of the article, I examine the
implication of this view for several issues about the nature of
utility, such as the use of past and future goals. The argument
for utilitarianism is not completed here. The rest of it
requires a defense of expected-utility theory, of interpersonal
comparison, and of equal consideration (see Baron, 1993; Broome,
1991).
I reject moral intuition as justification for moral claims.
Moral intuition is too easily understood as arising from the
application of heuristics that are usually good but often
misapplied (Baron, 1994a). For example, the distinction between
acts and omissions is morally relevant in a variety of
situations, but some people apply it in ways that others see as
erroneous: for example, some people are reluctant to vaccinate
children against diseases when the vaccine can cause harm, even
though it reduces substantially the overall probability of harm
(Ritov & Baron, 1990; Baron, 1994a; Asch et al., 1994).
Philosophers are not necessarily immune from errors based on
misapplication of principles. In general, such empirical results
give us another interpretation of our intuitions, against the
view that intuitions arise because of some connection with truth.
Instead of relying on intuitions, I try here to justify normative
moral principles in terms of their purpose, which is ultimately
to regulate each other's conduct in accord with our goals. Such
principles are expressed through our endorsement of norms. By
asking what norms we would endorse to achieve this purpose, we
can evaluate alternative normative proposals. Such a method is a
kind of analysis. It carves off a domain of human activity and
labels various parts of it (e.g., thinking, decision making,
goals, options, outcomes). The argument is therefore of the
form: if we think about things this way, and if we have this set
of basic goals, then it follows that the following principle will
help us achieve these goals.
The specific form of utilitarianism that I shall defend is one
that concerns itself with maximizing the achievement of goals or
the satisfaction of values as determined by the individuals who
hold those values. The primary purpose of this kind of
utilitarian theory is to guide decision making rather than to
justify praise or blame (except that acts of praising and blaming
are also decisions with consequences). Moreover, I distinguish
between normative principles - which are concerned with ultimate
justification - and prescriptive principles - which are
practical guides to action (Baron, 1994b). This article is about
normative issues.
By goal or value I mean a standard for evaluation of outcomes, as
assumed in decision analysis (e.g., Keeney, 1992). In theory,
and perhaps in practice, goals are best measured by asking people
to make judgments of states of affairs, not by observing choices.
Goals are by definition the standards that we have for evaluating
how things turn out. Thus, conflict between choices and goals is
prima facie evidence that the choices are in error. (The term
"preference" can refer to choices or values.) The idea of
utility as goal achievement allows us to count goals that some
theories do not count, such as past goals. But it does not
allow us to count such goals as interpersonal equity (as done by
Broome, 1991, and Hammond, 1988) unless these can be reduced to
the goals of individuals.
I make no important distinction between a goal and a set of
goals. Almost any criterion can be subdivided. For example, the
criterion "spells correctly" can be subdivided into different
kinds of spelling patterns or even different words. Likewise,
goals can be combined. So, if a state can be evaluated by each
of several goals, it can also be evaluated by all the goals in
combination. We may therefore speak of the achievement of a set
of goals taken together. This assumption is necessary if the
idea of goal achievement is to yield decisions.
Norm endorsement
Morality is often taken to concern what we should do, in some
sense of "should." But I think we can make progress by
narrowing the topic somewhat. Instead of looking at what we
should do, let us look at the social function of morality only.
Specifically, morality involves telling each other what
we should do, or otherwise trying to influence each other's
decision making. Moreover, the telling goes beyond specific
cases. When we try to influence people morally, we try to
influence some standard they hold for how they should behave in a
class of cases (possibly a small class, but not restricted to a
single instance). We thus try to influence the principles they
follow, not just their behavior in a specific situation.
I shall call this telling function "norm endorsement," using
Gibbard's (1990) term, without necessarily accepting everything
he says about it. I emphasize the social function of endorsement
- which Gibbard referred to as "normative discussion" - but I
take no position about Gibbard's claims concerning the centrality
of moral emotions such as guilt and anger. What matters here is
whether behavior can be influenced by advice or by the
promulgation of standards. Morality as I discuss it is useless
if it cannot be used as a way of giving advice or of inculcating
standards that will be followed to some extent. It does not
matter here how this influence works, or whether the rationality
of advice giving can be reduced to questions about what emotions
it makes sense to feel.
Norms, in this view, are standards of choice or behavior that
people try to induce each other to follow. When we give advice,
exhort others to behave in certain ways, or advocate certain
principles, we endorse certain norms. These norms can either be
prescriptive guides or normative standards. Asking what norms we
have most reason to endorse (for either purpose) is a way to
think about what normative or prescriptive principles are best.
We can express morality, as a way of influencing others, in
several ways: teaching principles directly by explaining them or
by expressing moral emotions; setting examples of how to behave;
gossiping (and thereby expressing principles by presupposing them
when we criticize others - see Sabini & Silver, 1982);
rewarding behavior that follows the principles and punishing
behavior that violates them; supporting or violating
institutional (family, governmental) laws or rules (depending on
what we think of their morality); supporting or opposing such
laws or rules, making or repealing them; or justifying them in
hindsight by referring to norms, which are then supported by the
appeal. The concept of morality can be seen as arising as an
abstraction from all of these situations (as suggested by Singer,
1982). We can think of the purpose of moral standards as advice
giving, with a view to affecting decisions, although the advice
need not be verbal.
Suppose that there is such a thing as a "morally best set of
norms" that is not what we have the most reason to endorse for
each other to follow. People's reasons for endorsing norms lead
them to endorse some other set, not this set. There are then no
reasons to which we can appeal in order to persuade each other to
endorse this "best" set. It is out of the question, unless we
are irrational. If "ought" implies "rationally can," then
this set fails.
Moral exhortation does not include all attempts to impart
standards of conduct to others. "Please don't wear that perfume
around me," for example, is an expression of personal
preference, not an endorsement of a moral norm. Moral principles
are expressed by groups and are taken to be the same for all
members of the group. This property makes them "universal" in
Hare's (1963, 1981) sense. They are impartial with respect to
individuals in the group. (I leave aside the question of why we
should consider the largest possible group.) Of course, an act
of moral exhortation could serve other functions, such as
advancing the self-interest of the speaker. Exhortation, then,
is not so much a type of act as a purpose that acts could serve
among other purposes.
So the proposal is to try to answer the question of what
standards we should endorse for others and ourselves to follow,
if any. A simple answer is that we should endorse the moral
principles we hold. If we are committed to these principles,
that is surely what we would do. This answer is not acceptable
here, because we are trying to use the function of morality to
answer the question of what principles we should endorse. To do
this, we must put aside our previous moral commitments. More
generally, we must put aside any principles that tell us what to
endorse, whether we take these principles to be moral or not.
Such principles might include aesthetic or personal ideals, such
as piety, fashionable style, or personal development, that would
not count as moral in a narrow sense but serve the same social
function as moral expressions. We must put these aside because
the question is what principles we should endorse. If we do not
put these principles aside, then they will affect the conclusion.
We are trying to come up with a set of principles from scratch.
Having put aside prior principles for endorsement, we can then
ask whether we have any reasons to make any endorsements, what
those reasons are, and what endorsements they lead us to make.
Moreover, when you think about this, you must put aside not only
your own principles but also everyone else's. Otherwise,
circularity will still result, but with your principles out of
the loop.
What is left? What premises can I reason from? What reasons do
I have to endorse anything for others? First, I have
self-interested values for others' behavior, things I like other
people to do, such as being nice to me, or perhaps realizing
themselves in certain ways (being pious or stylish or
interesting, depending on who "I" am).
Second, each of us has altruistic values, values that depend on
the personal values of others. If you like Berlioz, I can
satisfy an altruistic value by giving you a Berlioz recording
even if I hate Berlioz myself. Altruistic values need not
concern endorsement, so, insofar as they do not, we may include
them, even though they are moral in the sense of being morally
good goals to have (goals we would endorse). That is, it is
possible to have altruistic values without also endorsing these
values as principles. I shall argue that such altruistic values
are more important than selfish ones as reasons for endorsing
norms.
Social morality can thus be viewed as a set of standards that we
try to get each other to follow in making decisions. In asking
what are the best moral principles, we might well put ourselves
in the position of a group that had no principles for advice
giving about decision making and then ask what principles would
best serve all the remaining values of the members of the group.
These remaining values, both altruistic and self-interested,
would serve as the reasons for adopting moral principles. In the
end, all of these values come from self-interested values,
because the specific altruistic values of one person depend on
the self-interested values of another. The most general moral
standards are those that apply to the largest group, including
people as yet unborn. Such standards do not need to be changed
as a function of time and place. If we want this sort of general
system, we should not base our morality on values that are
peculiar to a particular group of people.
This exercise leads to a form of consequentialism. If the
reasons we have for endorsing norms come from self-interested and
altruistic values, then the norms we would endorse must be those
that satisfy those values. For example, we would endorse a norm
that says to satisfy everyone's values as best we can. Even if
such a norm did not tell us how to weigh each person's values, it
would not be empty. It would, for example, tell us not to
endorse norms in favor of nondeterrent punishment, since this
goes against the values of some but in favor of the values of
none. (We have put aside values arising from prior moral
beliefs, such as the belief in retribution.)
This is a skeptical view of moral principles. It challenges
others to show why anything other than nonmoral goals is relevant
to the adoption of these principles. Someone who values freedom,
even when it goes strongly against other goals, would have to
explain why we should endorse freedom if we did not already value
it as something to endorse. Once we figure out the principles
that best satisfy our values, then the admission of other
principles will either be empty or will subvert the achievement
of our goals (Kupperman, 1983).
The following exercise in imagination, while not required for the
points I have made, may make them more vivid. Imagine we live in
a society without any morality. Obviously, it is relatively new,
or else it would have done itself in. Praise, blame, punishment,
and gossip are unknown. Then someone gets the idea of having a
moral system. The idea is that certain things will be counted as
right and others as wrong. We will teach our children to do the
right things and abjure the wrong things. We will say nice
things about people who do the right things and nasty things
about people who do the wrong things. We will try to stop people
from doing the wrong things by "punishing" them publicly, so
that others see what will happen to them if they do these things.
And so on. We agree to do this. Now all we need to do is to
draw up the lists for what we count as right and wrong. So each
of us has the chance to say that X is right or Y is wrong. But
in this imaginary world, we have no prior views. If I say "X is
wrong" because I have always thought it was wrong without
knowing why, you have very little reason to pay attention to me
unless you already agree. I have given you no reasons. I have
just asserted my view. But in the imaginary world this cannot
happen because we have no prior views. We have just gotten the
idea of having a morality. Thus, I must give you a reason, and I
must even have one myself, or else there is no reason for me to
say anything. One reason that we all have is that we care about
each other. We want good outcomes for others, and we dislike bad
outcomes. If I say that "infliction of suffering is wrong" out
of this kind of motivation, you have reason to agree with me. A
moral rule against doing harm is likely to decrease your
suffering and everyone else's, and suffering is something you too
would like to reduce. We can thus agree on moral rules because
we already have certain desires for each other's good. The rules
that we endorse and come to accept in this way will be those
that, in general, bring about good consequences and prevent bad
consequences. And our judgment of consequences does not require
any preexisting morality.
The idea of asking what we should endorse, rather than what we
should do, may seem strange. But it does capture the social
aspect of our moral life, the idea that morality is public.
The motivation for making and heeding norm endorsement
People have both self-interested and altruistic reasons to
endorse norms. If you convince others to do good for others
generally, for example, then they will do good for you along with
everyone else. However, your own altruism increases the value of
norm endorsement. Suppose that you weigh the good of each other
person at some proportion, A, of your own good. And suppose
that, by endorsing to an audience of one person the norm of doing
good, you can make that person do an extra amount of good G for
each of N other people, as well as for you. Then your selfish
benefit from the endorsement is G, but, if we add in the
altruistic benefit, then the total benefit is G + AGN.
Clearly, if A is large relative to 1/N, then the second term
will make norm endorsement much more valuable. Even if A is
small (but positive), the second term contributes to your
motivation to endorse norms.
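To make the arithmetic concrete, here is a minimal sketch of the
benefit G + AGN. The particular values of A, G, and N are
hypothetical illustrations of mine, not estimates:

    # Sketch of the endorsement benefit G + AGN (hypothetical values).
    A = 0.01    # weight on each other person's good, relative to one's own
    G = 1.0     # extra good the heeded norm produces per affected person
    N = 1000    # number of other people affected

    selfish_benefit = G              # the good done for you alone
    total_benefit = G + A * G * N    # selfish plus altruistic benefit

    print(selfish_benefit)   # 1.0
    print(total_benefit)     # 11.0

Here A (0.01) is large relative to 1/N (0.001), so the altruistic
term dominates; even a much smaller positive A would still add
something to the motivation.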
Rational self-interest cannot motivate people to follow purely
altruistic (self-sacrificing) norms that others endorse. People
can be "tricked" into thinking that following a norm is in
their self-interest, and both biological and cultural evolution
may well have prepared us to be tricked in this way. In part (as
Gibbard, 1990, suggests), we learn to follow the endorsements of
others because they often involve coordination rather than
self-sacrifice. It is in everyone's self-interest to heed an
endorsement for the sake of coordination, i.e., when it is in
each person's interest to do what others do. If someone says,
"In this country, we drive on the left," then it is in
everyone's interest to follow that (in the absence of contrary
information).
Rational altruistic people, however, can be motivated to follow
norms that others rationally endorse out of altruism. This can
happen in three ways. First, the endorsement may remind people
to adhere to their own previously formed altruistic goals.
Altruistic goals often compete with desires that are seen as
temptations, distractions from following one's better judgment.
Second, endorsements of more specific goals can provide
information about the connection between pursuit of these goals
as means and deeper altruistic values already present, e.g., "If
you care about helping poor people, support candidate X." Thus
- assuming adequate explanation - a new goal of supporting X is
created as a subgoal or means of achieving the deeper altruistic
goal of helping the poor.
Third, rational altruism can motivate heeding endorsements when a
critical mass of cooperators is needed to provide some benefit to
all, in the way in which public officials can prop up the value
of a currency by saying that it is undervalued, which leads
investors to buy it thinking that other investors will buy it
too. If the benefit of cooperation were not provided unless some
minimal number cooperated, then endorsement would encourage
people to cooperate by making them believe that others would
cooperate.
Of course, endorsements made out of altruism can also work for
irrational reasons. People may have a biological propensity to
be influenced by each other, that is, to be "docile" (Simon,
1990). Sustainable cultures may encourage this propensity in a
variety of ways.
In sum, rational self-interest is limited in its ability to
maintain a system based on norm endorsement. People can endorse
norms out of self-interest, but they cannot expect others to heed
these endorsements out of rational self-interest alone. The
rational core of a system based on norm endorsement must be
driven by altruism. This is why self-interested endorsements
must be disguised as altruistic: "I'm telling you this for your
own good," or "I'm telling you this because I know you care
about Aunt Matilda." Irrational forces may help to maintain the
system, but they are ultimately unstable without the rational
core. People who see moral endorsement as mere expression of the
interests of those who do it - like some of the radicals of the
1960s - can come to distrust morality altogether.
Conflicting principles and self-interest
Reliance on altruism helps to solve another problem, the conflict
of principles. If people chose which principles to endorse on
the basis of self-interest, they would endorse conflicting
principles. This might not be a decisive reason to reject the
present approach. We already have a world in which people
endorse conflicting principles. If everyone accepted the present
approach, the number of such principles would decrease, since many
principles would be ruled out. Still, more reliance on altruism
may allow us to do better.
Consider, for example, three sorts of rules that have been
proposed for the distribution of goods: a modification of Rawls's
(1971) difference principle applied to individual allocations
(rather than to the basic structure of society, as Rawls
intended); utilitarianism; and economic efficiency. By the
modified difference principle, goods are distributed to the least
advantaged group of people. By utilitarianism, goods are
distributed so as to maximize total utility. By economic
efficiency, goods are distributed so as to maximize wealth,
since, by maximizing wealth, the winners in any efficient
reallocation could compensate the losers and still be better off
than before. If you are a member of the least favored class, it
is to your advantage to try to convince others to follow the
difference principle. If you are rich, it is probably to your
advantage to convince others to favor efficiency. This is
because the efficiency principle will sometimes lead to increases
in your wealth at the expense of those less fortunate. The
difference principle will clearly rule out such changes.
Utilitarianism will also rule out some of them because the
decrease in wealth that the poor must suffer so that you can gain
will often mean more to them than your increase in wealth means
to you. This is because the marginal utility of wealth typically
declines.
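The contrast among the three rules can be made vivid with a toy
reallocation. This sketch is my own illustration, with
hypothetical numbers and a logarithmic utility function standing
in for declining marginal utility:

    import math

    # A proposed reallocation takes 5 units of wealth from a poor
    # person and gives 8 to a rich one, increasing total wealth by 3.
    before = {'poor': 10.0, 'rich': 100.0}
    after  = {'poor':  5.0, 'rich': 108.0}

    def total_utility(wealth):
        # log utility: a stand-in for declining marginal utility
        return sum(math.log(w) for w in wealth.values())

    # Efficiency (maximize total wealth): favors the change.
    print(sum(after.values()) > sum(before.values()))    # True

    # Modified difference principle (attend to the worst off):
    # rejects it.
    print(min(after.values()) > min(before.values()))    # False

    # Utilitarianism (maximize total utility): also rejects it,
    # because the poor person's loss outweighs the rich one's gain.
    print(total_utility(after) > total_utility(before))  # False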
If you are completely uncertain about who you are, it is arguably
to your advantage to endorse utilitarianism. And I should heed
you if I don't know who I am. But this is not a good argument
for utilitarianism, for we are not completely uncertain about who
we are. You cannot appeal to my self-interest (broadly
construed) to accept a rule when the self-interest that you
appeal to is counterfactual (my not knowing my position in
society) and when my true self-interest tells me that a different
rule would be better. This is just an example of a more general
problem with the appeal to self-interest as a way of persuading
people to accept moral rules. Such appeals can lead to even more
blatant forms of parochialism, such as racism or nationalism.
Altruism can remove the conflict between alternative norms that
arises from self-interest. If I endorse the same rule that I
would choose if I did not know my identity, then I cannot be
suspected of this kind of selfishness, and such suspicion will
not be an impediment to agreement. Note that the rule I would
choose if I did not know my identity is also the rule that I
would choose if I were perfectly altruistic (including myself
only as one person within the scope of altruism), or if I were
unaffected by the decision to be made. Here my reasons for norm
endorsement come from altruism, defined as my goal for the
achievement of the goals of others. If others know that my
reasons for endorsing norms are altruistic, then they have
another reason to accept my endorsement: their own altruism would
lead to the same endorsement (if we agree on the facts, at any
rate).
One altruistic decision rule for several people is the
utilitarian rule that counts all of their goals equally. I will
not here try to defend this rule, as distinct from some rule that
favors some at the expense of others in an arbitrary way (not
based on self-interest). However, the appeal to altruism as the
basis of norm endorsement does remove an impediment to acceptance
of the utilitarian rule, namely, the fact (argued above) that
self-interest can lead to endorsement of conflicting principles
such as efficiency and the difference principle.
Which goals or values should we count?
The norm-endorsement view presented here is not just a way of
getting to the same old conclusions that other utilitarian
writers have reached. It provides answers to several questions
about which goals or values should be counted in a
consequentialist analysis. Sometimes these answers are different
from those of other utilitarian approaches. The question of what
should count is answered by asking whether our altruism (and
self-interest) give us reason to endorse counting some particular
kind of goal. Our altruism depends on someone's self-interest,
so, ultimately, the question is whether endorsement of counting a
particular goal serves people's self-interest.
In the rest of the paper, I provide some brief examples of how
the norm-endorsement view can answer questions about what goals
or values should count in a utilitarian calculus. These are the
main questions where the norm-endorsement view differs from other
utilitarian views.
Erroneous subgoals
Some goals owe their existence to other goals. For example,
Helen has a strong liking for avocados because she believes that
they are healthy and non-caloric, although she does not much like
their taste. Her desire for avocados is therefore dependent on
her desire to be healthy and slim. Such erroneous subgoals are
dependent on false belief. True subgoals, and, more generally,
true goals, are not. (Keeney, 1992, distinguishes between
fundamental values, which are independent of beliefs, and proxy
values, or subgoals, which are related to fundamental values
through beliefs.)
Suppose you know that avocados are full of calories and saturated
fat. Should you give her avocados as a present? (You do not
know her well enough to point out her error.) To answer this
question, let us apply the norm-endorsement approach. We all have
an interest in supporting a general principle of helping each
other to achieve our goals. This certainly applies to
fundamental goals, those that are unaffected by changed beliefs
about the extent to which their achievement promotes the
achievement of other goals. But does this interest extend to
subgoals based on error?
At first, we might think so. Putting ourselves in Helen's
position, we might think of the pleasure we would experience at
getting a nicely wrapped box of avocados. And surely the
pleasure would provide a reason to give Helen the avocados. But
suppose that our alternative gift idea would provide just as much
pleasure while being neutral toward other goals (e.g., a bouquet
of flowers). In both cases, of course, Helen has a goal of
getting pleasure itself, and this goal is equally satisfied, let
us assume. Now we can face the issue of whether the avocados are
to be preferred because Helen thinks they help her achieve her
other goals.
In this case, it is clear that we should endorse a rule that favors
taking account of fundamental goals, not erroneous subgoals. If
a subgoal is based on a false belief, then we would want someone
to honor our true goals. It is these true goals that give us the
ultimate reasons for what we endorse for others to do. The
erroneous goals do not follow, so the chain of reasons is broken.
By this rule, you should give her the flowers. (Perhaps
criticizing the epistemic bases of our subgoals is what Brandt,
1979, means by "cognitive psychotherapy.")
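As a minimal sketch of the rule just stated (with hypothetical
numbers of my own), one can score each gift by Helen's
fundamental goals, judged on the true facts, while simply not
counting the erroneous subgoal:

    # Helen's fundamental goals and how each option truly serves
    # them. Pleasure is equal by assumption; health reflects the
    # facts about avocados, not Helen's belief. The erroneous
    # subgoal ("avocados are healthy") is ignored.
    fundamental = {
        'pleasure': {'avocados': 1.0, 'flowers': 1.0},
        'health':   {'avocados': -0.5, 'flowers': 0.0},
    }

    def score(option):
        return sum(effects[option] for effects in fundamental.values())

    print(max(['avocados', 'flowers'], key=score))   # flowers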
Note that this criterion can be applied to the absence of goals
as well as their presence. If a goal would be present except for
an erroneous belief, then we should assume that it is present.
Erroneous beliefs can include lack of knowledge of the existence
of things. Thus, by this view, we can help people by giving them
things that they "did not know they wanted." This possibility
helps to avoid a common criticism of other forms of
utilitarianism, namely, that it is unfair to those who are
ignorant or whose imagination is limited (Elster, 1983).
The point here is that we would want each other to act on the
basis of the truth. In practice, this requirement leads to all
sorts of complexities, for people are rarely sure of their
beliefs. Helen may be just as sure of her belief about avocados
as you are of yours. In this case, she may want you to act on
her belief. More generally, the principle of ignoring
erroneous subgoals can lead to excessive paternalism. We
sometimes think we are ignoring such goals, when in fact we are
the ones in error. But this is a practical, prescriptive issue,
not a normative one. We must put this issue aside when asking
whether we have any reason to neglect erroneous subgoals.
I do not mean to minimize the ambiguities created in practice by
the question of when a subgoal is erroneous. Those who attempt
to impose ideals on others - for example, religious ideals -
often argue that the goals that they frustrate are not true goals
at all. On the other hand, sometimes people really do not
know what helps them achieve their true goals, and we do well to
put aside their opinions in making decisions on their behalf. To
complicate matters further, people may have a goal of pursuing
their own goals (true or false), that is, a goal of being
autonomous. This goal, or its strength, may itself be erroneous,
but it, too, must be respected to the extent to which it is real.
I doubt that any single, simple principle can help us to make the
truly difficult decisions that arise from this conflict.
Goals and preferences
I defined utility in terms of the achievement of goals rather
than the satisfaction of preferences, although the latter
formulation is a common one. In theory, we can measure goal
achievement by asking people directly to estimate the extent to
which a well-described outcome achieves their goals - separately
or together. Although "preference satisfaction" can mean many
things (including "goal achievement"), in its strictest sense
it refers to a behavioral choice, a "revealed preference." If
I choose an apple over a banana (or say that I would), then, in
this sense, I prefer it.
Preferences in this sense often conflict with goals or values.
People often make choices against their values because they lack
crucial information (as I just discussed). Also, a great deal of
psychological research (reviewed by Baron, 1994b) shows that
people make choices that are inconsistent if we assume that they
are trying to achieve a fixed set of goals. When preferences and
values conflict, which should we count?
The norm-endorsement approach implies that we should honor
values, not preferences. Our values are by definition the
standards by which we evaluate all outcomes. We use these
standards to criticize, among other things, our own preferences.
Future goals
Goals may refer to specific times, although they need not. A
perfectly reasonable goal is "to get to the station before the
train leaves." Another is "to have an increasing standard of
living over my life." (The "principle of temporal good,"
discussed by Broome, 1991, ch. 11, is therefore not assumed
here.) Goals can also change over time in their strength or
existence. The time at which a goal is satisfied need not be the
same time at which it is held.
Two questions arise about future goals. One is whether we should
count them fully when we calculate total goal achievement. The
other is whether (or when) we should try to bring future goals
into existence. This section concerns only the first question.
Concern with the achievement of future goals is a by-product of
the impartiality of moral principles. Future people are people,
so we should care about their goals, even if these goals are not
yet present. The same goes for the future goals of a person who
now exists. Each of us would want others to honor our goals,
even if those others made their decisions before the goals
existed (before they affected judgments), assuming, of course,
that the others could anticipate the existence of these goals.
Beyond this, people have present goals that their future goals
be achieved, so future goals are relevant for achieving present
goals as well. We would thus endorse norms for achievement of
future goals (other things being equal).
This principle, together with the principles concerning erroneous
subgoals, also obviates - in some cases - the need for an
additional principle specifying that we should help others to
achieve only their rational goals (e.g., Parfit, 1984, in the
"Critical Present Aim Theory"). The goals that we consider
"rational" for this purpose are often those that are not
erroneous subgoals and do not conflict too much with future
goals.
Goals for the unknown
Is it true that "what a person doesn't know can't hurt her"?
If it is, we are justified in ignoring the wishes of the dead
(except for the precedents they set for honoring other
wishes) and in deceiving people to make them think that their
goals have been achieved. The view that utility is a property of
experience leads either to this attitude or to rejection of
utilitarianism.
To determine whether goals for events not experienced are
relevant by the norm-endorsement criterion, we ask whether we
would endorse a rule or principle that these goals should be
honored. If they are truly our goals, it is clear that we would.
People pay money to lawyers to ensure that their wishes will be
carried out after they are dead. They pay detectives to find out
if their spouses are cheating even when a positive answer can
only make them unhappy. So they seem to have goals for
events they do not experience. People who have such goals have
reason to endorse norms that encourage others to honor these
goals. People can, then, be hurt by their spouse's disloyalty,
even if they never find out. Sleeping with someone else's spouse
impairs the achievement of their goals in ways that they would
not want, and they would want a norm that protects them from
this.
Past goals
A related issue is the relevance of past goals for decisions made
now that can affect their achievement. If someone had a goal
concerning outcomes at some future time (e.g., their later life,
or after their death), should we take this goal into account when
the time comes, even though the goal itself is no longer present?
If we ignore past goals, we need to draw a distinction between
goals that people will have in the future - which I have argued
should be considered - and past goals. Is it arbitrary to attend
to future goals and ignore past ones?
In thinking about this issue, we must put aside many of the
reasons that often cause us to honor past goals. We do honor
people's wishes after they are dead, for example, but the reasons
for this are often not intrinsic to the wishes themselves:
Failure to execute someone's last will would undermine the
incentive function of will making, which motivates people to work
for their heirs as well as themselves. These issues concern
future goals, the goals of those who will be helped by this work,
for example.
Parfit (1984, ch. 8) points out that past and future goals often
differ in relevant ways. Consider a future goal and a past goal,
neither of which is now present - for example, a goal of a person
who died and the goal of a person not yet born. Our decisions
can affect the satisfaction of the future goal at the time
the goal is present but they cannot affect the satisfaction of
the past goal at the time it is present. If we limited our
concerns to the satisfaction of goals when they were present, we
would be able to distinguish past and future goals. We cannot
limit ourselves in this way, however, because some goals really
do concern some future time after the goals themselves are
absent, and, most importantly, we might well want others to honor
our goals in the future after our goals are psychologically
absent. Because we sometimes want others to honor these goals,
we would, other things being equal, endorse their inclusion.
Parfit (1984, p. 151) distinguishes between desires concerning the
future that are "conditional on their own persistence" and
those that are not. The former apply only so long as they are
present, and the latter apply even after they cease to exist.
Unconditional goals give us reason to bind the future behavior of
ourselves or others. If I desire now to go swimming tomorrow,
and if tomorrow comes and I no longer have the desire, I have no
reason to honor my former desire, because it was, from its
inception, conditional on its own persistence. I could have
stated initially, "I want to go swimming tomorrow, unless I
change my mind." Other desires, such as those concerning my
child's welfare after I die, are not conditional. I can speak
meaningfully of the achievement of this goal being affected by
events that occur after my death, and it is a goal that
influences my present decisions: e.g., writing a will.
The difference between conditional and unconditional goals is
this: For conditional goals, we prefer (or judge to be superior)
options that would allow us to subvert the goals in the future,
other things being equal. We have a goal of future freedom to
change our mind. For unconditional goals, we have no general
reason to judge options that preserve freedom as superior to
options that do not. Our efforts to achieve these goals are not
reduced if we know that we cannot undo the effects of our
choices. We might even take steps to bind ourselves, to restrict
our future freedom. Goals that we have for after our death are
unconditional because we have them despite the fact that we
cannot take them back.
We have reason to endorse the inclusion of both types of goals,
just because they are both goals that people have. If we insist
on honoring a past conditional goal that is no longer present,
then we are subverting the holder's goal of being able to take
back the goal. We would want others to honor our unconditional
goals, however, whether they involved binding ourselves ("Don't
give me more than three drinks even if I ask for a fourth") or
desires for after our death (wills, etc.). We would want such a
system even if no precedents were set concerning promise-keeping
in general. It seems, then, that we should ignore past goals
unless they are unconditional (on their own persistence). Past
unconditional goals are those that each of us would want a
fiduciary to honor. However, if the goal is conditional on its
persistence and if it no longer exists, then it cannot be
achieved (any better than it has been achieved already). It does
not count, then, because it is outside our power to affect.
Parfit (1984, p. 157, also p. 151) is skeptical about this
solution. He gives (among others) the example of his having
wanted to be a poet when he was young. He thinks that his past
desire gives him no reason to write poems now, even though the
original desire was not conditional on its own persistence. He
would exclude such goals. But how does he know that his desire
to be a poet was unconditional (on its own persistence)? He
might try to produce evidence of his saying "I want to be a poet
even if it is not what I later want." I would argue that this
is not enough unless we really believe that he would have been
willing to limit his future freedom. It is doubtful that he
would have done so. (If he would have done so, then he might be
pursuing an erroneous subgoal).
Most of our desires (including Parfit's examples) are, I think,
conditional. The major exception consists of those cases in
which people anticipate a change and arrange matters so that
their earlier desire will be fulfilled: Ulysses having himself
bound to the mast; or women who ask to give birth without
anesthesia even if they should change their mind and ask for it
during a normal birth. All statements of desire, it seems, have
an implicit escape clause, "unless I change my mind," and we do
not discourage people from changing their mind for their own good
unless we are sure that this clause was crossed out. In general,
then, desires are unconditional only when we would take steps to
see that they are brought to bear on future decisions.
The rationality of goals
What makes goals rational? To answer this question, we can think
of goals as consequences of decisions. Decisions affect goals in
a variety of ways:
1. Most decisions create subgoals. If I decide to get a drink of
water, I may have a subgoal of finding a drinking fountain.
2. Some decisions can be expected to bring certain goals into
existence, strengthen them, or weaken them. If you decide to go
to law school, you can expect to acquire at least some of the
goals that lawyers typically have. If you undergo a religious
conversion, in either direction, you can expect your goals to
change drastically. Often we make decisions of this sort because
we want our goals to change. Intentional goal change is also a
means of self-control, as when someone cultivates a dislike of
cigarettes.
3. When we bring sentient life into existence, or terminate life,
we create or destroy goals. When we create life, we know only
probabilistically what goals will result, but such uncertainty is
not unique to this kind of decision.
The problem of the rationality and morality of goal change is
therefore related to a variety of other problems, from the choice
of one's life commitments to the question of population growth
and abortion. What principles should we endorse for creation and
destruction of goals?
A normative theory of decisions about goals (including
strengthening, weakening, adding, or deleting) can be based on
the same criterion as that applied to other decisions, the
maximization of utility, i.e., the achievement of (other) goals.
Some of our goals concern goals themselves. We want goals that
others approve of, goals that will bring good feelings when we
try to achieve them, or goals that we are likely to achieve.
Decisions about goals also directly affect the achievement of
other goals that we already have. Typically, the addition of new
personal goals impairs the achievement of other goals, but
sometimes we are lucky and our efforts to achieve one goal help
us achieve others as well. For example, in some companies,
married men are trusted more than unmarried men, so an otherwise
celibate workaholic might do better even in his work by adopting
the goal of having a family.
Finally, choices about goals affect the achievement of goals that
we will have in the future (whether these goals will arise
inevitably or as a result of our present decision about goals).
We can evaluate such effects in terms of our current goals for
the achievement of future goals. For example, I have considered
running for Congress, expecting not to win but to "educate" my
fellow citizens. If I ran, my desire to win would increase as
the race went on. In deciding whether to run, I must consider
both the creation of this new goal and the (low) probability of
its achievement. Because I do not think I could prevent this
goal from developing, and because I have a present goal of not
adding goals that are unachievable, I have decided not to run.
In conceiving of goals and their rational adoption, it might help
to think of each goal as a legislator in a governing oligarchy
(of "multiple selves"). Each legislator has a fixed
agenda, a set of criteria (goals) for evaluating every proposal
put before the group. Admission of new members is based on the
same agendas. Voting is not used; instead, the honest appraisals
of each member are added up. That is, the group admits a new
member when the expected behavior of the new member furthers the
agendas of the members more than does the best alternative
option. But, importantly, the new member is not simply a means
to further the agendas of the current members, although that is
why they admit her. The new member brings an agenda of her own,
thereby changing somewhat the overall behavior of the group. In
this way, the rational adoption of goals is instrumental, but its
effect is not solely instrumental. New goals are truly added.
A new goal is rational, then, if it would be admitted by such a group.
Note that this method of evaluation by itself does not allow us
to reason from any particular first principles. We cannot
compare two sets of goals without some core set of common goals.
In these respects, the evaluation of goals is similar to the
evaluation of beliefs in the Bayesian theory. We can evaluate
the probability of each belief, given the probabilities assigned
to all other relevant beliefs, but we cannot compare systems of
belief as wholes.
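The oligarchy model can also be put in the form of a small
sketch. This is my own formalization, not Baron's, and all
goals and numbers in it are hypothetical:

    from typing import Callable, Dict

    Proposal = Dict[str, float]          # a proposal described by features
    Goal = Callable[[Proposal], float]   # a goal scores any proposal

    def group_appraisal(goals: Dict[str, Goal], proposal: Proposal) -> float:
        # Honest appraisals are added up; there is no voting.
        return sum(g(proposal) for g in goals.values())

    def consider_new_goal(goals, name, new_goal, life_with, life_without):
        # Admit the new member iff the behavior expected with it
        # furthers the current agendas more than the best alternative.
        if group_appraisal(goals, life_with) > group_appraisal(goals, life_without):
            goals[name] = new_goal   # the newcomer's own agenda now counts too
            return True
        return False

    # The workaholic of the earlier example, weighing a family goal:
    goals = {
        'career':   lambda p: 2.0 * p['career_success'],
        'approval': lambda p: 1.0 * p['social_approval'],
    }
    life_with    = {'career_success': 0.9, 'social_approval': 0.8}
    life_without = {'career_success': 0.8, 'social_approval': 0.3}

    print(consider_new_goal(goals, 'family',
                            lambda p: p.get('family_welfare', 0.0),
                            life_with, life_without))    # True

Note that once admitted, the family goal scores all later
proposals alongside the older goals, which is the sense in which
adoption is instrumental but its effect is not solely instrumental.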
Goals of future people
I have argued that we should endorse caring about the goals of
such future people as are assumed to exist (assuming that we
include them in the scope of morality, an issue I have put
aside). A different, more difficult, question is whether we
should bring new goals into existence. The birth of a person
will lead to the formation of many goals, which will be achieved
to some degree, and which will affect the achievement of the
goals of others. Adding a person, then, is like adding a goal
within a single person. We are uncertain what the new goals will
be, but we can still form an expectation about their degree of
success and their effect on goals already present. The decision
to add new people should, by this view, depend entirely on the
extent to which they help achieve our present goals, including
our goals for the creation of new goals.
Suppose that policy A will lead to the birth of certain future
people, call them group A, and policy B will lead to group B.
Group A is larger than B. In evaluating the policies, we should
compare the goal achievement of the two groups in the usual way,
as best we can, assuming that the group exists in each case. If
average goal-achievement per person is constant, group A will
have more goal achievement, so policy A would be favored on this
ground alone. But we must also consider our own goals, and the
goals of others, concerning the creation of new goals. Such
goals could specify some optimal number of future people, with
group B closer to the optimum than A. These goals could tilt the
decision toward policy B. Possibly, a person with altruistic
goals would want a sufficient number of future people to exist so
that the long-term effects of her altruism could be maximized
after her death. But the same goals would dictate that the
population be sufficiently small so that those who exist could
have good lives.
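The trade-off can be put into toy numbers. These figures are my
own hypothetical illustration, not an estimate of any real policy:

    # Goal achievement of the future groups, assuming constant
    # average achievement per person:
    per_person = 10.0
    size = {'A': 12, 'B': 8}
    future = {p: per_person * size[p] for p in size}   # A: 120, B: 80

    # Present goals concerning the creation of new goals: here, a
    # goal that the population be near an optimum of 8, expressed
    # as a penalty growing with distance from that optimum.
    optimum, weight = 8, 15.0
    present = {p: -weight * abs(size[p] - optimum) for p in size}

    totals = {p: future[p] + present[p] for p in size}
    print(totals)   # {'A': 60.0, 'B': 80.0}: present goals tilt toward B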
Against this view, Hare (1975) argued that, since each of us
(presumably) would not have wanted not to have been born, we
must act toward others to see that they are born too, at least
until the marginal utility of births becomes negative. Although
the argument I gave for consequentialism was similar to Hare's
(1963) argument for the Golden Rule (the rule he applies here),
it was not the same. In particular, in my argument, our motivation to
support moral principles came from our (collective) current
goals. The unborn - those who are at issue when we make
decisions about creating new people - do not have such goals
(although they will if they are born). Consequentialism derived
in this way need not be extended to those without goals (in the
same way that it must be extended to future people conditionally
on their birth). Rather, the creation of goals by creating
people is analogous to the taking on of extra goals within an
individual.
The question of bringing people into existence is dependent on
what our current goals concerning future people would be if they
were rationally adopted (i.e., consistent with other goals and
not erroneous subgoals). This possibility gives us, potentially,
a way out of Parfit's (1984) "repugnant conclusion" that an
enormous population living in misery could be the best of all
possible worlds.
I have assumed in this section that decisions about goals should
be made independently of whose goals they are. The view of goals
as independent of the people who have them is sometimes taken to
imply that we can freely kill people so long as we replace them,
a conclusion that is used to argue against the premise (Broome,
1985). This conclusion ignores, however, people's goals to
achieve their present goals, and their goals for their lives as
wholes. For these kinds of reasons, Singer (1979) argues that
killing "persons" is worst than killing creatures whose goals are
confined to their experiences of pleasure and pain.
References
Asch, D., Baron, J., Hershey, J. C., Kunreuther, H., Meszaros,
J., Ritov, I., & Spranca, M. (1994). Determinants of
resistance to pertussis vaccination. Medical Decision
Making, 14, 118-123.
Baron, J. (1993). Morality and rational choice.
Dordrecht: Kluwer.
Baron, J. (1994a). Nonconsequentialist decisions (with
commentary and reply). Behavioral and Brain Sciences,
17, 1-42.
Baron, J. (1994b). Thinking and deciding (2nd ed.). New
York: Cambridge University Press.
Brandt, R. B. (1979). A theory of the good and the
right. Oxford: Clarendon Press.
Broome, J. (1985). The economic value of life. Economica,
52, 281-294.
Broome, J. (1991). Weighing goods: Equality, uncertainty and
time. Oxford: Basil Blackwell.
Elster, J. (1983). Sour grapes: Studies of the subversion of
rationality. New York: Cambridge University Press.
Gibbard, A. (1990). Wise choices, apt feelings: A theory of
normative judgment. Cambridge, MA: Harvard University Press.
Hammond, P. J. (1988). Consequentialist foundations for expected
utility. Theory and Decision, 25, 25-78.
Hare, R. M. (1963). Freedom and reason. Oxford: Oxford
University Press (Clarendon Press).
Hare, R. M. (1975). Abortion and the golden rule.
Philosophy and Public Affairs, 4, 201-222.
Hare, R. M. (1981). Moral thinking: Its levels, method and
point. Oxford: Oxford University Press (Clarendon Press).
Keeney, R. L. (1992). Value-focused thinking: A path to
creative decisionmaking. Cambridge, MA: Harvard University
Press.
Kupperman, J. (1983). The foundations of morality. London:
Allen & Unwin.
Parfit, D. (1984). Reasons and persons. Oxford: Oxford
University Press (Clarendon Press).
Rawls, J. (1971). A theory of justice. Cambridge, MA:
Harvard University Press.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission
bias and ambiguity. Journal of Behavioral Decision Making,
3, 263-277.
Sabini, J., & Silver, M. (1981). Moralities of everyday
life. Oxford: Oxford University Press.
Simon, H. A. (1990). A mechanism for social selection and
successful altruism. Science, 250, 1665-1668.
Singer, P. (1979). Practical ethics. Cambridge: Cambridge
University Press.
Singer, P. (1982). The expanding circle: Ethics and
sociobiology. New York: Farrar, Straus and Giroux.
Footnotes:
1. This article is a substantial
reworking of the arguments in Baron (1993), chs. 1-3. I thank
Samuel Freeman for comments. Send correspondence to Jonathan
Baron, Department of Psychology, University of Pennsylvania, 3815
Walnut St., Philadelphia, PA 19104-6196, or (e-mail)
baron@psych.upenn.edu.