It is clear that we often disagree about questions of value. Should same-sex marriage be legal? Should women have
abortions? Should drugs such as marijuana be legalized? Should we torture terrorists in order to get information from
them? Should we eat animals or use them in medical experiments? These sorts of questions are sure to expose
divergent ideas about what is right or wrong.
Discussions of these sorts of questions often devolve into unreasonable name-calling, foot-stomping, and other
questionable argument styles. The philosophical study of ethics aims to produce good arguments that provide
reasonable support for our opinions about practical topics. If someone says that abortion should (or should not) be
permitted, he or she needs to explain why this is so. It is not enough to say that abortion should not be permitted
because it is wrong or that women should be allowed to choose abortion because it is wrong to limit women’s choices.
To say that these things are wrong is merely to reiterate that they should not be permitted. Such an answer begs the
question. Circular, question-begging arguments are fallacious. We need further argument and information to know why
abortion is wrong or why limiting free choice is wrong. We need a theory of what is right and wrong, good or evil,
justified, permissible, and unjustifiable, and we need to understand how our theory applies in concrete cases. The first
half of this text will discuss various
theories and concepts that can be used to help us avoid begging the question in debates about ethical issues. The
second half looks in detail at a number of these issues.
It is appropriate to wonder, at the outset, why we need to do this. Why isn’t it sufficient to simply state your opinion and
assert that “x is wrong (or evil, just, permissible, etc.)”? One answer to this question is that such assertions do nothing to
solve the deep conflicts of value that we find in our world. We know that people disagree about abortion, same-sex
marriage, animal rights, and other issues. If we are to make progress toward understanding each other, if we are to
make progress toward establishing some consensus about these topics, then we have to understand why we think
certain things are right and others are wrong. We need to make arguments and give reasons in order to work out our
own conclusions about these issues and in order to explain our conclusions to others.
It is also insufficient to appeal to custom or authority in deriving our conclusions about moral issues. While it may be
appropriate for children to simply obey their parents’ decisions, adults should strive for more than conformity and
obedience to authority. Sometimes our parents and grandparents are wrong—or they disagree among themselves.
Sometimes the law is wrong—or laws conflict. And sometimes religious authorities are wrong—or authorities do not
agree. To appeal to authority on moral issues, we would first have to decide which authority is to be trusted and
believed. Which religion provides the best set of moral rules? Which set of laws in which country is to be followed? Even
within the United States, there is currently a conflict of laws with regard to some of these issues: some states have legalized medical marijuana or physician-assisted suicide, while others have not. The world’s religions also disagree about a
number of issues: for example, the status of women, the permissibility of abortion, and the question of whether war is
justifiable. And members of the same religion or denomination may disagree among themselves about these issues. To
begin resolving these conflicts, we need critical philosophical inquiry into
basic ethical questions. In Chapter 2, we discuss the world’s diverse religious traditions and ask whether there is a set of
common ethical ideas that is shared by these traditions. In this chapter, we clarify what ethics is and how ethical
reasoning should proceed.
WHAT IS ETHICS?
On the first day of an ethics class, we often ask students to write one-paragraph answers to the question, “What is
ethics?”
How would you answer? Over the years, there have been significant differences of opinion among our students on this
issue. Some have argued that ethics is a highly personal thing, a matter of private opinion. Others claim that our values
come from family upbringing. Other students think that ethics is a set of social principles, the codes of one’s society or
particular groups within it, such as medical or legal organizations. Some write that many people get their ethical beliefs
from their religion.
One general conclusion can be drawn from these students’ comments: We tend to think of ethics as the set of values or
principles held by individuals or groups. I have my ethics and you have yours; groups—professional organizations and
societies, for example—have shared sets of values. We can study the various sets of values that people have. This could
be done historically and sociologically. Or we could take a psychological interest in determining how people form their
values. But philosophical ethics is a critical enterprise that asks whether any particular set of values or beliefs is better
than any other. We compare and evaluate sets of values and beliefs, giving reasons for our evaluations. We ask
questions such as, “Are there good reasons for preferring one set of ethics over another?” In this text, we examine
ethics from a critical or evaluative standpoint. This examination will help you come to a better understanding of your
own values and the values of others.
Ethics is a branch of philosophy. It is also called moral philosophy. In general, philosophy is a discipline or study in which
we ask—and attempt to answer—basic questions about key areas or subject matters of human life and about pervasive
and significant aspects of experience. Some philosophers, such as Plato and Kant, have tried to do this systematically by
interrelating their philosophical views in many areas. According to Alfred North Whitehead, “Philosophy is the endeavor
to frame a coherent, logical, necessary system of general ideas in terms of which every element of our experience can
be interpreted.”1 Some contemporary philosophers have given up on the goal of building a system of general ideas,
arguing instead that we must work at problems piecemeal, focusing on one particular issue at a time. For instance, some
philosophers might analyze the meaning of the phrase “to know,” while others might work on the morality of lying. Some
philosophers are optimistic about our ability to address these problems, while others are more skeptical because they
think that the way we analyze the issues and the conclusions we draw will always be influenced by our background,
culture, and habitual ways of thinking. Most agree, however, that these problems are worth wondering about and caring
about.
We can ask philosophical questions about many subjects. In the philosophical study of aesthetics, philosophers ask basic
or foundational questions about art and objects of beauty: what kinds of things do or should count as art (rocks
arranged in a certain way, for example)? Is what makes something an object of aesthetic interest its emotional
expressiveness, its peculiar formal nature, or its ability to reveal truths that cannot be described in other ways? In the
philosophy of science, philosophers ask whether scientific knowledge gives us a picture of reality as it is, whether
progress exists in science, and whether the scientific method discloses truth. Philosophers of law seek to understand the
nature of law itself, the source of its authority, the nature of legal interpretation, and the basis of legal responsibility. In
the philosophy of knowledge, called epistemology, we try to answer questions about what we can know of ourselves
and our world, and what it means to know something rather than just to believe it. In each area, philosophers ask basic
questions about the particular subject matter. This is also true of moral philosophy.
Ethics, or moral philosophy, asks basic questions about the good life, about what is better and worse, about whether
there is any objective right and wrong, and how we know it if there is.
One objective of ethics is to help us decide what is good or bad, better or worse. This is generally called normative
ethics. Normative ethics defends a thesis about what is good, right, or just. Normative ethics can be distinguished from
metaethics. Metaethical inquiry asks questions about the nature of ethics, including the meaning of ethical terms and
judgments. Questions about the relation between philosophical ethics and religion—as we discuss in Chapter 2—are
metaethical. Theoretical questions about ethical relativism—as discussed in Chapter 3—are also metaethical. The other
chapters in Part I are more properly designated as ethical theory. These chapters present concrete normative theories;
they make claims about what is good or evil, just or unjust.
From the mid-1930s until recently, metaethics predominated in English-speaking universities. In doing metaethics, we
often analyze the meaning of ethical language. Instead of asking whether the death penalty is morally justified, we
would ask what we meant in calling something “morally justified” or “good” or “right.” We analyze ethical language,
ethical terms, and ethical statements to determine what they mean. In doing this, we function at a level removed from
that implied by our definition. It is for this reason that we call this other type of ethics metaethics—meta meaning
“beyond.” Some of the discussions in this chapter are metaethical discussions—for example, the analysis of various
senses of “good.” As you will see, much can be learned from such discussions.
“That’s great!” “Now, this is what I call a delicious meal!” “That play was wonderful!” All of these statements express
approval of something. They do not tell us much about the meal or the play, but they do imply that the speaker thought
they were good. These are evaluative statements. Ethical statements or judgments are also evaluative. They tell us what
the speaker believes is good or bad. They do not simply describe the object of the judgment—for example, as an action
that occurred at a certain time or that affected people in a certain way. They go further and express a positive or
negative regard for it. Of course, factual matters are relevant to moral evaluation. For example, factual judgments about
whether capital punishment has a deterrent effect might be relevant to our moral judgments about it. So also would we
want to know the facts about whether violence can ever bring about peace; this would help us judge the morality of
war. Because ethical judgments often rely on such empirical information, ethics is often indebted to other disciplines
such as sociology, psychology, and history. Thus, we can distinguish between empirical or descriptive claims, which state
factual beliefs, and evaluative judgments, which state whether such facts are good or bad, just or unjust, right or wrong.
Evaluative judgments are also called normative judgments. Moral judgments are evaluative because they “place a
value,” negative or positive, on some action or practice, such as capital punishment.
Descriptive (empirical) judgment: Capital punishment acts (or does not act) as a deterrent.
Normative (moral) judgment: Capital punishment is justifiable (or unjustifiable).
We also evaluate people, saying that a person is good or evil, just or unjust. Because these evaluations also rely on
beliefs in general about what is good or right, they are also normative. For example, the judgment that a person is a
hero or a villain is based upon a normative theory about good or evil sorts of people.
“That is a good knife” is an evaluative or normative statement. However, it does not mean that the knife is morally good.
In making ethical judgments, we use terms such as good, bad, right, wrong, obligatory, and permissible. We talk about
what we ought or ought not to do. These are evaluative terms. But not all evaluations are moral in nature. We speak of a
good knife without attributing moral goodness to it. In so describing the knife, we are probably referring to its practical
usefulness for cutting. Other evaluations refer to other systems of values. When people tell us that a law is legitimate or
unconstitutional, that is a legal judgment. When we read that two articles of clothing ought not to be worn together,
that is an aesthetic judgment. When religious leaders tell members of their communities what they ought to do, that is
a religious matter. When a community teaches people to bow before elders or use eating utensils in a certain way, that
is a matter of custom. These various normative or evaluative judgments appeal to practical, legal, aesthetic, religious, or
customary norms for their justification.
How do other types of normative judgments differ from moral judgments? Some philosophers believe that it is a
characteristic of moral “oughts” in particular that they override other “oughts,” such as aesthetic ones. In other words, if
we must choose between what is aesthetically pleasing and what is morally right, then we ought to do what is morally
right. In this way, morality may also take precedence over the law and custom. The doctrine of civil disobedience relies
on this belief, because it holds that we may disobey certain laws for moral reasons. Although moral evaluations differ
from other normative evaluations, this is not to say that there is no relation between them. In fact, moral reasons often
form the basis for certain laws. But law—at least in the United States—results from a variety of political compromises.
We don’t tend to look to the law for moral guidance. And we are reluctant to think that we can “legislate morality,” as
the saying goes. Of course, there is still an open debate about whether the law should enforce moral ideas in the context
of issues such as gay marriage or abortion.
There may be moral reasons supporting legal arrangements—considerations of basic justice, for example. Furthermore,
the fit or harmony between forms and colors that ground some aesthetic judgments may be similar to the rightness or
moral fit between certain actions and certain situations or beings. Moreover, in some ethical systems, actions are judged
morally by their practical usefulness for producing valued ends. For now, however, note that ethics is not the only area
in which we make normative judgments.
TRAITS OF MORAL PRINCIPLES
A central feature of morality is the moral principle. We have already noted that moral principles are guides for action,
but we must say more about the traits of such principles. Although there is no universal agreement on the
characteristics a moral principle must have, there is a wide consensus about five features: (1) prescriptivity, (2) universalizability, (3) overridingness, (4) publicity, and (5) practicability. Several of these will be examined in chapters
throughout this book, but let’s briefly consider them here.
First is prescriptivity, which is the commanding aspect of morality. Moral principles are generally put forth as commands
or imperatives, such as “Do not kill,” “Do no unnecessary harm,” and “Love your neighbor.” They are intended for use:
to advise people and influence action. Morality shares this trait with all normative discourse and is used to appraise
behavior, assign praise and blame, and produce feelings of satisfaction or guilt.
Second is universalizability. Moral principles must apply to all people who are in a relevantly similar situation. If I judge
that an act is right for a certain person, then that act is right for any other relevantly similar person. This trait is
exemplified in the Golden Rule, “Do to others what you would want them to do to you.” We also see it in the formal
principle of justice: It cannot be right for you to treat me in a manner in which it would be wrong for me to treat you,
merely on the ground that we are two different individuals.4
Universalizability applies to all evaluative judgments. If I say that X is a good thing, then I am logically committed to
judge that anything relevantly similar to X is a good thing. This trait is an extension of the principle of consistency: we
ought to be consistent about our value judgments, including our moral judgments. Take any act that you are
contemplating doing and ask, “Could I will that everyone act according to this principle?”
Third is overridingness. Moral principles have predominant authority and override other kinds of principles. They are not
the only principles, but they take precedence over other considerations, including aesthetic, prudential, and legal ones.
The artist Paul Gauguin may have been aesthetically justified in abandoning his family to devote his life to painting
beautiful Pacific Island pictures, but morally he probably was not justified, and so he probably should not have done it. It
may be prudent to lie to save my reputation, but it probably is morally wrong to do so. When the law becomes
egregiously immoral, it may be my moral duty to exercise civil disobedience. There is a general moral duty to obey the
law because the law serves an overall moral purpose, and this overall purpose may give us moral reasons to obey laws
that may not be moral or ideal. There may come a time, however, when the injustice of a bad law is intolerable and
hence calls for illegal but moral defiance. A good example would be laws in the South prior to the Civil War requiring
citizens to return runaway slaves to their owners.
Fourth is publicity. Moral principles must be made public in order to guide our actions. Publicity is necessary because we
use principles to prescribe behavior, give advice, and assign praise and blame. It would be self-defeating to keep them a
secret.
Fifth is practicability. A moral principle must have practicability, which means that it must be workable and its rules must
not lay a heavy burden on us when we follow them. The philosopher John Rawls speaks of the “strains of commitment”
that overly idealistic principles may cause in average moral agents.5 It might be desirable for morality to require more
selfless behavior from us, but the result of such principles could be moral despair, deep or undue moral guilt, and
ineffective action. Accordingly, most ethical systems take human limitations into consideration.
Although moral philosophers disagree somewhat about these five traits, the above discussion offers at least an idea of
the general features of moral principles.
DOMAINS OF ETHICAL ASSESSMENT
At this point, it might seem that ethics concerns itself entirely with rules of conduct that are based solely on evaluating
acts. However, it is more complicated than that. Most ethical analysis falls into one or more of the following domains:
(1) action, (2) consequences, (3) character traits, and (4) motive. Again, all these domains will be examined in detail in
later chapters, but an overview here will be helpful.
Let’s examine these domains using an altered version of the Kitty Genovese story. Suppose a man attacks a woman in
front of her apartment and is about to kill her. A responsible neighbor hears the struggle, calls the police, and shouts
from the window, “Hey you, get out of here!” Startled by the neighbor’s reprimand, the attacker lets go of the woman
and runs down the street where he is caught by the police.
Action
One way of ethically assessing this situation is to examine the actions of both the attacker and the good neighbor: The
attacker’s actions were wrong whereas the neighbor’s actions were right. The term right has two meanings. Sometimes,
it means “obligatory” (as in “the right act”), but it also can mean “permissible” (as in “a right act” or “It’s all right to do
that”). Usually, philosophers define right as permissible, including in that category what is obligatory:
1. A right act is an act that is permissible for you to do. It may be either (a) obligatory or (b) optional.
a. An obligatory act is one that morality requires you to do; it is not permissible for you to refrain from doing it.
b. An optional act is one that is neither obligatory nor wrong to do. It is not your duty to do it, nor is it your duty not to do it. Neither doing it nor not doing it would be wrong.
2. A wrong act is one you have an obligation, or a duty, to refrain from doing: it is an act you ought not to do; it is not permissible to do it.
In our example, the attacker’s assault on the woman was clearly a wrong action (prohibited); by contrast, the neighbor’s
act of calling the police was clearly a right action—and an obligatory one at that.
But some acts do not seem either obligatory or wrong. Whether you take a course in art history or English literature or
whether you write a letter with a pencil or pen seems morally neutral. Either is permissible. Whether you listen to rock
music or classical music is not usually considered morally significant. Listening to both is allowed, and neither is
obligatory. Whether you marry or remain single is an important decision about how to live your life. The decision you
reach, however, is usually considered morally neutral or optional. Under most circumstances, to marry (or not to marry)
is considered neither obligatory nor wrong but permissible.
Within the range of permissible acts is the notion of supererogatory acts, or highly altruistic acts. These acts are neither
required nor obligatory, but they exceed what morality requires, going “beyond the call of duty.” For example, suppose
the responsible neighbor ran outside to actually confront the attacker rather than simply shout at him from the window.
Thus, the neighbor would assume an extra risk that would not be morally required. Similarly, while you may be obligated
to give a donation to help people in dire need, you would not be obligated to sell your car, let alone become
impoverished yourself, to help them. The complete scheme of acts, then, is this:
1. Right act (permissible)
a. Obligatory act
b. Optional act (neither obligatory nor wrong), which may be morally neutral or supererogatory
2. Wrong act (not permissible)
One important kind of ethical theory that emphasizes the nature of the act is called deontological (from the Greek word
deon, meaning “duty”). These theories hold that something is inherently right or good about such acts as truth telling
and promise keeping and inherently wrong or bad about such acts as lying and promise breaking. Classical deontological
ethical principles include the Ten Commandments and the Golden Rule. The leading proponent of deontological ethics in
recent centuries is Immanuel Kant (1724–1804), who defended a principle of moral duty that he calls the categorical
imperative: “Act only on that maxim whereby you can at the same time will that it would become a universal law.”
Examples for Kant are “Never break your promise” and “Never commit suicide.” What all of these deontological theories
and principles have in common is the view that we have an inherent duty to perform right actions and avoid bad actions.
Consequences
Another way of ethically assessing situations is to examine the consequences of an action: If the consequences are on
balance positive, then the action is right; if negative, then wrong. In our example, take the consequences of the
attacker’s actions. At minimum he physically harms the woman and psychologically traumatizes both her and her
neighbors; if he succeeds in killing her, then he emotionally devastates her family and friends, perhaps for life. And what
does he gain from this? Just a temporary experience of sadistic pleasure. On balance, his action has overwhelmingly
negative consequences and thus is wrong. Examine next the consequences of the responsible neighbor who calls the
police and shouts down from the window “Hey you, get out of here!” This scares off the attacker, thus limiting the harm
of his assault. What does the neighbor lose by doing this? Just a temporary experience of fear, which the neighbor might
have experienced anyway. On balance, then, the neighbor’s action has overwhelmingly positive consequences, which
makes it the right thing to do.
Ethical theories that focus primarily on consequences in determining moral rightness and wrongness are sometimes
called teleological ethics (from the Greek telos, meaning “goal directed”). The most famous of these theories is
utilitarianism, set forth by Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873), which requires us to do
what is likeliest to have the best consequences. In Mill’s words, “Actions are right in proportion as they tend to promote
happiness; wrong as they tend to produce the reverse of happiness.”
Character Traits
Whereas some ethical theories emphasize the nature of actions in themselves and some emphasize principles involving
the consequences of actions, other theories emphasize a person’s character trait, or virtue. In our example, the attacker
has an especially bad character trait—namely, malevolence—which taints his entire outlook on life and predisposes
him to act in harmful ways. The attacker is a bad person principally for having this bad character trait of malevolence.
The responsible neighbor, on the other hand, has a good character trait, which directs his
outlook on life—namely, benevolence, which is the tendency to treat people with kindness and assist those in need.
Accordingly, the neighbor is a good person largely for possessing this good trait.
Moral philosophers call such good character traits virtues and bad traits vices. Entire theories of morality have been
developed from these notions and are called virtue theories. The classic proponent of virtue theory was Aristotle (384–
322 BCE), who maintained that the development of virtuous character traits is needed to ensure that we habitually act
rightly. Although it may be helpful to have action-guiding rules, it is vital to empower our character with the tendency to
do good. Many people know that cheating, gossiping, or overindulging in food or alcohol is wrong, but they are
incapable of doing what is right. Virtuous people spontaneously do the right thing and may not even consciously follow
moral rules when doing so.
Motive
Finally, we can ethically assess situations by examining the motive of the people involved. The attacker intended to
brutalize and kill the woman; the neighbor intended to thwart the attacker and thereby help the woman. Virtually all
ethical systems recognize the importance of motives. For a full assessment of any action, it is important to take the
agent’s motive into account. Two acts may appear identical on the surface, but one may be judged morally blameworthy
and the other excusable. Consider John’s pushing Mary off a ledge, causing her to break her leg. In situation (A), he is
angry and intends to harm her, but in situation (B) he sees a knife flying in her direction and intends to save her life. In
(A) he clearly did the wrong thing, whereas in (B) he did the right thing. A full moral description of any act will take
motive into account as a relevant factor.
CONCLUSION
The study of ethics has enormous practical benefits. It can free us from prejudice and dogmatism. It sets forth
comprehensive systems from which to orient our individual judgments. It carves up the moral landscape so that we can
sort out the issues to think more clearly and confidently about moral problems. It helps us clarify in our minds just how
our principles and values relate to one another, and, most of all, it gives us some guidance in how to live. Let’s return to
questions posed at the beginning of this chapter, some of which we should now be able to better answer.
What is the nature of morality, and why do we need it? Morality concerns discovering the rules that promote the human
good, as elaborated in the five traits of moral principles: prescriptivity, universalizability, overridingness, publicity, and
practicability. Without morality, we cannot promote that good.
What is the good, and how will I know it? The good in question is the human good, specified as happiness, reaching
one’s potential, and so forth.
Whatever we decide on that fulfills human needs and helps us develop our deepest potential is the good that morality
promotes.
Is it in my interest to be moral? Yes, in general and in the long run, for morality is exactly the set of rules most likely to
help (nearly) all of us, if nearly all of us follow them nearly all of the time. The good is good for you—at least most of the
time. Furthermore, if we believe in the superior importance of morality, then we will bring children up so that they will
be unhappy when they break the moral code. They will feel guilt. In this sense, the commitment to morality and its
internalization nearly guarantee that if you break the moral rules you will suffer.
What is the relationship between morality and religion? Religion relies more on revelation, and morality relies more on
reason, on rational reflection. But, religion can provide added incentive for the moral life for those who believe that God
sees and will judge all our actions.
What is the relationship between morality and law? Morality and law should be very close, and morality should be the
basis of the law, but there can be both unjust laws and immoral acts that cannot be legally enforced. The law is
shallower than morality and has a harder time judging human motives and intentions. You can be morally evil, intending
to do evil things, but as long as you don’t do them, you are legally innocent.
What is the relationship between morality and etiquette? Etiquette consists in the customs of a culture, but they are
typically morally neutral in that the culture could flourish with a different code of etiquette. In our culture, we eat with
knives and forks, but a culture that eats with chopsticks or fingers is no less moral.
1.2 Agency
If, as the result of an earthquake, a boulder were to break off from the face of a cliff and kill an unfortunate mountaineer
below, it wouldn’t make sense to hold either the boulder or the Earth morally accountable for her death. If, on the other
hand, an angry acquaintance dislodged the rock, aiming to kill the mountaineer for the sake of some personal grudge,
things would be different. Why?
One of the key differences between the two deaths is that the second, unlike the first, involves “agency.” This difference
is a crucial one, as agency
is often taken to be a necessary condition or requirement of moral responsibility. Simply put, something can only be
held morally responsible for an event if that something is an agent. Angry acquaintances are agents, but the Earth is not (assuming, of course, that the Earth isn’t some animate, conscious being). This seems obvious enough, but what precisely
is agency, and why does it matter?
Agency for many involves the exercise of freedom. Freedom is usually taken to require the ability to act otherwise or in
ways contrary to the way one is currently acting or has acted in the past. For many holding this point of view, being
responsible (and thence an agent) means possessing a “free will” through which one can act independently of desires
and chains of natural causes. Of course, there are also many philosophers who don’t think much of this conception of
freedom. Most of these critics nevertheless accept using the term for actions that proceed in a causal way
from one’s self or one’s own character in the absence of external compulsion, coercion, or mental defect. (These
philosophers are called “compatibilists.”)
Conditions of agency
For thinkers following Aristotle (384–322 BCE), agency requires that one understands what one is doing, what the
relevant facts of the matter are, and how the causal order of the world works to the extent that one is able to foresee
the likely consequences of chosen courses of action.
It’s also important that an agent possess some sort of self-understanding – that is, some sense of self-identity,
knowledge of who and what one is, what one’s character and emotional architecture are like, what one is capable and
not capable of doing. Self-knowledge is important because it doesn’t normally make sense to think of someone as a free
agent who is unaware of what he or she does – for example, while asleep or during an unforeseen seizure. It can still
make sense to talk of some of this kind of action as the result of agency, however, if the impairments that lead to the
unconscious conduct are the result of one’s own free choices. For example, consider someone who voluntarily gets
drunk while piloting an airliner, knowing full well what’s likely to happen; or consider someone else whose ignorance
about the small child standing behind the car he has just put into gear results from negligence, from his not bothering to
look out the rear window.
For Immanuel Kant (1724–1804), the ability to reason is crucial to agency. In Kant’s Critique of Practical Reason (1788),
what’s important is that one act unselfishly, purely on the basis of reason or a certain kind of rational principle (a
categorical imperative), instead of on the basis of desire or fear. Only this sort of rational action qualifies for Kant as
truly moral action, because even acting well on the basis of desire ultimately boils down to the same thing as acting in
other ways for the sake of desire. Desires and fears simply come over us, the result of natural and social causes beyond
our control. To act strictly from desire is to be a slave to desire. Only by acting on the basis of reason alone are we, for
Kant, autonomous – that is, self-governing beings who legislate moral laws of action to ourselves.
But perhaps it’s wrong to regard feelings and desires as irrelevant. Indeed, shouldn’t moral agency also be understood
to require the capacity to sympathize with others, to be distressed by their suffering, and to feel regret or remorse after
harming others or acting immorally? Would it make sense to regard as a moral or free agent a robot that behaved
rationally and that possessed all the relevant information but didn’t have any inner, affective life? It’s not obvious what
the answer to this question is. Star Trek’s Mr. Spock, for example, seemed to be a moral agent, even though the only
reason he had for condemning immoral acts was that they were “illogical.”
Similarly, it might be thought that the right social conditions must be in place for moral agency to be possible. Could
people truly be moral agents capable of effective action without public order and security, sufficient means of
sustenance, access to information and communication, education, a free press, and an open government? But again,
this is far from obvious. Although it seems true that when civilization breaks down immorality or amorality rises, it also
seems excessively pessimistic to conclude that moral agency is utterly impossible without many of the supports and
constraints of society.
Types of agent
It may seem strange to consider things like corporations or nations or mobs or social classes as agents, but the issue
often arises in reflections about whether one should make judgments that attribute collective responsibility. People did
speak of the guilt of the German nation and demand that all Germans contribute to war reparations after World War I.
When the government of a truly democratic nation goes to war, because its policy in some sense expresses “the will of
the people,” the country arguably acts as though it were a kind of single agent. People also, of course, speak collectively
of the responsibilities of the ruling class, corporations, families, tribes, and ethnic
groups. Because human life is populated by collectives, institutions, organizations, and other social groupings, agency
can sometimes be dispersed or at least seem irremediably unclear. These “gray zones,” as thinkers like Primo Levi (The
Periodic Table, 1975) and Claudia Card (The Atrocity Paradigm, 2002) have called them, make determining agency in
areas like sexual conduct and political action exceedingly difficult.
There are three ways of understanding how we talk about collectives as agents. One is that it’s just mistaken and that
collectives cannot be agents. The second is that collectives are agents in some alternative, perhaps metaphorical sense –
that they are like real agents but not quite the same as them. The third is that collectives are as much agents as
individual people, who are themselves perhaps not as singular, cohesive, and unified as many would like to believe.
1.4 Autonomy
The legitimacy of “living wills” or “advance directives” is at present a hotly contested social and moral issue. Expressing
people’s preferences should they become unable to do so because of illness or injury, these curious documents aim to
make sure that physicians treat individuals as they wish, not as others think best. The 2005 case of Terri Schiavo, the
brain-damaged American woman whose husband and parents fell into a sensational and painful legal wrangle
concerning her wishes, illustrates all too well the reasons people write living wills.
Proponents of the practice argue that one of the most important bases for the human capacity to act in moral ways is
the ability not only to choose and to act on those choices but also to choose for oneself, to be the author of one’s own
life. This capacity is known as “autonomy.” But what does it mean to be autonomous?
Autonomy requires, at the very least, an absence of compulsion. If someone is compelled to act rightly by some internal
or external force – for instance, to return a lost wallet packed with cash – that act isn’t autonomous. So, even though the
act was morally proper, because it was compelled it merits little praise.
For philosophers like Immanuel Kant (1724–1804), this is why autonomy is required for truly moral action. Kant argues in
his Critique of Practical Reason (1788) and elsewhere that autonomously acting without regard for one’s desires or
interests is possible because people are able to act purely on the basis of rational principles given to themselves by
themselves. Indeed, the word “autonomous” derives from the Greek for self (auto) and law (nomos) and literally means
self-legislating, giving the law to one’s self. Actions done through external or internal compulsion are, by contrast,
“heteronomous” (the law being given by something hetero or “other”). In this way autonomy differs from, though also
presupposes, metaphysical freedom, which is commonly defined as acting independently of the causal order of nature.
Political freedom, of course, has to do with people’s relationship to government and other people regardless of their
relationship to systems of cause and effect. But theories of political freedom also draw upon the concept of autonomy.
Politics
Conceptions of autonomy are important politically, because one’s ideas about politics are often bound up with one’s
ideas about what people are and what they’re capable of doing or not doing. Those who think that people are incapable, or only barely capable, of self-legislating, self-regulating action are not likely to think that people are capable of
governing themselves.
Liberal democratic theory, however, depends upon that ability. The authority of government in liberal democracies
draws its justification from the consent of the governed. Through systems of elections and representation the people of
democracies give the law to themselves. Liberal democracies are also configured to develop certain institutions (like the
free press) and to protect political and civil rights (such as the rights to privacy and property) toward the end of ensuring
people’s ability to act autonomously and effectively. In this way, liberal democrats recognize autonomy not only as an
intrinsic human capacity but also as a political achievement and an important element of human well-being.
The legitimacy of liberal democracy is therefore threatened by claims that human beings are not the truly autonomous
agents we believe ourselves to be. And there is no shortage of people prepared to argue this view. Many critics
maintain that people really can’t act independently of their passions, of their families, of the societies in which they live,
of customs, conventions, and traditions, of structures of privilege, exploitation, and oppression, including internalized
oppression. Some go as far as to claim that the sort of autonomy liberal democrats describe is the fantasy of wealthy,
white, European and North American males – or, worse, a privilege they enjoy only because they deny it to others. Still
other critics regard the idea as a mystifying ideology through which the ruling class deludes people about the exploitive
system under which they labor.
Medical ethics
The concept of autonomy has become increasingly important, however, in medicine. Many medical ethicists have
effectively argued that because human beings are autonomous, no medical treatments ought to be administered to
patients without their informed consent to any procedures or therapies – unless they’re not competent to make
informed choices (e.g., perhaps because of mental illness). That is, patients must both: (1) understand what is to be
done to them, what risks are entailed, and what consequences may follow; and (2) agree to the treatment in light of that
understanding. When they fail to acquire informed consent, medics are said to act paternalistically, invasively, and
disrespectfully, deciding what’s best for an adult who really should be allowed to make such decisions for him or herself.
Problems, however, with the requirement to respect autonomy arise in trying to determine precisely when a patient is
incompetent, as well as when a patient is sufficiently well informed. Just how much mental impairment is too much; just
how much information is enough? Living wills are intended as one way to get around at least some of these problems.
These may seem to be very broad ethical questions, yet the existence of child labor, breast ironing, female circumcision,
and divergent sexual practices makes them very real questions – and in some cases, where children’s lives are at stake,
quite urgent. People have thought about and struggled with these kinds of questions about the origins of ethics for
many centuries. When one faces these hard questions, thinks about the philosophical problem of the origins of ethics,
and becomes aware of the great variety of human customs the world over, it becomes tempting to say that right and
wrong are just a matter of opinion, since what is regarded as right or wrong in one culture may not be seen in the same
way in another culture. Right and wrong seem culturally relative. Also, some practices that were once regarded as right,
either a century ago or 20 years ago, are nowadays regarded as wrong. Ethical standards seem to change, and there is so
much disagreement between cultural practices that ethical relativism, the view that right and wrong are always relative,
seems justified (see Figure 1.1).
Those who defend the idea that ethics is relative emphasize the differences among our ethical judgments and the
differences among various ethical traditions. Some relativists call these cultural and ethical traditions folkways. This is a
helpful concept for understanding ethical relativism because it highlights that the ways and customs are simply
developed by average people (folk) over long periods of time. Here is how the twentieth‐century social scientist William
G. Sumner describes the folkways:
The folkways…are not creations of human purpose and wit. They are like products of natural forces which men
unconsciously set in operation, or they are like the instinctive ways of animals, which are developed out of experience,
which reach a final form of maximum adaptation to an interest, which are handed down by tradition and admit of no
exception or variation, yet change to meet new conditions, still within the same limited methods, and without rational
reflection or purpose. From this it results that all the life of human beings, in all ages and stages of culture, is primarily
controlled by a vast mass of folkways handed down from the earliest existence of the race. (Sumner 1906: 19–20)
Folkways: The concept that customs are developed by average people (folk) over long periods of time.
Something is right, an ethical relativist will say, if it is consistent with a given society’s folkways and wrong if it goes
against a society’s folkways. Ethical relativism will say that in cultures where female circumcision has taken place for
centuries, it is right to continue to circumcise young girls, and wrong to attempt to change this tradition.
Relativists believe that ethical differences between cultures are irreconcilable. On their view, irreconcilable differences
are actually quite predictable because each society today has its own unique history and it is out of this history that a
society’s ethical values and standards have been forged. Around the globe, each society has its own unique history;
consequently, each society has its own unique set of ethical standards. Relativists would say that if there are any
agreements between cultures on ethical values, standards, or issues, we should not place any importance on that
accidental fact, because, after all, the true nature of ethics is relative, and the origin of ethics lies in each society’s
unique history.
Not everyone, though, is content with the relativist’s rather skeptical answer to the question about the ultimate nature
and origin of ethics. Instead of a relativist answer to the question, plenty of people have asserted that not everything is
relative. A critic of relativism will say that not everything in ethics is relative, because some aspects of ethics are
universal. Those who hold this view are called ethical universalists. In contrast to the ethical relativist who claims that all
ethics is relative, the universalists contend that there are at least some ethical values, standards, or principles that are
not relative. And this somewhat modest claim is all that a universalist needs to challenge the relativist’s generalization
that all ethics is relative. An easy way to grasp what universalists are talking about is to consider the concept of universal
human rights. The Universal Declaration of Human Rights was created in 1948 by the United Nations General Assembly.
It has inspired close to 100 bills of rights for new nations. People who believe in universal human rights hold ethical
universalism: they believe there are certain rights that all human beings have, no matter what culture or society they
belong to. An ethical relativist will deny this, and maintain that rights are meaningful only within a particular cultural
tradition, not in a universal sense.
In order to achieve a bit more clarity on the issue of relativism, we must consider the difference between cultural
relativism and ethical relativism. Cultural relativism is the observation that, as a matter of fact, different cultures have
different practices, standards, and values. Child labor, breast ironing, divergent sexual practices, and female circumcision
are examples of practices that are customary in some cultures and would be seen as ethical in those cultures. In other
cultures, however, such practices are not customary, and are seen as unethical. If we took the time to study different
cultures, as anthropologists and other social scientists do, we would see that there is no shortage of examples such as
these. As the anthropologist Ruth Benedict has put it: “The diversity of cultures can be endlessly documented” (1934:
45).
As examples, consider wife and child battering, polygamy, cannibalism, or infanticide. There are some cultures
(subcultures at least) that endorse these practices as morally acceptable. Western culture, by contrast, regards these
practices as immoral and illegal. It seems to be true, therefore, just as a matter of fact, that different cultures have
different ethical standards on at least some matters. By comparing different cultures, we can easily see differences
between them, not just on ethical matters, but on many different levels.
Cultural Relativism: The theory that, as a matter of fact, different cultures have different practices, standards, and
values.
What we need to notice about ethical relativism, in contrast with cultural relativism, is that ethical relativism makes a
much stronger and more controversial claim about the nature of ethics. Ethical relativism is the view that all ethical
standards are relative, to the degree that there are no permanent, universal, objective values or standards. This view,
though, cannot be justified by simply comparing different cultures and noticing the differences between them. The
ethical relativist’s claim goes beyond observation and predicts that all ethical standards, even the ones we have not yet
observed, will always be relative. More simply put, ethical relativism is an ethical theory, hence its name. Cultural
relativism is not an ethical theory; it is simply the view that cultures differ and display much diversity.
A universalist will respond to ethical relativism by pointing out that very general basic values – not specific moral rules or
codes – are recognized, at least implicitly, to some extent in all societies. Even though on the surface, in particular
actions or mores, there seems to be unavoidable disagreement, a universalist will observe that there are general values
that provide the foundations of ethics (see Figure 1.2). One ambition, then, for the universalists who wish to immerse
themselves in cultural studies, is not only to attempt to understand and appreciate other cultures’ perspectives and
experiences, but to detect what common ground – what common values – is shared by the different cultures. Certainly
there is cultural difference on how these values are manifested, but according to universalism, the values themselves
represent more than arbitrary social conventions.
Figure 1.2
An ethical universalist, then, can agree that there are cultural differences and that cultures display diversity. But
universalists will also claim that, while some social practices are merely conventional, not all of them are. Thus universalists will explain that it is possible to believe that both ethical universalism and
cultural relativism are true (see Figure 1.3).
Figure 1.3
Although ethical universalism is conceptually consistent with cultural relativism, this point can sometimes be overlooked, since the social scientists of the first half of the twentieth century who carried out extensive research into different cultures and societies contributed to linking ethical relativism and cultural relativism in our minds.
The distinction between cultural relativism and ethical relativism is an important one. Everyone agrees with cultural
relativism, even universalists, but not everyone agrees with ethical relativism. Ethical relativism is a theory about the
ultimate nature of ethics; cultural relativism is only a theory about cultural diversity.
2.7 Egoism
“All sensible people are selfish,” wrote Ralph Waldo Emerson (1803–82). Nowadays, conventional wisdom is that one
doesn’t even have to be sensible to be selfish – because in fact everyone is always selfish. In some circles, a belief in
genuine altruism is taken as a sign of naivety.
Emerson’s line, however, need not inspire cynicism. The question, “Can egoism be morally justified?” is clearly not self-contradictory and needs to be answered. Furthermore, if being good and being selfish happen to require the same
things, then selfishness would be something to celebrate.
Psychological egoism
First, however, something must be said about the view that, as a matter of fact, everyone is at heart an egoist. People
may not do what’s in their own best interests, but they will, according to the psychological egoist, only do what they
believe is in their own best interests. Apparent counterexamples are just that – apparent. Take the sentiments
expressed in Bryan Adams’s soppy ballad, “(Everything I Do) I Do It For You.” Echoing countless other love songs, Adams
sings “Take me as I am, take my life. / I would give it all,
I would sacrifice.” Yet even this extreme profession of selflessness can easily be seen as masking a deeper selfishness.
Why, after all, is he saying this? For the purposes of seduction, of course. He may believe he is sincere, but then perhaps
this is one of nature’s tricks: only by fooling the seducer can the seduction be successful. Besides, even if he’s telling the
truth, what does that show? That he would rather die than be without his love? Selfishness again! Death is better than
being miserable for him.
This view – known as psychological egoism – can be very persuasive. But although you can always explain away altruistic
behavior in selfish terms, it’s not clear why we should prefer a selfish explanation over an altruistic one simply because
it’s possible to do so.
From a logical point of view it’s important to see that from the fact that the act is pleasing it doesn’t follow that the act
was done for the sake of the pleasure. From the fact that saving a drowning swimmer makes one feel good, for example,
it doesn’t follow that the saving was done for the sake of the good feeling. Pleasure may be a happy result of an action
while not being the reason for the action.
There’s also an objection that can be brought against the egoistic hypothesis from the point of view of scientific method
– it can’t be tested. If every act can be interpreted as selfish, it’s not even possible to construct an experiment that might
falsify the hypothesis. If someone saves a drowning swimmer, he did it for selfish reasons. If he doesn’t save the
drowning swimmer, he didn’t do it for selfish reasons. Admissible hypotheses must, at least in principle, be somehow
testable. And since every possible act can be interpreted as selfish, no observation could ever in principle test psychological egoism.
Ethical egoism
Even if psychological egoism is true, however, it only says something about the facts of human psychology. It doesn’t say
anything about whether or not being egoistic is rational or moral – whether one ought to be selfish. In short, it leaves all
the big ethical questions unanswered. Ethicists cannot avoid the question of whether egoism is morally justified.
Adam Smith (1723–90) took a stab at an answer, at least in part, by arguing that selfishness in economic affairs is
morally justified because it serves the common good in the most efficient way: “It is not from the benevolence of the
butcher, the brewer, or the baker, that we expect our dinner,” he wrote, “but from their regard to their own interest.
We address ourselves, not to their humanity but their self-love, and never talk to them of our own necessities but of
their advantages.”
Smith’s argument in The Wealth of Nations does not, however, justify what is known as ethical egoism: the view that it’s
always ethical to act in one’s own interests. Even though it may be true that egoism is an efficient route to the common
good in certain contexts, it’s implausible that it’s always so. Contrary to popular conception, Smith’s general moral
theory is, in fact, decidedly not egoistic, grounding morality instead in sympathy, moral sentiment, and an unselfish
“impartial spectator.” Smith does not defend ethical egoism as a universal or even general principle. To do that, one needs to argue that egoism is itself morally justifiable, that it’s justifiable even if it doesn’t serve as a means to some other good.
Rational egoism
So, how might one argue that egoism is ethically justified? Well, many believe that ethics must be rational. Moral laws
might not be entirely derived from rational principles, but at the very least ethics must accord with reason, and not
command anything contrary to reason – that is, anything that’s inconsistent, self-contradictory, or conceptually
incoherent. So, if ethics must be rational, and one may rationally (consistently, etc.) act for the sake of self-interest, then
acting selfishly meets at least a rationality test for morality.
It’s not at all clear, however, how acting rationally for the sake of self-interest is in any ethical sense decisive. Helping
oneself seems no more or less rational than helping someone else. Might one not act rationally for the sake of immoral
aims? Indeed, many would argue that aims or goals cannot be established by rationality alone.
Perhaps the most important question with regard to this issue is whether there’s any conflict between self-interest and
altruism anyway. Many ancient Greek philosophers, including Plato and Aristotle, wouldn’t have seen any conflict
between egoism and altruism because they thought that if one behaves badly one ultimately harms oneself. The greedy
man, for example, is never at peace with himself, because he is never satisfied with what he has. In contrast, as Plato
had Socrates say before his own execution, “a good man cannot be harmed either in life or in death.” That may be too
optimistic a view. But the idea that being good is a form of “enlightened self-interest” is plausible.
But does enlightened self-interest give people a reason for being altruistic, or does it show genuine altruism isn’t
possible? Some would argue that any act that’s in one’s self-interest cannot be called altruistic, even if it helps others:
the concept of altruism excludes self-interested actions, even those that coincide with the interests of others. An
alternative view holds that altruism and self-interest are compatible: the fact that do-gooders know that doing good
helps them in no way diminishes the extent to which what they do is done for others. The dilemma can be posed with
regard to the Bryan Adams song. Is he lying when he says everything he does, he does it for her, if he also does it for
himself? Or has he just conveniently neglected to point out that his altruism requires no self-sacrifice?
2.8 Hedonism
Why be moral? One way to try to answer this question is to consider why it would be a good thing if every moral
problem were actually sorted out. What would everyone being good actually lead to? World peace. No one dying of
hunger. Everyone being free. Justice reigning supreme. And what would be so good about that?
The obvious answer is that then everyone would be happy – or at least, as happy as is humanly possible. So, the point of
being good is that it would lead to a happier world.
If this is right, then the basis of morality is hedonism: the view that the only thing that is of value in itself is happiness (or
pleasure, though for simplicity we will talk only of happiness for now), and the only thing bad in itself is unhappiness (or
pain). This might seem a surprising conclusion. After all, hedonism is usually associated with the selfish pursuit of
fleeting pleasures. So, how can it be the basis of morality?
Happiness as the ultimate good
The answer to this question must start with an explanation of why happiness is the only good. Aristotle (384–322 BCE)
thought this was evidently true, because there are things done for their own sake and things done for the sake of
something else. Things done for the sake of something else are not valuable in themselves, but only instrumentally
valuable, as means to an end. Those things done for their own sake, in contrast, are intrinsically valuable, as ends in
themselves. Of all the good things in life, only happiness, it seems, is prized for its own sake. Everything else is valued
only because it leads to happiness. Even love is not valued in itself – a love that makes us permanently miserable is not
worth having.
There is, however, nothing in this conclusion that entails pursuing selfish, fleeting pleasures. Epicurus (341–270 BCE),
one of the first hedonic philosophers, understood this well. He thought that no one could be happy if he or she
permanently sought intense pleasures, especially of the fleeting kind (what he called kinetic or active pleasures). Rather,
to be truly happy – or, perhaps better, “content” – one needs a certain calm, tranquility, and peace of mind (static
pleasures). And if we see that happiness has value in itself, then we have reason to be concerned with the happiness of
others, not just our own. Hence, Epicurus concluded, “It is impossible to live a pleasant life without living wisely and
honorably and justly, and it is impossible to live wisely and honorably and justly without living pleasantly.”
One of the most important hedonic ethics of the modern era is the utilitarianism of Jeremy Bentham (1748–1832) and
John Stuart Mill (1806–73). From the same premise – that pleasure and happiness are the only goods, and pain and
unhappiness the only evils – they concluded that actions are right in so far as they promote the greatest happiness of
the greatest number and wrong in so far as they diminish it.
Precisely what?
One of the recurring problems for hedonic philosophies is pinning down just what it is that is supposed to be intrinsically
valuable. Is it pleasure – by which we mean pleasant sensations? Or is it happiness, in which case what is that? A stable
state of mind? A temporary feeling of well-being? Objective flourishing? Or is each of these good in itself?
The problem is a persistent and serious one, for if we understand happiness and pleasure in conventional senses, it
becomes far from clear that they are intrinsic goods, above all others. Moreover, philosophers’ attempts to precisely
define the crucial qualities of pleasure (as Bentham did, for example, by pointing to properties like “intensity” and
“duration”) are notoriously slippery. Critics of Mill’s work argue that, if he were serious, he would have to admit that the
life of a contented pig is better than that of a troubled philosopher. Mill tried to reply to this by distinguishing between
higher pleasures of the mind and lower pleasures of the body (Utilitarianism, 1861).
But what makes higher pleasures higher? Mill thought “competent judges,” who had experienced both, would prefer a
life with some higher pleasures to one with only lower ones, but not vice versa. Yet, even if this were true, it doesn’t
seem to be the case that the higher pleasures are preferred simply because they are more pleasurable. If, however,
there are other reasons for choosing them, then hedonic considerations are not the only important ones after all.
Robert Nozick made an even stronger argument against hedonism in a thought experiment in which he asked if one
would choose to live happily in a virtual world or less happily in the real one. Almost everyone, he suggested, would
prefer the real world, which suggests people prefer reality to happiness. If that’s right, then happiness is not the only
thing that’s good in itself. It seems that truth and authenticity are, as well.
The next player in the story is Alfred Jules Ayer (1910–1989), who was influenced by both Hume’s and Moore’s
presentations of the fact–value problem. Hume and Moore each did two things. First, they explained why there is a
fact–value problem; second, they offered solutions to the problem by showing what moral value really is. For Hume, the
problem involves the fallacy of deriving ought from is, and his solution is that moral value rests on emotional reactions.
For Moore, the problem involves the naturalistic fallacy, and his solution involves intuitively recognizing moral goodness
within things.
Ayer also takes this two-pronged approach. First, he argues that the fact– value problem arises because moral
statements cannot pass a critical test of meaning called the verification principle. Second, expanding on Hume, his
solution is that moral utterances are only expressions of feelings, a position called emotivism. Let’s look at each of these
components.
Ayer’s Theory
Regarding the verification principle, in the 1930s, Ayer went to Vienna to study with a group of philosophers called the
“Logical Positivists,” who believed that the meaning of a sentence is found in its method of verification. According to
that test, all meaningful sentences must be either
(a) Tautologies (statements that are true by definition and of the form “A is A” or reducible to such statements) or
(b) Empirically verifiable (statements regarding observations about the world, such as “The book is red”).
Based on this test, mathematical statements, such as “All triangles have three sides,” are meaningful because they are
tautologies. The statement “The Empire State Building is in New York City” is meaningful because it is empirically
verifiable.
What, though, about value statements such as “Charity is good”? According to the above test, they are meaningless
because they are neither tautologies nor verifiable statements. That is, it is not true by definition that charity is good,
and there is no way to empirically verify whether charity is good. Similarly, according to the above test, a theological
statement such as “God is guiding your life” is meaningless because it is neither a tautology nor empirically verifiable.
Ayer makes his point about the meaninglessness of value utterances here:
[T]he fundamental ethical concepts are unanalyzable, inasmuch as there is no criterion by which one can test the validity
of the judgments in which they occur. ... The reason why they are unanalyzable is that they are mere pseudo-concepts.
The presence of an ethical symbol in a proposition adds nothing to its factual content. Thus if I say to someone, “You
acted wrongly in stealing that money,” I am not stating anything more than if I had simply said, “You stole that money.”
In adding that the action is wrong, I am not making any further statement about it.
Thus, there is a fact–value problem insofar as moral utterances fail the verification test and are not factual statements.
Ayer’s solution to the fact–value problem is that moral utterances function in a special nonfactual way. Although they
are indeed factually meaningless, they are not just gibberish. For Ayer, utterances such as “Charity is good” express our
positive feelings about charity in much the same way as if we shouted out "Charity—hooray!” Similarly, the utterance
“Murder is wrong” expresses our negative feelings about murder just as if we shouted “Murder—boo!” The view that
moral utterances merely express our feelings is called emotivism. Ayer emphasizes that moral utterances don’t even
report our feelings; they just express our feelings. Here’s the difference:
Reported feeling: “Charity is good” means “I have positive feelings about charity.”
Expressed feeling: “Charity is good” means “Charity—hooray!”
Even reports of feelings are in some sense factual: It is either true or false that “I have positive feelings about charity,”
and I can empirically verify this with a psychological analysis of my mental state. However, the emotional expression
“Charity—hooray!” is like a grunt or a sigh; there is nothing to factually report.
Philosophers have introduced two terms to distinguish between factual and nonfactual utterances: cognitive and
noncognitive. When a statement has factual content, it is cognitive: We can know (or “cognize”) its truth value—
whether it is true or false. When a statement lacks factual content, it is noncognitive: It has no truth value. Traditional
moral theories all claim to be cognitivist: They all claim that moral statements have truth value, and each traditional
theory gives its own cognitivist interpretation of the moral utterance “Charity is good.”
Moore’s intuitionist solution to the fact–value problem is also cognitivist because for him “Charity is good” means
“Charity has the indefinable property of moral goodness” (which, according to Moore, we know to be true through
moral intuition). For Ayer, all these cognitivist theories are misguided. Because moral utterances like “Charity is good”
do not pass the test for meaning by the verification principle, they cannot be cognitive. The content that they have is
only noncognitive and takes the form of expressing our feelings.
Ayer’s account of emotivism directly attacks many of our cherished assumptions about morality. We typically think that
moral utterances are factually meaningful— not so according to Ayer. We typically think that morality involves some use
of our reasoning ability—again, not so for Ayer. What’s perhaps most unsettling about Ayer’s theory is its implication
that ethical disagreement is fundamentally a disagreement in attitude. Suppose you and I disagree about whether
abortion is morally permissible and we debate the issue—in a civilized way without any emotional outbursts. In Ayer’s
view, this is still simply a matter of us having underlying emotional attitudes that conflict; it is not really a disagreement
about facts of the matter.
Criticisms of Emotivism
Several objections to Ayer’s emotivism were quickly forthcoming after the appearance of his book. A first criticism was
that the verification theory of meaning, upon which Ayer’s emotivism was founded, had serious problems.
Specifically, it did not pass its own test. Here in brief is the principle:
Verification principle: A statement is meaningful if and only if it is either tautological or empirically verifiable.
We now ask the question, “Is the verification principle itself either tautological or empirically verifiable?” The answer is
that it is not, which means that the verification principle is meaningless. If that’s the case, then we are not obliged to use
the verification principle as a test for moral utterances. The rest of Ayer’s emotivist analysis of morality thus falls apart.
Second, there is a problem with the emotivist view that ethical disagreements are fundamentally disagreements in
attitude. Specifically, this blurs an important distinction between having reasons for changing attitudes and having
causes that change our attitudes. Suppose again that you and I are debating the abortion issue. Consider now two
methods of resolving our dispute. Method 1 involves you giving me a series of reasons in support of your position, and I
eventually agree with you. Method 2 involves a surgeon operating on my brain in a way that alters my emotional
attitude about the abortion issue. Method 1 involves reasons behind my changed view, and Method 2 involves causes
for my changed view. The emotivist theory cannot easily distinguish between these two methods of attitude change.
One way or another, according to emotivism, changes in attitude will come only through some kind of causal
manipulation with our emotions. This is a problem because virtually everyone would agree that there is a major
difference between what is going on in Method 1 and Method 2, and it is only the former that is a legitimate way of
resolving moral disagreements.
Third, morality seems deeper than mere emotions or acting on feelings or attitudes. Moral judgments are
universalizable: If it is wrong for Jill to steal, then it is wrong for anyone relevantly similar to Jill to steal. Emotivism
reduces morality to isolated emotive expressions or attitudes that don’t apply universally. It makes more sense to see
morality as a function of applying principles such as “It is wrong to steal,” which has a universal element.
Ayer’s version of emotivism is rather extreme, and it is no surprise that it creates so many problems. A more moderate
version of emotivism was later proposed by Charles Leslie Stevenson (1908–1979) in his book Ethics and Language
(1944). Stevenson agrees that moral utterances have an emotive component that is noncognitive. However, he argues
that moral utterances sometimes have cognitive elements too. Moral utterances are so complex, Stevenson says, that
we cannot give a specific pattern that applies to all moral utterances all the time.
Nevertheless, a typical moral utterance like “Charity is good” might have these specific components:
Emotive expression (noncognitive): “Charity—hooray!”
Description of other qualities (cognitive): “Charity has qualities or relations X, Y, and Z” (for example, reduces suffering,
reduces social inequality).
Stevenson’s suggestion is reasonable. If we are unhappy with Ayer’s extreme emotivism, we can still accept that there is
some noncognitive emotive element to moral utterances. Indeed, considering how frequently emotion enters into our
moral evaluations, such as the opening example from the Weblog, we will want to recognize at least a more limited role
of emotive expressions within moral discussions.
It is important to know how to reason well in thinking or speaking about ethical matters. This is helpful not only in trying
to determine what to think about controversial ethical matters but also in arguing for something you believe is right and
in critically evaluating positions held by others.
The Structure of Ethical Reasoning and Argument
To be able to reason well in ethics, you need to understand what
constitutes a good argument. We can do this by looking at an argument’s basic structure. This is the structure not only of
ethical arguments about what is good or right but also of arguments about what is the case or what is true.
Suppose you are standing on the shore and a person in the water calls out for help. Should you try to rescue that
person? You may or may not be able to swim. You may or may not be sure you could rescue the person. In this case,
however, there is no time for reasoning, as you would have to act promptly. On the other hand, if this were an imaginary
case, you would have to think through the reasons for and against trying to rescue the person. You might conclude that
if you could actually rescue the person, then you ought to try to do it. Your reasoning might go as follows:
You ought to do what will save a person’s life (if you can).
You can save this person’s life.
Therefore, you ought to try to save this person.
Some structure like this is implicit in any ethical argument, although some are longer and more complex chains than the
simple form given here. One can recognize the reasons in an argument by their introduction through key words such as
since,
because, and given that. The conclusion often contains terms such as thus and therefore. The reasons supporting the
conclusion are called premises. In a sound argument, the premises are true and the conclusion follows from them. In the
case presented earlier, then, we want to know whether you can save this person and also whether his life is valuable.
We also need to know whether the conclusion actually follows from the premises. In the case of the earlier examples, it
does. If you say you ought to do what will save a life and you can do it, then you ought to do it. However, there may be
other principles that would need to be brought into the argument, such as whether and why one is always obligated to
save someone else’s life when one can.
To know under what conditions a conclusion actually follows from the premises, we would need to analyze arguments in
much greater detail than we can do here. Suffice it to say, however, that the connection is a logical connection—in other
words, it must make rational sense. You can improve your ability to reason well in ethics first by being able to pick out
the reasons and the conclusion in an argument. Only then can you subject them to critical examination in ways we
suggest here.
Ethical reasoning can be done well or done poorly. Ethical arguments can be constructed well or constructed poorly. A
good argument is a sound argument. It has a valid form in which the conclusion actually follows from the premises, and
the premises or reasons given for the conclusion are true. An argument is poorly constructed when it is fallacious or
when the reasons on which it is based are not true or are uncertain. An ethical argument always involves some claim
about values—for example, that saving a life is good. These value-based claims must be established through some
theory of values. Part I of this book examines different theories that help establish basic values.
Ethical arguments also involve conceptual and factual matters. Conceptual matters are those that relate to the meaning
of terms or concepts. For example, in a case of lying, we would want to know
what lying actually is. Must it be verbal? Must one have an intent to deceive? What is deceit itself? Other conceptual
issues central to ethical arguments may involve questions such as, “What constitutes a ‘person’?” (in arguments over
abortion, for example) and “What is ‘cruel and unusual punishment’?” (in death penalty arguments, for example).
Sometimes, differences of opinion about an ethical issue are a matter of differences not in values but in the meaning of the
terms used.
Ethical arguments often also rely on factual claims. In our example, we might want to know whether it was actually true
that you could save the drowning person. In arguments about the death penalty, we may want to know whether such
punishment is a deterrent. In such a case, we need to know what scientific studies have found and whether the studies
themselves were well grounded. To have adequate factual grounding, we will want to seek out a range of reliable
sources of information and be open-minded. The chapters in Part II of this book include factual material that is relevant
to ethical decisions about the topics under consideration.
It is important to be clear about the distinction between facts and values when dealing with moral conflict and
disagreement. We need to ask whether we disagree about the values involved, about the concepts and terms we are
employing, or about the facts connected to the case.
There are various ways in which reasoning can go wrong or be fallacious. We began this chapter by considering the
fallacy of begging the question or circular argument. Such reasoning draws on the argument’s conclusion to support its
premises, as in “abortion is wrong because it is immoral.” Another familiar problem of argumentation is the ad hominem
fallacy. In this fallacy, people say something like, “That can’t be right because just look who is saying it.” They look at the
source of the opinion rather than the reasons given for it. You can find out more about these and other fallacies from
almost any textbook in logic or critical thinking.
You also can improve your understanding of ethical arguments by making note of a particular type of reasoning that is
often used in ethics: arguments from analogy. In this type of argument, one compares familiar examples with the issue
being disputed. If the two cases are similar in relevant ways, then whatever one concludes about the first familiar case
one should also conclude about the disputed case. For example, Judith Jarvis Thomson (as discussed in Chapter 11) once
asked whether it would be ethically acceptable to “unplug” someone who had been attached to you and who was using
your kidneys to save his life. If you say that you are justified in unplugging, then a pregnant woman is also justified in
doing the same with regard to her fetus. The reader is prompted to critically examine such an argument by asking
whether or not the two cases were similar in relevant ways—that is, whether the analogy fits.
Finally, we should note that giving reasons to justify a conclusion is also not the same as giving an explanation for why
one believes something. A woman might explain that she does not support euthanasia because that was the way she
was brought up or that she is opposed to the death penalty because she cannot stand to see someone die. To justify
such beliefs, one would need rather to give reasons that show not why one does, in fact, believe something but why one
should believe it. Nor are rationalizations justifying reasons. They are usually reasons given after the fact that are not
one’s true reasons. Rationalizations are usually excuses, used to explain away bad behavior. These false reasons are
given to make us look better to others or ourselves. To argue well about ethical matters, we need to examine and give
reasons that support the conclusions we draw.
1.2 Agency
If, as the result of an earthquake, a boulder were to break off from the face of a cliff and kill an unfortunate
mountaineer below, it wouldn't make sense to hold either the boulder or the Earth morally accountable for her
death. If, on the other hand, an angry acquaintance dislodged the rock, aiming to kill the mountaineer for the
sake of some personal grudge, things would be different. Why?
One of the key differences between the two deaths is that the second, unlike the first, involves
"agency."This difference is a crucial one, as agency is often taken to be a necessary condition or requirement of
moral responsibility. Simply put, something can only be held morally responsible for an event if that something
is an agent. Angry acquaintances are agents, but the Earth is not (assuming, of course, that the Earth isn't some
animate, conscious being). This seems obvious enough, but what precisely is agency, and why does it matter?
Agency for many involves the exercise of freedom. Freedom is usually taken to require the ability to act
otherwise or in ways contrary to the way one is currently acting or has acted in the past. For many holding this
point of view, being responsible (and thence an agent) means possessing a "free will" through which one can act
independently of desires and chains of natural causes. Of course, there are also many philosophers who don't
think much of this conception of freedom. Most of these critics, however, still accept using the term "free"
for actions that proceed in a causal way from one's own self or character, in the absence of external
compulsion, coercion, or mental defect. (These philosophers are called "compatibilists.")
Conditions of agency
For thinkers following Aristotle (384-322 BCE), agency requires that one understand what one is doing, what
the relevant facts of the matter are, and how the causal order of the world works to the extent that one is able to
foresee the likely consequences of chosen courses of action.
It's also important that an agent possess some sort of self-understanding - that is, some sense of self-
identity, knowledge of who and what one is, what one's character and emotional architecture are like, what one
is capable and not capable of doing. Self-knowledge is important because it doesn't normally make sense to
think of someone as a free agent who is unaware of what he or she does - for example, while asleep or during an
unforeseen seizure. It can still make sense to talk of some of this kind of action as the result of agency, however,
if the impairments that lead to the unconscious conduct are the result of one's own free choices. For example,
consider someone who voluntarily gets drunk while piloting an airliner, knowing full well what's likely to
happen; or consider someone else whose ignorance about the small child standing behind the car he has just put
into gear results from negligence, from his not bothering to look out the rear window.
For Immanuel Kant (1724-1804), the ability to reason is crucial to agency. In Kant's Critique of Practical
Reason (1788), what's important is that one act unselfishly, purely on the basis of reason or a certain kind of
rational principle (a categorical imperative), instead of on the basis of desire or fear. Only this sort of rational
action qualifies for Kant as truly moral action, because even acting well on the basis of desire ultimately boils
down to the same thing as acting in other ways for the sake of desire. Desires and fears simply come over us,
the result of natural and social causes beyond our control. To act strictly from desire is to be a slave to desire.
Only by acting on the basis of reason alone are we, for Kant, autonomous - that is, self-governing beings who
legislate moral laws of action to ourselves.
Other conditions of agency
But perhaps it's wrong to regard feelings and desires as irrelevant. Indeed, shouldn't moral agency also be
understood to require the capacity to sympathize with others, to be distressed by their suffering, and to feel
regret or remorse after harming others or acting immorally? Would it make sense to regard as a moral or free
agent a robot that behaved rationally and that possessed all the relevant information but didn't have any inner,
affective life? It's not obvious what the answer to this question is. Star Trek's Mr Spock, for example, seemed
to be a moral agent, even though the only reason he had for condemning immoral acts was that they were
"illogical."
Similarly, it might be thought that the right social conditions must be in place for moral agency to be
possible. Could people truly be moral agents capable of effective action without public order and security,
sufficient means of sustenance, access to information and communication, education, a free press, and an open
government? But again, this is far from obvious. Although it seems true that when civilization breaks down
immorality or amorality rises, it also seems excessively pessimistic to conclude that moral agency is utterly
impossible without many of the supports and constraints of society.
Types of agent
It may seem strange to consider things like corporations or nations or mobs or social classes as agents, but the
issue often arises in reflections about whether one should make judgments that attribute collective
responsibility. People did speak of the guilt of the German nation and demand that all Germans contribute to
war reparations after World War I. When the government of a truly democratic nation goes to war, because its
policy in some sense expresses "the will of the people," the country arguably acts as though it were a kind of
single agent. People also, of course, speak collectively of the responsibilities of the ruling class, corporations,
families, tribes, and ethnic groups. Because human life is populated by collectives, institutions, organizations,
and other social groupings, agency can sometimes be dispersed or at least seem irremediably unclear. These
"gray zones," as thinkers like Primo Levi (The Periodic Table, 1975) and Claudia Card (TheAtrocity Paradigm,
2002) have called them, make determining agency in areas like sexual conduct and political action exceedingly
difficult.
There are three ways of understanding how we talk about collectives as agents. One is that it's just mistaken
and that collectives cannot be agents. The second is that collectives are agents in some alternative, perhaps
metaphorical sense - that they are like real agents but not quite the same as them. The third is that collectives are
as much agents as individual people, who are themselves perhaps not as singular, cohesive, and unified as many
would like to believe.
1.3 Authority
I will punish the Amalekites for what they did to Israel when they waylaid them as they came up
from Egypt. Now go, attack the Amalekites and totally destroy everything that belongs to them. Do
not spare them; put to death men and women, children and infants, cattle and sheep, camels and
donkeys. (1 Samuel 15:2-3)
This is a pretty terrifying command. But Saul and the Israelites had one very good reason to obey it: the order
came from God. Surely, God has the authority to command anything at all, even mass killing and total war. No?
It certainly seems plausible that recognizing authority figures is important to moral development. As
Sigmund Freud (1856-1939) argued, we all begin our lives under the authority of some caregiver who
commands us to do or not to do various things. In growing older we encounter the authority of governments,
teachers, traditions, customs, religions, social groups, and even perhaps conscience each demanding obedience
to its commands. What all this seems to suggest is that actions are morally right or wrong when they are,
accordingly, either commanded or prohibited by the proper authority.
Grounding ethics simply in authority is, however, deeply problematic. How can one decide among
competing authorities without appealing to something independent of authority? And if authority determines
right and wrong, then how can anyone possibly criticize or oppose authority? In order to ground ethics in
authority, therefore, it's necessary to demonstrate that there are, in fact, legitimate authorities and then figure out
what it is that they command. Logicians have a name for the mistake of justifying a claim through inappropriate
or unwarranted authorities: argumentum ad verecundiam. Avoiding this form of error is harder than it looks.
The divine
One of the most common moral authorities to which people appeal is the divine. Some versions of this appeal
may be collected under the title "voluntarism," as they ground morality in God's "will" (voluntas). This strategy,
however, faces two problems. The first is how one is to know what these divine commands are! Some maintain
that they are apprehended through revelation, as they were for Moses and Muhammad. Others argue that they're
evident in the order of nature, a position advanced by natural law theorists such as Thomas Aquinas (1224-74),
Hugo Grotius (1583-1645), and the deists.
The problem here is that people appear to arrive at very different interpretations by these methods of what
God or other divine beings will. The Book of Samuel, for example, admits of various interpretations; and even
among those who accept the legitimacy of revelation, let alone the atheists, not everyone accepts or can even be
expected to accept the authority of Abrahamic Scripture. If there's no way to achieve agreement among
reasonable people about a given topic, even when the same methods for determining truth are employed, is it
meaningful to say that anything can be known about it? In any case, it's certainly very hard to defend the claim
to know God's will.
The second problem concerns why God should be recognized as an authority anyway. This may seem a
strange question, but consider the line of inquiry developed in Plato's (427-347 BCE) well-known dialogue,
Euthyphro. Socrates poses a dilemma that may be recast in the terms of Abrahamic monotheism this way: does
God command some actions because they are morally right, or are those actions morally right because God
commands them? If the answer is the former, then something besides God grounds the authority of God's
commands. But if it's the latter, then God's authority and moral goodness seem to result from nothing more than
a whimsical or arbitrary exercise of power.
Society, tradition, state
God, of course, isn't the only authority to which people appeal in moral matters. Many different philosophers
ground morality in social authority. Social relativists hold that the authority of social conventions and
agreements determines what in any given society is moral and immoral. As social agreements and conventions
are variable, different, and changing, societies can and do authorize different, changing moralities. Moreover, as
conventions and agreements are limited to the social order that has produced them and social orders differ
widely, there may be no common moral ground across societies.
Considering the importance of traditional authorities animates critical theories as well as explanations of
moral norms. Conservative thinkers like Michael Oakeshott (1901-90) and Edmund Burke (1729-97), for
example, have emphasized the authority of tradition in determining moral norms, and they criticize thinkers
who would ground morality in some purportedly universal rationality that speciously claims to operate
independently of tradition. And, indeed, the procedures of reasoning in the sciences and elsewhere have,
especially in modern times, made claim to independence, universality, and preeminence in determining what
people should and shouldn't believe. (Would it be proper to call the claims of reason the commands of
authority?)
The authority of the state has been justified in various ways, by appeals to divine right, nature, tradition,
and conquest. During the modern era philosophers have characteristically centered their attempts to justify state
authority upon the consent of the governed, in addition to appealing to strictly practical or utilitarian issues. For
modern liberal theorists, the "will of the people" is the ultimate authority, at least in matters political.
These lines of thought, however, can lead to the undermining of critical resistance or to supine
acquiescence to immorality. If society defines what's right and wrong, then taking a moral stance against the
current norms of society seems to be ruled out. On what basis, for example, would Martin Luther King, Jr, or
others in the US civil rights movement, have launched moral criticisms of racial segregation and
discrimination? If one accepts that the state derives its authority from the consent of the people, how can anyone
criticize a democratic majority that authorizes slavery, the oppression of women, or something like the
Holocaust? It seems that a proper democratic theory needs, at the very least, to distinguish between government
by general consent and crude majoritarianism. In On Liberty (1859), John Stuart Mill (1806-73) undertook to do
just that.
The ancient sophist Thrasymachus (at least as he's depicted by Plato in Republic), the nineteenth-century
German thinker Friedrich Nietzsche (1844-1900), and the French post-structuralist Michel Foucault (1926-84)
all in one way or the other hold that those who are able to establish their power authorize moral norms. That is,
the authority of any specific set of moral norms is the expression of power exercised by some over others. On
the one hand, this position seems an amoral and cynical rejection of morality. On the other hand, perhaps it
offers a critical tool for extricating people from the abuse, exploitation, and oppression of others. Perhaps some
forms of power are to be preferred to others.
Wherever one falls on this issue, it's important to remember that people will do all sorts of terrible things if
they feel someone with authority commands them to do it. Psychologist Stanley Milgram conducted disturbing
experiments in 1961-2, in which ordinary people were found to be willing to administer dangerous electric
shocks to virtual strangers - just because authoritative people wearing white lab coats commanded them to do
so. If we are to obey authorities, we need to be very sure they deserve to be obeyed.
1.4 Autonomy
The legitimacy of "living wills" or "advance directives" is at present a hotly contested social and moral issue.
Expressing people's preferences in advance, should they become unable to express them because of illness or injury, these curious
documents aim to make sure that physicians treat individuals as they wish, not as others think best. The 2005
case of Terri Schiavo, the brain-damaged American woman whose husband and parents fell into a sensational
and painful legal wrangle concerning her wishes, illustrates all too well the reasons people write living wills.
Proponents of the practice argue that one of the most important bases for the human capacity to act in moral
ways is the ability not only to choose and to act on those choices but also to choose for oneself, to be the author
of one's own life. This capacity is known as "autonomy." But what does it mean to be autonomous?
Autonomy requires, at the very least, an absence of compulsion. If someone is compelled to act rightly by
some internal or external force - for instance, to return a lost wallet packed with cash - that act isn't autonomous.
So, even though the act was morally proper, because it was compelled it merits little praise.
For philosophers like Immanuel Kant (1724-1804), this is why autonomy is required for truly moral action.
Kant argues in his Critique of Practical Reason (1788) and elsewhere that autonomously acting without regard
for one's desires or interests is possible because people are able to act purely on the basis of rational principles
given to themselves by themselves. Indeed, the word "autonomous" derives from the Greek for self (auto) and
law (nomos) and literally means self-legislating, giving the law to one's self. Actions done through external or
internal compulsion are, by contrast, "heteronomous" (the law being given by something hetero or "other"). In
this way autonomy differs from, though also presupposes, metaphysical freedom, which is commonly defined
as acting independently of the causal order of nature. Political freedom, of course, has to do with people's
relationship to government and other people regardless of their relationship to systems of cause and effect. But
theories of political freedom also draw upon the concept of autonomy.
Politics
Conceptions of autonomy are important politically, because one's ideas about politics are often bound up with
one's ideas about what people are and what they're capable of doing or not doing. Those who think that people
are not capable or little capable of self-legislating, self-regulating action are not likely to think that people are
capable of governing themselves.
Liberal democratic theory, however, depends upon that ability. The authority of government in liberal
democracies draws its justification from the consent of the governed. Through systems of elections and
representation the people of democracies give the law to themselves. Liberal democracies are also configured to
develop certain institutions (like the free press) and to protect political and civil rights (such as the rights to
privacy and property) toward the end of ensuring people's ability to act autonomously and effectively. In this
way, liberal democrats recognize autonomy not only as an intrinsic human capacity but also as a political
achievement and an important element of human well-being.
The legitimacy of liberal democracy is therefore threatened by claims that human beings are not the truly
autonomous agents we believe ourselves to be. And there is no shortage of people prepared to argue this view.
Many critics maintain that people really can't act independently of their passions, of their families, of the
societies in which they live, of customs, conventions, and traditions, of structures of privilege, exploitation, and
oppression, including internalized oppression. Some go as far as to claim that the sort of autonomy liberal
democrats describe is the fantasy of wealthy, white, European and North American males - or, worse, a
privilege they enjoy only because they deny it to others. Still other critics regard the idea as a mystifying
ideology through which the ruling class deludes people about the exploitive system under which they labor.
Medical ethics
The concept of autonomy has become increasingly important, however, in medicine. Many medical ethicists
have effectively argued that because human beings are autonomous, no medical treatments ought to be
administered to patients without their informed consent to any procedures or therapies - unless they're not
competent to make informed choices (e.g. perhaps because of mental illness). That is, patients must both: (1)
understand what is to be done to them, what risks are entailed, and what consequences may follow; and (2)
agree to the treatment in light of that understanding. When they fail to acquire informed consent, medics are
said to act paternalistically, invasively, and disrespectfully, deciding what's best for an adult who really should
be allowed to make such decisions for him or herself.
Problems, however, with the requirement to respect autonomy arise in trying to determine precisely when a
patient is incompetent, as well as when a patient is sufficiently well informed. Just how much mental
impairment is too much? Just how much information is enough? Living wills are intended as one way to get
around at least some of these problems.
Moral Dilemmas
Moral dilemmas, at the very least, involve conflicts between moral requirements. Consider the cases given
below.
1. Examples
In Book I of Plato’s Republic, Cephalus defines ‘justice’ as speaking the truth and paying one’s debts.
Socrates quickly refutes this account by suggesting that it would be wrong to repay certain debts—for
example, to return a borrowed weapon to a friend who is not in his right mind. Socrates’ point is not that
repaying debts is without moral import; rather, he wants to show that it is not always right to repay one’s
debts, at least not exactly when the one to whom the debt is owed demands repayment. What we have here
is a conflict between two moral norms: repaying one’s debts and protecting others from harm. And in this
case, Socrates maintains that protecting others from harm is the norm that takes priority.
Nearly twenty-four centuries later, Jean-Paul Sartre described a moral conflict the resolution of which was,
to many, less obvious than the resolution to the Platonic conflict. Sartre (1957) tells of a student whose
brother had been killed in the German offensive of 1940. The student wanted to avenge his brother and to
fight forces that he regarded as evil. But the student’s mother was living with him, and he was her one
consolation in life. The student believed that he had conflicting obligations. Sartre describes him as being
torn between two kinds of morality: one of limited scope but certain efficacy, personal devotion to his
mother; the other of much wider scope but uncertain efficacy, attempting to contribute to the defeat of an
unjust aggressor.
While the examples from Plato and Sartre are the ones most commonly cited, there are many others.
Literature abounds with such cases. In Aeschylus’s Agamemnon, the protagonist ought to save his daughter
and ought to lead the Greek troops to Troy; he ought to do each but he cannot do both. And Antigone, in
Sophocles’s play of the same name, ought to arrange for the burial of her brother, Polyneices, and ought to
obey the pronouncements of the city’s ruler, Creon; she can do each of these things, but not both. Areas of
applied ethics, such as biomedical ethics, business ethics, and legal ethics, are also replete with such cases.
3. Problems
In Sartre’s case, unlike the Platonic one, it is less obvious that one of the requirements overrides the other; why this is so is itself a matter of dispute. Some will say that our uncertainty about what to do in this case is simply the result
of uncertainty about the consequences. If we were certain that the student could make a difference in
defeating the Germans, the obligation to join the military would prevail. But if the student made little
difference whatsoever in that cause, then his obligation to tend to his mother’s needs would take
precedence, since there he is virtually certain to be helpful. Others, though, will say that these obligations
are equally weighty, and that uncertainty about the consequences is not at issue here.
Ethicists as diverse as Kant (1971/1797), Mill (1979/1861), and Ross (1930, 1939) have assumed that an
adequate moral theory should not allow for the possibility of genuine moral dilemmas. Only recently—in
the last sixty years or so—have philosophers begun to challenge that assumption. And the challenge can
take at least two different forms. Some will argue that it is not possible to preclude genuine moral
dilemmas. Others will argue that even if it were possible, it is not desirable to do so.
To illustrate some of the debate that occurs regarding whether it is possible for any theory to eliminate
genuine moral dilemmas, consider the following. The conflicts in Plato’s case and in Sartre’s case arose
because there is more than one moral precept (using ‘precept’ to designate rules and principles), more than
one precept sometimes applies to the same situation, and in some of these cases the precepts demand
conflicting actions. One obvious solution here would be to arrange the precepts, however many there might
be, hierarchically. By this scheme, the highest ordered precept always prevails, the second prevails unless it
conflicts with the first, and so on. There are at least two glaring problems with this obvious solution,
however. First, it just does not seem credible to hold that moral rules and principles should be
hierarchically ordered. While the requirements to keep one’s promises and to prevent harm to others clearly
can conflict, it is far from clear that one of these requirements should always prevail over the other. In the
Platonic case, the obligation to prevent harm is clearly stronger. But there can easily be cases where the
harm that can be prevented is relatively mild and the promise that is to be kept is very important. And most
other pairs of precepts are like this. This was a point made by Ross in The Right and the Good (1930,
Chapter 2).
The second problem with this easy solution is deeper. Even if it were plausible to arrange moral precepts
hierarchically, situations can arise in which the same precept gives rise to conflicting obligations. Perhaps
the most widely discussed case of this sort is taken from William Styron’s Sophie’s Choice (1980; see
Greenspan 1983 and Tessman 2015, 160–163). Sophie and her two children are at a Nazi concentration
camp. A guard confronts Sophie and tells her that one of her children will be allowed to live and one will
be killed. But it is Sophie who must decide which child will be killed. Sophie can prevent the death of
either of her children, but only by condemning the other to be killed. The guard makes the situation even
more excruciating by informing Sophie that if she chooses neither, then both will be killed. With this added
factor, Sophie has a morally compelling reason to choose one of her children. But for each child, Sophie
has an apparently equally strong reason to save him or her. Thus the same moral precept gives rise to
conflicting obligations. Some have called such cases symmetrical (Sinnott-Armstrong 1988, Chapter 2).
4. Dilemmas and Consistency
Opponents of dilemmas have argued that their existence is inconsistent with certain widely accepted principles of deontic logic. One such principle is a principle of deontic consistency:
(PC) OA → ¬O¬A
Intuitively this principle just says that the same action cannot be both obligatory and forbidden. Note that as initially described, the existence of dilemmas does not conflict with PC. For as described, dilemmas involve a situation in which an agent ought to do A, ought to do B, but cannot do both A and B. But if we add a principle of deontic logic, then we obtain a conflict with PC:
(PD) □(A→B) → (OA→OB)
Intuitively, PD just says that if doing A brings about B, and if A is obligatory (morally required), then B is obligatory (morally required). The first argument that generates inconsistency can now be stated. Premises (1), (2), and (3) represent the claim that moral dilemmas exist.
1. OA
2. OB
3. ¬C(A&B)
5. □¬(B&A) (from 3)
6. □(B→¬A) (from 5)
7. □(B→¬A) → (OB→O¬A) (an instance of PD)
8. OB→O¬A (from 6 and 7)
9. O¬A (from 2 and 8)
10. OA & O¬A (from 1 and 9)
Line (10) directly conflicts with PC. And from PC and (1), we can conclude:
11. ¬O¬A
And, of course, (9) and (11) are contradictory. So if we assume PC and PD, then the existence of dilemmas generates an inconsistency of the old-fashioned logical sort. (Note: In standard deontic logic, the ‘□’ in PD typically designates logical necessity. Here I take it to indicate physical necessity so that the appropriate connection with premise (3) can be made. And I take it that logical necessity is stronger than physical necessity.)
Two other principles accepted in most systems of deontic logic entail PC. So if PD holds, then one of these
additional two principles must be jettisoned too. The first says that if an action is obligatory, it is also
permissible. The second says that an action is permissible if and only if it is not forbidden. These principles
may be stated as:
(OP) OA → PA
and
(D) PA ↔ ¬O¬A
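One way to make this entailment explicit is a short derivation, sketched here in the same notation (the line numbering is only illustrative):

\begin{align*}
&\text{1. } OA && \text{assumption}\\
&\text{2. } PA && \text{from 1 and OP}\\
&\text{3. } \neg O\neg A && \text{from 2 and D}\\
&\text{4. } OA \rightarrow \neg O\neg A && \text{from 1--3 by conditional proof; this is PC}
\end{align*}

This is why rejecting PC while retaining OP and D is not a stable option for supporters of dilemmas.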
Principles OP and D are basic; they seem to be conceptual truths (Brink 1994, section IV). The second
argument that generates inconsistency, like the first, has as its first three premises a symbolic representation
of a moral dilemma.
1. OA
2. OB
3. ¬C(A&B)
And like the first, this second argument shows that the existence of dilemmas leads to a contradiction if we
assume two other commonly accepted principles. The first of these principles is that ‘ought’ implies ‘can’.
Intuitively this says that if an agent is morally required to do an action, it must be possible for the agent to
do it. This principle seems necessary if moral judgments are to be uniquely action-guiding. We may
represent this as
4. OA → CA (for all A)
The other principle, endorsed by most systems of deontic logic, says that if an agent is required to do each
of two actions, she is required to do both. We may represent this as
5. (OA & OB) → O(A & B) (for all A and all B)
So if one assumes that ‘ought’ implies ‘can’ and if one assumes the principle represented in (5)—dubbed
by some the agglomeration principle (Williams 1965)—then again a contradiction can be derived.
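Spelled out, the derivation can run as follows; this is a sketch that simply continues the numbering of the premises and principles above, so the continuation numbers are illustrative:

\begin{align*}
&\text{6. } O(A\,\&\,B) && \text{from 1, 2, and 5 (agglomeration)}\\
&\text{7. } C(A\,\&\,B) && \text{from 4 and 6 (`ought' implies `can')}\\
&\text{8. } \neg C(A\,\&\,B) && \text{premise 3, contradicting 7}
\end{align*}

So anyone who accepts both ‘ought’ implies ‘can’ and the agglomeration principle must deny that situations satisfying premises (1)–(3) can arise.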
8. Multiple Moralities
Another issue raised by the topic of moral dilemmas is the relationship among various parts of morality.
Consider this distinction. General obligations are moral requirements that individuals have simply because
they are moral agents. That agents are required not to kill, not to steal, and not to assault are examples of
general obligations. Agency alone makes these precepts applicable to individuals. By contrast, role-related
obligations are moral requirements that agents have in virtue of their role, occupation, or position in
society. That lifeguards are required to save swimmers in distress is a role-related obligation. Another
example, mentioned earlier, is the obligation of a defense attorney to hold in confidence the disclosures
made by a client. These categories need not be exclusive. It is likely that anyone who is in a position to do
so ought to save a drowning person. And if a person has particularly sensitive information about another,
she should probably not reveal it to third parties regardless of how the information was obtained. But
lifeguards have obligations to help swimmers in distress when most others do not because of their abilities
and contractual commitments. And lawyers have special obligations of confidentiality to their clients
because of implicit promises and the need to maintain trust.
General obligations and role-related obligations can, and sometimes do, conflict. If a defense attorney
knows the whereabouts of a deceased body, she may have a general obligation to reveal this information to
family members of the deceased. But if she obtained this information from her client, the role-related
obligation of confidentiality prohibits her from sharing it with others. Supporters of dilemmas may regard
conflicts of this sort as just another confirmation of their thesis. Opponents of dilemmas will have to hold
that one of the conflicting obligations takes priority. The latter task could be discharged if it were shown
that one of these two types of obligations always prevails over the other. But such a claim is implausible; for it
seems that in some cases of conflict general obligations are stronger, while in other cases role-related duties
take priority. The case seems to be made even better for supporters of dilemmas, and worse for opponents,
when we consider that the same agent can occupy multiple roles that create conflicting requirements. The
physician, Harvey Kelekian, in Margaret Edson’s (1999/1993) Pulitzer Prize winning play, Wit, is an
oncologist, a medical researcher, and a teacher of residents. The obligations generated by those roles lead
Dr. Kelekian to treat his patient, Vivian Bearing, in ways that seem morally questionable (McConnell
2009). At first blush, anyway, it does not seem possible for Kelekian to discharge all of the obligations
associated with these various roles.
In the context of issues raised by the possibility of moral dilemmas, the role most frequently discussed is
that of the political actor. Michael Walzer (1973) claims that the political ruler, qua political ruler, ought to
do what is best for the state; that is his principal role-related obligation. But he also ought to abide by the
general obligations incumbent on all. Sometimes the political actor’s role-related obligations require him to
do evil—that is, to violate some general obligations. Among the examples given by Walzer are making a
deal with a dishonest ward boss (necessary to get elected so that he can do good) and authorizing the torture
of a person in order to uncover a plot to bomb a public building. Since each of these requirements is
binding, Walzer believes that the politician faces a genuine moral dilemma, though, strangely, he also
thinks that the politician should choose the good of the community rather than abide by the general moral
norms. (The issue here is whether supporters of dilemmas can meaningfully talk about action-guidance in
genuinely dilemmatic situations. For one who answers this in the affirmative, see Tessman 2015, especially
Chapter 5.) Such a situation is sometimes called “the dirty hands problem.” The expression, “dirty hands,”
is taken from the title of a play by Sartre (1946). The idea is that no one can rule without becoming morally
tainted. The role itself is fraught with moral dilemmas. This topic has received much attention recently.
John Parrish (2007) has provided a detailed history of how philosophers from Plato to Adam Smith have
dealt with the issue. And C.A.J. Coady (2008) has suggested that this reveals a “messy morality.”
For opponents of moral dilemmas, the problem of dirty hands represents both a challenge and an
opportunity. The challenge is to show how conflicts between general obligations and role-related
obligations, and those among the various role-related obligations, can be resolved in a principled way. The
opportunity for theories that purport to have the resources to eliminate dilemmas—such as Kantianism,
utilitarianism, and intuitionism—is to show how the many moralities under which people are governed are
related.
9. Conclusion
Debates about moral dilemmas have been extensive during the last six decades. These debates go to the
heart of moral theory. Both supporters and opponents of moral dilemmas have major burdens to bear.
Opponents of dilemmas must show why appearances are deceiving. Why are examples of apparent
dilemmas misleading? Why are certain moral emotions appropriate if the agent has done no wrong?
Supporters must show why several of many apparently plausible principles should be given up—principles
such as PC, PD, OP, D, ‘ought’ implies ‘can’, and the agglomeration principle. And each side must provide
a general account of obligations, explaining whether none, some, or all can be overridden in particular
circumstances. Much progress has been made, but the debate is apt to continue.
Bibliography
Cited Works
Aquinas, St. Thomas, Summa Theologiae, Thomas Gilby et al. (trans.), New York: McGraw-Hill, 1964–1975.
Blackburn, Simon, 1996, “Dilemmas: Dithering, Plumping, and Grief,” in Mason (1996): 127–139.
Brink, David, 1994, “Moral Conflict and Its Structure,” The Philosophical Review, 103: 215–247; reprinted
in Mason (1996): 102–126.
Coady, C.A.J., 2008, Messy Morality: The Challenge of Politics, New York: Oxford University Press.
Conee, Earl, 1982, “Against Moral Dilemmas,” The Philosophical Review, 91: 87–97; reprinted in Gowans
(1987): 239–249.
Dahl, Norman O., 1996, “Morality, Moral Dilemmas, and Moral Requirements,” in Mason (1996): 86–101.
Donagan, Alan, 1977, The Theory of Morality, Chicago: University of Chicago Press.
–––, 1984, “Consistency in Rationalist Moral Systems,” The Journal of Philosophy, 81: 291–309; reprinted
in Gowans (1987): 271–290.
Edson, Margaret, 1999/1993, Wit, New York: Faber and Faber.
Freedman, Monroe, 1975, Lawyers’ Ethics in an Adversary System, Indianapolis: Bobbs-Merrill.
Gowans, Christopher W. (editor), 1987, Moral Dilemmas, New York: Oxford University Press.
Greene, Joshua, 2013, Moral Tribes: Emotion, Reason, and the Gap Between Us and Them, New York:
Penguin Books.
Greenspan, Patricia S., 1983, “Moral Dilemmas and Guilt,” Philosophical Studies, 43: 117–125.
–––, 1995, Practical Guilt: Moral Dilemmas, Emotions, and Social Norms, New York: Oxford University
Press.
Haidt, Jonathan, 2012, The Righteous Mind: Why Good People are Divided by Politics and Religion, New
York: Pantheon.
Hill, Thomas E., Jr., 1996, “Moral Dilemmas, Gaps, and Residues: A Kantian Perspective,” in Mason (1996):
167–198.
Holbo, John, 2002, “Moral Dilemmas and the Logic of Obligation,” American Philosophical Quarterly, 39:
259–274.
Hursthouse, Rosalind, 1999, On Virtue Ethics, New York: Oxford University Press.
Kant, Immanuel, 1971/1797, The Doctrine of Virtue: Part II of the Metaphysics of Morals, trans. Mary J. Gregor, Philadelphia: University of Pennsylvania Press.
Lemmon, E.J., 1962, “Moral Dilemmas,” The Philosophical Review, 71: 139–158; reprinted in Gowans
(1987): 101–114.
–––, 1965, “Deontic Logic and the Logic of Imperatives,” Logique et Analyse, 8: 39–71.
Marcus, Ruth Barcan, 1980, “Moral Dilemmas and Consistency,” The Journal of Philosophy, 77: 121–136;
reprinted in Gowans (1987): 188–204.
Mason, H.E., (editor), 1996, Moral Dilemmas and Moral Theory, New York: Oxford University Press.
McConnell, Terrance, 1978, “Moral Dilemmas and Consistency in Ethics,” Canadian Journal of Philosophy,
8: 269–287; reprinted in Gowans (1987): 154–173.
–––, 1988, “Interpersonal Moral Conflicts,” American Philosophical Quarterly, 25: 25–35.
–––, 1996, “Moral Residue and Dilemmas,” in Mason (1996): 36–47.
–––, 2009, “Conflicting Role-Related Obligations in Wit,” in Sandra Shapshay (ed.), Bioethics at the Movies,
Baltimore: Johns Hopkins University Press.
Mill, John Stuart, 1979/1861, Utilitarianism, Indianapolis: Hackett Publishing.
Parrish, John, 2007, Paradoxes of Political Ethics: From Dirty Hands to Invisible Hands, New York:
Cambridge University Press.
Plato, The Republic, trans. Paul Shorey, in The Collected Dialogues of Plato, E. Hamilton and H. Cairns (eds.), Princeton: Princeton University Press, 1961.
Ross, W.D., 1930, The Right and the Good, Oxford: Oxford University Press.
–––, 1939, The Foundations of Ethics, Oxford: Oxford University Press.
Sartre, Jean-Paul, 1957/1946, “Existentialism is a Humanism,” trans. Philip Mairet, in Walter Kaufmann (ed.), Existentialism from Dostoevsky to Sartre, New York: Meridian, 287–311.
–––, 1946, “Dirty Hands,” in No Exit and Three Other Plays, New York: Vintage Books.
Sinnott-Armstrong, Walter, 1988, Moral Dilemmas, Oxford: Basil Blackwell.
Smith, Holly M., 1986, “Moral Realism, Moral Conflict, and Compound Acts,” The Journal of Philosophy,
83: 341–345.
Styron, William, 1980, Sophie’s Choice, New York: Bantam Books.
Taylor, Erin, 2011, “Irreconcilable Differences,” American Philosophical Quarterly, 50: 181–192.
Tessman, Lisa, 2015, Moral Failure: On the Impossible Demands of Morality, New York: Oxford University
Press.
Thomason, Richmond, 1981, “Deontic Logic as Founded on Tense Logic,” in Risto Hilpinen (ed.), New
Studies in Deontic Logic, Dordrecht: Reidel, 165–176.
Trigg, Roger, 1971, “Moral Conflict,” Mind, 80: 41–55.
Vallentyne, Peter, 1987, “Prohibition Dilemmas and Deontic Logic,” Logique et Analyse, 30: 113–122.
–––, 1989, “Two Types of Moral Dilemmas,” Erkenntnis, 30: 301–318.
Van Fraassen, Bas, 1973, “Values and the Heart’s Command,” The Journal of Philosophy, 70: 5–19;
reprinted in Gowans (1987): 138–153.
Walzer, Michael, 1973, “Political Action: The Problem of Dirty Hands,” Philosophy and Public Affairs, 2:
160–180.
Williams, Bernard, 1965, “Ethical Consistency,” Proceedings of the Aristotelian Society (Supplement), 39:
103–124; reprinted in Gowans (1987): 115–137.
Zimmerman, Michael J., 1988, An Essay on Moral Responsibility, Totowa, NJ: Rowman and Littlefield.
–––, 1996, The Concept of Moral Obligation, New York: Cambridge University Press.
Acknowledgments
I thank Michael Zimmerman for helpful comments on this essay.
ETHICAL THEORY
Good reasoning in ethics usually involves either implicit or explicit reference to an ethical theory. An ethical theory is a systematic exposition of a particular view about the nature and basis of what is good or right. The theory provides reasons or norms for judging acts to be right or wrong; it provides a justification for these norms. These norms can then be used as a guide for action. We can diagram the relationship between ethical theories and moral decision making as follows.
Ethical Theory
Ethical Principle
Ethical Judgment
We can think of the diagram as a ladder. In practice, we can start at the ladder’s top or bottom. At the top, at the level of theory, we can start by clarifying for ourselves what we think are basic ethical values. We then move downward to the level of principles generated from the theory. The next step is to apply these principles to concrete cases. We can also start at the bottom of the ladder, facing a particular ethical choice or dilemma. We can work our way back up the ladder, thinking through the principles and theories that implicitly guide our concrete decisions. Ultimately and ideally, we come to a basic justification, or the elements of what would be an ethical theory. If we look at the actual practice of thinking people as they develop their ethical views over time, the movement is probably in both directions. We use concrete cases to reform our basic ethical views, and we use the basic ethical views to throw light on concrete cases.
An example of this movement in both directions would be if we start with the belief that pleasure is the ultimate value and then find that applying this value in practice leads us to do things that are contrary to common moral sense or that are repugnant to us and others. We may then be forced to look again and possibly alter our views about the moral significance of pleasure. Or we may change our views about the rightness or wrongness of some particular act or practice on the basis of our theoretical reflections. Obviously, this sketch of moral reasoning is quite simplified. Feminists and others have criticized this model of ethical reasoning, partly because it claims that ethics is governed by general
principles that are supposedly applicable to all ethical situations. Does this form of reasoning give due consideration to the particularities of individual, concrete cases? Can we really make a general judgment about the value of truthfulness or courage that will help us know what to do in particular cases in which these issues play a role?
Cultural differences in moral judgment and behavior, across and within societies
We review contemporary work on cultural factors affecting moral judgments and values, and those affecting moral
behaviors. In both cases, we highlight examples of within-societal cultural differences in morality, to show that these can
be as substantial and important as cross-societal differences. Whether between or within nations and societies, cultures
vary substantially in their promotion and transmission of a multitude of moral judgments and behaviors. Cultural factors
contributing to this variation include religion, social ecology (weather, crop conditions, population density, pathogen
prevalence, residential mobility), and regulatory social institutions such as kinship structures and economic markets. This
variability raises questions for normative theories of morality, but also holds promise for future descriptive work on
moral thought and behavior.
5.1 Akrasia
Oscar Wilde’s character Lord Darlington famously remarks in Lady Windermere’s Fan (1892) that, “I can resist anything
except temptation.” He is, alas, not alone in this. Most of us have at some time done something that we’ve known to be
wrong but found ourselves unable to resist doing. Aristotle (384–322 BCE) called this failing akrasia (lack of self-mastery
or moral “incontinence”; Nicomachean Ethics, VII 1–10), otherwise known as moral “weakness” (astheneia), or
“weakness of the will.”
This phenomenon has puzzled philosophers for centuries. Why do we do what we know or believe we should not? There
are various explanations.
According to the Socrates of Plato (427–347 BCE), all wrongdoing is the result of ignorance. People act badly simply
because they are ignorant about what’s truly good or right – in that situation or generally. On this view akrasia is
impossible, since if we truly knew what was right we’d never choose not to do it. Apparent examples of akrasia are
therefore not what they seem: people never do what they truly know is wrong. If someone has an affair, for example,
and says “I know it is wrong” the adulterer is being disingenuous. He or she may know it involves deceit or hurt, but on
balance somehow the adulterer thinks going ahead is still justifiable.
Augustine (354–430), on the other hand, saw wrongdoing as a characteristic of human sinfulness. People clearly know
the good but choose the bad, anyway; sometimes they even do what’s bad because it’s bad, as a form of rebellion.
According to Aristotle, people, through the immediate urgings of passion, act without thinking, or at least without
thinking clearly. If they had thought about the issue more carefully and deliberately, they might well have acted
differently; but the need came over them with sudden forcefulness. Desire
and anger are the common culprits in this sort of impetuous act, a sort of act that might be described as “akratic
impetuosity.”
Aristotle also talks of “akratic weakness.” Here immediacy isn’t the issue. People take the time to think things through
and come to the right decision about how to act. But sometimes they simply can’t bring themselves to act that way
because they are overwhelmed by sustained passions, especially desire or anger, perhaps also fear.
What’s at stake
Which account we take to be true (Plato, Aristotle, or Augustine) affects how we evaluate the extent to which people
can be expected to realize moral rectitude. Just because something is the ethically right thing to do, is it reasonable to
expect people to be able to do it? How much should the presence of strong emotion mitigate one’s judgment about an
ethical lapse or a morally wrong action?
Consider, for example, the distinction drawn between someone coolly, in a premeditated and carefully planned way,
murdering someone; and cases where someone kills another in a fit of rage triggered by some traumatic event, such as
the sudden discovery that the victim had murdered the killer’s child. Many think of the cases as different because of
what one understands about the power and nature of passion and the reasonable limits of human moral restraint.
Acting well, doing what’s right, becoming and remaining virtuous are difficult things for human beings. How much slack
should they be given? When, if ever, might the force of passion be thought of as so strong as to render an action non-
voluntary? How generous and forgiving should one be in moral judgment?
5.2 Amoralism
An old saying has it that “All’s fair in love and war.” Although few sincerely believe it, many do accept that
sometimes the categories of the moral and immoral have no place. In such situations, we are left with the amoral: that
which is neither good nor bad but which stands outside morality.
The activities of businesses and corporations, for instance, are sometimes held to be about one thing and one thing only:
profit. Whether one is kind, honest, generous, and trustworthy is irrelevant to the conduct of commercial affairs – unless
being that way helps maximize profit. This view can be presented as a critique of capitalism, as stark realism, or perhaps
even as a defense of capitalism (by arguing that amoral conduct in the market actually produces the best outcomes for
everyone, as if, as Adam Smith (1723–90) maintained, the market were guided by a beneficent “invisible hand”).
In war, too, amoralists argue, there is only one objective: victory. Anything that contributes to victory is permissible – lying, killing, stealing, destroying property, etc. In fact, as in the context of commerce, obeying moral rules will probably
inhibit one from realizing the goal of war.
Politics, too, has been described as an amoral context. Machiavelli (1469–1527) famously explained how the successful
leader must be prepared to present the appearance of moral rectitude but in reality be prepared to engage in the most
ruthless vice in order to obtain and secure power. Many who maintain that national politics should be governed by
moral principle nevertheless argue that international politics, like war, is entirely amoral. Those holding these views
sometimes prefer to be called political “realists” rather than amoralists.
If it’s accepted that some human activities fall outside moral consideration, where do we draw the line that separates
the moral from the amoral?
One way of doing this is to appeal to divine principle and argue that there are some activities that divine commands
neither require nor prohibit. Perhaps tugging gently on one’s earlobe is neither moral nor immoral – although tugging on
it in order to send a signal to someone across the room to steal something would be.
Another way of sectioning off the moral from the amoral is to use the harm and happiness principles. Those acts that
lead to or at least are likely to lead to some sort of harm, especially serious harm, are to be regarded as immoral; while
those that contribute to happiness or are likely to contribute to happiness are to be regarded as moral. Activities,
however, that contribute neither to harm nor to happiness, or are unlikely to do so, are amoral. It’s not likely, in most
contexts, that a few tugs on one’s ear will contribute to people’s happiness or unhappiness in any way. So, that action is
perfectly amoral – unless one argues that the opportunity cost of tugging on one’s ear rather than doing something else
is an immoral waste of resources.
More radical is the claim that there’s no line to be drawn, anyway, since morality is an illusion, and the world is in fact
entirely amoral. Even if we don’t go quite this far, many see actual moral codes as in some sense a sham or a deceit or
an instrument by which the strong manipulate the weak. Joseph Conrad’s Heart of Darkness (1902) and André Gide’s
The Immoralist (1902), for example, are both fictional narratives about Europeans who see the moral systems that had
seemed so solid crumble before their eyes. It seems an affliction suffered by many. Even recent continental philosophers
like Gilles Deleuze (1925–95) and Jacques Derrida (1930–2004) have held that one should stand in a posture of permanent critique toward what goes by the name of ethics and morality, subverting its objectionable dimensions.
The trouble is that even those most cynical about established moralities seem not to be fully fledged amoralists, since
their righteous indignation itself requires that they hold some values. Calls to rebellion, freedom, and critique may entail
subverting existing moral orders, but they seem also to imply moralities themselves.
Poor old Barbra Streisand and Donna Summer. In “No More Tears” they sang that they always dreamed they’d find the
perfect lover, but he turned out to be just like every other man. Still, it wasn’t their fault it all went so wrong. “I had no
choice from the start,” they sang, “I’ve gotta listen to my heart.”
At the risk of being pedantic, however, surely we do all have the power to choose whether or not to get involved with a
lover, and how far we take the relationship? The trouble is that we would rather kid ourselves that we are not in control.
That way, we avoid responsibility for the consequences of our actions. But given how common this sort of rationalization
is, doesn’t it threaten our capacity to make moral choices?
Self-deception
The very concept of “self-deception” is a curious one, for it requires that one is both the liar (who knows the truth) and
the victim of the lie (from whom the truth has been hidden). But how is this even possible?
Perhaps the self isn’t a unitary whole but is actually somehow fractured into discrete parts. One of the most popular
ways of explaining self-deception this way is to divide the self between the conscious and the unconscious. Sigmund
Freud (1856–1939) is perhaps most famous for this gesture. But the same general idea recurs in various forms
throughout the history of ideas.
For Immanuel Kant (1724–1804), the self one is able to observe is only an empirical, superficial self, behind which
deeper selves lie. One might say, in fact, that modern questions about self-deception and an unconscious begin with
Descartes’s (1596–1650) worry, in Meditations on First Philosophy, about whether or not he is possessed by a demon
and whether he may be the source of his own possibly false ideas about the world and God.
Søren Kierkegaard (1813–55) criticized the modern, scientific, rationalistic age, and what passes for Christianity in it, in
terms of self-deception. The modern world lulls people into a self-deceptive state in which they pretend they’re leading
meaningful lives predicated on faith and reason, when really they are steeped in a deep despair or malaise.
It’s characteristic of this despair, for Kierkegaard, that people are unconscious of it, refusing to admit it to themselves. They therefore live in an inauthentic state, failing to become authentic, passionate selves. Instead each merely exists as what the Kierkegaardian novelist Walker Percy described in The Moviegoer (1960) as an “Anyone living Anywhere” – not as a true
individual but as a neutral, indefinite “one.” As Kierkegaard wrote in The Sickness unto Death, “the specific character of
despair is this: it is unaware of being despair.”
Bad faith
A specific form of self-deception has been called “bad faith” (mauvaise foi), a term of criticism developed by existentialist thinkers like Jean-Paul Sartre (Being and Nothingness, 1943) and Simone de Beauvoir (The Ethics of Ambiguity,
1947). It means a number of things – none of them good.
In the first place, bad faith is an effort to avoid the anxiety and responsibility humans must bear because they are free.
To avoid freedom and its responsibility, people say in bad faith that they are merely the products of society, the results
of their upbringing, the unchangeable effects of natural causes. In doing so they deny their capacities to choose as
subjects and stress their status as objects. But, according to the existentialists, all this is said in bad faith, because on
some level it is immediately evident to people that they are free consciousnesses.
Second, as strange as this sounds, bad faith is manifest when people try to pretend that they are something, that they
have an essence. But people have no fixed essence which defines their being. At every instant we must choose to be
something (a “husband,” a “waiter,” a “woman,” a “homosexual,” “French,” “American,” “black,” or “white,” an
“evolved animal”). But this choice can’t be fixed, solidified, or made permanent; as soon as the choice is made it’s
transcended into a new moment where a new free choice must be made. Nevertheless, one’s present identities (“I am a
leftist Lithuanian professor”) are claimed as if they were real and enduring.
People also engage in a third form of bad faith when they deny others the same freedom they would have for
themselves. The problem with doing so isn’t simply logical, one of consistency. It also stems from our knowing that
everyone else is also a free consciousness and that practically speaking each person’s freedom depends on the freedom
of others. The urgent effort to prove, for example, that black Africans weren’t equal to European whites betrays the fact
that the slaveholders knew that blacks were enslaved humans like themselves, not sub-human animals. Those who
oppress others, who characterize them as “cockroaches” (as the Hutu militia characterized their Tutsi victims) and
“vermin” (as Nazis characterized Jews), typically do so in bad faith.
For some, the widespread prevalence of self-deception in humans makes them skeptical of the human capacity to make
authentic, moral choices. We must doubt not only the sincerity of others, but also that of our own moral reasoning.
Might we not be kidding ourselves when we argue for moral values as if they were authentically our own? For the
existentialists, however, there are grounds for optimism. We can be truly free and avoid bad faith. If we do not share this optimism, then we have to accept that moral discourse will always be infected with self-deception.
Xiao is a manager for a large multinational mining company. He takes ethics very seriously, which is why he is concerned
about his latest project. It requires him to pay a bribe, forcibly evict indigenous people from their land, employ children,
and destroy an important, bio-diverse habitat. He reasons, however, that bribes are just the local way of doing things, as
is the practice of employing children, who actually make an important contribution to stretched household budgets. The
evicted people will get compensation, and it is not as though western countries don’t have compulsory purchase orders.
As for the environmental damage, the company has pledged to create a new sanctuary near the site. Anyway, if he
refuses, the company will simply get someone else to do it. Xiao is uncomfortable, but his conscience is appeased.
Are the justifications for Xiao’s actions adequate, or are they merely convenient ways for him to excuse what’s really
morally abhorrent behavior? It’s impossible to tell from such a brief description, but the suspicion is certainly that a
more impartial examination of the relevant rights and wrongs may come to a different conclusion as to the morality of
his actions. This kind of danger is ever-present in the real world of practical ethics, particularly in business ethics. It
would be too cynical to suggest that the authors of corporate ethics policies are always simply trying to provide a
respectable veneer for their employers’ callous self-interest. But whether the relevant conduct is commercial or
personal, it’s easy to end up looking for moral justifications for what one really wants to do, even if one’s desire to be
good is sincere. By contrast, it’s hard to assess fairly, dispassionately, and objectively the morality of an action in which one has an interest. There are always arguments to be found for and against any given action, and since
ethics is not like mathematics, it’s easy to give more weight to the reasons that suit than to those that don’t.
Casuistry
Finding justifications for what one wants to do anyway is sometimes described as “casuistry.” But in fact this is a little
misleading, since genuine casuistry is a sincere attempt to arrive at solutions to moral dilemmas on a case-by-case basis,
by appeal to paradigm cases and precedents, rather than to a particular moral framework. This makes it particularly
useful for solving real-world debates, since it does not assume a consensus of moral theory among those attempting to
resolve the dilemma. Among other instances, casuistry has a rather noble history in English common law, and it in part grounds the common legal practice today of citing precedent cases to justify rulings. All casuistry requires is that
everyone agrees what the right thing to do is in certain given circumstances, which people holding different theoretical
commitments often do. This is why, although it is not usually described in this way, a lot of work in applied bioethics
today takes the form of casuistry.
Because, however, casuistic thinking leaves a lot of room for interpretation and is not about applying a set of clear moral
principles, it’s open to abuse, which is why it got a bad name. The Catholic Blaise Pascal (1623–62), for example, in his Provincial Letters (1656–7), lambasted the Church for misusing casuistry to rationalize the sinful behavior of the powerful and privileged; and, of course, a host of Protestant reformers shared his view, reserving special criticism for Jesuit abuses of the casuist method. Where there is a need for subtlety and interpretation there is also room for self-serving evasiveness
and rationalization.
Correcting bias
But how can one employ casuistry properly and make sure that its reasoning isn’t distorted by desire or interest? First
and foremost, one simply has to accept that everyone is prone to such distortions, even those (perhaps especially those)
who are utterly confident in their ability to make impartial assessments.
One must, therefore, in the second place, make a careful, conscious effort to correct biases – including biases that may
seem imperceptible or of which one seems free. This takes real self-knowledge, vigilance, and care. A useful technique is
to ask oneself honestly what solution one really wants to be justified and then make an extra effort to see opposing
arguments in their strongest light. This kind of self-monitoring can compensate for the natural, but regrettable,
inclination to follow the arguments, not where they lead (as Socrates advised in Plato’s Phaedo), but where we want
them to go.
Understanding some of the mechanisms of self-deception, avoidance, and denial – as well as some of the typical things
that people deny, avoid, or deceive themselves about – can help pull back the cloaks behind which immoral motives
commonly hide themselves. Still, another effective technique is to discuss one’s choice and the justifications for and
against it with someone who is both disinterested and competent in moral reasoning. A disinterested ear is often the
best protection against a clever desire.
5.5 Fallenness
How are we to make sense of events like the Rwandan genocide, petty cruelties, and perhaps even environmental
degradation? Typically we look for the causes in poor socialization, ignorance, history, or political dynamics. These
travesties are not inevitable but could all be avoided if we could order our societies and ourselves better.
But there is an older, now less fashionable, way of interpreting phenomena like these. Human beings are inclined to
evil because they are fallen. Sinfulness is a part of our nature, and to counter it we require not simply moral and
intellectual virtue, but theological virtue and divine assistance as well. In short, a purely secular ethics which fails to take
into account our fallen natures and the gap between us and the divine is woefully inadequate.
Fallenness and sin
The Abrahamic religious traditions share broadly speaking an endorsement of the account of Genesis 2–3, where Adam
and Eve eat the fruit taken from the tree of knowledge of good and evil – the very knowledge investigated by moral
philosophy! God had forbidden them to eat this fruit, so in punishment for their transgression He casts them out of
Eden.
This transgression or sin and subsequent punishment is called the “Fall.” Its punishments have been thought to include,
variously interpreted, the pain of childbirth, the requirement to labor for sustenance, mortality, the subordination of
women to men, the weakening of the will, the perversion of desire, and the darkening of the intellect.
These last three in particular suggest limits to what one may expect of people, ethically speaking. Because the will has
been weakened, humanity lacks the rectitude to adhere to moral principle in the face of adversity or temptation.
Because of the perversion of desire, the lust for earthly pleasures (concupiscence), people can’t be expected to be
consistently or naturally inclined to desire the good. On the contrary, they can be expected to want what’s in fact bad
for them and for others, what’s evil. Because the intellect has been darkened, despite having eaten the fruit of the tree
of moral knowledge, people can be expected to be commonly ignorant about right and wrong and to possess limited
capacities to figure it out on their own. Many Christians hold the additional belief that all people are born with original
sin (the moral stain we inherit as descendants of Adam), and so all humans are inherently subject to weakness and sin.
Because of fallenness, then, despite their vigorous and even desperate efforts to improve things on their own, people
can be expected frequently to fail to be good. War, crime, and vice of every sort are inevitable. Sins of the intellect and
sins of the emotions will be pervasive.
One might say that modernity has been in part the effort to overcome through reason and technology the consequences
of the Fall. Medicine and the health sciences work to reverse and limit pain and even mortality. Machines reduce the
need for labor. Modern science and philosophy raise claims to having acquired knowledge, while modern ethics and
political theory struggle to achieve practical wisdom. René Descartes lays out much of this in his Discourse on Method
(1637). But those who find the account of fallenness compelling are likely to think that there’s vanity in the modern
project, that humanity can only overcome the Fall through divine assistance. For Christians this assistance is typically
articulated through concepts such as grace, salvation, redemption, and the sacrifice of the Messiah or Christ.
Martin Heidegger, in Being and Time (1927), developed a different though also ethically relevant conception of
“fallenness” (Verfallenheit, das Verfallen). Following Søren Kierkegaard’s diagnosis of modern society’s pathologies,
Heidegger describes how in average everyday life individuals fall prey to idle busy-talk, habit, as well as practical,
commercial, and technical projects in ways that alienate them from their authentic and “ownmost” ways of being (as
well as from being, Sein, itself).
People who fall into this state of average everydayness can understand themselves only as the impersonal “they” understands them, in the way that what Heidegger calls das Man conceives things. Individuals become average “they-selves,” one (as the neutral grammatical pronoun). To break out of this fallenness and averageness and resolutely
achieve authenticity is, one might say, the ethical purpose of Heideggerian phenomenology (despite his claim that there
is nothing moral or political about it). Doing so requires, among other things (as it does for many existentialists), coming
to terms with human mortality, as well as the way we are vulnerable to falling.
Of course, if you do not accept Abrahamic theology, all this talk of fallenness might just sound like old-fashioned guff.
But even without religious beliefs, the idea that human beings are by nature inclined toward wrongdoing must be
seriously considered. If accepted, it has major repercussions for what we think to be possible ethically.
If you’ve ever heard someone say that they deserve what they have because they’ve earned it, you’ve encountered an
example of what some social critics call false consciousness.
But what on earth could be false about something that in many cases seems so obviously true? It’s perhaps not false
that people who say they’ve earned what they’ve got have worked very hard for it and perhaps exercised remarkable
intelligence, creativity, and sacrifice. There is, however, no divine or natural law about what sort of return or reward
someone is to receive for hard work, intelligence, creativity, sacrifice, or anything else. It’s only the peculiar social
arrangements of our society (as well as, in many cases, a fair measure of good fortune) that have distributed to any
particular individual the precise amount he or she claims to have earned. Other social arrangements might have
distributed far less or far more.
So, we might define “false consciousness” briefly as a set of beliefs people hold, usually called ideologies, that obscure
from them the real social-political-economic relationships that govern their lives and the true nature of the social-political-economic order in which they live. In an 1893 letter to Franz Mehring, Friedrich Engels (1820–95), Karl Marx’s (1818–83) longtime collaborator, remarked that:
Ideology is a process accomplished by the so-called thinker consciously, it is true, but with a false consciousness. The
real motive forces impelling him remain unknown to him; otherwise it simply would not be an ideological process.
Hence he imagines false or seeming motive forces.
It’s a single brief and fleeting remark. Marx himself never used the phrase. Nevertheless, Marx did lay the groundwork
for much of what later thinkers made of the idea. Principally, Marxian theories of false consciousness rely on Marx’s
description in Das Kapital (1867) and elsewhere of the way that capitalism distorts the self-understanding of the
proletariat about its real situation.
Among the principal forms of false consciousness is the understanding people acquire about themselves through what
Marx and others have called the fetishism of commodities. “Fetishism” is a process whereby people project value upon
things and then pretend or convince themselves that it’s there intrinsically. So, people come to believe that diamonds or
BMWs have great intrinsic value, when in fact they are shiny pebbles and machines whose value comes only from the
social world in which they’re situated. A BMW is likely to have little or no value to a nomadic herdsman in the Himalayas.
A diamond or a stock certificate would have had no value to an ancient Spartan.
Later critics like Guy Debord (1931–94) and Jean Baudrillard (1929–2007) have described the way in which devices like the
media and advertising convince people that they’re defined and have value to the extent they buy or own certain things
and imitate the images that pervade their lives. In Debord’s terms, “spectacle” replaces human social relations. In
Baudrillard’s formulation, we become images of images, imitations of imitations, simulacra not of real things but rather
of other simulacra. People even begin to prefer imitations or cyber-realities to reality itself. For example, people prefer
Disney Europe to Europe, resorts to beaches, malls to neighborhoods, Internet relationships to flesh and blood, video
games to sport. The wars people know are not real wars but the spectacular images they see on TV.
Frankfurt School critics like Theodor Adorno (1903–69) describe how even the simplest dimensions of our lives – even
things like lipstick and pop music – hide oppressions at the very time they advance them.
Even the predominant liberal political beliefs with which people understand and justify the social relations they do
observe are, according to many critics, instruments of false consciousness. Talk of “free” markets blinds people to the
coercion and manipulation that are endemic to them. Talk of “freedom of speech” obscures how speech only actually
matters politically if one has access to the media. Talk of “property rights” masks how the ideology of private property
makes it possible for vast concentrations of it to deprive others of their holdings and degrade the natural world with
impunity.
The limit on ethical deliberation implied here, then, is that people steeped in false consciousness cannot be expected to
reach sound ethical conclusions when their understanding of themselves and their world is deeply distorted in a way
that prevents them from understanding many of the ethically salient features of the realities they face.
Of course, the critique only makes sense if you accept that the various beliefs comprising “false consciousness” are
indeed false. They may not be. Moreover, the accusation of false consciousness might sometimes be turned on its
accusers. Is it false consciousness to deny that the value of goods is determined by markets, for example? At its worst,
saying that something is an example of false consciousness can thus degenerate into mere name-calling: you don’t accept what I see as the truth, therefore you must be the victim of false consciousness. Those who wish to level the charge of “false consciousness,” therefore, will do well not only to describe the content of the false consciousness they’ve identified but also to present an error theory which accounts for the mechanism or reasons why reasonable people
see things so wrongly. Otherwise it will be difficult to get around the presumption of clear-sightedness.
In law and in everyday morality, people make allowances for mitigating circumstances. A wife who murders her husband
may be given a lighter sentence if she can show that he frequently battered her and that she committed her crime under
sustained stress. People who can demonstrate diminished responsibility due to mental illness, chronic or acute, will (or
at least should) receive more treatment and less punishment. It’s also widely accepted that to a certain extent a difficult
upbringing can make someone more likely to turn to crime.
What this shows is that people do not believe that free will is all-powerful. Sometimes people’s actions are partly
determined by what has happened to them, and this makes them less responsible for what they do. But what if free will
normally makes less of a contribution to our actions than we think, or even plays no role at all? What if, when closely
scrutinized, the very concept of free will doesn’t make sense? Wouldn’t that totally undercut our common sense notions
of responsibility and blame?
Ted Honderich (b. 1933) maintains that free will doesn’t exist at all and that our ordinary ideas of moral responsibility
will have to go. On Honderich’s view, moral responsibility only makes sense if one accepts “origination” – the view that
the first causes of human actions originate within human agents themselves, and that these first causes are not themselves caused by anything outside the agents. Honderich argues that there can be no such thing as origination. Human beings are as much part of the natural, material world as anything else, and in this world everything that happens is the effect of past causes. Causes determine their effects necessarily and uniformly. There is, therefore, simply no room for something called free will to step in and change the physical course of events, whether in the brain or in the ordinary world of human experience that J. L. Austin (1911–60) called the world of “medium-sized dry goods.” It follows, then, that
determinism is true, and that most ideas we have about moral responsibility are false.
More radically, does the concept of origination even make sense? If nothing at all causes human decisions of the will,
then, as David Hume argued, they’re no different from random events (Enquiry Concerning Human Understanding,
1748; Section VIII). But it hardly seems palatable to maintain that moral responsibility rests on something random, a
matter of pure chance, without any cause.
Compatibilism
Talk of free acts, in moral discourse and elsewhere, may still be acceptable, however, through a strategy known as “compatibilism.” This theory accepts that human actions are as much caused by prior events as any other. But it also holds that it makes perfect sense to say that people have free will, as long as by the words “free will” one means simply that human actions are not the result of external coercion or outside force. So long as the proximate (that is,
nearest) causes of an action are in some sense within or part of the person acting, especially if the act flows from the
actor’s character, the act can meaningfully be described as a “free” act. If one jumps through a window because one
chooses to do so, it’s done freely (even if that choice was caused). If one is thrown through a window against one’s
wishes, one’s act of defenestration is not a free one. On this account, however, it still seems true to say that people
really could not do other than they do, and that, for many, still undercuts what is necessary to attribute moral
responsibility.
Harry Frankfurt (1929–2023) has argued, using what have come to be known as “Frankfurt-style” cases (“Alternative
Possibilities and Moral Responsibility,” 1969), that even if it’s true that one can’t do otherwise, it still can make sense to
describe one’s action as free. Suppose, for example, someone possesses a secret device to force you to do X but won’t
use it unless you try to do something else besides X. If you do in fact choose to do X, says Frankfurt, it’s true both that
you couldn’t do otherwise (that alternatives weren’t possible) and that you chose freely. But for many, the simple idea
even in these cases that people really could not do other than they do undercuts what is necessary to attribute moral
responsibility.
The ability to act otherwise than one actually does is one way to define freedom. Other definitions include acting independently of the causal order of the natural world, acting on the basis of reason alone, acting independently of desire, and acting at any time in opposition to one’s current line of action. In any case, using a variety of definitions, many
philosophers have tried to save free will, or at least freedom. In the Critique of Practical Reason (1788), for example,
Kant advanced a “transcendental argument” for the reality of free will: people recognize that they have moral duties,
but moral duties can only exist if people have free will. Therefore, since in order for morality to make sense free will
must exist, it’s reasonable to “postulate” that people have free wills – even though there is and in fact can be no proper
proof for it and even though some plausible arguments maintain that it doesn’t exist.
Thomas Nagel (b. 1937) adopts a position similar to Kant’s, arguing that free will seems undeniably not to exist from a
third-person point of view on the world – and undeniably to exist from a first-person point of view. Humans thus seem
condemned to endure a perpetual “double vision” understanding of the reality of free will.
A weaker argument for free will might be described this way: irrespective of the ultimate truth, people somehow have
to act as though they have free will. This seems to be psychologically true: no matter what people cling to intellectually,
they always seem to feel and act as though they’re free. But as a philosophical solution this option seems unsatisfactory,
as it seems to imply that everyone must inevitably live under a delusion.
Jean-Paul Sartre maintained, in Being and Nothingness (1943), that human freedom is immediately, phenomenologically
evident to consciousness. On the one hand, that option seems to be a disappointing cop-out – an attempt to resolve the
issue through mere assertion rather than careful argument. If someone simply replies, “Well, I don’t see it that way,” the
debate reaches an impasse. All the Sartrean can respond with is: “Look again.” But, on the other hand, perhaps for many
serious philosophical issues, at some point one reaches what Ludwig Wittgenstein (1889–1951) called bedrock, where
one simply has to make a fundamental philosophical decision, or where ultimately one simply sees it or doesn’t. Perhaps
Sartre’s appeal to what’s simply evident is enough to cut the Gordian knot.
Things for those on the other side of the barricades aren’t easy either. The challenge for those who reject both
origination and Sartrean immediacy is to explain how one can make sense of moral responsibility while simultaneously
not ignoring the disquieting implications determinism has for it. It’s a tough row to hoe, but an important one. Indeed,
this is perhaps one of the most vibrant philosophical debates today.
Aisha was driving home through London one day when her mobile phone rang. She didn’t have a hands-free set, but she
answered it anyway. When the conversation finished, she put the phone down and carried on with her life. Had she
been caught by the police, she would have faced a large fine and could have lost her license.
At the same time, somewhere else, Sophia was also driving home, and she too answered a mobile phone call manually.
But as she was talking, a child ran out into the road in front of her. Distracted, and with only one hand on the steering
wheel, she was unable to avoid a collision. The child died as a result. Sophia is now facing a prison sentence of up to 14
years. Had she not been on the phone at the time, she would have avoided killing the child.
What’s particularly interesting about this comparison is that the only difference between Sophia and Aisha is luck. Had a
child run into the road in front of Aisha, she too would have become a killer. So we have two women, both of whom
performed the same acts; but in one case that act led to the death of a child and in the other case it did not – and only
luck determined which was which. Is it fair that one woman is punished while the other is not?
One’s moral standing isn’t usually considered to be a matter of luck or fortune. But situations such as this suggest it may
play a very important role. The law certainly won’t treat the two women equally, even though their characters and
behavior may be just the same. Morally speaking, most would also consider Sophia more culpable than Aisha, even
though Aisha was driving just as dangerously. The implication seems to be that how good or bad one is depends partly on what the consequences of one’s actions are, but consequences are in turn determined in part by luck.
Accepting luck as a factor in moral status is certainly a counter-intuitive view, and one with which many today disagree
(interestingly, the ancients seem to have taken fortuna more seriously). We might justify the resistance to luck by
arguing that although the law does – and perhaps has to – distinguish between reckless driving that leads to death and
reckless driving that doesn’t, morally speaking both women are in truth equally culpable. Perhaps contemporary moral
intuitions that distinguish the two are distorted by the knowledge of what consequences actually follow. Perhaps either
Aisha should be morally condemned a lot more, or Sophia should be condemned a lot less. Perhaps recognizing that only
good fortune prevents most drivers from becoming careless killers should yield more sympathy for the killers. Indeed,
how many of us can honestly claim to drive with due care and attention at all times?
To deny that moral luck exists at all, however, one needs to deny that actions become better or worse depending on
what their consequences are, since what actually happens is almost always beyond anyone’s full control. But this option
also seems counter-intuitive: surely it does matter what actually happens. To judge people purely on the basis of their
intentions or on the nature of the act itself seems to diminish the importance of what actually happens.
Constitutive luck
There is another kind of moral luck, known as constitutive luck. How good or bad one is depends a great deal on one’s
personality or character. But character is formed through both nature and nurture, and by the time one becomes
mature enough to be considered a morally responsible adult, these character traits are more or less set. So, for example,
a kind person hasn’t fully chosen to be kind: that’s how she grew up. Certainly many cruel and nasty people were
themselves mistreated as children; that abuse almost certainly affected the way their personalities developed. Since
people don’t choose their genes, or their parents, or their culture of origin, or a lot of the other factors that affect moral
development, there therefore seems to be another important element of luck in morality.
Martha Nussbaum has argued in The Fragility of Goodness (1986) that for the ancient Greeks not only does a good life
depend upon constitutive luck, it also depends upon the good luck of avoiding the increased dangers that goodness itself invites. The very
attempt to be good, says Nussbaum, makes one vulnerable to many bad things that don’t threaten the vicious. For
example, the attempt to fulfill their duties led Hector, Agamemnon, Antigone, and Oedipus each to tragic ends. Perhaps
Socrates might be thought of this way as well.
Given that the role of luck or fortune in life seems indubitable, but the idea of moral luck oxymoronic, isn’t the best
solution to say that where luck enters in, morality cannot be found? Yet, that too is a controversial road to follow.
Screening out those dimensions of a situation attributable to luck may leave little to praise or blame. So, however
one looks at it, accepting the role of luck presents a major challenge to judgments of moral praise and blame – but
perhaps something essential, too.
5.9 Nihilism
In the Coen Brothers’ 1998 film The Big Lebowski, nihilism is compared to one of the vilest creeds in human history – and
found wanting. On discovering that the people menacing his friend “the Dude” are nihilists, and not Nazis as he had
thought, the character Walter says, “Nihilists! Jesus! Say what you like about the tenets of National Socialism, Dude, at
least it’s an ethos.”
“Nihilism” is often used as a term of criticism and even abuse. It’s most often hurled by those who wish to defend
“absolute” or divinely grounded morals against those they believe subvert them or the institutions built around them.
But the term has also sometimes been used by the subversives themselves.
Deriving from the Latin nihil, meaning “nothing,” modern usage of the term “nihilism” seems to have developed in the
wake of its use in Ivan Turgenev’s 1862 novel, Fathers and Sons. It came to characterize Russian social critics and
revolutionaries of the nineteenth century like Alexander Herzen (1812–70), Mikhail Bakunin (1814–76), and Nikolai Chernyshevsky (1828–89), who were associated with anarchism and socialism as well as with modern, secular, western
materialism generally.
Anarchism, socialism, secularism, and materialism are not, of course, nothing. They comprise very specific truth-claims
and moral values. But achieving their realization and acceptance requires the destruction or annihilation of the old order
– of traditional morals and values and social systems said to be grounded in something divine or transcendent. After all,
these thinkers aimed at the creation of a new, better world, a truly good world.
But creating that world demanded first violently erasing the old world.
The threat of nihilism
But there’s more to the charge of nihilism than the subversion of things based upon tradition and religion. Concepts and
theories described as nihilistic are commonly taken to imply negative claims like these: (a) that there is no truth; (b) that
there is no right or wrong, good or evil; (c) that life has no meaning; and even (d) that it’s not possible to communicate
meaningfully with one another. In short, any theory not ultimately grounded or finally justifiable may be subject to the
charge of nihilism, whether its proponents realize it or not.
Most recently, intellectual movements collected under the moniker “post-modernism” – like post-structuralism and
deconstruction – have been called nihilistic. But nearly all things modern have also been subject to the charge – modern
science, evolutionary theory, the Protestant Reformation, existentialism, pragmatism, modern relativism, rationalism,
Kantianism, etc.
There’s often a logical criticism wrapped up in all of this, a critique of consistency or coherence. The claim that “there is
no truth” is itself a truth-claim. The claim that “language cannot communicate meanings” itself depends upon the ability
of language to communicate. But does the claim that there are no values (no right and wrong) involve holding a value?
Thinkers like Friedrich Nietzsche (1844–1900) and Martin Heidegger (1889–1976) have held that in a perverse way it
does. As they see it, it’s a short hop from asserting that “nothing has value” to positively affirming the value of nothing.
That is to say, nihilistic ideas and social movements, say the critics, inevitably lead to grotesque outpourings of violence
and destruction.
Since nihilism cannot provide any foundation, ground, or reason for morality, ultimately “everything is permitted.” Since
everything is permitted, nothing is prohibited. That nothing’s prohibited ought somehow to be exhibited and made
manifest; therefore every act (even the most extreme acts) ought to happen. Some blame nihilism, therefore, for
everything from the French Revolution’s Terror, the Holocaust, and the Soviet gulags to pornography, drug abuse,
abortion, divorce, petty crime, and rock and roll.
Overcoming nihilism
Traditionalists blame the modern abandonment of God for these maladies and prescribe a return to tradition,
absolutes, and a religiously based society. One of the most influential analyses of the nihilistic characteristics of the
modern world, however, inverts this diagnosis and places responsibility for nihilism squarely upon the western
philosophical and religious traditions themselves.
Nihilism, says Nietzsche, actually results from the Christian-Platonic tradition, from its attempts to acquire truth that is
singular, universal, and unchanging, together with its promoting the morals developed by a weak and conquered people.
One might call these pathologies the “God’s Eye” conception of truth and “slave” morality. After centuries of careful
philosophical scrutiny philosophers have learned that truth of that sort is unavailable to humans. The frustration and
exhaustion of this disappointing realization (the realization that “God is dead”) together with the soporific effects of
slave morality have finally resulted in thanatos or the desire for nothingness and death, even the desire to wreak
revenge upon the world for this disappointment.
For Nietzsche, our task is not to return to the pathological traditions and philosophies that produced nihilism but, rather,
to overcome nihilism. Overcoming nihilism requires first recognizing and taking responsibility for the fact that we are the
source and creators of value. Next, overcoming nihilism demands that we find within ourselves the strength to make
new affirmative values, healthy values that honor our human finitude, our embodiedness, and our desires, that love the
human fate (amor fati) and don’t lead to nihilism. Existentialism has in many ways followed Nietzsche in trying to
achieve this project.
5.10 Pluralism
Jean-Paul Sartre (1905–80) told a story of a young man who was caught in dilemma between his duties to his country
and to his mother. Should he join the Free French Forces to fight Nazism or look after his sick, aged parent? Many moral
theories would maintain that there must be some way of determining which of these duties carries more weight. Sartre
disagreed because he thought it was finally up to each individual to choose his or her values, and no person or system
could do it on anyone else’s behalf. But there’s another explanation for why the dilemma could be irresolvable: perhaps
there are many values worth holding and no objective way of determining which should take priority over others – and
sometimes these values simply conflict. This is the position known as pluralism, a doctrine most closely associated with
Isaiah Berlin (1909–97).
Many critics claim that pluralism amounts to no more than relativism, so it is worth addressing this accusation directly in
order to clarify what pluralism entails.
Relativism holds that there are no absolute moral values and that what’s right or wrong is always relative to a particular
person, place, time, species, culture, and so on. This position, however, differs from pluralism in a number of important
respects. For one thing, the pluralist may well believe that moral values are not relative. For example, she might claim
that the young man in Sartre’s example really, objectively, has responsibilities to both his mother and his country.
Nevertheless, the nature of morality is such that these duties cannot be weighed up against each other with any kind of
mathematical precision to determine which has priority over the other. They both have a claim on him, yet he cannot
adhere to both.
But conflicts among moral claims may not simply be a matter of imprecision. For the pluralist, there are many different
values worth holding and many moral claims that may be made upon us. As W. D. Ross (1877–1971) and others have
argued, goods, duties, values, claims, and principles may be irreducibly plural and complex. In certain cases, the
constituents of this plurality may stand in conflict, and the conflicting values may be incommensurable – that is, there may simply be no common measure by which to reconcile them.
Even if the pluralist does not hold that moral values are objective, the reason she has for claiming that moral values are
plural and in conflict may not collapse into crude relativism. While there may be many ways in which human life has
value, there isn’t an unlimited variety. Some moral options – for example, genocide – are not permissible. In addition,
living in accordance with one option may in fact close off others. Take the example of the values of communal and individual life. There’s value in living the kind of life in which one is very closely wedded to one’s community, and there’s
a different kind of value in living as an autonomous, unencumbered individual. But if one lives to reap the benefits of
one of these ways, the benefits of the other must be sacrificed. So, the values of community and individuality may be
both equally important yet incommensurable.
This approach isn’t a form of relativism because it’s consistent with the idea that both ways of life have absolute value.
Nor, again, is just any way of living valuable: there are limits to the plurality of value. While both community and
individuality have value, racial purity does not.
In practice, this means one has to accept that not all moral disputes can be resolved to everyone’s satisfaction, and this
isn’t just because some people are mistaken in what they see as most important. If pluralists are right, then there are
serious limits on the extent to which moral disagreements can be settled. Sometimes, the best we can do is to negotiate
and reach accommodations with others, not actually agree on what value is superior to others.
This is particularly important for multicultural societies, where the plurality of values is more evident. A common ground
can’t always be found, but people must still live with each other. The pluralist warns that insisting that all moral
disagreements are in principle resolvable forces people to conclude that those who disagree with them are
fundamentally wrong, irrational, and immoral. That in turn generates tension and conflict, often violence. Pluralism
offers the promise of a more peaceable alternative.
5.11 Power
The discourses orbiting around the recent war in Iraq include many arguments that the war is unjust, unnecessary,
poorly executed, or illegal. Dealing with these arguments directly is one of the main ways in which the morality of the
war has been debated. But there has been another way of criticizing these arguments, one that refuses to take any of
the arguments at face value. This starts with the question cui bono – who benefits? Ask this question, many people say,
and you will find the real reasons for war – or opposition to it. What people actually say is beside the point.
This approach reflects a strand in philosophy that analyzes events and discourses in terms of power relations. Look at
the disagreeing parties in the debate and you’ll find that each has some sort of interest in the stance it takes. The stance,
then, whatever it appears to be, is fundamentally a device for protecting, securing, or enhancing its own power. The
discourses about promoting democracy, advancing human rights, ensuring national security, upholding the
requirements of international law, are therefore often – or even always – deployed to advance other agendas. Those
opposed to the war have claimed these agendas might include securing access to oil, undermining Saudi power in the
region, protecting Israel, stemming the advance of Russian and European power, weakening international institutions,
galvanizing domestic support for the current government, creating a distraction and financial crisis to justify the
dismantling of American social programs, transferring wealth to the shareholders of specific corporations, or weakening
Islam. Those in favor of the war can also claim the anti-war movement is motivated by the desire to increase the power
of Europe, the left, Ba’athists, or Islam.
Taken to its extreme, this kind of analysis claims that, instead of making us excellent, or piling up treasures in
heaven, or making more people happier, morality is largely, even completely, about power. Moral principles and moral
terms are actually clever instruments of manipulation.
There are many ways to think about the way power works. One way is in a top-down fashion, where those above (the
powerful) exert their power over those below (the powerless or less powerful). The classical Marxian model seems to
follow this rule: owner/slave, lord/serf, capitalist/proletarian; that is, those who control the means of production (on
top)/those who work the means of production (below). One of the things power of this sort can do is dictate the terms
of moral and immoral, right and wrong, just and unjust.
So, slave owners, aristocrats, and capitalists invent systems of morality and politics that explain, justify, and secure their
dominant position. Some people are born slaves and are intrinsically well suited to it, the aristocrat Aristotle (384–322
BCE) claimed. Slavery is actually good for slaves, American slavers argued. God has established the hierarchy where lords
rule, said the lords. Their blood is superior. They create, cultivate, and sustain the refinements of civilization in ways the
lower classes cannot. Capitalists have worked harder and smarter. They’ve been frugal, thrifty, diligent, disciplined, and
have invested wisely.
It’s no wonder, then, that Karl Marx and Friedrich Engels asserted that “the ideas of the ruling class are in every epoch the ruling ideas” (The German Ideology, I. B, 1845–6).
But, of course, power isn’t simply exerted in a top-down way. Those underneath often struggle against those above,
sometimes successfully. Those who occupy lower rungs in the hierarchy often marshal clever and effective forms of
resistance and opposition.
There are, however, other models of power besides the top-down and bottom-up channels of hierarchy. Sometimes
power struggles exist among those on the same rung. Sometimes players in power struggles change sides or play both
sides against each other. Sometimes different power games go on at the same time, some along the lines of sex, other
times through ideas about race, mental illness, criminality, economic status, political affiliation, family role, species, and
personal history. Often these lines of power and struggle conflict with one another. Sometimes an individual may even
be torn in different directions by different moral discourses, different lines of struggle.
For thinkers like Michel Foucault (1926–84), there is no grand system governing society – no single capitalist system,
patriarchy, imperialist or racist order, etc. Rather, there are countless power relationships constantly changing,
realigning, breaking apart, and reconfiguring. Power is more like a kaleidoscope or a plate of spaghetti than a pyramid or
a chain. On this view, to see something like the Iraq war as purely being about one group exerting its power over
another is far too simplistic.
In the debate preceding the invasion of Iraq in 2003, both critics and supporters appealed to past precedents to
strengthen their cases. Critics pointed to other attempts by western nations to interfere with the internal affairs of other
states, while supporters compared leaving Saddam Hussein in power to the appeasement of Hitler.
Almost all moral debate requires some comparison. Similar cases require similar treatments, and what is right in one
instance is also right in another, relevantly similar one. But then, as Jacques Derrida (1930–2004) puts it in The Gift of
Death (1992): “tout autre est tout autre” (“every other is completely other”). No two individuals are the same, let alone
identical. No two situations are utterly alike. Words don’t mean precisely the same thing to me as they do to you, not
the same thing in this context as in another, not the same thing on this reading as another, not the same thing this time
as another. One might say that the very concept of sameness is itself problematic. There are a number of ethical
implications to this.
Laws, rules, and principles are by definition general. None of them indicates precisely which rules apply to which cases in
which manner. None of them can say whether a particular circumstance presents an exception. It’s not possible for
them to do so. So, when people appeal to a law, principle, or rule in some particular case, they can in fact only do so by
making an utterly singular and unique decision, and that decision cannot be strictly determined by anything general.
The impossibility of avoiding undeterminable, foundationless choices about what to do, how to live, and what to believe
was something Søren Kierkegaard (1813–55) emphasized as characteristic of the human existential condition. It’s
something that for him is most radically faced in a “leap of faith.” It’s a leap that, like all ethical choices, no reason, no
principle, no theory could ever fully justify. When made “authentically,” decisions like this particularize the self in a
radical way (Fear and Trembling, 1843).
Laws, rules, and principles by their very nature attempt to produce order, regularity, consistency, and sameness in
human practices. The same rewards are to be distributed for the same work; the same punishments are to be
administered for the same crime. Laws, etc., like moral theories, would pretend to create an utterly closed system – a
system that deals in a regular fashion with the same sort of cases in the same way without any arbitrary judgment. But if
the presumption of sameness is baseless, then isn’t it the case that this effort to make things the same necessarily
involves a kind of violence against particularity? Mustn’t the effort to expel the arbitrary, to close or complete that
which cannot be closed or completed, necessarily lead to violence against whatever resists, what must resist? In short,
aren’t ethical rules, as rules, themselves unethical?
To the inevitably unethical nature of ethics, Derridian justice responds with what might be called permanent critique
(echoing Leon Trotsky’s call for “permanent revolution”). Permanent critique prevents – or at least limits – the way laws,
rules, and principles must be used violently by subverting the fantasy of sameness and non-arbitrariness that captivates
those who wield them.
It’s a stirring call to arms. But what positive ideals of justice and morality does this make possible? What vision of a good
or at least better society can such a view of justice and ethics yield us? The worry is that in its refusal to be pinned down
and to accept any appeal to the general or the universal, such a permanent critique becomes hollow.
Jane is an easy-going, hard-working person who does not let misfortune bother her. She has a moderately well paid job
and has recently bought a small car, which gives her some pleasure, even though she doesn’t use it very much. Mary, in
contrast, is lazy and hard to please. But one thing she would really like is a car, which she can’t currently afford, partly
because she doesn’t work very hard. If she had one, she’d be much more content. Mary and Jane both think that people
should do whatever would increase the sum total of happiness. So Mary tries to persuade Jane that she has a moral duty
to give her the car. After all, it will make Mary much happier, whereas Jane will soon get over the loss – she always does.
What reason has Jane to say no?
Most people would think that Mary’s suggestion is outrageous. Jane has worked to get her car, while Mary has been
relatively idle. Yet, Mary is saying she should have Jane’s car, not because that would be a kind and generous thing for
Jane to do, but because it’s the morally right thing. Ridiculous, no?
The trouble is that if one takes act utilitarianism seriously, Mary has a strong argument. Utilitarianism insists that
everyone’s interests should be considered equally, and that the right action is the one that increases the general
happiness. This opens up the possibility that some people should be made worse off, even though they have done
nothing to deserve any deprivation, simply because that would result in an increase in the general happiness.
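To see why, it can help to attach purely hypothetical numbers to the case (the figures below are illustrative assumptions, not part of the original example). Suppose the car would add 10 units of happiness to Mary’s life, while easy-going Jane, who soon gets over losses, would forgo only 3 units in giving it up. Counting everyone’s interests equally, the act utilitarian compares:
total happiness if Jane keeps the car: U_Jane + U_Mary = 3 + 0 = 3
total happiness if Jane gives it away: U_Jane + U_Mary = 0 + 10 = 10
Since 10 exceeds 3, the transfer maximizes the general happiness, and so, on a strict act-utilitarian reading, Jane ought to hand over the keys.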
What this seems to violate is a principle known as the “separateness of persons.” Individuals are not simply carriers of
welfare, happiness or utility that can be topped up, plundered, or combined like cups of water in order
to achieve a fairer distribution of these goods. Harm to one individual cannot be compensated by benefits to another. If
a person chooses to sacrifice some of his or her own welfare for the sake of another, that’s an act of generosity, not the
fulfillment of a moral obligation. Any moral system that ignores this – as utilitarianism allegedly does – is therefore
flawed.
It’s possible, however, to argue that the separateness of persons has no real moral significance, and that its apparent
obviousness is illusory. For instance, in the case of Mary and Jane, other forms of utilitarianism, for example rule
utilitarianism, just wouldn’t demand that Jane give Mary her car. If one considers the whole picture, it’s clear that a
society operating upon rules that reward the lazy or don’t allow individuals to keep the fruits of their labors will be
dysfunctional, resentment-ridden, and unproductive. So, contrary to appearances, utilitarianism doesn’t necessarily require
that the separateness of Jane’s person be denied on moral grounds in order to deal with Mary’s request.
Still, it’s not clear at all either that people are fully separate (see 3.12 Individual/collective) or that, even if they are, it
follows logically that redistributions of goods are unjust. Redistributions may be desirable for non-utilitarian reasons, say
for reasons of duty or virtue. In addition, once one accepts that transfers of welfare may be limited by other
considerations (e.g. the desire for security and stability of property and for effort and creativity to be rewarded), the
idea that such transfers are unjust becomes less plausible. European welfare states, for example, routinely redistribute
wealth from the rich to the poor through the taxation system, and most Europeans think this is a requirement of justice,
not an affront to it.
Furthermore, the principle of the separateness of persons may lead to repellent consequences of its own. For example,
suppose that the lives of many millions could be significantly improved by reducing the quality of life of a few of the best
off in a very small way, a way that left them still much better off than the rest. Unyielding insistence on honoring the
separateness of persons would, however, prohibit anyone from doing so. Is that prohibition something we should be
morally willing to accept?
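A toy calculation (with invented numbers, offered only as an illustration) makes the trade-off vivid. Suppose reducing the well-being of each of the 100 best-off people by 1 unit would raise the well-being of each of 10 million badly-off people by 2 units:
total loss to the best off: 100 × 1 = 100 units
total gain to the badly off: 10,000,000 × 2 = 20,000,000 units
An aggregative view endorses the transfer without hesitation; an unyielding insistence on the separateness of persons forbids it, however enormous the net gain.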
5.14 Skepticism
In June 2002, a local council of elders in the Pakistani village of Meerwala allegedly sentenced 29-year-old Mukhtar Mai
to be gang raped by the male members of another local family in retribution for an allegedly improper relationship that
Mukhtar’s teenage brother had developed with one of the female members of the other family. International criticism
of the sentence, as well as criticism from many quarters within Pakistan, was fierce.
But who’s to say, and on what basis, that this punishment is unjust or just? Is it even possible to justify any moral claim,
principle, or conclusion in anything but a provisional way? Are there really any moral “facts” or “truths” about her
sentence, at least any that can actually be known? Even if there are, is there any reason to act morally or to care about
morality’s commands? The constellation comprising these and other questions has come to be called “moral
skepticism.”
Moral skeptics commonly hold that moral beliefs have purely subjective or internal bases, usually in feeling, and that no
objective or external dimensions of the world can either explain or define moral practice and language. So, on this score,
egoists, hedonists, and even moral sentiment thinkers would qualify as skeptics.
This recent usage, however, deviates from earlier usages, and overlaps quite a bit with moral nihilism. Ancient
Hellenistic skeptics, like Pyrrho of Elis and Sextus Empiricus, seem to have held more cautious attitudes toward the
possibility of moral truth. Rather than concluding negatively or positively about whether some doctrine is true, these
skeptics withheld judgment, neither affirming nor denying. This, in turn, led them to a tranquil, undisturbed state
(ataraxia), freeing skeptics from the conflict and disturbance of dogmatic belief. In particular, Hellenistic skeptics refused
the Stoics’ claim that people can apprehend the natural law and moral cataleptic impressions, which supposedly provide
an indubitable and secure ground for moral argument and judgment. Although caricatures like those presented by
Diogenes Laertius (probably third century CE) depict skeptics as paralyzed and unable to act (unable to move out of the
way of runaway carts, for example), Hellenistic skeptics did act and reflect about action. Instead of pretending to
absolute, divine, indubitable or universal moral truths, skeptics recommend deferring to custom, to what seems natural,
and to the imperatives of feeling.
Early modern thinkers like Michel de Montaigne (1533–92) followed the ancients in this understanding of skepticism,
criticizing dogmatists and rationalists for trying to become angels but instead becoming monstrous (“Of Experience,” in
Essays). For Montaigne, it’s better to accept that one is no more than a finite, history and culture-bound human being.
Answering skepticism
Many of the claims that motivate moral skepticism are accepted by those who nonetheless believe meaningful morality
is still possible. Non-cognitivists, for example, accept that there are no moral facts as such, but they still believe that
moral discourse is meaningful and fruitful. What tips people over to skepticism is the nagging concern that morality may
only be possible if there are absolute moral facts that we can know, but that there are no such facts. As with other forms of skepticism, critics claim that moral skepticism only gets off the ground because it sets an impossibly high standard for what can qualify as genuine ethics and then complains that nothing can meet the test.
On this view, the serious claims of skepticism simply undermine arrogant moralists who purport to base their claims on
the apprehension of universal natural rights, divine moral principles, natural law, or the commands of reason. In any
case, skepticism recommends that if effective moral criticism is to be made, it must be done in ways that make sense in
terms of the feelings, customs, traditions, and natural psychological features of those involved.
5.15 Standpoint
G. W. F. Hegel’s 1807 classic, Phenomenology of Spirit, tells an interesting story about the relationship between a master
and a slave. While at the outset, the master in every way appears to hold a superior position to the slave, by the end of
Hegel’s exposition, we find that things are decidedly more complex and that the slave has achieved certain capacities
denied the master – including the capacity to apprehend various truths the master cannot know. Karl Marx (1818–83)
adopted this “master–slave dialectic,” substituting the exploited working class for the slave and the exploiting ruling
class for the master. Jean-Paul Sartre (1905–80), too, found influence in the idea, using it to devastating effect when he
defended violent rebellion against colonialism in his Preface to Frantz Fanon’s Wretched of the Earth (1963). The insight
common to all three thinkers is that things look very different from different points of view. This insight underwrites a
branch of philosophy that’s come to be called “standpoint theory.”
In its most basic form, standpoint theory argues two propositions: First, what appears to be true or good or right to
people is intrinsically related to the social, economic, and gendered position from which they see it. Second, moral
reasoning is neither uniform nor universal. For a very long time, philosophers have held that reasoning is the same for
any rational being at any place and any time – like 2 + 2 = 4. But if moral reasoning is tied to one’s standpoint, then those
in different standpoints will reason about ethics differently. Contrary to simple relativism, however, not all standpoints
are morally or epistemologically equivalent.
While, for example, the wealthy may believe they understand the world better than the poor, the situation is actually just
the reverse. The wealthy, because of their snobbery and their fear of the poor, isolate themselves in protected enclaves
– seeing the world only from the top of the skyscraper, as it were. The poor, by contrast, know both life at the bottom
(where they live) and life at the top (where they work). Similarly, minorities know their own communities as well as the
larger majority society because they must circulate in both. Those belonging only to majority races and religions,
however, tend to know only themselves.
It has been feminist theorists, however, who have most fully developed the concept of “standpoint.” Women, say these
theorists, hold distinctive standpoints both as subordinates in the patriarchy and in their roles as mothers, caregivers,
and the organizers of various social networks. Theorists like Sara Ruddick, in her book Maternal Thinking (1989), have
accordingly argued that maternal practices render women more ethically competent to understand and resolve moral
and political conflicts.
Attractions
One advantage often attributed to standpoint theory is that it allows theorists to attribute specific abilities to a class of
people without claiming that the members of that class possess them in an essential way or by nature. If blacks or
women, for example, possess superior capacities of some sort, they do so not because of some inherent essence that
defines them but rather simply through their contingently occupying certain standpoints in the social order. So, in fact,
males can adopt at least some of what are at present female standpoints when they start thinking and acting from those standpoints – when, for example, they take up “maternal practices.”
If standpoint theory is correct, then significant, perhaps decisive weight must be given to voices from standpoints that
have long been ignored or silenced, from the accounts, judgments, and narratives articulated by the oppressed. For
example, with regard to issues of the sexual harassment of women, women’s voices must be placed in the foreground.
Moral assessments concerning the poor, the working classes, prisoners, and racial minorities must be attentive to the
way things look from their standpoints.
Critique
Sometimes it’s easy to tell whose standpoint has been neglected. For example, it was clear that the voices of blacks
under South African apartheid should have been given a greater hearing. But perhaps some cases aren’t so clear. In the
case of the Israel–Palestine conflict, each adversary claims the standpoint of the oppressed, besieged, and victimized:
Israeli Jews claim a privileged standpoint as victims of present and historical anti-Semitism surrounded by avowed
enemies; Palestinian Arabs claim the standpoint of the dispossessed and of those living under illegal, brutal, racist
occupation. How does one rank or adjudicate the competing claims of different standpoints? Moreover, doesn’t the privileged groups’ superior education, access to information, and opportunity for travel tip the balance back in favor of the standpoints of
privilege? Isn’t it true that oppression brings deprivation rather than elevation, ignorance rather than understanding? If
standpoint theory is right, then, doesn’t it lead to the rather incredible conclusion that since the oppressed understand
things better and possess better moral capacities, oppression and deprivation aren’t quite so bad after all? Or at least
doesn’t it lead to this strange trade-off: privileged ignorance on the one side or oppressed wisdom on the other? Which
would you choose?
There’s also the danger of presenting the viewpoint of a particular social group as being more homogeneous than it really
is. Can we really speak of a single, uniform standpoint that, say, all women, all workers, all members of a minority class,
or even all slaves share? Or would that mask the individuality of people who happen to belong to a certain group?
5.16 Supererogation
Siblings Sly, Freddie, and Rose always entered the national lottery together, and one day they won $3 million – $1
million each. Sly spent some and invested some, but gave nothing away. Freddie gave away 20 percent to charity. Rose,
however, bought herself a bottle of cheap champagne and gave away the remaining $999,975 to provide clean water for
thousands of people in Tanzania.
When we think about what morality demands of us, many think that it requires a certain lack of selfishness. Sly may not
be the most evil person alive, but a good person would have shared their good fortune at least a little, perhaps as much
as Freddie. But Rose’s generosity seems to go over and above what could reasonably be expected of her. Giving all her
winnings away is said to be a supererogatory act. People praise such acts as good, but they don’t criticize those who do
not perform them. This is because it’s generally recognized that acts like Rose’s involve doing more than one is morally
obliged to do.
The exceptional nature of supererogatory acts means that they’re thought to merit special praise. For example, the
Congressional Medal of Honor is presented to a soldier who “distinguishes himself conspicuously by gallantry and
intrepidity at the risk of his life above and beyond the call of duty.” A soldier’s simply performing his or her duty is
respectable and honorable, but merely dutiful conduct doesn’t merit an award like this. There are, it therefore seems,
morally praiseworthy forms of conduct in addition to those that morality requires. There is, one might say, “heroic
virtue” in addition to “ordinary virtue.” Tzvetan Todorov raises this issue with particular poignancy in his reflections on
moral life in concentration camps, Facing the Extreme (1996).
A special category?
Some moral theories, however, accommodate the supererogatory more easily than others. Deontological or duty-based
ethics tend to specify a limited range of acts that people are duty-bound to perform – therefore leaving plenty of space
to do more, if one so wishes. But act consequentialist theories can seem actually to require things that one would
ordinarily think of as supererogatory.
For example, let’s imagine Rose has a comfortable home and lifestyle before she wins the lottery. The extra pleasure she
will get out of life from the winnings (the increase in marginal utility, as economists like to say) is therefore fairly
minimal, considering that most research seems to suggest that once a comfortable material standard of living has been
achieved, happiness does not increase much more with increased wealth. If, however, she spends the money on clean
water provision for Tanzanians, thousands of people see their welfare and happiness increase significantly. Since this is
the course of action that yields the best consequences by far, it would seem wrong for her not to do it. So, what seems
like a heroic action turns out to be one everyone in her position should be expected to perform. Act consequentialists,
therefore, would seem to be committed to the view that supererogatory acts are very rare.
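The act-consequentialist arithmetic behind this can be made explicit with invented figures (illustrative assumptions only, not drawn from the example itself). Suppose that, because her standard of living is already comfortable, keeping the $999,975 would add only 5 units of well-being to Rose’s life, whereas clean water would add 2 units each to the lives of 10,000 Tanzanians:
well-being gained if Rose keeps the money: 5 units
well-being gained if Rose gives it away: 10,000 × 2 = 20,000 units
On this reckoning, giving the money away produces vastly better consequences, so for the act consequentialist it is not merely admirable but required.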
This needn’t mean, however, that for consequentialists the intuition that some moral actions are more heroic than
others is simply mistaken. It could be accepted, for example, that although people are equally bound by all moral duties,
human nature and social circumstances make some duties much harder to perform than others. Rose isn’t to be praised,
therefore, because what she did was beyond her duties, but because the vast majority of human beings would find
fulfilling this duty very difficult.
Another way to save the intuition that some acts are exceptionally praiseworthy without recourse to the
supererogatory is to claim that some duties have a stronger claim on us than others. For example, the duty not to kill
others makes so strong a claim that we legislate against it. The duty to be honest with our spouse seems to make a
slightly weaker claim. Hence, lying to one’s spouse about a serious matter isn’t something people consider a sufficiently
serious breach of duty to pass laws against it; but it is considered serious enough to warrant various kinds of reprimand
and social sanction. The duty to give away wealth seems to make an even weaker claim. Not giving away a portion of
one’s wealth, therefore, although thought by many to be a violation of duty, doesn’t make a sufficiently strong claim
upon us to warrant much disapproval at all.
One problem with this solution, however, is that while it explains why sometimes people aren’t punished for failing in
their duties, it doesn’t explain why they’re praised in extraordinary ways for fulfilling them. It’s not just that people don’t
blame those who fail to give away a portion of their wealth; they vigorously praise people who do.
It remains a serious possibility, therefore, that we should all act like Rose in the same circumstances and that our
surprise that she was so generous does not show that she acted above the call of duty, but that we so often fail to fulfill
the duties that fall upon us.
5.17 Tragedy
An airplane has been hijacked and is heading for a major city, where the hijackers say it will be deliberately crashed,
bringing devastation and death to thousands. The air force commander doesn’t believe it’s right to kill civilians,
especially those on one’s own side of a conflict. But the only way he can stop the suicide mission is to order the plane
shot down above an unpopulated area, killing approximately 200 innocent passengers – as well as, of course, the
hijackers.
Most people would say that the commander is right to order the plane shot down. Yet, no matter how one looks at it,
the decision involves killing 200 innocent people. It’s true that it seems likely that they’re going to die anyway. But isn’t
there a moral difference between killing and letting die? If someone’s going to die soon, does that mean it’s okay to kill
that person? Isn’t killing the innocent, even to save other innocents, morally wrong?
One might say that this is an example of a moral tragedy. In the dramatic sense, a tragedy is a story in which a bad outcome follows inevitably, usually from the protagonist’s fatal flaw. By contrast, a moral tragedy occurs when, no matter
what one does, something morally bad must result, and the best one can hope for is to do the least bad thing. In morally
tragic situations the choice is not between the good and the bad, but the more and less bad. Indeed, according to
Martha Nussbaum (b. 1947), trying to lead a morally good life exposes one to moral tragedy. Goodness, in her
rendering, is a fragile thing. Others, following Stanley Cavell (b. 1926), have argued that the pathological qualities of
certain philosophical conundrums, especially those related to skepticism, lead to tragic results, at least in the dramatic
sense.
Although the thought that some choices leave us with no truly good option seems perfectly understandable, there is
nonetheless something odd about saying that someone did wrong if what he or she did was the best thing they could do
under the circumstances. For this reason it might be thought that, contrary to appearances, moral tragedy is impossible:
there’s always some best thing that one can do; and if that is indeed what one does, one does no wrong. But there are
several ways of explaining the seeming paradox of rightly choosing the wrong thing while retaining the idea of moral
tragedy.
The key is to distinguish the good and the bad from the right and the wrong, and to note that “right” and “wrong” carry two senses. If one thinks of “good” and “bad” as pertaining to outcomes or consequences, and “right” and “wrong” as pertaining to actions, then it clearly is
possible for right actions to have bad outcomes (and wrong actions good ones). In this schema, it’s quite easy to explain
moral tragedy in terms of people doing the right thing, even though what results is a foreseeable bad. Moral tragedy,
on this view, is about the inevitability of bad consequences, not of performing a wrong act.
This solution, however, isn’t available to consequentialists, for whom an action must be wrong if its consequences are
bad. They do, however, have another way of making moral tragedy sound more plausible. “Right” and “wrong” also bear the sense of “correct” and “incorrect.” When someone chooses the lesser of two evils, therefore, it’s true to say that they do wrong. But in another, important sense, one can say they did the right thing: they chose correctly
between the options available to them. It doesn’t make what they did morally right, but it absolves them of any blame
for the bad consequences.
Whether moral tragedy is or isn’t avoidable, to say that someone has behaved in a morally wrong but nevertheless
correct way such that he or she is not morally culpable looks like a rather uncomfortable conceptual contortion. But
perhaps it’s a necessary one. It is usual to think that if someone knowingly acts wrongly and wasn’t forced to do so, then
that person is to blame for the act. But perhaps it should also be recognized that when there are no good options
available, a person is, in a sense, forced to do wrong. In such cases, therefore, although the wrong is done knowingly,
because the wrong was forced it’s not blameworthy. This seems particularly pertinent in the case of political leaders,
who often do find that their options are limited by circumstances. It’s not only when there’s only one choice that free
will is compromised.
Let’s look again at our opening story in Chapter 7 on utilitarianism. A millionaire makes a dying request for you to donate
$5 million to the Yankees. You agree but then are tempted to give the money to the World Hunger Relief Organization
instead. What should you do? The utilitarian, who focuses on the consequences of actions, would tell you to act in a way
that advances the greatest good for the greatest number. In essence, the end justifies the means. Accordingly, breaking
your promise to the millionaire and donating to the World Hunger Relief Organization appears to be the way to go.
The deontological answer to this question, however, is quite the opposite. It is not the consequences that determine the
rightness or wrongness of an act but certain features in the act itself or in the rule of which the act is a token or
example. The end never justifies the means. For example, there is something right about truth telling and promise
keeping even when such actions may bring about some harm; and there is something wrong about lying and promise
breaking even when such actions may bring about good consequences. Acting unjustly is wrong even if it will maximize
expected utility.
In this chapter, we explore deontological approaches to ethics, specifically that of Immanuel Kant (1724–1804). The
greatest philosopher of the German Enlightenment and one of the most important philosophers of all time, Kant was
both an absolutist and a rationalist. He believed that we could use reason to work out a consistent, nonoverridable set
of moral principles.
KANT’S INFLUENCES
To understand Kant’s moral philosophy, it is helpful to know a little about his influences, and we will consider two here.
The first was the philosophical debate of his time between rationalism and empiricism; the second was the natural law intuitionist theories that then dominated moral philosophy.
Rationalism and Empiricism
The philosophical debate between rationalism and empiricism took place in the seventeenth and eighteenth centuries.
Rationalists, such as René Descartes, Baruch Spinoza, Gottfried Leibniz, and Christian Wolff, claimed that pure reason
could tell us how the world is, independent of experience. We can know metaphysical truths such as the existence of
God, the immortality of the soul, freedom of the will, and the universality of causal relations apart from experience.
Experience may be necessary to open our minds to these ideas, but essentially they are innate ideas that God implants
in us from birth. Empiricists, led by John Locke and David Hume, on the other hand, denied that we have any innate
ideas and argued that all knowledge comes from experience. Our minds are a tabula rasa, an empty slate, upon which
experience writes her lessons.
The rationalists and empiricists carried their debate into the area of moral knowledge. The rationalists claimed that our
knowledge of moral principles is a type of metaphysical knowledge, implanted in us by God, and discoverable by reason
as it deduces general principles about human nature. On the other hand, empiricists, especially Francis Hutcheson, David
Hume, and Adam Smith, argued that morality is founded entirely on the contingencies of human nature and based on
desire. Morality concerns making people happy, fulfilling their reflected desires, and reason is just a practical means of
helping them fulfill their desires. There is nothing of special importance in reason in its own right. It is mainly a
rationalizer and servant of the passions. As Hume said, “Reason is, and ought only to be the slave of the passions, and can
never pretend to any other office than to serve and obey them.” Morality is founded on our feeling of sympathy with
other people’s sufferings, on fellow feeling. For such empiricists then, morality is contingent upon human nature:
If we had a different nature, then we would have different feelings and desires, and hence we would have different
moral principles.
Kant rejected the ideas of Hutcheson, Hume, and Smith. He was outraged by the thought that morality should depend
on human nature and be subject to the fortunes of change and the luck of empirical discovery. Morality is not
contingent but necessary. It would be no less binding on us if our feelings were different from what they are. Kant
writes,
Every empirical element is not only quite incapable of being an aid to the principle of morality, but is even highly
prejudicial to the purity of
morals; for the proper and inestimable worth of an absolutely good will consists just in this, that the principle of action is
free from all influence of contingent grounds, which alone experience can furnish. We cannot too much or too often
repeat our warning against this lax and even mean habit of thought which seeks for its principle amongst empirical
motives and laws; for human reason in its weariness is glad to rest on this pillow, and in a dream of sweet illusions it
substitutes for morality a bastard patched up from limbs of various derivation, which looks like anything one chooses to
see in it; only not like virtue to one who has once beheld her in her true form.1
No, said Kant, it is not our desires that ground morality but our rational will. Reason is sufficient for establishing the
moral law as something transcendent and universally binding on all rational creatures.
Natural Law Intuitionism
Since the Middle Ages, one of the dominant versions of European moral philosophy was natural law theory. In a
nutshell, this view maintained that, through rational intuitions embedded in human nature by God, we discover eternal
and absolute moral principles. Medieval natural law philosopher Thomas Aquinas argued that we have a special mental
process called synderesis that gives us general knowledge of moral goodness. From this knowledge, then, we derive a
series of basic moral obligations. What is key here is the idea that humans have a natural faculty that gives us an
intuitive awareness of morality. This general position is called intuitionism. During the seventeenth and eighteenth
centuries, some sort of intuitionism was assumed in most ethical theories, and Kant was heavily influenced by some of
them. Two basic forms emerged: act- and rule-intuitionism.
Act-intuitionism sees each act as a unique ethical occasion and holds that we must decide what is right or wrong in each
situation by consulting our conscience or our intuitions or by making a choice apart from any rules. For each specific act
that we consider performing, we must consult our conscience to discover the morally right (or wrong) thing to do. An
expression of act-intuitionism is found in the famous moral sermons of Joseph Butler (1692–1752), a bishop within the Church
of England. He writes,
[If] any plain honest man, before he engages in any course of action, ask[s] himself, Is this I am going about right, or is it
wrong? ... I do not in the least doubt but that this question would be answered agreeably to truth and virtue, by almost
any fair man in almost any circumstance.2
Butler believed that we each have a conscience that can discover what is right and wrong in virtually every instance. This
is consistent with advice such as “Let your conscience be your guide.” We do not need general rules to learn what is
right and wrong; our intuition will inform us of those things. The judgment lies in the moral perception and not in some
abstract, general rule.
Act-intuitionism, however, has some serious disadvantages. First, it is hard to see how any argument could take place
with an intuitionist: Either you both have the
same intuition about lying or you don’t, and that’s all there is to it. If I believe that a specific act of abortion is morally
permissible and you believe it is morally wrong, then we may ask each other to look more deeply into our consciences,
but we cannot argue about the subject. There is a place for deep intuitions in moral philosophy, but intuitions must still
be scrutinized by reason and corrected by theory.
Second, it seems that rules are necessary to all reasoning, including moral reasoning, and act-intuitionists seem to ignore
this. You may test this by thinking about how you learn to drive a car, to do long division, or to type. Even though you
may eventually internalize the initial principles as habits so that you are unconscious of them, one could still cite a rule
that covers your action. For example, you may no longer remember the rules for accelerating a car, but there was an
original experience of learning the rule, which you continue unconsciously to follow. Moral rules such as “Keep your
promises” and “Don’t kill innocent people” seem to function in a similar way.
Third, different situations seem to share common features, so it would be inconsistent for us to prescribe different
moral actions. Suppose you believe that it is morally wrong for John to cheat on his math exam. If you also believe that it
is morally permissible for you to cheat on the same exam, don’t you need to explain what makes your situation different
from John’s? If I say that it is wrong for John to cheat on exams, am I not implying that it is wrong for anyone relevantly
similar to John (including all students) to cheat on exams? That is, morality seems to involve a universal aspect, or what
is called the principle of universalizability: If one judges that X is right (or wrong) or good (or bad), then one is rationally
committed to judging anything relevantly similar to X as right (wrong) or good (bad). If this principle is sound, then act-
intuitionism is misguided.
The other intuitionist approach, rule-intuitionism, maintains that we must decide what is right or wrong in each situation
by consulting moral rules that we receive through intuition. Rule-intuitionists accept the principle of universalizability as
well as the notion that in making moral judgments we are appealing to principles or rules. Such rules as “We ought
never to lie,” “We ought always to keep our promises,” and “We ought never to execute an innocent person” constitute
a set of valid prescriptions regardless of the outcomes. The rule-intuitionist to have the greatest impact on Kant was
German philosopher Samuel Pufendorf (1632–1694), the dominant natural law theorist of his time. Pufendorf describes
the intuitive process by which we acquire moral knowledge:
It is usually said that we have knowledge of this [moral] law from nature itself. However, this is not to be taken to mean
that plain and distinct notions concerning what is to be done or avoided were implanted in the minds of newborn
people. Instead, nature is said to teach us, partly because the knowledge of this law may be attained by the help of the
light of reason. It is also partly because the general and most useful points of it are so plain and clear that, at first sight,
they force assent.... Although we are not able to remember the precise time when they first took hold of our
understandings and possessed our minds, we can have no other opinion of our knowledge of this law except that it was
native to our beings, or born together and at the same time with ourselves.
The moral intuitions that we have, according to Pufendorf, fall into three groups: duties to God, to oneself, and to
others. The duties in all these cases are moral rules that guide our actions. Within each of these three groupings, Pufendorf then advocates a number of more specific rules of duty.
Kant was influenced by Pufendorf in two ways. First, Kant was a rule-intuitionist of a special sort: He believed that moral
knowledge comes to us through rational intuition in the form of moral rules. As we will see, Kant’s moral psychology is
rather complex, and his conception of intuition draws on a distinct notion of reason, which we don’t find in Pufendorf.
Second, Kant accepted Pufendorf’s division of duties toward God, oneself, and others. Duties toward God, Kant argues,
are actually religious duties, not moral ones. However, duties to oneself and others are genuine moral obligations.
The principal moral rule in Kant’s ethical theory is what he calls the categorical imperative—essentially meaning
“absolute command.” Before introducing us to the specific rule itself, he sets the stage with an account of intrinsic moral
goodness.
As we have noted, Kant wanted to remove moral truth from the zone of contingency and empirical observation and
place it securely in the area of necessary, absolute, universal truth. Morality’s value is not based on the fact that it has
instrumental value, that it often secures nonmoral goods such as happiness; rather, morality is valuable in its own right:
Nothing can possibly be conceived in the world, or even out of it, which can be called good without qualification, except
the Good Will. Intelligence, wit, judgment, and the other talents of the mind, however they may be named, or courage,
resolution, perseverance, as qualities of temperament, are undoubtedly good and desirable in many respects; but these
gifts of nature also may become extremely bad and mischievous if the will which is to make use of them, and which,
therefore, constitutes what is called character, is not good.... Even if it should happen that, owing to special disfavor of
fortune, or the stingy provision of a step-motherly nature, this Good Will should wholly lack power to accomplish its
purpose, if with its greatest efforts it should yet achieve nothing, and there should remain only the Good Will, ... then,
like a jewel, it would still shine by its own light, as a thing which has its whole value in itself. Its usefulness or fruitfulness
can neither add to nor take away anything from this value.4
The only thing that is absolutely good, good in itself and without qualification, is the good will. All other intrinsic goods,
both intellectual and moral, can serve the vicious will and thus contribute to evil. They are only morally valuable if
accompanied by a good will. Even success and happiness are not good in themselves. Honor can lead to pride.
Happiness without good will is undeserved luck, ill-gotten gain. Nor is utilitarianism plausible, for if we have a quantity of
happiness to distribute, is it just to distribute it equally, regardless of virtue? Should we not distribute it discriminately,
according to moral goodness? Happiness should be distributed in proportion to people’s moral worth.
How successful is Kant’s argument for the good will? Could we imagine a world where people always and necessarily put
nonmoral virtues to good use, where it is simply impossible to use a virtue such as intelligence for evil? Is happiness any
less good simply because one can distribute it incorrectly? Can’t one put the good will itself to bad use as the misguided
do-gooder might? As the aphorism goes, “The road to hell is paved with good intentions.” Could Hitler have had good
intentions in carrying out his dastardly programs? Can’t the good will have bad effects?
Although we may agree that the good will is a great good, it is not obvious that Kant’s account is correct, that it is the
only inherently good thing. For even as intelligence, courage, and happiness can be put to bad uses or have bad effects,
so can the good will; and even as it does not seem to count against the good will that it can be put to bad uses, so it
should not count against the other virtues that they can be put to bad uses. The good will may be a necessary element
to any morally good action, but whether the good will is also a sufficient condition to moral goodness is another
question.
Nonetheless, perhaps we can reinterpret Kant so as to preserve his central insight. There does seem to be something
morally valuable about the good will, apart from any consequences. Consider the following illustration. Two soldiers
volunteer to cross enemy lines to contact their allies on the other side. Both start off and do their best to get through
the enemy area. One succeeds; the other does not and is captured. But, aren’t they both morally praiseworthy? The
success of one in no way detracts from the goodness of the other. Judged from a common-sense moral point of view,
their actions are equally good; judged from a utilitarian or consequentialist view, the successful act is far more valuable
than the unsuccessful one. Here, we can distinguish the agent’s worth from the value of the consequences and make
two separate, nonconflicting judgments.
For Kant, all mention of duties (or obligations) can be translated into the language of imperatives, or commands. As
such, moral duties can be said to have imperative force. He distinguishes two kinds of imperatives: hypothetical and
categorical. The formula for a hypothetical imperative is “If you want A, then do B.” For example, “If you want a good
job, then get a good education,” or “If you want to be happy, then stay sober and live a balanced life.” The formula for a
categorical imperative is simply: “Do B!” That is, do what reason
discloses to be the intrinsically right thing to do, such as “Tell the truth!” Hypothetical, or means–ends, imperatives are
not the kind of imperatives that characterize moral actions. Categorical, or unqualified, imperatives are the right kind of
imperatives, because they show proper recognition of the imperial status of moral obligations. Such imperatives are
intuitive, immediate, absolute injunctions that all rational agents understand by virtue of their rationality.
Kant argues that one must perform moral duty solely for its own sake (“duty for duty’s sake”). Some people conform to
the moral law because they deem it in their own enlightened self-interest to be moral. But they are not truly moral
because they do not act for the sake of the moral law. For example, a businessman may believe that “honesty is the best
policy”; that is, he may judge that it is conducive to good business to give his customers correct change and high-quality
products. But, unless he performs these acts because they are his duty, he is not acting morally, even though his acts
are the same ones they would be if he were acting morally.
The kind of imperative that fits Kant’s scheme as a product of reason is one that universalizes principles of conduct. He
names it the categorical imperative (CI): “Act only according to that maxim by which you can at the same time will that it
would become a universal law.” The categorical imperative, for Kant, is a procedure for determining the morality of any
course of action. All specific moral duties, he writes, “can be derived from this single imperative.” Thus, for example,
duties to oneself such as developing one’s talents and not killing oneself can be deduced from the categorical
imperative. So too can duties to others, such as keeping promises and helping those in need.
The first step in the categorical imperative procedure is for us to consider the underlying maxim of our proposed action.
By maxim, Kant means the general rule in accordance with which the agent intends to act. For example, if I am thinking
about assisting someone in need, my underlying maxim might be this: “When I see someone in need, I should assist him
or her when it does not cause an undue burden on me.” The second step is to consider whether this maxim could be
universalized to apply to everyone, such as “When anyone sees someone in need, that person should assist him or her
when it does not cause an undue burden on the person.” If it can be universalized, then we accept the maxim, and the
action is moral. If it cannot be universalized, then we reject the maxim, and the action is immoral. The general scheme of
the CI procedure, then, is this:
1. Formulate the maxim (M) of the proposed action.
2. Universalize the maxim into a principle (P) that applies to everyone.
3. Ask whether P can be consistently willed as a universal law. If it can, the action is morally permissible; if it cannot, the action is immoral.
According to Kant, there is only one categorical imperative, but he presents three formulations of it:
Principle of the law of nature. “Act as though the maxim of your action were by your will to become a universal
law of nature.”
Principle of ends. “So act as to treat humanity, whether in your own person or in that of any other, in every case
as an end and never as merely a means.”
Principle of autonomy. “So act that your will can regard itself at the same time as making universal law through
its maxims.”
The theme that ties all of these formulations together is universalizability: Can a particular course of action be
generalized so that it applies to any relevantly similar person in that kind of situation? For Kant, determining whether a
maxim can successfully be universalized hinges on which of the three specific formulations of the categorical imperative
we follow. The bottom line for all three, though, is that we stand outside our personal maxims and estimate
impartially and impersonally whether our maxims are suitable as principles for all of us to live by.
Let’s look at each of these formulations, beginning with the first and most influential, the principle of the law of nature.
Again, the CI principle of the law of nature is this: “Act as though the maxim of your action were by your will to become
a universal law of nature.” The emphasis here is that you must act analogously to the laws of physics, specifically insofar
as such laws are not internally conflicting or self-defeating. For example, nature could not subsist with a law of gravity
that had an object fall both up and down at the same time. Similarly, a system of morality could not subsist when a
universalized maxim has an internal conflict. If you could consistently will that everyone would act on a given maxim,
then there is an application of the categorical imperative showing the moral permissibility of the action. If you could not
consistently will that everyone would act on the maxim, then that type of action is morally wrong; the maxim must then
be rejected as self-defeating.
The heart of this formulation of the CI is the notion of a “contradiction,” and there has been much debate about exactly
the kind of contradiction that Kant had in mind. John Stuart Mill famously criticized this aspect of the CI: “[Kant] fails,
almost grotesquely, to show that there would be any contradiction, any logical (not to say physical) impossibility, in the
adoption by all rational beings of the most outrageously immoral rules of conduct” (Utilitarianism, Ch. 1). But
contemporary American philosopher Christine Korsgaard argues that there are three possible interpretations of what
Kant meant by “contradiction.” First, Kant might have meant that the universalization of such a maxim would be a logical
contradiction, where the proposed action would simply be inconceivable. Second, he might have meant that it would be
a teleological contradiction, where the maxim could not function as a law within a purposeful and organized system of
nature. Third, he might have meant that it would be a practical contradiction, where my action would become
ineffective for achieving my purpose if everyone tried to use it for that purpose. Korsgaard believes that all three of
these interpretations are supported by Kant’s writings, and Kant himself may not have even seen any differences
between the three. But, she argues, the third one is preferable because it enables the universalization test to handle
more cases:
What the test shows to be forbidden are just those actions whose efficacy in achieving their purposes depends upon
their being exceptional. If the action no longer works as a way of achieving the purpose in question when it is
universalized, then it is an action of this kind.5
This formulation of the CI reveals a practical contradiction in my action insofar as it shows that I am trying to get away
with something that would never work if others did the same thing. It exposes unfairness, deception, and cheating in
what I am proposing.
Kant gives four examples of the application of this test: (1) making a lying promise, (2) committing suicide, (3) neglecting
one’s talent, and (4) refraining from helping others. The first and fourth of these are duties to others, whereas the
second and third of these are duties to oneself. Kant illustrates how the CI principle of the law of nature works by
applying it to each of these maxims.
Making a Lying Promise Suppose I need some money and am considering whether it would be moral to borrow the
money from you and promise to repay it without ever intending to do so. Could I say to myself that everyone should
make a false promise when he is in difficulty from which he otherwise cannot escape? The maxim of my act is M:
M. Whenever I need money, I should make a lying promise while borrowing the money.
Can I universalize the maxim of my act? By applying the universalizability test to M, we get P:
P. Whenever anyone needs money, that person should make a lying promise while borrowing the money.
But, something has gone wrong, for if I universalize this principle of making promises without intending to keep them, I
would be involved in a contradiction:
I immediately see that I could will the lie but not a universal law to lie. For with such a law [that is, with such a maxim
universally acted on] there would be no promises at all.... Thus my maxim would necessarily destroy itself as soon as it
was made a universal law.6
The resulting state of affairs would be self-defeating because no one in his or her right mind would take promises as
promises unless there was the expectation of fulfillment. Thus, the maxim of the lying promise fails the universalizability
criterion; hence, it is immoral. Now, I consider the opposite maxim, one based on keeping my promise:
M1. Whenever I need money, I should make a sincere promise while borrowing it.
P1. Whenever anyone needs money, that person should make a sincere promise while borrowing it.
Yes, I can universalize M1 because there is nothing self-defeating or contradictory in this. So, it follows, making sincere
promises is moral; we can make the maxim of promise keeping into a universal law.
Committing Suicide Some of Kant’s illustrations do not fare as well as the duty to keep promises. For instance, he
argues that the categorical imperative would prohibit suicide because we could not successfully universalize the maxim
of such an act. If we try to universalize it, we obtain the principle, “Whenever it looks like one will experience more pain
than pleasure, one ought to kill oneself,” which, according to Kant, is a self-contradiction because it would go against the
very principle of survival upon which it is based. But whatever the merit of the form of this argument, we could modify
the principle to read “Whenever the pain or suffering of existence erodes the quality of life in such a way as to make
nonexistence preferable to a suffering existence, one is permitted to commit suicide.” Why couldn’t this (or something
close to it) be universalized? It would cover the rare instances in which no hope is in sight for terminally ill patients or for
victims of torture or deep depression, but it would not cover the kinds of suffering and depression most of us experience
in the normal course of life. Kant seems unduly absolutist in his prohibition of suicide.
Neglecting One’s Talent Kant’s other two examples of the application of the CI principle of the law of nature are also
questionable. In his third example, he claims that we cannot universalize a maxim to refrain from developing our talents.
But again, could we not qualify this and stipulate that under certain circumstances it is permissible not to develop our
talents? Perhaps Kant is correct in that, if everyone selfishly refrained from developing talents, society would soon
degenerate into anarchy. But couldn’t one universalize the following maxim M3?
M3. Whenever I am not inclined to develop a talent, and this refraining will not seriously undermine the social order, I
may so refrain.
Refraining from Helping Others Kant’s last example of the way the CI principle of the law of nature functions regards the
situation of not coming to the aid of others whenever I am secure and independent. He claims that I cannot universalize
this maxim because I never know whether I will need the help of others at some future time. Is Kant correct about this?
Why could I not universalize a maxim never to set myself a goal whose achievement appears to require the cooperation
of others? I would have to give up any goal as soon as I realized that cooperation with others was required. In what way
is this contradictory or self-defeating? Perhaps it would be selfish and cruel to make this into a universal law, but there
seems nothing contradictory or self-defeating in the principle itself. The problems with universalizing selfishness are the
same ones we encountered in analyzing egoism, but it is doubtful whether Kant’s categorical imperative captures what
is wrong with egoism. Perhaps he has other weapons that do elucidate what is wrong with egoism (we return to this
later).
Kant thought that he could generate an entire moral law from his categorical imperative. The above test of
universalizability advocated by Kant’s principle of the law of nature seems to work with such principles as promise
keeping and truth telling and a few other maxims, but it doesn’t seem to give us all that Kant wanted. It has been
objected that Kant’s categorical imperative is both too wide and too unqualified. The charge that it is too wide is based
on the perception that it seems to justify some actions that we might consider trivial or even immoral.
For an example of a trivial action that might be mandated by the categorical imperative, consider the following maxim
M:
M. I should always tie my right shoe before my left shoe.
This generates the following principle P:
P. We should always tie our right shoe before our left shoe.
Can we universalize P without contradiction? It seems that we can. Just as we universalize that people should drive cars
on the right side of the street rather than the left, we could make it a law that everyone should tie the right shoe before
the left shoe. But it seems obvious that there would be no point to such a law—it would be trivial. But it is justified by
the categorical imperative.
It may be objected that all this counterexample shows is that it may be permissible (not obligatory) to live by the
principle of tying the right shoe before the left because we could also universalize the opposite maxim (tying the left
before the right) without contradiction. That seems correct.
Another counterexample, offered by Fred Feldman,7 appears to show that the categorical imperative endorses cheating.
Maxim M states:
M. Whenever I need a term paper for a course and don’t feel like writing one, I will buy a term paper from Research
Anonymous and submit it as my own work.
P. Whenever anyone needs a term paper for a course and doesn’t feel like writing one, the person will buy one from a
suitable source and submit it as his or her own.
This procedure seems to be self-defeating. It would undermine the whole process of academic work because teachers
would not believe that research papers really represented the people who turned them in. Learning would not occur;
grades and transcripts would be meaningless, and the entire institution of education would break down; the whole
purpose of cheating would be defeated.
But suppose we qualify the maxim to avoid this self-defeating result:
M1. When I need a term paper for a course and don’t feel like writing one, and no change in the system will occur if I submit a store-bought one, then I will buy a term paper and submit it as my own work.
P1. Whenever anyone needs a term paper for a course and doesn’t feel like writing it, and no change in the system will
occur if one submits a store-bought paper, then one will buy the term paper and submit it as one’s own work.
Does P1 pass as a legitimate expression of the categorical imperative? It might seem to satisfy the conditions, but
Kantian students have pointed out that for a principle to be universalizable, or lawlike, one must ensure that it is public.
However, if P1 were public and everyone was encouraged to live by it, then it would be exceedingly difficult to prevent
an erosion of the system. Teachers would take precautions against it. Would cheaters have to announce themselves
publicly? In sum, the attempt to universalize even this qualified form of cheating would undermine the very institution
that makes cheating possible. So, P1 may be a thinly veiled oxymoron: Do what will undermine the educational process
in such a way that it doesn’t undermine the educational process.
Another type of counterexample might be used to show that the categorical imperative refuses to allow us to do things
that common sense permits. Suppose I need to flush the toilet, so I formulate my maxim M:
M. At time t1, I will flush the toilet.
I universalize this maxim:
P. At time t1, everyone should flush their toilet.
But I cannot will this if I realize that the pressure of millions of toilets flushing at the same time would destroy the
nation’s plumbing systems, and so I could not then flush the toilet. The way out of this problem is to qualify the original
maxim M to read M1:
M1. Whenever I need to flush the toilet and have no reason to believe that it will set off the impairment or destruction
of the community’s plumbing system, I may do so.
P1. Whenever anyone needs to flush the toilet and has no reason to believe that it will set off the destruction of the
community’s plumbing system, he or she may do so.
Thus, Kant could plausibly respond to some of the objections to his theory.
More serious is the fact that the categorical imperative appears to justify acts that we judge to be horrendously
immoral. Suppose I hate people of a certain race, religion, or ethnic group. Suppose it is Americans that I hate and that I
am not an American. My maxim is this:
M. Let me kill anyone who is American.
Universalizing M, we get P:
P. Always kill Americans.
Is there anything contradictory in this injunction? Could we make it into a universal law? Why not? Americans might not
like it, but there is no logical contradiction involved in such a principle. Had I been an American when this command was
in effect, I would not have been around to write this book, but the world would have survived my loss without too much
inconvenience. If I suddenly discover that I am an American, I would have to commit suicide. But as long as I am willing
to be consistent, there doesn’t seem to be anything wrong with my principle, so far as its being based on the categorical
imperative is concerned.
As with the shoe-tying example, it would be possible to universalize the opposite—that no one should kill innocent
people. Nevertheless, we certainly wouldn’t want to say that it is permissible to adopt the principle “Always kill
Americans.”
We conclude, then, that even though the first version of the categorical imperative is an important criterion for
evaluating moral principles, it still needs supplementation. In itself, it is purely formal and leaves out any understanding
about the content or material aspect of morality. The categorical imperative, with its universalizability test, constitutes a
necessary condition for being a valid moral principle, but it does not provide us with a sufficiency criterion. That is, if any
principle is to count as rational or moral, it must be universalizable; it must apply to everyone and to every case that is
relevantly similar. If I believe that it’s wrong for others to cheat on exams, then unless I can find a reason to believe that
I am relevantly different from these others, it is also wrong for me to cheat on exams. If premarital heterosexual sex is
prohibited for women, then it must also be prohibited for men (otherwise, with whom would the men have sex—other
men’s wives?). This formal consistency, however, does not tell us whether cheating itself is right or wrong or whether
premarital sex is right or wrong. That decision has to do with the material content of morality, and we must use other
considerations to help us decide about that.
We’ve discussed Kant’s first formulation of the categorical imperative; now we will consider the two others: the
principle of ends and the principle of autonomy.
Again, the principle of ends is this: “So act as to treat humanity, whether in your own person or in that of any other, in
every case as an end and never as merely a means.” Each person as a rational being has dignity and profound worth,
which entails that he or she must never be exploited or manipulated or merely used as a means to our idea of what is
for the general good (or to any other end).
What is Kant’s argument for viewing rational beings as having ultimate value? It goes like this: In valuing anything, I
endow it with value; it can have no value apart from someone’s valuing it. As a valued object, it has conditional worth,
which is derived from my valuation. On the other hand, the person who values the object is the ultimate source of the
object’s value, and as such belongs to a different sphere of beings. We, as valuers, must conceive of ourselves as having
unconditioned worth. We cannot think of our personhood as a mere thing because then we would have to judge it to be
without any value except that given to it by the estimation of someone else. But then that person would be the source
of value, and there is no reason to suppose that one person should have unconditional worth and not another who is
relevantly similar. Therefore, we are not mere objects. We have unconditional worth and so must treat all such value-
givers as valuable in themselves—as ends, not merely means. I leave it to you to evaluate the validity of this argument,
but most of us do hold that there is something exceedingly valuable about human life.
Kant thought that this formulation, the principle of ends, was substantively identical to his first formulation of the
categorical imperative, but most scholars disagree with him. It seems better to treat this principle as a supplement to
the first, adding content to the purely formal CI principle of the law of nature. In this way, Kant would limit the kinds of
maxims that could be universalized. Egoism and the principle regarding the killing of Americans would be ruled out at
the very outset because they involve a violation of the dignity of rational persons. The process would be as follows:
1. Formulate the maxim (M).
2. Apply the ends test. (Does the maxim involve violating the dignity of rational beings?)
3. Apply the principle of the law of nature universalization test. (Can the maxim be universalized?)
In any event, we may ask whether the CI principle of ends fares better than the CI principle of the law of nature. Three
problems soon emerge. The first has to do with Kant’s setting such a high value on rationality. Why does reason and only
reason have intrinsic worth? Who gives this value to rational beings, and how do we know that they have this value?
What if we believe that reason has only instrumental value?
Kant’s notion of the high inherent value of reason will be plausible to those who believe that humans are made in the
image of God and who interpret that
as entailing that our rational capabilities are the essence of being created in God’s image: We have value because God
created us with worth—that is, with reason. But, even nontheists may be persuaded that Kant is correct in seeing
rationality as inherently good. It is one of the things rational beings value more than virtually anything else, and it is a
necessary condition to whatever we judge to be a good life or an ideal life (a truly happy life).
Kant seems to be correct in valuing rationality. It does enable us to engage in deliberate and moral reasoning, and it lifts
us above lower animals. Where he may have gone wrong is in neglecting other values or states of being that may have
moral significance. For example, he believed that we have no obligations to animals because they are not rational. But
surely the utilitarians are correct when they insist that the fact that animals can suffer should constrain our behavior
toward them: We ought not cause unnecessary harm. Perhaps Kantians can supplement their system to accommodate
this objection.
This brings us to our second problem with Kant’s formulation. If we agree that reason is an intrinsic value, then does it
not follow that those who have more of this quality should be respected and honored more than those who have less?
The argument would run something like this:
(1) Reason is an intrinsic value.
(2) People possess reason in varying degrees.
(3) Therefore, those who have more reason than others are intrinsically better.
Thus, by Kantian logic, people should be treated in exact proportion to their ability to reason, so geniuses and
intellectuals should be given privileged status in society, as Plato and Aristotle might argue. Kant could deny the second
premise and argue that rationality is a threshold quality, but the objector could come back and argue that there really
are degrees in ability to use reason, ranging from gorillas and chimpanzees all the way to the upper limits of human
genius. Should we treat gorillas and chimps as ends in themselves while still exploiting small babies and severely senile
people because the former do not yet act rationally and the latter have lost what ability they had? If we accept the
Kantian principle of ends, what should be our view on abortion, infanticide, and euthanasia?
Kant’s principle of ends says all humans have dignity by virtue of their rationality, so they are permitted to exploit
animals (who are intelligent but not rational). But suppose Galacticans who visited our planet were superrational, as
superior to us as we are to other animals. Would we then be second-class citizens whom the Galacticans could justifiably
exploit for their purposes? Suppose they thought we tasted good and were nutritious. Would morality permit them to
eat us? Kantians would probably insist that minimal rationality gives one status—but then, wouldn’t some animals who
deliberate (chimps, bonobos, gorillas, and dolphins) gain status as persons? And don’t sheep, dogs, cats, pigs, and cows
exhibit minimally rational behavior? Should we eat them? The Chinese think nothing is wrong with eating dogs and cats.
There is a third problem with Kant’s view of the dignity of rational beings. Even if we should respect them and treat
them as ends, this does not tell us very much. It may tell us not to enslave them or not to act cruelly toward them
without a good reason, but it doesn’t tell us what to do in situations where two or more of our moral duties conflict.
For example, what does it tell us to do about a terminally ill woman who wants us to help her die? What does it tell us to
do in a war when we are about to aim our gun at an enemy soldier? What does it mean to treat such a rational being as
an end? What does it tell us to do with regard to the innocent, potential victim and the gangsters who have just asked us
the whereabouts of the victim? What does it tell us about whether we should steal from the pharmacy to procure
medicine we can’t afford in order to bring healing to a loved one? It’s hard to see how the notion of ends helps us much
in these situations. In fairness to Kant, however, we must say that virtually every moral system has trouble with
dilemmas and that it might be possible to supplement Kantianism to solve some of them.
The final formulation of the categorical imperative is the principle of autonomy: “So act that your will can regard itself at
the same time as making universal law through its maxims.” That is, we do not need an external authority—be it God,
the state, our culture, or anyone else—to determine the nature of the moral law. We can discover this for ourselves.
And, the Kantian faith proclaims, everyone who is ideally rational will legislate exactly the same universal moral
principles.
The opposite of autonomy is heteronomy: The heteronomous person is one whose actions are motivated by the
authority of others, whether it is religion, the state, his or her parents, or a peer group. The following illustration may
serve as an example of the difference between these two states of being.
In the early 1960s, Stanley Milgram of Yale University conducted a series of social psychological experiments aimed at
determining the degree to which the ordinary citizen was obedient to authority. Volunteers from all walks of life were
recruited to participate in “a study of memory and learning.” Two people at a time were taken into the laboratory. The
experimenter explained that one was to play the role of the “teacher” and the other the role of the “learner.” The
teacher was put in a separate room from which he or she could see the learner through a window. The teacher was
instructed to ask the learner to choose the correct correlate to a given word, and the learner was to choose from a set of
options. If the learner got the correct word, they moved on to the next word. But, if the learner chose the wrong word,
he or she was punished with an electric shock. The teacher was given a sample shock of 45 volts just to get the feeling of
the game. Each time that the learner made a mistake, the shock was increased by 15 volts (starting at 15 volts and
continuing to 450 volts). The meter was marked with verbal designations: slight shock, moderate shock, strong shock,
very strong shock, intense shock, extreme-intensity shock, danger: severe shock, and XXX. As the experiment proceeded,
the learner would generally be heard grunting at the 75-volt shock, crying out at 120 volts, begging for release at 150
volts, and screaming in agony at 270 volts. At around 300 volts, there was usually dead silence.
Now, unbeknown to the teacher, the learner was not actually experiencing any shocks; the learners were really trained
actors simulating agony. The results
of the experiment were astounding. Whereas Milgram and associates had expected that only a small proportion of
citizens would comply with the instructions, 60 percent were completely obedient and carried out the experiment to
the very end. Only a handful refused to participate in the experiment at all once they discovered what it involved. Some
35 percent left at various stages. Milgram’s experiments were later replicated in Munich, Germany, where 85 percent of
the subjects were found to be completely “obedient to authority.”
There are two ways in which the problems of autonomy and heteronomy are illustrated by this example. In the first
place, the experiment seems to show that the average citizen acts less autonomously than we might expect. People are
basically heteronomous, herd followers. In the second place, there is the question about whether Milgram should have
subjected people to these experiments. Was he violating their autonomy and treating them as means (rather than ends)
in deceiving them in the way he did? Perhaps a utilitarian would have an easier time justifying these experiments than a
Kantian.
In any case, for Kant, it is our ability to use reason in universalizing the maxims of our actions that sets rational beings
apart from nonrational beings. As such, rational beings belong to a kingdom of ends. Kant thought that each of us—as a
fully rational, autonomous legislator—would be able to reason through to exactly the same set of moral principles, the
ideal moral law.
One of the problems that plague all formulations of Kant’s categorical imperative is that it yields unqualified absolutes.
The rules that the categorical imperative generates are universal and exceptionless. He illustrates this point with regard
to truth telling: Suppose an innocent man, Mr. Y, comes to your door, begging for asylum, because a group of gangsters
is hunting him down to kill him. You take the man in and hide him in your third-floor attic. Moments later the gangsters
arrive and inquire after the innocent man: “Is Mr. Y in your house?” What should you do? Kant’s advice is to tell them
the truth: “Yes, he’s in my house.”8 What is Kant’s reasoning here? It is simply that the moral law is exceptionless.
It is your duty to obey its commands, not to reason about the likely consequences. You have done your duty: hidden an
innocent man and told the truth when asked a straightforward question. You are absolved of any responsibility for the
harm that comes to the innocent man. It’s not your fault that there are gangsters in the world.
To many of us, this kind of absolutism seems counterintuitive. One way we might alter Kant here is simply to write in
qualifications to the universal principles, changing the sweeping generalization “Never lie” to the more modest “Never
lie, except to save an innocent person’s life.” The trouble with this way of solving the problem is that there seem to be
no limits on the qualifications that would need to be attached to the original generalization—for example, “Never lie, except to save an innocent person’s life (unless trying to save that person’s life will undermine the entire social fabric),” or “Never lie, except to spare people great anguish (such as telling a cancer patient the truth about her condition).” And so on. The process seems infinite and time-consuming and thus impractical.
However, another strategy is open for Kant—namely, following the prima facie duty approach advocated by twentieth-
century moral philosopher William D. Ross (1877–1971). Let’s first look at the key features of Ross’s theory and then
adapt it to Kant’s.
Today, Ross is perhaps the most important deontological theorist after Kant, and, like Pufendorf, Ross is a rule-
intuitionist. There are three components of Ross’s theory. The first of these is his notion of “moral intuition,” internal
perceptions that both discover the correct moral principles and apply them correctly. Although they cannot be proved,
the moral principles are self-evident to any normal person upon reflection. Ross wrote,
That an act, qua fulfilling a promise, or qua effecting a just distribution of good ... is prima facie right, is self-evident; not
in the sense that it is evident ... as soon as we attend to the proposition for the first time, but in the sense that when we
have reached sufficient mental maturity and have given sufficient attention to the proposition it is evident without any
need of proof, or of evidence beyond itself. It is evident just as a mathematical axiom, or the validity of a form of
inference, is evident.... In our confidence that these propositions are true there is involved the same confidence in our
reason that is involved in our confidence in mathematics.... In both cases we are dealing with propositions that cannot
be proved, but that just as certainly need no proof.9
Just as some people are better perceivers than others, so the moral intuitions of more reflective people count for more
in evaluating our moral judgments. “The moral convictions of thoughtful and well-educated people are the data of
ethics, just as sense-perceptions are the data of a natural science.”10
The second component of his theory is that our intuitive duties constitute a plural set that cannot be unified under a
single overarching principle (such as Kant’s categorical imperative or the utilitarian highest principle of “the greatest
good for the greatest number”). As such, Ross echoes the intuitionism of Pufendorf by presenting a list of several duties,
specifically these seven:
1. Promise keeping
2. Fidelity
4. Beneficence
5. Justice
6. Self-improvement
7. Nonmaleficence
The third component of Ross’s theory is that our intuitive duties are not absolute; every principle can be overridden by
another in a particular situation. He makes this point with the distinction between prima facie duties and actual duties.
The term prima facie is Latin for “at first glance,” and, according to Ross, all seven of the above-listed moral duties are
tentatively binding on us until one duty conflicts with another. When that happens, the weaker one disappears, and
the stronger one emerges as our actual duty. Thus, although prima facie duties are not actual duties, they may become
such, depending on the circumstances. For example, if we make a promise, we put ourselves in a situation in which the
duty to keep promises is a moral consideration. It has presumptive force, and if no conflicting prima facie duty is
relevant, then the duty to keep our promises automatically becomes an actual duty.
What, for Ross, happens when two duties conflict? For an absolutist, an adequate moral system can never produce
moral conflict, nor can a basic moral principle be overridden by another moral principle. But Ross is no absolutist. He
allows for the overridability of principles. For example, suppose you have promised your friend that you will help her
with her homework at 3 p.m. While you are on your way to meet her, you encounter a lost, crying child. There is no one
else around to help the little boy, so you help him find his way home. But, in doing so, you miss your appointment. Have
you done the morally right thing? Have you broken your promise?
It is possible to construe this situation as constituting a conflict between two moral principles:
1. We ought always to keep our promises.
2. We ought always to help people in need when it is not unreasonably inconvenient to do so.
In helping the child get home, you have decided that the second principle overrides the first. This does not mean that
the first is not a valid principle—only that the “ought” in it is not an absolute “ought.” The principle has objective
validity, but it is not always decisive, depending on which other principles may apply to the situation. Although some
duties are weightier than others—for example, nonmaleficence “is apprehended as a duty of a more stringent
character ... than beneficence”—the intuition must decide each situation on its own merits.
Many moral philosophers—egoists, utilitarians, and deontologists—have adopted the prima facie component of Ross’s
theory as a convenient way of resolving moral dilemmas. In doing so, they typically do not adopt Ross’s account of moral
intuitions or his specific set of seven duties (that is, the first two components of Ross’s theory). Rather, they just
incorporate Ross’s concepts of prima facie duty and actual duty as a mechanism for explaining how one duty might
override another.
How might this approach work with Kant? Consider again Kant’s innocent man example. First, we have the principle L:
“Never lie.” Next, we ask whether any other principle is relevant in this situation and discover another relevant principle, P:
“Always protect innocent life.” But we cannot obey both L and P (we assume for the moment that silence will be a
giveaway). We have two general principles; neither of them is to be seen as absolute or nonoverridable but rather as
prima facie. We have to decide which of the two overrides the other, which has greater moral force. This is left up to our
considered judgment (or the considered judgment of the reflective moral community). Presumably, we will opt for P
over L, meaning that lying to the gangsters becomes our actual duty.
Will this maneuver save the Kantian system? Well, it changes it in a way that Kant might not have liked, but it seems to
make sense: It transforms Kant’s absolutism into a modest objectivist system (as described in Chapter 3). But now we
need to have a separate criterion to resolve the conflict between two competing prima facie principles. For Ross, moral
intuitions performed that function. Since Kant is more of a rational intuitionist, it would be the task of reason to perform
that function. Perhaps his second formulation of the categorical imperative—the principle of ends—might be of service
here. For example, in the illustration of the inquiring killer, the agent is caught between two compelling prima facie
duties: “Never lie” and “Always protect innocent life.” When determining his actual duty, the agent might reflect on
which of these two duties best promotes the treatment of people as ends—that is, beings with intrinsic value. This now
becomes a contest between the dignity of the would-be killer who deserves to hear the truth and the dignity of the
would-be victim who deserves to live. In this case, the dignity of the would-be victim is the more compelling value, and
the agent’s actual duty would be to always protect innocent life. Thus, the agent should lie to protect the life of the
would-be victim.
Utilitarianism and deontological systems such as Kant’s are radically different types of moral theories. Some people
seem to gravitate to the one and some to the other, but many people find themselves dissatisfied with both positions.
Although they see something valid in each type of theory, at the same time there is something deeply troubling about
each. Utilitarianism seems to catch the spirit of the purpose of morality, such as human flourishing and the reduction of
suffering, but undercuts justice in a way that is counterintuitive. Deontological systems seem right in their emphasis on
the importance of rules and the principle of justice, but tend to become rigid or to lose focus on the central purposes of
morality.
One philosopher, William Frankena, has attempted to reduce this tension by reconciling the two types of theories in an
interesting way. He calls his position “mixed deontological ethics” because it is basically rule centered but in such a way
as to take account of the teleological aspect of utilitarianism.11 Utilitarians are right about the purpose of morality: All
moral action involves doing good or
alleviating evil. However, utilitarians are wrong to think that they can measure these amounts or that they are always
obligated to bring about the “greatest balance of good over evil,” as articulated by the principle of utility.
In place of the principle of utility, Frankena puts forth a near relative, the principle of beneficence, which calls on us to
strive to do good without demanding that we be able to measure or weigh good and evil. Under his principle of
beneficence, he lists four hierarchically arranged subprinciples:
1. One ought not to inflict evil or harm.
2. One ought to prevent evil or harm.
3. One ought to remove evil.
4. One ought to do or promote good.
In some sense, subprinciple 1 takes precedence over 2, 2 over 3, and 3 over 4, other things being equal.
The principle of justice is the second principle in Frankena’s system. It involves treating every person with equal respect
because that is what each is due. To quote John Rawls, “Each person possesses an inviolability founded on justice that
even the welfare of society as a whole cannot override.... The rights secured by justice are not subject to political
bargaining or to the calculus of social interests.”12 There is always a presumption of equal treatment unless a strong
case can be made for overriding this principle. So even though both the principle of beneficence and the principle of
justice are prima facie principles, the principle of justice enjoys a certain priority. All other duties can be derived from
these two fundamental principles.
Of course, the problem with this kind of two-principle system is that we have no clear method for deciding between
them in cases of moral conflict. In such cases, Frankena opts for an intuitionist approach similar to Ross’s: We need to
use our intuition whenever the two rules conflict in such a way as to leave us undecided on whether beneficence should
override justice. Perhaps we cannot decisively solve every moral problem, but we can solve most of our problems
successfully and make progress toward refining our subprinciples in a way that will allow us to reduce progressively the
undecidable areas. At least, we have improved on strict deontological ethics by outlining a system that takes into
account our intuitions in deciding complex moral issues.
Utilitarianism
Suppose you are on an island with a dying millionaire. With his final words, he begs you for
one final favor: “I’ve dedicated my whole life to baseball and for fifty years have gotten
endless pleasure rooting for the New York Yankees. Now that I am dying, I want to give all my
assets, $5 million, to the Yankees.” Pointing to a box containing money in large bills, he
continues: “Would you take this money back to New York and give it to the Yankees’ owner so
that he can buy better players?” You agree to carry out his wish, at which point a huge smile of
relief and gratitude breaks out on his face as he expires in your arms. After traveling to New
York, you see a newspaper advertisement placed by your favorite charity, World Hunger Relief
Organization (whose integrity you do not doubt), pleading for $5 million to be used to save
100,000 people dying of starvation in Africa. Not only will the $5 million save their lives, but it
will also purchase equipment and the kinds of fertilizers necessary to build a sustainable
economy. You decide to reconsider your promise to the dying Yankee fan, in light of this
advertisement.
What is the right thing to do in this case? Consider some traditional moral principles and see if
they help us come to a decision. One principle often given to guide action is “Let your
conscience be your guide.” I recall this principle with fondness, for it was the one my father
taught me at an early age, and it still echoes in my mind. But does it help here? No, since
conscience is primarily a function of upbringing. People’s consciences speak to them in
different ways according to how they were brought up. Depending on upbringing, some
people feel no qualms about committing violent acts, whereas others feel the torments of
conscience over stepping on a gnat. Suppose your conscience tells you to give the money to
the Yankees and my conscience tells me to give the money to the World Hunger Relief
Organization. How can we even discuss the matter? If conscience is the end of it, we’re left
mute.
Another principle urged on us is “Do whatever is most loving”; Jesus in particular set forth the
principle “Love your neighbor as yourself.” Love is surely a wonderful value. It is a more
wholesome attitude than hate, and we should overcome feelings of hate if only for our own
psychological health. But is love enough to guide our actions when there is a conflict of
interest? “Love is blind,” it has been said, “but reason, like marriage, is an eye-opener.” Whom
should I love in the case of the disbursement of the millionaire’s money—the millionaire or the
starving people? It’s not clear how love alone will settle anything. In fact, it is not obvious that
we must always do what is most loving. Should we always treat our enemies in loving ways?
Or is it morally permissible to feel hate for those who have purposely and unjustly harmed us,
our loved ones, or other innocent people? Should the survivors of Nazi concentration camps
love Adolph Hitler? Love alone does not solve difficult moral issues.
A third principle often given to guide our moral actions is the Golden Rule: “Do to others as
you would have them do to you.” This, too, is a noble rule of thumb, one that works in simple,
commonsense situations. But it has problems. First, it cannot be taken literally. Suppose I love
to hear loud heavy-metal music. Since I would want you to play it loudly for me, I reason that I
should play it loudly for you—even though I know that you hate the stuff. Thus, the rule must
be modified: “Do to others as you would have them do to you if you were in their shoes.”
However, this still has problems. If I were the assassin of Robert Kennedy, I’d want to be
released from the penitentiary; but it is not clear that he should be released. If I put myself in
the place of a sex-starved individual, I might want to have sex with the next available person;
but it’s not obvious that I (or anyone else) must comply with that wish. Likewise, the Golden
Rule doesn’t tell me to whom to give the millionaire’s money.
Conscience, love, and the Golden Rule are all worthy rules of thumb to help us through life.
They work for most of us, most of the time, in ordinary moral situations. But, in more
complicated cases, especially when there are legitimate conflicts of interests, they are limited.
A more promising strategy for solving dilemmas is that of following definite moral rules.
Suppose you decided to give the millionaire’s money to the Yankees to keep your promise or
because to do otherwise would be stealing. The principle you followed would be “Always keep
your promise.” Principles are important in life. All learning involves understanding a set of
rules; as R. M. Hare says, “Without principles we could not learn anything whatever from our
elders.... Every generation would have to start from scratch and teach itself.” If you decided to
act on the principle of keeping promises, then you adhered to a type of moral theory called
deontology. In Chapter 1, we saw that deontological systems maintain that the center of value
is the act or kind of act; certain features in the act itself have intrinsic value. For example, a
deontologist would see something intrinsically wrong in the very act of lying.
If, on the other hand, you decided to give the money to the World Hunger Relief Organization
to save an enormous number of lives and restore economic solvency to the region, you sided
with a type of theory called teleological ethics. Sometimes, it is referred to as consequentialist
ethics. We also saw in Chapter 1 that the center of value here is the outcome or consequences
of the act. For example, a teleologist would judge whether lying was morally right or wrong by
the consequences it produced.
We have already examined one type of teleological ethics: ethical egoism, the view that the
act that produces the greatest good for the agent is the right act. Egoism is teleological
ethics narrowed to the agent himself or herself. In this chapter, we will consider the dominant
version of teleological ethics— utilitarianism. Unlike ethical egoism, utilitarianism is a universal
teleological system. It calls for the maximization of goodness in society—that is, the greatest
goodness for the greatest number—and not merely the good of the agent.
CLASSIC UTILITARIANISM
In our normal lives we use utilitarian reasoning all the time; I might give money to charity
when seeing that it would do more good for needy people than it would for me. In time of
war, I might join the military and risk dying because I see that society’s needs at that time are
greater than my own. As a formal ethical theory, the seeds of utilitarianism were sown by the
ancient Greek philosopher Epicurus (342–270 BCE), who stated that “pleasure is the goal that
nature has ordained for us; it is also the standard by which we judge everything good.”
According to this view, rightness and wrongness are determined by the pleasure or pain that
something produces. Epicurus’s theory focused largely on the individual’s personal experience
of pleasure and pain, and to that extent he advocated a version of ethical egoism.
Nevertheless, Epicurus inspired a series of eighteenth-century philosophers who emphasized
the notion of general happiness—that is, the pleasing consequences of actions that impact
others and not just the individual. Francis Hutcheson (1694–1746) stated that “that action is
best, which procures the greatest happiness for the greatest numbers.” David Hume (1711–
1776) introduced the term utility to describe the pleasing consequences of actions as they
impact people.
The classical expressions of utilitarianism, though, appear in the writings of two English
philosophers and social reformers, Jeremy Bentham (1748–1832) and John Stuart Mill (1806–
1873). Their approach to morality was nonreligious, and they tried to reform society by
rejecting unfounded rules of morality and law.
Jeremy Bentham
There are two main features of utilitarianism, both of which Bentham articulated: the
consequentialist principle (or its teleological aspect) and the utility principle (or its hedonic
aspect). The consequentialist principle states that the rightness or wrongness of an act is
determined by the goodness or badness of the results that follow from it. It is the end, not the
means, that counts; the end justifies the means. The utility, or hedonist, principle states that
the only thing that is good in itself is some specific type of state (for example, pleasure,
happiness, welfare). Hedonistic utilitarianism views pleasure as the sole good and pain as the
only evil. To quote Bentham, “Nature has placed mankind under the governance of two
sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as
well as what we shall do.”2 An act is right if it either brings about more pleasure than pain or
prevents pain, and an act is wrong if it either brings about more pain than pleasure or prevents
pleasure from occurring.
Bentham invented a scheme for measuring pleasure and pain that he called the hedonic
calculus. The quantitative score for any pleasure or pain experience is obtained by summing
the seven aspects of a pleasurable or painful experience: its intensity, duration, certainty,
nearness, fruitfulness, purity, and extent. Adding up the amounts of pleasure and pain for each
possible act and then comparing the scores would enable us to decide which act to perform.
With regard to our example of deciding between giving the dying man’s money to the Yankees
or to the African famine victims, we would add up the likely pleasures for all involved, for all
seven qualities. If we found that giving the money to the famine victims would cause at least 3
million hedons (units of happiness) but that giving the money to the Yankees would cause less
than 1,000 hedons, we would have an obligation to give the money to the famine victims.
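The arithmetic of such a comparison can be made concrete with a small sketch. In the Python fragment below, the aspect list follows Bentham's seven factors, but the hedon scores, the option names, and the assumption that hedons are additive and comparable across persons are invented purely for illustration.

    ASPECTS = ["intensity", "duration", "certainty", "nearness",
               "fruitfulness", "purity", "extent"]

    def hedonic_score(experiences):
        # Sum the signed hedon scores (pleasure positive, pain negative)
        # over every affected person and every one of the seven aspects.
        return sum(person[aspect] for person in experiences for aspect in ASPECTS)

    # Invented numbers: a modest benefit to a few fans versus a large benefit
    # to a great many famine victims.
    options = {
        "give the money to the Yankees": [{a: 1 for a in ASPECTS}] * 100,
        "give the money to famine relief": [{a: 4 for a in ASPECTS}] * 100000,
    }

    best = max(options, key=lambda name: hedonic_score(options[name]))
    # On these assumed scores, famine relief yields by far the larger hedon total,
    # so Bentham's calculus would direct the money there.

The sketch also shows where the real difficulty lies: everything turns on how the scores are assigned in the first place, which is precisely the problem raised below.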
There is something appealing about Bentham’s utilitarianism. It is simple in that there is only
one principle to apply: Maximize pleasure and minimize suffering. It is commonsensical in that
we think that morality really is about reducing suffering and promoting benevolence. It is
scientific: Simply make quantitative measurements and apply the principle impartially, giving
no special treatment to ourselves or to anyone else because of race, gender, personal
relationship, or religion.
However, Bentham’s philosophy may be too simplistic in one way and too complicated in
another. It may be too simplistic in that there are values other than pleasure (as we saw in
Chapter 6), and it seems too complicated in its artificial hedonic calculus. The calculus is
burdened with too many variables and has problems assigning scores to the variables. For
instance, what score do we give a cool drink on a hot day or a warm shower on a cool day?
How do we compare a 5-year-old’s delight over a new toy with a 30-year-old’s delight with a
new lover? Can we take your second car from you and give it to Beggar Bob, who does not
own a car and would enjoy it more than you? And if it is simply the overall benefits of pleasure
that we are measuring, then if Jack or Jill would be “happier” in the Pleasure Machine or the
Happiness Machine or on drugs than in the real world, would we not have an obligation to
ensure that these conditions become reality? Because of such considerations, Bentham’s
version of utilitarianism was, even in his own day, referred to as the “pig philosophy” because
a pig enjoying his life would constitute a higher moral state than a slightly dissatisfied Socrates.
John Stuart Mill
It was to meet these sorts of objections and save utilitarianism from the charge of
being a pig philosophy that Bentham’s successor, John Stuart Mill, sought to
distinguish happiness from mere sensual pleasure. His version of the theory is
often called eudaimonistic utilitarianism (from the Greek eudaimonia, meaning
“happiness”). He defines happiness in terms of certain types of higher-order
pleasures or satisfactions such as intellectual, aesthetic, and social enjoyments, as
well as in terms of minimal suffering. That is, there are two types of pleasures. The
lower, or elementary, include eating, drinking, sexuality, resting, and sensuous
titillation. The higher include high culture, scientific knowledge, intellectuality, and
creativity. Although the lower pleasures are more intensely gratifying, they also
lead to pain when overindulged in. The higher pleasures tend to be more long
term, continuous, and gradual.
Mill argued that the higher, or more refined, pleasures are superior to the lower
ones: “A being of higher faculties requires more to make him happy, is capable
probably of more acute suffering, and certainly accessible to it at more points,
than one of an inferior type,” but still he is qualitatively better off than the person
without these higher faculties. “It is better to be a human being dissatisfied than a
pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”3 Humans are
the kind of creatures who require more to be truly happy. They want the lower
pleasures, but they also want deep friendship, intellectual ability, culture, the
ability to create and appreciate art, knowledge, and wisdom.
But one may object, “How do we know that it really is better to have these higher
pleasures?” Here, Mill imagines a panel of experts and says that, of those who have
had a wide experience of pleasures of both kinds, almost all give a decided
preference to the higher type. Because Mill was an empiricist—one who believed
that all knowledge and justified belief was based on experience—he relied on the
combined consensus of human history. By this view, people who experience both
rock music and classical music will, if they appreciate both, prefer Bach and
Beethoven to Metallica. That is, we generally move up from appreciating simple
things (for example, nursery rhymes) to more complex and intricate things (for
example, poetry that requires great talent) rather than the other way around.
Mill has been criticized for not giving a better reply—for being an elitist and for
unduly favoring the intellectual over the sensual. But he has a point. Don’t we
generally agree, if we have experienced both the lower and the higher types of
pleasure, that even though a full life would include both, a life with only the
former is inadequate for human beings? Isn’t it better to be Socrates dissatisfied
than the pig satisfied—and better still to be Socrates satisfied?
The point is not merely that humans wouldn’t be satisfied with what satisfies a pig
but that somehow the quality of the higher pleasures is better. But what does it
mean to speak of better pleasure? The formula he comes up with is this:
Happiness ... [is] not a life of rapture; but moments of such, in an existence made
up of few and transitory pains, many and various pleasures, with a decided
predominance of the active over the passive, and having as the foundation of the
whole, not to expect more from life than it is capable of bestowing.4
Act-utilitarianism, which directs us in each situation to perform whatever act will
maximize utility, implies that if you have employed a boy to mow your lawn and he has finished
the job and asks for his pay, you should pay him what you promised only if you
cannot find a better use for your money. It implies that when you bring home your
monthly paycheck you should use it to support your family and yourself only if it
cannot be used more effectively to supply the needs of others.5
For the most sophisticated versions of rule-utilitarianism, three levels of rules will
guide actions. On the lowest level is a set of utility-maximizing rules of thumb,
such as “Don’t lie” and “Don’t cause harm,” that should always be followed unless
there is a conflict between them. If these first-order rules conflict, then a second-
order set of conflict-resolving rules should be consulted, such as “It’s more
important to avoid causing serious harm than to tell the truth.” At the top of the
hierarchy is a third-order rule sometimes called the remainder rule, which is the
principle of act-utilitarianism: When no other rule applies, simply do what your
best judgment deems to be the act that will maximize utility.
An illustration of this is the following: Suppose you promised to meet your teacher
at 3 p.m. in his office. On your way there, you come upon an accident victim
stranded by the wayside who desperately needs help. The two first-order rules in
this situation are “Keep your promises” and “Help those in need when you are not
seriously inconvenienced in doing so.” It does not take you long to decide to break
the appointment with your teacher because it seems obvious in this case that the
rule to help others overrides the rule to keep promises. There is a second-order
rule prescribing that the first-order rule of helping people in need when you are
not seriously inconvenienced in doing so overrides the rule to keep promises.
However, there may be some situation where no obvious rule of thumb applies.
Say you have $50 that you don’t really need now. How should you use this money?
Put it into your savings account? Give it to your favorite charity? Use it to throw a
party? Not only is there no clear first-order rule to guide you, but there is no
second-order rule to resolve conflicts between first-order rules. Here and only
here, on the third level, the general act-utility principle applies without any other
primary rule; that is, do what in your best judgment will do the most good.
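The three-level procedure just illustrated can be summarized schematically. In the sketch below, the particular rules, the conflict-resolving table, and the utility-estimating function are stand-ins of our own devising; only the order in which the levels are consulted reflects the hierarchy described above.

    # Second-order table: for a given pair of conflicting first-order rules,
    # which one takes precedence (an illustrative example, not a canon).
    SECOND_ORDER = {
        ("Keep your promises", "Help those in need"): "Help those in need",
    }

    def estimate_utility(action):
        # Placeholder for the agent's best judgment of expected utility.
        return 0

    def choose_action(applicable_rules, possible_actions):
        # applicable_rules maps each rule that applies to the action it prescribes.
        if len(applicable_rules) == 1:
            return next(iter(applicable_rules.values()))   # level 1: follow the rule
        for (rule_a, rule_b), winner in SECOND_ORDER.items():
            if {rule_a, rule_b} == set(applicable_rules):
                return applicable_rules[winner]            # level 2: resolve the conflict
        # Level 3, the "remainder rule": no rule settles it, so fall back on
        # act-utilitarian judgment.
        return max(possible_actions, key=estimate_utility)

    # The accident case: two rules conflict, and the second-order table favors
    # helping the victim over keeping the appointment.
    choose_action({"Keep your promises": "meet the teacher",
                   "Help those in need": "stop and help the victim"},
                  ["meet the teacher", "stop and help the victim"])
    # The $50 case: no rule of thumb applies, so the remainder rule decides.
    choose_action({}, ["save it", "give it to charity", "throw a party"])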
Debates between act- and rule-utilitarians continue today. Kai Nielsen, a staunch
act-utilitarian, argues that no rules are sacred; differing situations call forth
different actions, and potentially any rule could be overridden. He thus criticizes
what he calls moral conservatism, which is any normative ethical theory that
maintains that there is a privileged moral principle, or cluster of moral principles,
prescribing determinate actions that it would always be wrong not to act in
accordance with no matter what the consequences.
Nielsen argues further that we are responsible for the consequences of not only
the actions that we perform but also the nonactions that we fail to perform. He
calls this “negative responsibility.” To illustrate, suppose you are the driver of a
trolley car and suddenly discover that your brakes have failed. You are just about
to run over five workers on the track ahead of you. However, if you act quickly,
you can turn the trolley onto a sidetrack where only one man is working. What
should you do? One who makes a strong distinction between allowing versus
doing evil would argue that you should do nothing and merely allow the trolley to
kill the five workers. But one who denies that this is an absolute distinction would
prescribe that you do something positive to minimize evil. Negative responsibility
means that you are going to be responsible for someone’s death in either case.
Doing the right thing, the utilitarian urges, means minimizing the amount of evil.
So you should actively cause the one death to save the other five lives.6 Critics of
utilitarianism contend either that negative responsibility is not a strict duty or that
it can be worked into other systems besides utilitarianism.
Its second strength (the first being that it provides a single, clear decision procedure) is that
utilitarianism seems to get to the substance of morality. It is
not merely a formal system that simply sets forth broad guidelines for choosing
principles but offers no principles—such as the guideline “Do whatever you can
universalize.” Rather it has a material core: We should promote human (and possibly
animal) flourishing and reduce suffering. The first virtue gives us a clear decision
procedure in arriving at our answer about what to do. The second virtue appeals to our
sense that morality is made for people and that morality is not so much about rules as
about helping people and alleviating the suffering in the world.
On the utilitarian view, we have one overriding duty: to maximize general happiness. As long as the quality of life of
future people promises to be positive, we have an obligation to continue human
existence, to produce human beings, and to take whatever actions are necessary to
ensure that their quality of life is not only positive but high.
It does not matter that we cannot identify these future people. We may look upon them
as mere abstract placeholders for utility and aim at maximizing utility. Derek Parfit
explains this with the following utilitarian principle: “It is bad if those who live are worse off than
those who might have lived.” He illustrates his principle this way. Suppose our
generation has the choice between two energy policies: the “Safe Energy Policy” and
the “Risky Energy Policy.”7 The Risky Policy promises to be safe for us but is likely to
create serious problems for a future generation, say, 200 years from now. The Safe
Policy won’t be as beneficial to us but promises to be stable and safe for posterity—
those living 200 years from now and beyond. We must choose and we are responsible
for the choice that we make. If we choose the Risky Policy, we impose harms on our
descendants, even if they don’t now exist. In a sense, we are responsible for the people
who will live because our policy decisions will generate different causal chains, resulting
in different people being born. But more important, we are responsible for their quality
of life because we could have caused human lives to have been better off than they are.
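The utilitarian form of Parfit's comparison can be illustrated with invented figures. In the sketch below, the welfare numbers and policy labels are placeholders; the only substantive assumption is the utilitarian one that the welfare of people living 200 years from now counts just as much as our own.

    # Invented welfare figures: (present generation, generation ~200 years from now).
    policies = {
        "Risky Energy Policy": (100, 20),   # better for us, much worse for posterity
        "Safe Energy Policy": (80, 90),     # somewhat worse for us, far better later
    }

    def total_welfare(policy):
        present, future = policies[policy]
        # Future people are counted fully, even though we cannot identify them.
        return present + future

    best = max(policies, key=total_welfare)   # "Safe Energy Policy" on these numbers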
What are our obligations to future people? If utilitarians are correct, we have an
obligation to leave posterity to as good a world as we can. This would mean radically
simplifying our lifestyles so that we use no more resources than are necessary, keeping
as much topsoil intact as possible, protecting endangered species, reducing our carbon
dioxide emissions, preserving the wilderness, and minimizing our overall deleterious
impact on the environment in general while using technology wisely.
CRITICISM OF UTILITARIANISM
Utilitarianism has been around for several centuries, but so have its
critics, and we need to address a series of standard objections to utilitarianism
before we can give it a “philosophically clean bill of health.”
The first set of problems occurs in the very formulation of utilitarianism: “The
greatest happiness for the greatest number.” Notice that we have two “greatest”
things in this formula: “happiness” and “number.” Whenever we have two
variables, we invite problems of determining which of the variables to rank first
when they seem to conflict. To see this point, consider the following example: I am
offering a $1,000 prize to the person who runs the longest distance in the shortest
amount of time. Three people participate: Joe runs 5 miles in 31 minutes, John
runs 7 miles in 50 minutes, and Jack runs 1 mile in 6 minutes. Who should get the
prize? John has fulfilled one part of the requirement (run the
longest distance), but Jack has fulfilled the other requirement (run the shortest
amount of time).
This is precisely the problem with utilitarianism. On the one hand, we might
concern ourselves with spreading happiness around so that the greatest number
obtain it (in which case, we should get busy and procreate a larger population). On
the other hand, we might be concerned that the greatest possible amount of
happiness obtains in society (in which case, we might be tempted to allow some
people to become far happier than others, as long as their increase offsets the
losers’ diminished happiness). So should we worry more about total happiness or
about highest average?
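The ambiguity can be made vivid with two invented societies; the happiness numbers below are arbitrary and serve only to show that the total and the average can favor different answers.

    small_but_blissful = [90, 95, 100]   # three very happy people
    large_but_modest = [20] * 50         # fifty mildly happy people

    def total(happiness):
        return sum(happiness)

    def average(happiness):
        return sum(happiness) / len(happiness)

    # total(small_but_blissful) = 285, average = 95
    # total(large_but_modest) = 1000, average = 20
    # A "greatest total" standard favors the large society; a "highest average"
    # standard favors the small one. The formula alone does not say which to maximize.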
Utilitarians also need to be clear about specifically whose happiness we are talking
about: all beings that experience pleasure and pain, or all human beings, or all
rational beings. One criterion might exclude mentally deficient human beings, and
another might include animals. Finally, utilitarians need to indicate how we
measure happiness and make interpersonal comparisons between the happiness
of different people. We’ve seen Mill’s efforts to address this problem with his
notion of higher pleasures; we’ve also seen the additional complications that his
solution creates.
A further problem concerns the consequences themselves: an act’s effects ripple outward
indefinitely, as the old verse reminds us.
For want of a nail, the shoe was lost;
For want of a shoe, the horse was lost;
For want of a horse, the rider was lost;
For want of a rider, the battle was lost;
For want of a battle, the kingdom was lost;
And all for the want of a horseshoe nail.
Poor, unfortunate blacksmith; what utilitarian guilt he must bear all the rest of his days!
But it is ridiculous to blame the loss of one’s kingdom on the poor, unsuccessful
blacksmith, and utilitarians are not so foolish as to hold him responsible for the
bad situation. Instead, following C. I. Lewis, utilitarians distinguish two kinds of
consequences: (1) actual consequences of an act and (2) consequences that could
reasonably have been expected to occur. An act is objectively right if it is reasonable to
expect that it will have the best
consequences (as per consequence 2).
According to utilitarianism, one should always do that act that promises to promote
the most utility. But there is usually an infinite set of possible acts to choose
from, and even if I can be excused from considering all of them, I can be fairly sure
that there is often a preferable act that I could be doing. For example, when I am
about to go to the cinema with a friend, I should ask myself if helping the homeless
in my community wouldn’t promote more utility. When I am about to go to sleep, I
should ask myself whether I could at that moment be doing something to help
save the ozone layer. And, why not simply give all my assets (beyond what is
absolutely necessary to keep me alive) to the poor to promote utility? Following
utilitarianism, I should get little or no rest, and, certainly, I have no right to enjoy
life when by sacrificing I can make others happier. Peter Singer actually advocates
an act-utilitarian position similar to this. According to Singer, middle-class people
have a duty to contribute to poor people (especially in undeveloped countries)
more than one-third of their income, and all of us have a duty to contribute every
penny above $30,000 we possess until we are only marginally better off than the
worst-off people on earth.
The problem with approaches like Singer’s, the objection goes, is that they make morality too
demanding, create a disincentive to work, and fail to account for different levels
of obligation. Thus, utilitarianism must be a false doctrine. But rule-utilitarians
have a response to this no-rest objection: A rule prescribing rest and entertainment
is actually the kind of rule that would have a place in a utility-maximizing set
of rules. The agent should aim at maximizing his or her own happiness as well as
other people’s happiness. For the same reason, it is best not to worry much about
the needs of those not in our primary circle. Although we should be concerned
about the needs of poor people, it actually would promote disutility for the
average person to become preoccupied with these concerns. Singer represents a
radical act-utilitarian position that fails to give adequate attention to the rules that
promote human flourishing, such as the right to own property, educate one’s
children, and improve one’s quality of life, all of which probably cost more than
$30,000 per year in many parts of North America. However, the utilitarian would
remind us, we can surely do a lot more for suffering humanity than we now are
doing—especially if we join together and act cooperatively. And we can simplify
our lives, cutting back on unnecessary consumption, while improving our overall
quality.
It is usually thought that moral principles must be known to all so that all may
freely obey the principles. But utilitarians usually hesitate to recommend that
everyone act as a utilitarian, especially an act-utilitarian, because it takes a great
deal of deliberation to work out the likely consequences of alternative courses of
action. It would be better if most people acted simply as deontologists.9 Thus,
utilitarianism seems to contradict our requirement of publicity.
There are two responses to this objection. First, at best this objection only works
against act-utilitarianism, which at least in theory advocates sitting down and
calculating the good and bad consequences of each action that we plan to
perform. Rule-utilitarianism, by contrast, does not focus on the consequences of
particular actions but on the set of rules that are likely to bring about the most
good. These rules indeed are publicized by rule-utilitarians.
How might the rule-utilitarian respond to this? David Hume, an early defender of
utilitarian moral reasoning, argued that human nature forces consistency in our
moral assessments. Specifically, he argues, there are “universal principles of the
human frame” that regulate what we find to be agreeable or disagreeable in moral
matters. Benevolence, for example, is one such type of conduct that we naturally
find agreeable.10 Following Hume’s lead, the rule-utilitarian might ground the key
components of happiness in our common human psychological makeup rather
than the result of fluctuating personal whims. This would give utilitarianism a more
objective foundation and thus make it less susceptible to the charge of relativism.
Chief among the criticisms of utilitarianism is that utilitarian ends might justify
immoral means. There are many dastardly things that we can do in the name of
maximizing general happiness: deceit, torture, slavery, even killing off ethnic
minorities. As long as the larger populace benefits, these actions might be
justified. The general problem can be laid out in this argument:
(1) If a moral theory justifies actions that we universally deem impermissible, then
that moral theory must be rejected. (2) Utilitarianism justifies actions (such as deceit,
torture, and slavery) that we universally deem impermissible. (3) Therefore, utilitarianism
must be rejected.
History has taught us that often lies serve her better than the truth; for man is
sluggish and has to be led through the desert for forty years before each step in his
development. And he has to be driven through the desert with threats and
promises, by imaginary terrors and imaginary consolations, so that he should not
sit down prematurely to rest and divert himself by worshipping golden calves.11
Jim finds himself in the central square of a small South American town. Tied up
against the wall are a row of twenty Indians, most terrified, a few defiant, in front
of them several armed men in uniform. A heavy man in a sweat-stained khaki shirt
turns out to be the captain in charge and, after a good deal of questioning of Jim
which establishes that he got there by accident while on a botanical expedition,
explains that the Indians are a random group of inhabitants who, after recent acts
of protest against the government, are just about to be killed to remind other
possible protesters of the advantages of not protesting. However, since Jim is an
honored visitor from another land, the captain is happy to offer him a guest’s
privilege of killing one of the Indians himself. If Jim accepts, then as a special mark
of the occasion, the other Indians will be let off. Of course, if Jim refuses, then
there is no special occasion, and Pedro here will do what he was about to do when
Jim arrived, and kill them all. Jim, with some desperate recollection of schoolboy
fiction, wonders whether if he got hold of a gun, he could hold the captain, Pedro
and the rest of the soldiers to threat, but it is quite clear from the setup that
nothing of that kind is going to work: any attempt of that sort of thing will mean
that all the Indians will be killed, and himself. The men against the wall, the other
villagers, understand the situation, and are obviously begging him to accept. What
should he do?12
How can a man, as a utilitarian agent, come to regard as one satisfaction among
others, and a dispensable one, a project or attitude round which he has built his
life, just because someone else’s projects have so structured the causal scene
that that is how the utilitarian sum comes out?13
In response to this criticism, the utilitarian can argue that integrity is not an
absolute that must be adhered to at all costs. Some alienation may be necessary
for the moral life, and the utilitarian can take this into account in devising strategies
of action. Even when it is required that we sacrifice our lives or limit our
freedom for others, we may have to limit or sacrifice something of what Williams
calls our integrity. We may have to do the “lesser of evils” in many cases. If the
utilitarian doctrine of negative responsibility is correct, we need to realize that we
are responsible for the evil that we knowingly allow, as well as for the evil we
commit.
With both of the previous problems, the utilitarian response was that we should
reconsider whether truth telling and personal integrity are values that should
never be compromised. The situation is intensified, though, when we consider
standards of justice that most of us think should never be dispensed with. Let’s
look at two examples, each of which highlights a different aspect of justice.
The first is the case of a sheriff who frames an innocent tramp in order to prevent a
greater harm to the town; the second is the case of a doctor who harvests the organs of a
healthy bachelor in order to save several other patients. These careless views of justice
offend us. The very fact that utilitarians even
consider such actions—that they would misuse the legal system or the medical
system to carry out their schemes—seems frightening. It reminds us of the
medieval Roman Catholic bishop’s justification for heresy hunts and inquisitions
and religious wars:
When the existence of the Church is threatened, she is released from the
commandments of morality. With unity as the end, the use of every means is
sanctified, even cunning, treachery, violence, simony, prison, death. For all order is
for the sake of the community, and the individual must be sacrificed to the
common good.
Similarly, Koestler argues that this logic was used by the Communists in the Soviet
Union to destroy innocent people whenever it seemed to the Communist leaders
that torture and false confessions served the good of the state because “you can’t
make an omelet without breaking eggs.”
How can the utilitarian respond to this? It won’t work this time simply to state that
justice is not an absolute value and so can be overridden for the good of the whole
society. The sophisticated rule-utilitarian insists it makes good sense to have a
principle of justice to which we generally adhere. That is, general happiness is best
served when we adopt the value of justice. Justice should not be overridden by
current utility concerns because human rights themselves are outcomes of utility
consideration and should not be lightly violated. That is, because we tend
subconsciously to favor our own interests and biases, we institute the principle of
rights to protect ourselves and others from capricious and biased acts that would
in the long run have great disutility. Thus, we must not undermine institutional
rights too easily. Thus, from an initial rule-utilitarian assessment, the sheriff should
not frame the innocent tramp, and the doctor should not harvest organs from the
bachelor.
However, the utilitarian cannot exclude the possibility of sacrificing innocent
people for the greater good of humanity. Wouldn’t we all agree that it would be
right to sacrifice one innocent person to prevent an enormous evil? Suppose, for
example, a maniac is about to set off a nuclear bomb that will destroy New York
City. He is scheduled to detonate the bomb in one hour. His psychiatrist knows the
lunatic well and assures us that there is one way to stop him—torture his 10-year-
old daughter and televise it. Suppose for the sake of the argument that there is no
way to simulate the torture. Would you not consider torturing the child in this
situation? As the rule-utilitarian would see it, we have two moral rules that are in
conflict: the rule to prevent widespread
harm and the rule against torture. To resolve this conflict, the rule-utilitarian might
appeal to this second-level conflict-resolving rule: We may sacrifice an innocent
person to prevent a significantly greater social harm. Or, if no conflict-resolving
rule is available, the rule-utilitarian can appeal to this third- level remainder rule:
When no other rule applies, simply do what your best judgment deems to be the
act that will maximize utility. Using this remainder rule, the rule-utilitarian could
justify torturing the girl.
Thus, in such cases, it might be right to sacrifice one innocent person to save a city
or prevent some wide-scale disaster. In these cases, the rule-utilitarian’s approach
to justice is in fact the same as the above-mentioned approach to lying and
compromising one’s integrity: Justice is just one more lower-order principle within
utilitarianism. The problem, clearly, is determining which kinds of wide-scale
disasters warrant sacrificing innocent lives. This question invariably comes up in
wartime: In every bombing raid, especially in the dropping of the atomic bombs
on Hiroshima and Nagasaki, the noncombatant–combatant distinction is
overridden. Innocent civilian lives are sacrificed with the prospect of ending the
war. We seem to be making this judgment call in our decision to drive automobiles
and trucks even though we are fairly certain the practice will result in the death of
thousands of innocent people each year. Judgment calls like these highlight
utilitarianism’s difficulty in handling issues of justice.
CONCLUSION