Posted by: h4ck@lyst | December 7, 2007

Marked Private!


So say two of my recent posts.

Well, I never knew about anything like this. I guess it's covered somewhere behind one of those “I Accept” buttons that we always click without ever reading while installing software or signing up for an online service. Two of my posts have been marked private: one regarding the trolley problem, the other regarding the doctrine of double effect. That gives me all the more reason to stick to Wikipedia and other openly licensed sites when quoting. Besides, I always provide a link to the original content. True, I leech the sites' images, but anyone who really wants to stop leeching can easily do so with a simple line in their httpd.conf.
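For the curious, a minimal hotlink-protection rule for Apache would look something like the following (this assumes mod_rewrite is enabled, and example.com is a placeholder for the site's own domain):

```apache
# Refuse image requests whose Referer is set but points somewhere
# other than this site (an empty Referer is allowed, so direct
# visits and privacy-conscious browsers still work).
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]
```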

Anyway, here are the articles from Wikipedia on the topics of the two posts that were marked private.

And the Wikipedia articles are far more interesting than the actual content. 🙂

The trolley problem is a thought experiment in ethics, first introduced by Philippa Foot, but also extensively analysed by Judith Jarvis Thomson and, more recently, by Peter Unger. Similar problems have traditionally been addressed by criminal lawyers and are sometimes regulated in penal codes, especially in civil legal systems. A classical example of these problems became known as “the plank of Carneades”, designed by Carneades to attack Stoic moral theories as inconsistent. Outside the domain of traditional philosophical discussion, the trolley problem has been a significant feature in the field of neuroethics, which tends to approach philosophical questions from a neuroscientific perspective.

The trolley problem

The problem is this:

A trolley is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch?

A utilitarian view asserts that it is permissible to flip the switch. According to simple Utilitarianism, flipping the switch would be not only permissible, but, morally speaking, the better option (the other option being no action at all).
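The utilitarian calculus here is simply a comparison of outcomes: pick whichever action leaves the fewest people dead. As a toy sketch (the action names and death counts are illustrative assumptions, not from the text):

```python
# Toy utilitarian calculus for the switch case: choose the action
# whose outcome minimises deaths. The numbers are assumptions
# for illustration only.
outcomes = {
    "do nothing":      5,  # trolley kills the five on the main track
    "flip the switch": 1,  # trolley kills the one on the side track
}

def utilitarian_choice(outcomes):
    """Return the action with the fewest resulting deaths."""
    return min(outcomes, key=outcomes.get)

print(utilitarian_choice(outcomes))  # flip the switch
```

On this simple view, flipping the switch wins because 1 < 5; the philosophical dispute is precisely over whether lives may be counted this way.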

While simple utilitarian calculus seeks to justify this course of action, some non-utilitarians may also accept the view. Opponents might assert that, since moral wrongs are already in place in the situation, flipping the switch constitutes a participation in the moral wrong, making one partially responsible for the death (when otherwise the mad philosopher would be the sole culprit). Additionally, opponents may point to the incommensurability of human lives.

It might also be justifiable to consider that simply being present in this situation and being able to influence its outcome constitutes an obligation to participate. If this were the case, then deciding to do nothing would be considered an immoral act.

The fat man

A well-known variant is the one offered by Judith Jarvis Thomson:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by dropping a heavy weight in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Resistance to this course of action seems strong; most people who approved of sacrificing one to save five in the first case do not approve in the second sort of case. This has led to attempts to find a relevant moral distinction between the two cases.

One clear distinction is that in the first case, one does not intend harm towards anyone – harming the one is just a side-effect of switching the trolley away from the five. However, in the second case, harming the one is an integral part of the plan to save the five. [1]

So, some claim that the difference between the two cases is that in the second, you intend someone’s death to save the five, and this is wrong, whereas in the first, you have no such intention. This solution is essentially an application of the doctrine of double effect, which says that you may take action which has bad side-effects, but deliberately intending harm (even for good causes) is wrong.

On the other hand, Thomson argues that an essential difference between the original trolley problem and this version with the fat man, is that in the first case, you merely deflect the harm, whereas in the second case, you have to do something to the fat man to save the five. Thomson says that in the first case, nobody has any more right than anyone else not to be run over, but in the second case, the fat man has a right not to be pushed in front of the trolley.

Act utilitarians deny this. So do some non-utilitarians such as Peter Unger, who rejects the idea that it can make a substantive moral difference whether you bring the harm to the one or whether you move the one into the path of the harm. Note, however, that rule utilitarians do not have to accept this, and can say that pushing the fat man over the bridge violates a rule adherence to which is necessary for bringing about the greatest happiness for the greatest number.

The track that loops back

The claim that it is wrong to use the death of one to save five runs into a problem with “loop” variants like this:

As before, a trolley is hurtling down a track towards five people. As in the first case, you can divert it onto a separate track. On this track is a single fat man. However, beyond the fat man, this track loops back onto the main line towards the five, and if it weren’t for the presence of the fat man, flipping the switch would not save the five. Should you flip the switch?

The only difference between this case and the original trolley problem is that an extra piece of track has been added, which seems a trivial difference (especially since the trolley won’t travel down it anyway). So intuition may suggest that the answer should be the same as the original trolley problem – one may flip the switch. However, in this case, the death of the one actually is part of the plan to save the five.

The loop variant may not be fatal to the ‘using a person as a means’ argument. This has been suggested by M. Costa in his 1987 article “Another Trip on the Trolley,” where he points out that if we fail to act in this scenario we will effectively be allowing the five to become a means to save the one. If we do nothing, then the impact of the trolley into the five will slow it down and prevent it from circling around and killing the one. Since in either case some will become a means to saving others, we are permitted to count the numbers. This approach requires that we downplay the moral difference between doing and allowing.


Here is a case, due to Thomson, where most of us reach the opposite conclusion from the one we reach in the original trolley problem:

A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no-one would suspect the doctor.

As rare as it is to find someone who does not think we should turn the trolley, it is even rarer to find someone who thinks it is permissible for the doctor to murder this patient and harvest his organs. (A rare few utilitarians, such as Alastair Norcross, think that this might be acceptable under certain exceedingly unlikely circumstances.) Yet both cases seem to involve a choice between one life and five. What, if anything, explains this difference in our judgments?

  • In response to this philosophical question: perhaps it matters that in the trolley case it is another man (the mad philosopher) who sentenced the five people to death, in which case another human's involvement can be used to counteract or lessen the impact of this mad act. In the case of the healthy young man and five people already dying, nature is the one deciding – which takes us humans off the hook, since we don't have to take responsibility for those five lives.

The man in the yard

Unger argues extensively against traditional non-utilitarian responses to trolley problems. This is one of his examples:

As before, a trolley is hurtling down a track towards five people. You can divert its path by colliding another trolley into it, but if you do, both will be derailed and go down a hill, across a road, and into a man’s yard. The owner, sleeping in his hammock, will be killed. Should you proceed?

Responses to this are partly dependent on whether the reader has already encountered the standard trolley problem (since there is a desire to keep one’s responses consistent), but Unger notes that people who have not encountered such problems before are quite likely to say that, in this case, the proposed action would be wrong.

Unger therefore argues that different responses to these sorts of problems are based more on psychology than ethics – in this new case, he says, the only important difference is that the man in the yard does not seem particularly “involved”. Unger claims that people therefore believe the man is not “fair game”, but says that this involvedness cannot make a moral difference.

Unger also considers cases which are far more complex than the original Trolley problem, involving more than just two possible courses of action. In one such case, it is possible to do nothing and let five die, or to do something which will (a) save the five and kill four, (b) save the five and kill three, (c) save the five and kill two, or (d) save the five and kill one. Most naïve subjects presented with this sort of case, claims Unger, will choose (d), to save the five by killing one, even if this course of action involves doing something very similar to killing the fat man, as in Thomson’s case above.

The Guilty Man and the President

Dr. Robert Jacobson asks,

“What happens if, on the tracks of one trolley, five men guilty of murder are tied, and on the other, one man is innocent. Should you choose to save the one man, simply because he has committed no crime?”

Jacobson believes that most people will save the innocent man. He also raises this question: Should you save the five guilty men, or the innocent man, who may commit a murder after you save him?
Jacobson again asks a difficult question:

“What happens if, on one of the trolley tracks, the President of the United States has been tied by terrorists, and on the other track, five average citizens are also tied up. As in the original trolley problem, whom should you save?”
“What if the trolley is headed towards five average people you’ve never met, but on the other track is your mother? Do you flip the switch and save the five, or save your mother?”

Jacobson, in this instance, is really asking if the President of the United States/your mother is more important than five average citizens.

The Ultimate Sacrifice

A final twist on the thought experiment runs this way:

As before, a trolley is hurtling down a track towards five helpless people. This time, however, you are on board the trolley yourself. There is a large explosive device on the trolley with you. Detonating it would utterly obliterate the trolley, saving the five people, but killing you. Or you could escape from the trolley, killing the five people, but saving your own life. Should you detonate the device?

Responses to this example, like some of the others, are also influenced by whether or not the subject has been exposed to previous thought experiments in the trolley problem. However, subjects appear to be much more willing to sacrifice their own lives to save the five than they are to sacrifice the lives of others. This appears to be because the subject is making the decision for himself and is therefore aware of the consequences, abrogating any need for a larger moral query. Another issue seems to be that subjects do not wish to be perceived as “selfish” or “cowardly.”

A further variation runs the same way, but posits that the subject and another, uninvolved person (often someone unconscious or otherwise unable to comprehend the consequences of the explosion) are both on the trolley. Is it now morally acceptable for the subject to detonate the device, killing not only himself, but also the other person?

(Notice that the question asks if you “should”, not if you would. There are certainly at least two possible answers to this, depending on the phrasing.)

Neuroethics and the Trolley Problem

In taking a neuroscientific approach to the trolley problem, Joshua Greene, working under Jonathan Cohen, examined the brain's response to moral and ethical conundrums through the use of fMRI. In their best-known experiments, Greene and Cohen analyzed subjects' responses to the morality of actions in both the trolley problem involving a switch and a footbridge scenario analogous to the fat man variation. Their hypothesis was that encountering such conflicts evokes both a strong emotional response and a reasoned cognitive response, and that the two tend to oppose one another. From the fMRI results, they found that scenarios evoking a more prominent emotional response, such as the fat man variant, produce significantly higher activity in brain regions associated with response conflict, while more conflict-neutral scenarios, such as the relatively detached switch variant, produce more activity in brain regions associated with higher cognitive functions. The ethical questions being broached, then, revolve around the human capacity for rational justification of moral decision making.

The principle of double effect (PDE) or doctrine of double effect (DDE), sometimes simply called double effect for short, is a thesis in ethics, usually attributed to Thomas Aquinas. The principle seeks to explain under what circumstances one may act in a way that has both good and bad consequences (a “double effect”).

It states that an action having an unintended, harmful effect (e.g., an early death) is defensible on four conditions as follows:

  • the nature of the act is itself good (e.g., its nature is to relieve someone of pain or distress);
  • the intention is for the good effect and not the bad;
  • the good effect outweighs the bad effect in a situation sufficiently grave to merit the risk of yielding the bad effect (e.g., risking a patient’s death to stop intolerable pain); and
  • the good effect (relieving pain) is not brought about by means of the bad effect (e.g., death).
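The four conditions function as a conjunctive test: an act with a harmful side effect is defensible under the principle only if all of them hold. A minimal sketch of that checklist (the predicate names are mine, not standard terminology):

```python
def double_effect_permits(act_is_good: bool,
                          intends_good_not_bad: bool,
                          good_outweighs_bad: bool,
                          good_not_via_bad: bool) -> bool:
    """An act with a harmful side effect is defensible under the
    principle of double effect only if all four conditions hold."""
    return (act_is_good and intends_good_not_bad
            and good_outweighs_bad and good_not_via_bad)

# E.g. high-dose pain relief that foreseeably hastens death:
print(double_effect_permits(True, True, True, True))    # True
# Killing one patient to harvest organs fails the intention
# condition and routes the good through the bad effect:
print(double_effect_permits(True, False, True, False))  # False
```

Failing any single condition is enough to make the act impermissible on this view, which is why the principle is stricter than a plain cost–benefit comparison.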

Intentional harm versus side-effects

Although different writers state the doctrine in different ways, it always claims that there is a moral difference between courses of action such as the following:

  1. An agent who deliberately causes harm in order to promote some good.
  2. An agent who promotes some good in such a way that harm is caused as a foreseen side-effect.

The doctrine of double effect stems from an application of the Hippocratic moral norm, i.e., “First, do no harm,” along with Aquinas’s First Precept (or Principle) of Natural Law, i.e., “Good is to be Done and Promoted and Evil is to be Avoided” [Summa Theologiae I-II, Q. 94, Art. 2].

Examples from medicine

A vaccine manufacturer typically knows that while a vaccine will save many lives, a few people will die from side-effects of taking the vaccine. The manufacturing of a drug is in itself morally neutral. The lives are saved as a result of the vaccine, not as a result of the deaths of those who die of side-effects. The bad effect, the deaths, due to side-effects does not further any goals the drug manufacturer has, and hence is not intended as a means to anything. Finally, the number of lives saved is much greater than the number lost, and so the proportionality condition is satisfied. This is more a case of side-effects/benefit analysis than of a real Principle application and is common in medicine.

The administration of a high dosage of opioids is sometimes allowed for the relief of pain in cases of terminal illness, even when this can cause death as a side effect. This argument played a great part in the acquittal of suspected serial killer Dr John Bodkin Adams.[1] Some, including most Catholic ethicists, hold that this concept is morally different from deliberate euthanasia for the relief of pain. Today, palliative care experience and research have shown that it is possible to manage pain or distress without hastening death (see opioids), so the debate relies on out-of-date data.[2]

The principle of double effect is frequently cited in cases of pregnancy and abortion. A doctor who believes abortion is always morally wrong may nevertheless perform a procedure on a pregnant woman, knowing the procedure will cause the death of the embryo or fetus, in cases in which the woman is certain to die without the procedure (examples cited include aggressive uterine cancer and ectopic pregnancy). In these cases, the intended effect is to save the woman’s life, not to terminate the pregnancy, and the effect of not performing the procedure would result in the greater evil of the death of both the mother and the unborn child.[3][4]


The Principle appears useful in war situations. In a war, it may be morally acceptable to bomb the enemy headquarters to end the war quickly, even if civilians on the streets around the headquarters might die. For, in such a case, the bad effect of civilian deaths is not disproportionate to the good effect of ending the war quickly, and the deaths of the civilians are a side effect, not intended by the bombers either as ends or as means. On the other hand, to bomb an enemy orphanage in order to terrorize the enemy into surrender would be unacceptable, because the deaths of the orphans would be intended, in this case as a means to ending the war early, contrary to the conditions above.
Whether the Principle applies to the atomic bombings of Hiroshima and Nagasaki is a highly controversial question, whereas the sometimes massive conventional bombing of European cities was usually justified by the principle.


Despite some apparent plausibility, the doctrine of double effect is controversial. Utilitarians, in particular, reject the notion that two acts can differ in their permissibility, if both have exactly the same consequences.

A major argument against the DDE is the hypothetical case where some evil must actively be done to bring about an enormous good. For example, suppose a nuclear bomb has been planted in a major city, and a person is held in custody who knows where it is, but who refuses to disclose the bomb’s location. May the interrogators torture this person’s family in front of his or her eyes, exploiting the family attachment to extract information and save millions of lives?

Even in such an extreme case, the DDE would not permit evil to be done prior to good consequences, whereas the utilitarian position holds that the order of events is irrelevant. The argument against the DDE thus becomes a question of how high the stakes must be before any evil is permissible for good ends, with the DDE position maintaining that evil is never permissible as a means to good ends.

In the past few years in the UK, at least two doctors on trial for murder after giving large doses of opioids to ill patients have used the defence of double effect.[5]

