Friday, October 7, 2011

Moral math and sadistic choices

In the path of a runaway train car are five railway workmen who will surely be killed unless you, a bystander, do something. You are standing on a pedestrian walkway that arches over the tracks next to a large stranger. Your body would be too light to stop the train, but if you push the stranger onto the tracks, killing him, his large body will stop the train. In this situation, would you push him?[1]

Okay, the first response that pops into my mind is: “I’m too light to stop the train car by flinging myself in front of it, but I’m strong enough to push a man that large over the rail? We’re talking a railcar weighing thousands of pounds here! What were you on when you thought of this question?”

But that’s not the point, is it? The point is, the large man isn’t the one posing the threat to the workmen’s lives … but his death can save them, so would you deliberately cause an innocent bystander’s death to save the lives of five other innocent people?

According to a utilitarian, your answer should be “yes”; if not, you’re making a wrong moral choice. After all, to quote Mr. Spock, “The needs of the many outweigh the needs of the few,” right?[*]

But the choice as given above demonstrates one problem with utilitarian ethics: it presumes that all moral choices can be boiled down to numbers with which you can do some simple math.


The “moral math” implicit in the question only works if the workmen’s right to life has a quantifiable, additive property (let’s call it RL) such that the group has 5 RL while the large man on the overpass has only 1 RL: since 5 RL > 1 RL, the large man’s RL is only 20% of the group’s, and therefore his life is “worth less”. Your decision not to push Mr. Immense over is wrong, to the utilitarian’s thinking, because sparing his 1 RL at the cost of the group’s 5 RL implies that each workman is worth no more than 0.2 RL on average.
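Spelled out (a minimal sketch of the implied calculation; the per-workman value \(r\) is my notation, not anything the utilitarian actually supplies), the arithmetic runs:

\[
\text{push} \iff 5\,\mathrm{RL} > 1\,\mathrm{RL};
\qquad
\text{refuse} \implies 1\,\mathrm{RL} \ge 5r \implies r \le \tfrac{1}{5}\,\mathrm{RL} = 0.2\,\mathrm{RL}.
\]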

But that’s not the case: since the right to life is an unquantifiable, intangible — dare I say it? — immaterial thing, no group can possess “more right to life” than any individual inside or outside of it. It’s as much a fallacy as saying that a second child would take away half of the love you give your first.

Next, the “moral math” presumes that a person’s social utility is not only quantifiable but also knowable in the context in which the decision has to be made: you, the person who has to push the large man over the rail, are possessed of a functional if not actual omniscience. Life has a way of handing us problems whose solutions go awry because of factors we didn’t, and often couldn’t, know or foresee at the time. But utilitarianism implicitly assumes that we always have enough information to make rapid yet valid calculations of both individual and group social utility.

Of course, you can’t use the utilitarian argument on beginning- and end-of-life issues without tacitly employing the concept of lebensunwertes Leben, “life unworthy of life”. In the cost-benefit analysis implicit in “social utility”, a person’s life is only worth as much as she contributes to the (often ill-defined) “greater good”; the power to decide that a given person’s life is worth less than others’ implies the power to decide her life is worthless. In such a moral context, the equality of humans to each other — the cornerstone of democracy and foundation of individual rights — is categorically denied.

In a discussion on FuturePundit about a recent study showing a correlation between a preference for utilitarianism and antisocial personality traits, Brett Bellmore summed up the matter nicely:

Essentially, the attraction of utilitarianism is that, since nobody can ACTUALLY apply the theory, you’re free to imagine that it works better than moral theories people can really try to put into practice. But that’s all you’re doing, imagining. The theory boils down to, “Imagine a perfect world. Do whatever leads to it. Pretend you’re doing math in between.”

Put differently, by presuming perfectly known quantifiables and measurables where there are none, utilitarianism pretends that moral choices can be made more easily, more logically, and with less mess and waste than they can with traditional (not to say that ugly word “religious”) ethical theories. But in the real world, where people actually make their decisions, there’s no perfect knowledge, no objective numbers, and no scientific means to measure utility even when given adequate time to do the calculus: you make the best decision you can with what information you have, then you “hope for the best and plan for the worst”.

In Spider-Man, the Green Goblin taunts Our Hero with a choice between saving his beloved Mary Jane and a tram full of children, declaring, “This is why only fools are heroes! Because you never know when some lunatic will come along with a sadistic choice!”

As for which is more sadistic, there’s little to choose between the “trolley problem” and the Goblin’s “tram problem”. Director Sam Raimi and screenwriter David Koepp could just as easily have been speaking of the utilitarian’s either/or scenarios, which invite us to settle such choices by doing the math.

But human beings aren’t mere data points or collections of social utility. They’re all equal, and all equally precious in the sight of God. And so moral choices in the real world will continue to be messy and illogical, because making them isn’t a science.

[H/T to Pat Archbold @ Creative Minority Report]


Footnotes:
[*] Star Trek II: The Wrath of Khan. Spock, however, was speaking in the context of his own self-sacrifice; he wasn’t trying to justify throwing Scotty into the dilithium chamber.


Citations:
[1] Thomson, J. J. (1985). “The trolley problem”. Yale Law Journal, 94, 1395–1415. Cited in Bartels, D. M.; Pizarro, D. A. (2011, October). “The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas”. Cognition, 121 (1), 154–161.