“The Class Agreed to Sacrifice ONE Person to Save FIVE… Until the Professor Swapped a Lever for a Push—and Suddenly Everyone Called It Murder.”

It starts like a harmless classroom exercise.

The professor writes JUSTICE on the board, then turns to the room with a situation so clean it feels like arithmetic:

A trolley is racing down the track toward five workers. You’re the driver. You can pull a lever to divert it onto a side track where one worker stands.

The room answers fast—almost automatically.

“Pull the lever.”
“Save the five.”
“It’s tragic, but it’s the better outcome.”

This is the first moral instinct the lecture exposes: outcome-based reasoning. If you can reduce harm and save more lives, you should.

But the professor doesn’t celebrate the answer. He just nods—as if saying, Good. Now watch how fragile your certainty is.

He changes one detail.

Now you’re not steering a machine from a distance. You’re standing on a bridge. The trolley is still heading toward five. Beside you is a very large man. If you push him onto the track, his body will stop the trolley. Five live. He dies.

And suddenly the room’s confidence collapses.

People shift in their seats. Some laugh nervously. Some cross their arms as if protecting themselves from the question.

Most refuse.

And the professor asks the question that punches through the air:

“Why did you say yes when it was a lever… but no when it was a push?”

Same math. Same number of deaths.
But our moral instincts treat them as different acts.

Because pulling a lever feels like redirecting harm, while pushing a person feels like turning yourself into the weapon—and using someone as a means to an end.

That’s the first crack that opens the whole course:
We have competing moral principles living inside us.


PART 2

Then the professor takes the trolley out of the classroom and puts it in a hospital—where choices feel less hypothetical.

He offers an emergency-room dilemma:

One patient is severely injured. Five are moderately injured. You can save either the one or the five.

Most students still choose: save the five.

The “maximize lives saved” instinct stays strong.

Then comes the scenario that shocks nearly everyone:

A transplant surgeon has five dying patients who need organs. A healthy person comes in for a routine checkup. If the surgeon kills the healthy person and harvests organs, the five will live.

Almost the entire room rejects it immediately.

Not “maybe.” Not “it depends.” Just no.

And now the contradiction is unmistakable:

  • People accept sacrificing one life to save five in the trolley/ER cases…

  • But almost no one accepts killing one healthy person to save five.

The professor lets the discomfort hang, then asks:

“If consequences are what matter, why is this different?”

And that question forces the class to name what they usually feel but don’t articulate:

  • In the transplant case, the victim is innocent and not already threatened.

  • The killing is not a side effect—it’s the means.

  • The person is treated like a tool, not a human with rights.

This is where the lecture introduces the two rival styles of moral reasoning:

  • Consequentialism / Utilitarian thinking: judge actions by results (maximize welfare, lives, happiness).

  • Categorical / duty-based thinking: some actions violate a moral boundary (rights, dignity), even if the outcome is better.

The students begin to see that “justice” is not only about saving the most people.

It’s also about whether certain acts—like intentionally killing an innocent—are morally off-limits.


PART 3

Then the professor does something that changes the mood completely.

He says: “Now let’s leave thought experiments.”

And he tells the true case: The Queen v. Dudley and Stephens (1884).

Four sailors survive a shipwreck. Days pass without food or water. They believe death is near. The captain and first mate kill the cabin boy, Richard Parker, and eat him to survive.

Now the trolley problem is no longer a puzzle.

It’s a real dead child, real desperation, real law.

The moral debate splits the room:

  • Some argue necessity: “One died so others could live.”

  • Others argue categorical wrongness: “Murder is murder, even in desperation.”

Then the class tries to “repair” the horror with two ideas:

  1. Fair procedure (a lottery):
    If they had drawn lots, would that make it morally acceptable?

  2. Consent:
    If the boy had agreed, would that justify it?

And the professor pushes them into the hardest realization:

A lottery can feel fair in theory, but starvation may make “choice” meaningless.
Consent can sound moral, but coercion can hide inside hunger and fear.

So the case becomes the perfect bridge into the philosophers the course will study:

  • Bentham / Mill (utilitarianism): morality aims to maximize happiness and minimize suffering.

  • Kant (categorical imperative): persons must never be treated merely as means; some duties are unconditional.

The lecture ends without a neat answer on purpose—because moral reflection doesn’t end neatly.

It ends with a warning:

Even if you try to escape philosophy, you can’t.

Because in real life—law, medicine, policy, war, equality—we keep facing trolley-like decisions, just with better clothes and more paperwork.

And the “shock” lesson of the intro is this:

Most of us want to be consequentialists when the lever is far away…
but we turn into duty-based thinkers the moment a human body becomes the tool.

That tension—between outcomes and moral limits—is the heartbeat of the entire course on justice.
