The lecture hall starts out ordinary: laptops open, people half-listening, the word JUSTICE written on the board like a topic they’ve heard a thousand times.
Then the professor drops a scenario so clean it feels like a math problem:
A trolley is racing toward five workers. You can pull a lever and divert it onto another track where one worker will die instead.
Most hands go up fast: pull the lever.
It feels like the moral version of common sense. Five lives saved. One lost. Tragic, but “right.”
The professor doesn’t argue. He simply smiles—like he’s waiting for the room to step into the next trap.
“New version,” he says. “Now you’re not the driver. You’re a bystander on a bridge. The trolley is still heading toward five. Next to you stands a very large man. If you push him off the bridge, he will stop the trolley. Five live. He dies. Do you push?”
Suddenly people hesitate.
Someone laughs awkwardly.
Someone whispers, “That’s different.”
Someone says, “I couldn’t physically do that.”
And the professor asks the question that makes everyone uncomfortable:
“If you were willing to kill one to save five a minute ago… why aren’t you willing now?”
Same numbers. Same outcome.
But the story changed:
- Pulling a lever feels like redirecting danger.
- Pushing a man feels like using a person as a tool.
That’s the moment the class realizes morality isn’t just a calculator.
It’s a tangled set of instincts about intentions, distance, rights, and what kind of person you become by doing the act.
PART 2
The professor tightens the screws by moving the dilemma into medicine—because hospitals make everything more real.
“ER scenario,” he says. “You’re a doctor. You can save either one severely injured patient or five moderately injured patients. Which do you choose?”
Many students pick: save five.
It matches the trolley lever logic. Outcomes matter. Maximize lives saved.
Then he drops the next one:
“Transplant scenario. Five patients will die without organs. A healthy person comes in for a routine checkup. If you kill him and take his organs, you can save the five. Do you do it?”
The room reacts instantly.
“No.”
“That’s evil.”
“That’s murder.”
The moral math crashes.
Because now the action is not “letting one die” or “redirecting harm.”
It’s intentionally killing an innocent person who did nothing wrong—treating him like spare parts.
The professor lets the silence stretch long enough to sting.
“Why did you switch?” he asks.
And the class finally starts naming the hidden rules:
- It matters whether harm is a side effect or the means to your goal.
- It matters whether you’re saving people or sacrificing someone like an object.
- It matters whether you violate a person’s rights, even for a good outcome.
This is where the lecture introduces the deep conflict:
- Consequentialist thinking: “Do what produces the best overall result.”
- Categorical / duty-based thinking: “Some actions are wrong no matter how good the result looks.”
And the class realizes the worst part:
Most of us hold both instincts at the same time.
We want to save the five…
but we don’t want to become the kind of person who kills an innocent to do it.
PART 3
Then the professor stops treating it like a puzzle.
He tells a real story: The Queen v. Dudley and Stephens.
Four sailors survive a shipwreck. Days pass. No food. No water. They believe death is coming. And then two of them kill the cabin boy, Richard Parker, and eat him to survive.
Now the trolley problem is no longer a classroom game.
It’s blood. Fear. Desperation. A human life taken on purpose.
The sailors argue: necessity.
“If we didn’t do it, we would all die.”
And the class splits—hard.
Some students say:
- “It was survival.”
- “One died so three could live.”
- “Necessity changes everything.”
Others refuse:
- “Murder is still murder.”
- “You don’t get to choose that someone else must die for you.”
- “Desperation doesn’t create moral permission.”
Then the professor introduces two “fixes” people often reach for:
- Fair procedure (a lottery): What if they drew lots and the loser was killed? Would that make it acceptable?
- Consent: What if Parker agreed? Would that justify it?
And here’s the brutal twist: both fixes still feel contaminated.
Because a “fair lottery” in starvation may still be coercion with paperwork.
And consent under extreme desperation may not be fully free.
The lecture lands on the big purpose of the course:
This isn’t about giving you one perfect answer.
It’s about forcing you to see the two moral engines behind modern debates:
- Bentham / Mill (utilitarianism): maximize overall welfare, even if it demands hard sacrifices.
- Kant (the categorical imperative): never treat a person merely as a means; some lines must not be crossed.
And the professor ends with the uncomfortable truth:
You can’t opt out.
Even saying “there’s no right answer” is still a moral stance.
Because in real life—law, policy, war, healthcare, equality—we constantly choose who bears the cost.
So the lecture doesn’t finish with a solution.
It finishes with a mirror:
Most people will pull the lever to save five…
but they won’t push the man.
And that contradiction is the doorway into the real question of justice:
Is morality about maximizing outcomes… or protecting human dignity even when it costs more?