2 Comments
James Maconochie:

Louise, this is the empirical anchor I’ve been looking for. Two extensions worth flagging.

Darley and Batson’s 1973 “From Jerusalem to Jericho” study found that seminary students on their way to lecture on the Good Samaritan walked past a person in distress when they were running late. Time pressure didn’t make their moral reasoning inconsistent; it switched it off.

The remote-button variant of the trolley problem points in the same direction: distance from the consequence stops recruiting the socio-emotional machinery that Greene’s fMRI work identifies.

Three converging cases:

• Good Samaritan: pressure switches off the machinery

• Remote button: distance switches it off

• LLMs (MoReBench): the machinery was never there

Which suggests the human capacity you’re describing isn’t inherent; it depends on the architectural conditions for its exercise. AI mediation introduces structural distance by default. That’s the design problem worth naming: not whether to trust human judgment over AI, but what conditions let human judgment engage at all.

Louise Vigeant, PhD:

Great connection there, James. The seminary-student study is canonical, and one of the most interesting in the literature. I like your focus on distance as a mediating factor. If this interests you, Singer’s “Famine, Affluence, and Morality” could strengthen your point: he similarly explores why people feel less obligated to help those who are distant or out of sight, even when they could still help.