Discussion about this post

James Maconochie

Louise, this is the empirical anchor I’ve been looking for. Two extensions worth flagging.

Darley & Batson’s 1973 “Jerusalem to Jericho” study found that seminary students on their way to lecture on the Good Samaritan walked past someone in distress when they were running late. Time pressure didn’t make their moral reasoning inconsistent; it switched it off.

The remote-button variant of the trolley problem points in the same direction. Distance from the consequence stops the brain from recruiting the socio-emotional machinery Greene’s fMRI work identifies.

Three converging cases:

• Good Samaritan: pressure switches off the machinery

• Remote button: distance switches it off

• LLMs (MoReBench): the machinery was never there

Which suggests the human capacity you’re describing isn’t inherent; it’s conditional on the architectural conditions for its exercise. AI mediation introduces structural distance by default. That’s the design problem worth naming: not whether to trust human judgment over AI, but what conditions let human judgment engage at all.

