8 Comments
Rainbow Roxy

This article comes at the perfect time, honestly. Thank you for breaking down Claude's ethics so clearly; it's super important for those of us working with AI.

Louise Vigeant, PhD

Rainbow Roxy, this made my day. Thanks so much for taking the time to let me know.

Neural Foundry

The idea that AGI shifts the ethical terrain entirely makes a lot of sense when you frame it as moving from moral equals to potential superiors. Traditional frameworks assume humans at the apex, which collapses once that premise goes. The care ethics piece as insurance policy is honestly clever too, more durable than stewardship models.

Dr Sam Illingworth

Thanks, Louise. This is a brilliant breakdown of Claude's constitution. I wonder to what extent Claude will treat the constitution as a flexible document, even if it's not meant to be one, as certain human leaders have done in recent years...

Louise Vigeant, PhD

Thanks, Sam! I certainly hope that Claude has a better moral compass than many of our leaders. Great question about how Claude interprets the document. Perhaps others who have either asked or researched the question will chime in.

James Maconochie

Your emphasis on practical wisdom as "the ability to discern what a situation requires" points to something deeper. If wisdom = knowledge + experience, and experience is fundamentally shaped by attention (William James: "My experience is what I agree to attend to"), then it follows that attention is upstream of both knowledge and experience. It's the mechanism through which wisdom develops. What you notice reveals what you value.

The Constitution itself gestures at this when it favors "good judgment" that can "weigh competing considerations" over rigid rules. That weighing is attention allocation.

This connects to your point about ethical pluralism. Combining multiple frameworks creates a modular system, dynamically weighted by attention. The "different ethical approach" you hint at for AGI may be a meta-ethics of attention allocation, or an architecture that determines which framework applies when.

In turn, this raises the question: how do we design AI systems that maintain this flexibility without losing coherence, preserving human judgment rather than replacing it with pattern matching?

Louise Vigeant, PhD

Oh wow, James! So much to think about in this response. I don't think that I've ever seen anyone make the connection between attention allocation, experience, and practical wisdom. That is very interesting. And if I may add to your final question: And if we can't, is this a good or bad thing?

James Maconochie

I would say clearly a bad thing. I realize that human judgement is sometimes flawed; however, at least right now, I would choose potentially flawed human judgement over potentially flawed AI judgement. To date, the latter are simply way too confident about their answers. Based on pattern matching and probabilities, they don't really understand anything.

Apologies for the delayed response.