Discussion about this post

James Maconochie:

Great article, Louise, and I could not agree more. A couple of thoughts:

Fascinating research, but the headline finding feels obvious (at least to me). Of course, domain knowledge helps you interrogate AI output. No framework, no filter.

What the study misses is a second moderating variable: worldly wisdom. Not knowledge, but actual wisdom. The distinction matters. Knowledge is what you have learned. Wisdom is what you understand after you watch your knowledge play out in the real world, get it wrong, absorb the consequences, and adjust. You can be book-smart without being wise. You cannot be wise without experience.

An 18-year-old outside their domain has almost no chance of calling bullshit on a confidently wrong LLM. That's not an insult; they haven't yet accumulated the pattern recognition that only comes from repeated real-world feedback. The good news is that this effect diminishes with experience. Life gives you heuristics. You know when something smells off, even if you can't immediately say why.

Which brings me to the real concern. If we replace junior developers, researchers, and lawyers with agents before those people have lived through enough to develop judgment, we're not just automating tasks. We're eliminating the developmental pathway that produces wisdom in the first place.

We're eating our own seed corn.

Unless, of course, you subscribe to the idea that AI and LLMs will eventually produce 'AGI', which I believe is simply flat-out wrong.

Lauren Daly:

This definitely echoes my experience using AI as a thought partner! Love it. A great reminder to vet the output I know the least about, the most.

