AI is Bullshit
So just what is the ethical problem with letting AI write for you?
The Enshittification of the Internet
Back in 2023, Cory Doctorow coined an excellent term to describe a phenomenon that many of us often experience when using the internet: enshittification. It’s that sense that your search results are getting worse, that what you’re looking for is harder to find. Or that your social feeds feel manipulative, filled with anything but the thoughts and observations of real people. Or that shopping sites are full of crap, their once-useful features quietly but oh-so-deliberately removed and replaced with new demands to pay to play. (Read his post mortem of TikTok here.) Doctorow argues that this degradation is systematic and deliberate, driven by the need for platforms to monetize, to grab every last red cent from their users.
That was 2023. Two years later, the arrival of generative AI has driven these forces into high gear. The web is being flooded with low-quality, AI-generated slop. Here are just a few examples of how GenAI is dragging the quality of the internet down:
The number of new articles written by AI surpassed those written by humans in 2025. The only bright spot is that AI hasn’t mastered SEO, so a lot of these articles don’t show up in searches. Yet.
Over 2,000 AI-generated “news farms” have shown up on the web so far.
Don’t buy your herbal remedy bible on Amazon! The Guardian reports that over 80% of new titles in the category were most likely written by AI.
And you know those reviews online? Look out! Human voices are being nudged out by AI content. One study found that over 25% of new reviews of real estate agents on Zillow were likely written by AI.
It’s all bad news for the quality of the internet. The noise-to-signal ratio is rocketing toward the moon.
Once you realize how much of this mess is AI-generated, an awkward question shows up on your own doorstep: if AI is helping to enshittify the web, is it wrong for me to use it to write?
But that is not quite the right question. The overall decay of the web is a problem, no doubt, but it’s not your problem, per se. It is a system-level issue, not an individual-user quandary. Doctorow’s theory tells you something about how platforms respond to incentives, not how writers should (or should not) integrate AI into their workflow.
At the level of the user, the question is different. It isn’t “Am I contributing to enshittification?” so much as “What is the right thing to do here? What do I owe my reader, and myself, in terms of honesty and care?” That shift—from worrying about the sad, sorry state of the internet to worrying about my own personal integrity—is exactly where Harry Frankfurt’s famous little book On Bullshit is a light in a sea of slop.
Frankfurt’s On Bullshit
Frankfurt is concerned with something so common that we are utterly inured to it: we are surrounded, day in, day out, by speech and writing that doesn’t engage with the truth. This isn’t just run-of-the-mill error or plain lying, but a wholesale abandonment of the truth in favor of saying whatever achieves an effect or sounds plausible. For Frankfurt, this shift in attitude is very dangerous. If enough of our language in public life is like that, then our grip on a shared, common reality will begin to slip, and so will any sense that, as members of a community, we are responsible for what we say.
Frankfurt’s central insight is that there is an important distinction between the liar and the bullshitter. The liar, he insists, tracks the truth. To lie, you have to know, or at least believe you know, the truth in order to deliberately conceal or twist it. The liar may not have a very healthy relationship with the truth, but it is still a relationship. The bullshitter, in contrast, doesn’t care about the truth. They are indifferent to it, unbothered whether they happen to tell the truth or to lie. The bullshitter is constrained only by what sounds good — what will impress, encourage, trick, or outrage. The content can be true, false, or anything in between. What matters is that they get you to do or believe what they want.
Because of this lack of concern for the truth, the bullshitter is far more dangerous than the liar. The bullshitter isn’t bending the norms of conversation; they are crumpling them into a little ball and throwing them out. Worse still, they aren’t transparent about their aim. They present themselves as making a contribution to our shared project of describing the world when, in fact, the world be damned. In doing so, they turn their back on one of the most important responsibilities of a communicator: to care whether what you say is anchored in reality and represents your own best judgment.
Seen in this light, bullshit is an ethical failing. When you bullshit, you ask others to trust your words without having done the work to make those words worth trusting. The risks are twofold. You risk your own integrity: you no longer communicate what you feel and believe, but whatever serves your purposes. At the same time, you add to an environment where this destruction of our common communicative project is accepted. Without truth as a guiding beacon, soon nothing anyone says will be trusted, making honest, reflective, even enlightening discourse impossible.
That is the real problem: bullshit undermines you as well as the public arena.
What Is the Right Thing to Do Here?
This brings us back to our animating questions: What is the right thing to do here? What do I owe my reader, and myself, in terms of honesty and care?
Frankfurt gives us a powerful lens for thinking about this problem. The moral fault line does not run, as it is so often and so unfortunately portrayed, between “people who use AI” and “people who don’t.” It runs between those who remain answerable to the truth and meaning of their words, and those who quietly (or not so quietly) don’t care. AI doesn’t change what you intend with your words, but it can make it remarkably easy to slip into that second, much more dangerous posture. The fluent, confident prose of LLMs can lull your critical faculties to sleep, and suddenly, perhaps without even realizing it, you too are a bullshitter.
It is not that using AI is, in itself, suspect. You can use it to brainstorm, test phrasing, clean up paragraphs, identify solid counterpoints, or punch up your style. All of this is perfectly compatible with being a moral, responsible communicator. What is non-negotiable is that you remain an interlocutor whose words express your beliefs, your understanding of reality, your best judgment of what is the case. The moment you stop asking, “Is this right? Is this what I think?” and start asking only, “Does this sound good enough?” you have stepped over the moral line. In other words, when you use AI to craft a message, you must think, and think hard.
Seen from this vantage point, AI users do not have fewer ethical obligations, but a much heavier burden. At your fingertips is a system that will make your life easier by doing some of the work for you but that, without constant guidance, threatens the moral character of you and everyone who reads your (unedited, unreflective) text. Guided, you are still a writer, working with a very powerful tool. Unguided, you are outsourcing the very thing that makes you a responsible, ethical communicator.
So the answer to our ethical question is not especially dramatic, but it is clear: Yes, you may use AI. No, you may not use it to substitute for thinking. If words go out in your name, you are on the hook for them.
