AI Can't Fake Nuance, But Humans Can

I've caught myself doing it. Writing a sentence, reading it back, and then rewriting it to sound more like the version of me I want people to see. Sharper. More considered. A little wiser than I actually felt in the moment. The thought was real. The expression of it got... managed.
Nobody would have noticed. That's the point. That's also the problem.
The AI detection industry picked the wrong target
Right now there's an entire apparatus, browser extensions, platform features, dedicated startups, built around one mission: detecting AI-generated content. The logic is straightforward. AI writes in a particular way. Smooth, confident, slightly hollow. Pattern matching without genuine understanding. We should be able to spot it and flag it.
And mostly, we can. AI writing has a texture. It's competent in a way that feels frictionless, like something that has never been uncertain about anything. It doesn't ramble or contradict itself or say something and then half take it back. It doesn't have bad days.
But here's what nobody is building a tool for. The human who has learned to write like they're performing a version of themselves. The person who has studied what resonates, what signals intelligence, what makes someone seem thoughtful and grounded and worth following, and who now produces content that hits all those marks without any of it being particularly true.
AI sounds generic. You can feel the absence of a person behind it. Strategic human writing sounds specific, warm, considered. You can feel a person behind it. You just can't always tell if that person is really there or if they've sent a representative.
What strategic writing actually costs
When I write for an audience I'm trying to impress, something shifts. The sentences get cleaner. The structure gets tighter. The vulnerability gets calibrated, just enough to seem real, not enough to actually be exposed. The whole thing becomes a kind of architecture built to produce a specific effect in a specific reader.
The writing gets better in some ways. More readable. More controlled. But it loses the thing that made me want to write it in the first place. The original friction, the actual confusion, the moment where I didn't know how the thought was going to end. That stuff gets edited out because it doesn't perform well. It looks like uncertainty. It looks unpolished. It looks too much like a person.
The irony is that the thing that gets edited out is the only thing that couldn't have been generated by a machine.
The performance nobody names
We have a cultural conversation happening right now about AI authenticity. About whether content is real or generated. About disclosure and trust and what it means to be genuine online. It's an important conversation.
But it's aimed at the wrong target. AI-generated content is obvious enough that most people feel it, even without a detector. The content that actually misleads is the human content that has been so thoroughly optimised that it has the warmth and specificity of a real person while carrying almost none of that person's actual interior life.
Strategic human writing is the better con. It passes every test. It sounds like someone. It just might not sound like the someone who wrote it.
The thing I keep coming back to
There's a version of me that writes well and a version of me that writes true. They overlap sometimes but not always. The well version knows how to structure a thought for maximum landing. The true version starts sentences it doesn't know how to finish and sometimes says things that make me wince on the reread.
The true version is harder to read. It's also the only one worth trusting.
I don't have a clean answer for how to stay there. I just know that the moment I start writing for the reaction rather than the thought, something leaves the room. And whatever fills the space might be good content. But it's not really mine anymore.
Which, if you think about it, is exactly what we're trying to detect in machines.
Today's micro-fable:
A young monk copied sacred texts for forty years. His copies were said to be more beautiful than the originals. When he finally read the originals for himself he wept for three days. When the abbot asked why, the monk said he had spent his life perfecting the lettering and had never once understood the words.


