
There's something strange happening to content right now. Readers are checking out mid-sentence, not because the ideas are bad, but because something feels off. Their brain clocks a pattern, files the content as generic, and moves on. Usually before they've even reached the main point.

You've probably done it yourself. You're reading a LinkedIn post or blog, you hit a familiar rhythm or a certain phrase, and you're gone. The thought was probably fine. The argument might have even been useful. But it didn't feel like anyone was actually there.

That's the real AI content problem. It's not about detection tools or word counts. The thing that's causing issues is pattern recognition, the way audiences are now unconsciously trained to feel the difference between something a person wrote and something a machine produced. And once that pattern clicks for a reader, it's almost impossible to hold their attention, regardless of what the content actually says.

Why your brain detects AI writing even if you don't care

Humans have always been wired to read other humans. We're tuned to pick up on tone, rhythm, the way someone structures a thought, the specific word they reach for instead of the obvious one. That skill transferred to text a long time ago, and it's been refined by every book, email, and article you've ever read.

What AI writing tends to produce is content that sits in a strange middle ground. Technically coherent, structurally fine, and yet somehow... unconvincing. Psychologists would point to a few reasons why.

The first is something called source monitoring, the brain's ongoing effort to evaluate where information is coming from and whether to trust it. Authenticity cues matter here. When the writing doesn't feel like it came from a person with actual experience and skin in the game, trust gets downgraded, often without the reader even realising it. They may not be able to say why. They just know they're not buying it.

The second is familiarity without novelty. AI synthesises content from the most common versions of ideas that already exist. So what it produces tends to feel like something you've read before, because in a sense, you probably have. Human writers drawing on their own specific experience are far more likely to present an idea in a way the reader hasn't encountered before. That novelty is a big part of what makes content actually worth finishing.

The third is what you might call productive friction. AI writing is very, very smooth. Ideas transition cleanly, everything is well-organised, nothing asks much of the reader. That honestly sounds like good writing, but it often registers as low-effort content. A certain amount of roughness, of thinking-in-progress, signals that a human was genuinely working something out. When that's missing, readers feel the absence.

The specific AI tells that give it away

A lot of the conversation around AI tells focuses on individual quirks: the dreaded em dash, the word "delve," the enthusiasm for bullet points. These aren't wrong, but they're surface-level. The more reliable tells are patterns of thinking, or rather the absence of them. Here's what to actually look for:

Stock intros and conclusions

AI almost always opens with a framing statement. "In today's world..." or "X is one of the most important things a professional can do..." Then it closes by circling back to summarise what was just said and wrapping with a nudge toward action. The formula is consistent. Human writers will often open with a specific thought, an observation, something they noticed or that happened. And they often end on something open rather than resolved. AI closes the loop because it's been trained to by content writers like us. Humans know that not everything resolves neatly.

Em dashes used ornamentally

I'm going to start by saying there's nothing wrong with an em dash. The problem is AI's reliance on it as a rhythm device, inserting a pause to create the impression of a considered, digressive voice. When it's everywhere, it stops feeling like style and starts feeling like scaffolding. Unfortunately, because of this overuse, the second a reader sees an em dash, they're primed to disengage. I'm not saying never use one, but use them sparingly if you can.

"Not X, but Y" and "Not X. Not Y. Just Z"

These constructions are everywhere in AI writing right now. They create a false sense of nuance, the impression that a distinction is being drawn, without actually making one. It can make you sound like you know what you're talking about, but in reality, it doesn't say much at all. Real contrast requires specific content on both sides. AI basically uses the form without filling it in.

Repeated single-line paragraphs

This one is worth paying attention to because I see it every day. AI leans on short single-line paragraphs for emphasis, but uses them so frequently they lose their effect entirely. In good writing, a one-line paragraph is a deliberate decision. It lands because it's rare and means something. When every paragraph is the same length, there's no weight anywhere and it actually makes it much harder to follow.

Perfectly even sentence rhythm

When you read AI-written content aloud, it has a metronome quality. Sentences are consistently short or medium-length, consistently well-formed. Good writing uses rhythm as a tool. Short sentences land a point. Longer ones build and carry a thought, letting something settle before the next idea arrives. Flat rhythm signals flat thinking.

Advice that could apply to anyone

This is one of the clearest signs if I'm honest. AI writing tends toward the universal because it has no access to the specific. Tips that could slot into any industry, any audience, any situation. Human writers bring their actual experience, which means their content is naturally specific, and specific content is the kind that makes readers feel seen and understood.

Abstract business jargon

The vocabulary of AI writing orbits words like leverage, foster, empower, robust, seamless, transformative, and impactful. These words sound credible, carry positive associations, and say almost nothing. Human writers know something real, which means they reach for real details. "We cut churn by fixing the onboarding email" rather than "we leveraged strategic communication to drive meaningful engagement."

Overuse of lists of three

I hate that I've added this one in here, because it's literally the backbone of marketing. Three tips, three reasons, three things to remember. The rule of three is a real writing device, but AI applies it so mechanically and so constantly that it registers as pattern rather than craft. If everything resolves in threes, it feels like nothing was genuinely thought through.

Generic signposting phrases

"In this article, we'll cover..." or "Let's break this down" or "Here's what that means for you." These phrases are used a lot in content. They exist to help readers navigate long content, but AI uses them as filler, structuring devices that add length without adding anything. Readers skim past them, and if there's nothing else to hold attention, they don't come back.

Polished but empty sentences

This might be the subtlest one. There are sentences in AI writing that scan beautifully and contain almost no information. "The way we communicate shapes the way we connect." It sounds like a point. If you try to say what point it's actually making, you'll find there isn't one. Good writers can usually tell you exactly what they mean. AI produces the shape of meaning without the substance.

Over-explaining obvious points

AI tends to explain things the reader already knows, as if worried they might not follow. Human writers calibrate to their audience. They trust the reader to fill in the obvious, which creates a kind of intimacy, a sense that the writer actually knows who they're talking to.

Why insight-led content is where this shows up most

Not all content is equally affected. Technical documentation, FAQs, and detailed how-to guides can all be AI-assisted fairly well, because the reader is there for information, not for the experience of encountering a particular mind.

Insight-led content is different. The kind of content where someone is supposed to have done the thinking, the research, the synthesis, and is now sharing what they've made of it. A LinkedIn post breaking down trends in their industry. A newsletter unpacking why something in their field matters. An article drawing on years of experience to make a point that only someone who's actually lived it could make. This is what expert-led content is actually built on, and why it's so difficult to replicate with AI.

This is exactly where AI falls flattest. Because it can research a topic and write about that topic, but it can't tell you what it actually means, what it feels like from the inside, or why it matters in a way that's grounded in real experience. The shape of the insight is there. The substance behind it isn't. And readers, even when they can't articulate what's missing, feel that gap.

The social premise of this kind of content is that a person is sharing what they've genuinely learned. When that premise isn't being met, the reader doesn't just lose interest in the post. They lose interest in the person.

The thing AI can't generate

The actual product of good content isn't information. It's interpretation: the specific, experience-shaped lens that takes a trend, an observation, or a problem and turns it into something that helps the reader see things differently. That lens is built from years of doing the work, making the mistakes, developing opinions the hard way.

AI has none of that. It can arrange ideas that already exist, in the most statistically common order, in a voice trained on the aggregate of everything ever written. It does this remarkably well. What it can't do is have a point of view, because it hasn't been anywhere.

This is why AI content can be correct, well-structured, but still entirely unpersuasive. Persuasion works by transferring trust, and trust requires evidence of genuine experience. When that evidence isn't there, when the writing could have been written by anyone, about anyone, for anyone, readers don't buy it, even when they can't say why.

Using AI without losing your voice

None of this is an argument against using AI in your workflow. Setting aside the environmental aspects, the tools are useful, and the people who pretend otherwise are usually just performing virtue.

The meaningful distinction is between using AI to assist your thinking and using it to replace your thinking. A writer who uses AI to stress-test an argument, find structural gaps, or work through how to frame something is still the author of the content. The perspective is theirs. The specific observations are theirs. The voice, at the end, is theirs.

Publishing whatever a model produces in response to a prompt is something else. The output might be coherent. But it won't be memorable, and it probably won't move anyone.

What builds an audience is content that gives people access to a mind they find interesting. That can't be outsourced, not really anyway. The moment the thinking stops being yours, the thing people were following stops being there. If you're trying to figure out what that actually looks like in practice, this breakdown of the content every SME needs is a useful place to start.

Why human writing is getting more valuable, not less

The AI content boom is creating an odd dynamic: the more generic content fills every feed, the more valuable genuinely human writing becomes.

Every AI-produced post that sounds like every other AI-produced post is training audiences to skim, to disengage, to expect nothing. Against that backdrop, content that actually sounds like a specific person, specific in its details, honest about what it doesn't know, willing to take a real position, stands out significantly more than it would have five years ago.

The window of attention is getting smaller. Readers are getting better at recognising patterns, faster at dismissing content that triggers the "produced not thought" instinct. That's not a problem to solve with better prompts or smarter editing. It's a reminder that the human voice, grounded in actual experience, is genuinely irreplaceable. And for businesses that are serious about content doing real work, the choice between fractional content support and an agency often comes down to exactly this: who is actually doing the thinking.

The question worth asking isn't really whether AI is good or bad (and yes, I hear this question a lot). It's whether the content you're putting out actually sounds like you, and whether it's saying something that only you could say.

Because that's what holds attention. That's what earns trust. And that's what no AI tell, no matter how well hidden, can convincingly fake.

Want content that actually sounds like you?

If you're putting out content that you're not sure is landing, or you're relying on AI to fill the gap and sensing it's not quite working, that's worth a conversation.

Book a discovery call and we'll figure out what your content actually needs.

FAQs

Questions about services, process, and how AX Content works

How can you tell if something was written by AI?

The most reliable signs aren't individual words or punctuation habits. They're patterns of thinking. AI-written content tends to open with broad framing statements, use abstract business jargon, apply the same even sentence rhythm throughout, and offer advice that could apply to anyone in any industry. The deeper tell is an absence of specificity: no real examples, no genuine point of view, no evidence that the person writing actually has experience with what they're describing. Readers often sense this before they can name it.

Does Google penalise AI-generated content?

Google doesn't penalise content simply for being written by AI. What it does penalise is low-quality, generic content that offers no real value to the reader, which is exactly what unedited AI output tends to produce. Google's systems reward content that demonstrates genuine expertise, real experience, and original insight, regardless of what tools were used in the process. The issue isn't AI. It's what happens when AI replaces the thinking rather than assisting it.

Why does AI-written content feel off even when it's technically correct?

Because correctness isn't what holds attention. Readers engage with content when it feels like it came from a person with actual experience and a real point of view. AI synthesises the most statistically common version of any given idea, which means it produces content that's familiar rather than fresh, smooth rather than considered, and universal rather than specific. The brain picks up on these patterns quickly, even without consciously identifying them, and disengages.

Can AI write good content?

Good is subjective. AI can produce well-structured, readable content at speed. What it can't do is provide genuine perspective, specific experience, or the kind of interpretation that makes insight-led content worth reading. Used well, as a drafting aid, a structural tool, or a way to work through a brief, it can support good content. Used as a replacement for thinking, it tends to produce content that's coherent but forgettable, and increasingly, audiences can tell the difference.

Why is human-written content better for building an audience?

An audience follows a person, not a topic. They come back because they find that person's way of thinking worth their time. Human-written content, grounded in real experience, tends to be specific in ways AI can't replicate. A particular example, an honest admission, a take that couldn't have come from anyone else. That specificity is what creates trust, and trust is what converts readers into followers, subscribers, and eventually, buyers.

Latest news & insights

The 5 pieces of content every SME needs

A clear breakdown of the five content pieces SMEs actually need, and how they work together to support real buyer decisions.

Expert-led content: why B2B companies need it in 2026

In a market flooded with content but short on trust, expert-led insight is becoming the most reliable driver of B2B authority and growth.

Fractional content lead vs content agency: what you're really choosing

Understand the difference between fractional content leadership and agency execution, and why confusing the two could be holding your content back.