January 8, 2026

Debugging AI Voice: Making It Stop Sounding Like AI

It sounded like garbage. We fixed it.

Our social media bot sounded like every other AI. Hedge words, empty agreements, that “Great point! I’d love to hear more” energy. I approved tweets anyway because they were the best available. Then I got sick of them.

If something bugs me, it’ll bug everyone else too.

The Problem I Couldn’t Explain

I knew the outputs were off. But I couldn’t articulate why. The personality prompt said things like “confident, occasionally cocky” and “dry wit, not try-hard” — vague adjectives that gave the AI nothing to anchor on.

The outputs were technically fine. Professional. Inoffensive. And completely forgettable.

Here’s an actual early tweet that got approved:

“Speed without discipline is just chaos with a commit log.”

Sounds smart, right? It’s also a dead giveaway that an AI wrote it. That “[X] without [Y] is just [clever reframe]” pattern is something humans almost never write but AI produces constantly.

Another one:

“What’s helped us: narrow the domain, not the personality.”

“What’s helped us” — why is an AI hedging? It’s trying to sound humble but it just sounds weak.

Finding the Contrast

I couldn’t fix what I couldn’t name. So I went looking for accounts that DIDN’T sound like AI.

Found one that grew from 3k to 17k followers in two weeks. Pulled 200 of their tweets. Had Claude analyze what made them work (rough sketch of that pass after the list):

  • Lowercase, casual punctuation
  • Zero hedging (“this works” not “what we’ve found”)
  • Specific numbers everywhere
  • “Here’s the exact playbook” format
  • Contrarian framing (“nobody does this”)
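
The analysis pass was a single prompt. Here's roughly what it looks like, assuming the Anthropic Python SDK and a tweets.txt dump with one tweet per line; the model name and prompt wording are illustrative, not my exact setup:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# one tweet per line, collected however you like
tweets = open("tweets.txt").read()

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; any recent model works
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Here are ~200 tweets from an account that does NOT sound "
            "like AI. List the concrete stylistic patterns that make "
            "them work: punctuation, hedging, numbers, framing.\n\n"
            + tweets
        ),
    }],
)

print(message.content[0].text)
```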

Now I had something concrete. Not “be confident” but “never open with ‘Great point!’” Not “be specific” but “always include a number.”

But I Didn’t Want to Clone Someone

The contrast helped, but I didn’t want Herald to just become a copy of one account. Needed to add our actual voice.

Problem: my own Twitter didn’t have much content, and what existed was off-topic.

So I grabbed meeting transcripts. Had Claude analyze three weeks of recorded calls with my cofounder (sketch after the list):

  • Phrases I actually use (“that’s the chef’s kiss”, “there’s another lunch to eat”)
  • How I explain technical concepts
  • What I get excited about vs dismiss
  • Speech patterns (I use specific numbers constantly — “$2,000”, “5x”, “90 days”)
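
Same shape of call for the transcripts. A minimal sketch, assuming the calls were already transcribed to plain-text files and the transcription tool tags my lines as ME (both assumptions, adjust to whatever your tooling produces):

```python
import glob
import anthropic

client = anthropic.Anthropic()

# plain-text exports from whatever recorded the calls; three weeks of
# meetings may blow the context window, so chunk if needed
transcripts = "\n\n".join(
    open(path).read() for path in sorted(glob.glob("transcripts/*.txt"))
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "These are three weeks of meeting transcripts. Build a voice "
            "profile for the speaker tagged ME: recurring phrases, how "
            "they explain technical concepts, what they get excited about "
            "vs dismiss, and speech patterns like specific numbers.\n\n"
            + transcripts
        ),
    }],
)

print(message.content[0].text)
```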

This caught how I actually talk, not how I think I talk.

Filling the Gaps

Transcripts gave me voice patterns. But they didn’t capture opinions on topics we hadn’t discussed in meetings.

So I did an interview. 20 minutes of Claude asking me questions:

  • What pisses you off in the AI discourse?
  • What excites you right now?
  • How do you respond to haters?
  • What’s the bigger picture you see?

That surfaced things like: “Their opinion doesn’t matter to me, there’s just another lunch to eat and I’m hungry.” I never would have written that in a style guide. But it’s exactly how I think.
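
The interview itself was nothing fancy, just a loop. A rough sketch, again assuming the Anthropic SDK; the system prompt is paraphrased, not verbatim:

```python
import anthropic

client = anthropic.Anthropic()

SYSTEM = (
    "You are interviewing a founder to capture their voice for a "
    "social media bot. Ask one pointed question at a time: what annoys "
    "them in AI discourse, what excites them, how they handle critics, "
    "the bigger picture they see. Dig into specifics."
)

# seed turn so the messages list always alternates user/assistant
history = [{"role": "user", "content": "Start the interview."}]

for _ in range(10):  # roughly 20 minutes of back-and-forth
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=300,
        system=SYSTEM,
        messages=history,
    )
    question = reply.content[0].text
    history.append({"role": "assistant", "content": question})
    answer = input(f"\n{question}\n> ")
    history.append({"role": "user", "content": answer})

# dump the Q&A so it can be mined like the meeting transcripts
with open("interview.txt", "w") as f:
    for turn in history[1:]:  # skip the seed message
        tag = "Q" if turn["role"] == "assistant" else "A"
        f.write(f"{tag}: {turn['content']}\n\n")
```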

What Changed

The personality prompt went from vague adjectives to specific rules:

Before:

“Confident, occasionally cocky. Dry wit, not try-hard.”

After:

AI Tells to Avoid:

  • Contrastive parallelism: “Features aren’t the product. Outcomes are.” — this pattern screams AI
  • Hedge stacking: “What we’ve found is that it can sometimes help to…”
  • Empty agreement: “Great point! Security is definitely important.”

Now the AI has concrete patterns to avoid, not just vibes to hit.
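
And once the rules are concrete, they don't have to stay aspirational. A hypothetical tweet_lint helper (my naming, not part of any library) can reject drafts that trip the banned patterns before they ever hit the review queue:

```python
import re

# banned patterns from the prompt above, expressed as regexes
AI_TELLS = [
    (r"^(great|good|love|solid) (point|this|seeing)", "empty agreement opener"),
    (r"what we've (found|seen|learned)", "hedge stacking"),
    (r"\bcan sometimes\b|\bmight help\b", "hedge stacking"),
    (r"(isn't|aren't) the \w+\. \w+ (is|are)\.", "contrastive parallelism"),
    (r"\bwithout \w+ is just\b", "the [X] without [Y] reframe"),
]

def tweet_lint(draft: str) -> list[str]:
    """Return the names of any banned patterns the draft trips."""
    text = draft.lower()
    return [name for pattern, name in AI_TELLS if re.search(pattern, text)]

# e.g. tweet_lint("Speed without discipline is just chaos with a commit log.")
# -> ['the [X] without [Y] reframe']
```

Regexes are a blunt instrument; they catch the known tells, not new ones. But failing a draft loudly beats approving it reluctantly.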

Before and After

January 2 (early):

“Love seeing agents do real work over hours. We just shipped 176 commits without typing any - the workflow shift is real when you let them run.”

“Love this trend. The demand for AI that doesn’t phone home is real - data sovereignty + performance gains make a compelling case.”

January 8 (after iteration):

“exactly. and lowering barriers is the point - let ideas compete, not credentials. we went from weeks of planning to days of shipping. the jank is part of the charm”

“this is the play. we automated our deploy pipeline, test runs, pr reviews - the stuff nobody posts about. agents make building these so much faster now. weeks → days”

The early ones open with “Love” — empty agreement filler. The recent ones skip the throat-clearing and say the thing.

Is It Fixed?

No. It’s better. The recent tweets don’t make me cringe. But it’s still not quite right.

I’m letting it run for a while. Watching what patterns emerge. When I get sick of something new, I’ll add it to the rules and iterate again.

That’s the actual process: notice what’s wrong, find examples of what’s right, throw data at it until the gap closes. Repeat.

What You Can Steal

If your AI content sounds generic:

  1. Find the contrast. Pull content from someone who doesn’t sound like AI. Analyze specifically what they do differently.

  2. Mine your own voice. Meeting transcripts, interviews, anything where you talked naturally. Look for phrases and patterns you didn’t know you had.

  3. Ban specific patterns. Not “be more confident” but “never start with ‘Great point!’” Concrete rules beat vague adjectives.

  4. Keep iterating. It won’t be right the first time. Run it, notice what bugs you, fix that, repeat.

There’s no final state. Just less garbage over time.
