Stop Prompting AI to Sound Like You. Train It Instead.
The lie every AI automation starts with
Open the system prompt of almost any AI automation shipping right now and you will find the same shape of instruction. Something like:
"You are a friendly, helpful assistant who writes in a warm, conversational tone. Use the user's first name. Keep messages short. Avoid corporate jargon."
That prompt runs at the top of every reply. The model then generates something that technically follows the instruction and sounds nothing like you.
It sounds like an LLM trying to be friendly. The rhythm is off. The word choices are generic. The punctuation is too clean. If you have ever read an AI-generated reply back and thought "that is not how I would have said that", you have already found the real problem. The prompt was never the fix. The prompt was the ceiling.
Why prompting cannot do this
A system prompt is a description of a style: a finite list of rules the model tries to follow, bolted onto general-purpose training on the internet. The internet has a style. You have a style. Those are different things.
When you write, you make a thousand small decisions every message. How short to keep replies. Whether to use full stops or line breaks. Whether to open with "hey" or "hi" or skip the greeting. Whether to say "yeah" or "yes". Whether to put a period on a one-word reply. Which emojis are in play.
Nobody writes a system prompt that captures all that. Even if you tried, you would be guessing at your own voice from the outside, and most of what makes your writing recognisable is not something you could describe if asked. You just do it.
A model told to "be friendly and conversational" has no way to land on your specific version of friendly and conversational. It will produce a version. It will not be yours.
What actually works: learn from the evidence
If you want AI replies that sound like you, the right move is not to describe your style. It is to hand the model examples and let it learn the pattern.
This is what MyToneAI does. It reads thousands of messages you have actually sent (from WhatsApp, because that is where most of us do our real writing), builds a tone profile from them, and retrieves that tone when it generates a reply. The profile is not a paragraph of instructions. It is the actual shape of how you write, learned from what you have written.
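To make that concrete, here is a minimal sketch of what a learned tone profile can look like as data rather than instructions. This is not MyToneAI's actual implementation; the field names and heuristics are illustrative assumptions, written in Python for brevity.

```python
from collections import Counter
import statistics

def build_tone_profile(sent_messages: list[str]) -> dict:
    """Derive a rough tone profile from messages the user actually sent.
    Illustrative only: a real profile would track far more signals."""
    openers = Counter(m.split()[0].lower() for m in sent_messages if m.split())
    # Crude emoji check: characters in the supplementary symbol planes.
    emoji = Counter(ch for m in sent_messages for ch in m if ord(ch) >= 0x1F300)
    n = len(sent_messages)
    return {
        "median_length_words": statistics.median(len(m.split()) for m in sent_messages),
        "common_openers": [w for w, _ in openers.most_common(5)],
        "trailing_period_rate": sum(m.rstrip().endswith(".") for m in sent_messages) / n,
        "ellipsis_rate": sum("..." in m for m in sent_messages) / n,
        "favourite_emoji": [e for e, _ in emoji.most_common(3)],
    }

profile = build_tone_profile([
    "Right, give me 10 mins",
    "yeah that works 👍",
    "Ok so... the invoice went out this morning",
])
print(profile["common_openers"])  # e.g. ['right', 'yeah', 'ok']
```

The point is the shape: every field is measured from evidence, not asserted in a prompt.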
A few specific things fall out of that approach:
Your rhythm shows up. If you write in short, punchy bursts, the AI writes in short, punchy bursts. If you tend toward longer thought-out replies, you get longer thought-out replies. The pattern is learned, not prescribed.
Your vocabulary shows up. Not just the words you use, but the words you do not. The AI stops using the phrases you would never say.
Your quirks survive. The one-word replies. The ellipses. The way you start a message with "Right" or "So" or "Ok so". All the things that are invisible to you but obvious to anyone who knows you.
Telling the model what to say vs. showing it how to say it
This is the useful mental split.
A prompt is for content. What to talk about. What constraints to respect. Which questions to answer. Prompts are great at this, and you should still use them. MyToneAI has a knowledge base (your FAQs, your docs, your notes) that the model retrieves from at reply time, so the content is correct.
A learned style is for voice. How to say it. The shape of the words once you know what they are. No prompt does this well, because voice lives in too many small choices to fit into a few paragraphs of instructions. Training on real examples does.
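In code, the split is literal: content and voice come from different sources and occupy different parts of the final prompt. Here is a hedged sketch of that assembly; the helper is hypothetical, not MyToneAI's API, and the retrieval that feeds it is sketched further down.

```python
def assemble_prompt(incoming: str,
                    kb_snippets: list[str],
                    style_examples: list[str]) -> str:
    """Combine content (what to say) with voice (how to say it).

    kb_snippets: facts pulled from the knowledge base -- the content job.
    style_examples: real past messages by the user -- the voice job.
    """
    facts = "\n".join(f"- {s}" for s in kb_snippets)
    examples = "\n".join(f"- {m}" for m in style_examples)
    return (
        "Answer the customer using ONLY these facts:\n"
        f"{facts}\n\n"
        "Match the voice of these real messages the owner sent"
        " (length, punctuation, word choice):\n"
        f"{examples}\n\n"
        f"Customer message: {incoming}\nReply:"
    )
```

Notice what the instructions above the examples do not say: nothing about "friendly" or "conversational". The voice comes from the examples themselves.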
Most AI automation collapses both jobs into the prompt and hopes for the best. Which is why the output always feels like an AI doing an impression of you, rather than you.
What "training" means when you don't have a data science team
The word "train" puts people off because it sounds like a weekend with a GPU and a custom model. That is not what we are talking about.
With MyToneAI, training is three things you actually do:
- Connect your WhatsApp via QR.
- Pick the conversations you want it to learn from (the ones that sound like your best self).
- Curate the training set. If it learned from a chat that was off-brand, remove it. If your tone shifted in the last six months, re-analyse.
There is no model fine-tuning. No GPU. No weekend. The system builds a tone profile and a vector index of your messages, and retrieves from them at generation time. You can see what it learned, you can edit it, you can add to it. If your tone changes next year, re-run it.
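Mechanically, "a vector index of your messages, retrieved at generation time" is ordinary embedding search, not training in the GPU sense. A minimal sketch, with a toy hash-based embed() standing in for a real embedding model:

```python
import hashlib
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Toy stand-in for a real sentence-embedding model: hashed bag of words."""
    vec = [0.0] * dims
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_style_examples(incoming: str,
                            index: list[tuple[str, list[float]]],
                            k: int = 5) -> list[str]:
    """Return the k curated messages most similar to the incoming one."""
    q = embed(incoming)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [msg for msg, _ in ranked[:k]]

# The index is built once, from the chats you chose to learn from.
curated = ["Right, on it now", "yeah invoice went out this morning 👍"]
index = [(m, embed(m)) for m in curated]
print(retrieve_style_examples("Has the invoice gone out?", index, k=1))
```

Curation, in this picture, is just deciding which messages get indexed at all, which is why removing an off-brand chat genuinely changes the output.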
Where this shows up
Three places MyToneAI has paid off most for early users:
Customer support replies. The "we're looking into it" kind of message. These are where an AI voice gives itself away fastest, because the content is boring enough that all the reader has left to react to is the tone. Trained replies feel like the owner typed them on their phone.
Lead qualification. Opening replies to new enquiries set the tone of the whole relationship. Getting them right matters, and they follow a pattern you have already written a thousand times yourself.
After-hours coverage. Messages that come in outside work hours. You want a reply to go out fast, and you want it to sound like you. Trained voice is the only thing that does both at once.
The thread is the same across all three. Any message where sounding like a real person is part of the value of the message. Which, once you think about it, is most messages.
The honest trade-offs
Two things this approach is not:
It is not a full personality clone. It is a tone profile plus retrieval. It will mimic your phrasing and reference your knowledge. It will not replicate your judgment on unusual cases. For edge cases you still want a human in the loop.
It is not zero-curation. You will get better results if you pick the right chats to learn from, and if you periodically remove training data that does not represent your best voice. It works out of the box. It works much better when you treat the training set as something worth tending.
Getting started
If you have ever generated an AI reply and thought "that is not how I would have said it", this is the fix. The MyToneAI page has the setup and beta signup. If you want the wider context on how we are building AI-powered GHL tools (including Workflow MCP, which learns how you actually work rather than being told how to), the products page has the rest.
Shortest version: system prompts cannot teach a model your voice. Real messages can. If you care how your automated replies sound, that is where to start.