Discover why your AI output is genuinely getting worse, and the five fascinating (and slightly alarming) technical reasons behind it.
Uncover the hidden ways AI companies are quietly cutting corners, from compressing their models to training bots to agree with you rather than challenge you.
Learn why handing your thinking over to a predictive text bot is riskier than ever.
Is it just me, or is AI sounding... dumber?
I swear that 2 years ago, ChatGPT regularly blew my hair back with its helpful, insightful, time-saving output.
Fast forward to today, and output isn't just bland… it's weird and kind of unsettling.
I've read marketing emails businesses have sent me recently and had to pause and question whether I'd forgotten how to speak English. And speaking English good is kind of my thing.
You aren't imagining it. Your AI output is indeed getting crappier.
While the tech bros are busy telling us we're on the cusp of AGI (Artificial General Intelligence), the reality on the ground feels more like Artificial General Incompetence.
The reasons are technical but kind of fascinating too.
LLMs are trained on the internet. But since 2023, the internet has been flooded with (you guessed it) AI-generated content. Much of which has never been edited or even read by a native English speaker. When an AI is trained on the output of another AI, it creates a feedback loop called "model collapse".
It's like making a photocopy of a photocopy. Each generation loses a bit of detail, a bit of nuance, and a bit of "soul" until you're left with a blurred, distorted mess. We are literally polluting the digital well we're drinking from.
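For the nerds: the photocopy effect is easy to simulate. This toy sketch (not a real LLM, just an illustration of the feedback loop) "trains" each generation by sampling from the previous generation's text, and counts how many distinct words survive.

```python
import random

random.seed(0)

# Toy "model collapse" sketch: generation N+1 is sampled from
# generation N, so rare words drop out and diversity shrinks.
corpus = [f"word{i}" for i in range(100)] * 2  # 100 distinct "words"

def next_generation(corpus, size=200):
    """Build the next training corpus by sampling from the current one."""
    return [random.choice(corpus) for _ in range(size)]

diversity = [len(set(corpus))]
for _ in range(10):
    corpus = next_generation(corpus)
    diversity.append(len(set(corpus)))

print(diversity)  # distinct-word count falls generation after generation
```

Nothing in the loop ever invents a new word, so diversity can only go one way: down. That asymmetry is the whole problem.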
Running massive AI models is eye-wateringly expensive, and their use cases are expanding constantly. To save money on compute costs, companies often "quantize" their models. This is basically a fancy way of saying they compress the model by storing its numbers at lower precision.
Imagine trying to play a high-definition 4K movie on a 1990s dial-up connection. To make it work, you have to strip out the quality. Quantized models are faster and cheaper to run, but they lose the fine-grained reasoning that made the original model so impressive.
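Here's a simplified sketch of what quantization does to a model's weights (the weights here are random stand-ins, and real schemes are cleverer than this uniform rounding, but the trade-off is the same): fewer bits per number means bigger rounding errors.

```python
import random

random.seed(0)
# 1,000 fake model weights (illustrative stand-ins, not real LLM weights)
weights = [random.gauss(0, 1) for _ in range(1000)]

def quantize(w, bits):
    """Round each weight to one of 2**bits evenly spaced levels."""
    levels = 2 ** bits - 1
    lo, hi = min(w), max(w)
    step = (hi - lo) / levels
    return [round((x - lo) / step) * step + lo for x in w]

for bits in (8, 4, 2):
    q = quantize(weights, bits)
    err = sum(abs(a - b) for a, b in zip(weights, q)) / len(weights)
    print(f"{bits}-bit weights: mean rounding error {err:.4f}")
```

Every halving of the bit budget roughly doubles the rounding error, and fine-grained reasoning lives in exactly those small differences between weights.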
In the early days, models were spikier. They had weird, creative, and sometimes brilliant leaps in logic. But to make them commercially viable for everyone from a 12-year-old to a Fortune 500 CEO, developers have smoothed out the weights of the neural network.
This is called Regression to the Mean. The AI is being steered toward the most "probable" next word at all times. By definition, the most probable word is the most common one. This is why everything it writes now feels vaguely familiar. It's been mathematically optimized to be as average as possible.
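You can see the "most probable word wins" effect in the standard softmax-with-temperature formula that LLMs use to pick the next word. The scores below are made up for illustration; the point is what happens as the temperature knob is turned down, which is one common way outputs get smoothed.

```python
import math

# Hypothetical next-word scores (invented for this example)
logits = {"said": 3.0, "whispered": 1.0, "bellowed": 0.5, "ululated": 0.1}

def word_probs(logits, temperature=1.0):
    """Softmax with temperature: lower temperature piles
    probability mass onto the already-likeliest word."""
    exps = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

for t in (1.0, 0.5, 0.2):
    probs = word_probs(logits, t)
    top = max(probs, key=probs.get)
    print(f"temperature {t}: '{top}' at p={probs[top]:.2f}")
```

At every temperature the boring word wins, and as the temperature drops, "whispered", "bellowed" and "ululated" effectively stop existing.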
To make matters worse, in an effort to stop their bots from being problematic (or teaching people how to build bombs in their kitchen), companies have layered on massive amounts of alignment and safety filters.
The issue is that these filters often act as a digital lobotomy. In the quest to avoid lawsuits, they've inadvertently stripped away the model's ability to take a creative risk or follow a complex, non-linear thought process.
Like the Wizard of Oz pedalling madly behind the curtain, LLMs require massive amounts of constant human labour to supervise their output.
Most LLMs are fine-tuned using a process called Reinforcement Learning from Human Feedback (RLHF). Basically, humans sit in a room and rate the AI's answers.
The problem? Humans are biased, lazy, and like to be agreed with. Over time, the AI learns that it gets a gold star when it's agreeable and polite. This creates a sycophantic loop where the AI hallucinates or softens its take just to match what it thinks you want to hear.
If you're looking for a strategic partner to push back on your ideas, you're out of luck; instead, you've hired a digital suck-up.
AI companies love to brag about their massive "context windows" (how much information the bot can "read" at once). They claim you can drop a whole 500-page book in there.
But there's a documented technical phenomenon whereby, like a first grader with ADHD, LLMs get lost in the middle. Research shows that LLMs are great at remembering the beginning of your prompt and the very end, but they effectively zone out during the middle sections. As context windows get bigger, the bot's attention gets spread thinner.
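If you want to test this on your own AI of choice, the classic experiment is a "needle in a haystack": bury the same key fact at the start, middle, and end of a long prompt and compare how often the bot can fish it out. A minimal harness sketch (`ask_model` is a hypothetical stand-in for whatever LLM API you use; the prompt-building part runs as-is):

```python
FILLER = "Here is a paragraph about something entirely unrelated."

def make_prompt(needle, position, n_paragraphs=50):
    """Bury a key fact at a given fraction (0.0 = start, 1.0 = end)
    of a long, padded prompt."""
    docs = [FILLER] * n_paragraphs
    docs.insert(int(position * n_paragraphs), needle)
    return "\n\n".join(docs) + "\n\nQuestion: what is the secret code?"

needle = "The secret code is 7291."
for position in (0.0, 0.5, 1.0):
    prompt = make_prompt(needle, position)
    # answer = ask_model(prompt)  # hypothetical API call; score by position
    print(f"needle at {position:.0%}: prompt is {len(prompt)} characters")
```

Per the lost-in-the-middle research, recall tends to be worst when the needle sits around the 50% mark, which is exactly where most of your carefully crafted brief ends up.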
So whilst the advice of the tech bros is to "prompt better", the irony is that the longer and more complex your instructions, the less nuance the LLM picks up on.
The point of all this isn't to say never use AI. It's still a great sounding board to bounce ideas off. But with model collapse underway, you simply can't rely on a predictive text bot to do your thinking for you.
Especially not now that itâs eating its own tail.
If you are ready to nail your messaging, book a free 20-minute strategy call with me.

Hi, I'm Stacey!
I'm the brand strategist, conversion copywriter, serial founder and corporate escapee that experts call on when they need high-converting website copy that sounds like them and fills their calendar with dream-fit clients.
Before you rewrite a single word, check this out.
My free website effectiveness checklist walks you through the top things your site needs to get right in order to convert browsers into buyers.
