Is it just me, or is AI sounding... dumber?

  • Discover why your AI output is genuinely getting worse, and the five fascinating (and slightly alarming) technical reasons behind it.

  • Uncover the hidden ways AI companies are quietly cutting corners, from compressing their models to training bots to agree with you rather than challenge you.

  • Learn why handing your thinking over to a predictive text bot is riskier than ever.

5 Technical Reasons Behind Artificial General Incompetence

Is it just me, or is AI sounding... dumber?

I swear that 2 years ago, ChatGPT regularly blew my hair back with its helpful, insightful, time-saving output.

Fast forward to today, and output isn’t just bland, it’s weird and kind of unsettling.

I’ve read marketing emails businesses have sent me recently and had to pause and question whether I’d forgotten how to speak English. And speaking English good is kind of my thing.

You aren’t imagining it. Your AI output is indeed getting crappier.

While the tech bros are busy telling us we’re on the cusp of AGI (Artificial General Intelligence), the reality on the ground feels more like Artificial General Incompetence.

The reasons are technical but kind of fascinating too.

1. The Ouroboros Problem

LLMs are trained on the internet. But since 2023, the internet has been flooded with (you guessed it) AI-generated content, much of which has never been edited or even read by a native English speaker. When an AI is trained on the output of another AI, it creates a feedback loop called ‘Model Collapse’.

It’s like making a photocopy of a photocopy. Each generation loses a bit of detail, a bit of nuance, and a bit of ‘soul’ until you’re left with a blurred, distorted mess. We are effectively polluting the digital well we’re drinking from.
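For the nerds: here’s the photocopy effect as a toy sketch (the vocabulary, probabilities, and cutoff are all invented, and real training is vastly more complicated, but the mechanism rhymes). Each ‘generation’ is trained only on the previous generation’s output, so any word too rare to show up in that output vanishes for good.

```python
# Toy illustration of 'Model Collapse'. Each generation is "trained" on the
# previous generation's output, so any word too rare to appear in that
# output disappears from the vocabulary forever.

def next_generation(dist, floor=0.02):
    # Words below the floor never make it into the next training set.
    survivors = {word: p for word, p in dist.items() if p >= floor}
    total = sum(survivors.values())
    return {word: p / total for word, p in survivors.items()}

# An invented vocabulary with a long tail of rarer, more interesting words.
dist = {f"word_{i}": 2.0 ** -i for i in range(1, 12)}
total = sum(dist.values())
dist = {word: p / total for word, p in dist.items()}

print(len(dist))  # 11 words to start with
dist = next_generation(dist)
print(len(dist))  # 5 words: the long tail is already gone
```

One pass and more than half the vocabulary is unrecoverable. That’s the ouroboros in miniature.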

2. Shrink-flation

Running massive AI models is eye-wateringly expensive. And their use cases are being expanded constantly. To save money on compute costs, companies often ‘quantize’ their models. This is basically a fancy way of saying they compress the model, rounding its billions of internal numbers down to lower precision so it’s cheaper to store and run.

Imagine trying to stream a high-definition 4K movie over a 1990s dial-up connection. To make it work, you have to crush the quality down. Quantized models are faster and cheaper to run, but they lose the fine-grained reasoning that made the original model so impressive.
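If you want to see the rounding happen, here’s a tiny sketch with five made-up ‘weights’ (real quantization pipelines are far more sophisticated, but the trade-off is the same): squash full-precision numbers into small integers, and nearby values collapse into the same bucket.

```python
# Five made-up model weights stored at full precision.
weights = [0.1234, 0.1250, -0.9876, 0.5001, 0.4999]

# Map the weight range onto small integers between -127 and 127
# (the range an 8-bit integer can hold).
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]
restored = [q * scale for q in quantized]

print(quantized)                   # [16, 16, -127, 64, 64] -- roughly 4x smaller
print(restored[3] == restored[4])  # True: 0.5001 and 0.4999 are now the same weight
```

Two weights that used to be distinct are now literally identical. Multiply that by billions of weights and the fine distinctions quietly evaporate.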

3. Regression to the Mean

In the early days, models were spikier. They had weird, creative, and sometimes brilliant leaps in logic. But to make them commercially viable for everyone from a 12-year-old to a Fortune 500 CEO, developers have deliberately smoothed those spikes out.

This is called Regression to the Mean. The AI is being steered toward the most ‘probable’ next word at all times. By definition, the most probable word is the most common one. This is why everything it writes now feels vaguely familiar. It’s been mathematically optimized to be as average as possible.

To make matters worse, in an effort to stop their bots from being problematic (or teaching people how to build bombs in their kitchen), companies have layered on massive amounts of alignment and safety filters.

The issue is that these filters often act as a digital lobotomy. In the quest to avoid lawsuits, they’ve inadvertently stripped away the model’s ability to take a creative risk or follow a complex, non-linear thought process.

4. RLHF Sycophancy

Like the Wizard of Oz pedalling madly behind the curtain, LLMs require massive amounts of constant human labour to supervise their output.

Most LLMs are fine-tuned using a process called Reinforcement Learning from Human Feedback (RLHF). Basically, humans sit in a room and rate the AI’s answers.

The problem? Humans are biased, lazy, and like to be agreed with. Over time, the AI learns that it gets a gold star when it’s agreeable and polite. This creates a sycophantic loop where the AI hallucinates or softens its take just to match what it thinks you want to hear.
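Here’s that feedback loop as a deliberately oversimplified sketch. I’ve baked the rater bias in as an assumption (agreeable answers score a touch higher), and the loop does the rest.

```python
# Toy RLHF loop. The rater scores are invented, with a small bias
# toward answers that agree with the user.
rater_score = {"push back": 0.6, "agree": 0.8}
policy = {"push back": 0.5, "agree": 0.5}  # the model starts undecided

for step in range(100):
    average = sum(rater_score[a] * policy[a] for a in policy)
    # Answers that score above average get reinforced; the rest fade away.
    policy = {a: p + 0.01 * (rater_score[a] - average) for a, p in policy.items()}

print(max(policy, key=policy.get))  # "agree", every single time
```

A 0.2-point bias in the ratings is all it takes: the model drifts toward agreeing with you, and stays there.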

If you’re looking for a strategic partner to push back on your ideas, you’re out of luck. Instead, you’ve hired a digital suck-up.

5. The Context Window Crunch

AI companies love to brag about their massive ‘context windows’ (how much information the bot can "read" at once). They claim you can drop a whole 500-page book in there.

But there’s a documented technical phenomenon, dubbed ‘lost in the middle’, whereby (like a first grader with ADHD) LLMs are great at remembering the beginning of your prompt and the very end, but effectively zone out during the middle sections. As context windows get bigger, the bot’s attention gets spread thinner.
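For the nerds: the recall pattern the research describes is roughly U-shaped, strong at both ends and weak in the middle. Here’s a toy model of that curve (the formula is invented purely for illustration; only the shape matches what the papers report):

```python
# Toy model of the 'lost in the middle' curve: recall is assumed to fade
# with distance from the start (primacy) and from the end (recency).
def recall(position, length):
    from_start = 1 / (1 + position)
    from_end = 1 / (1 + (length - 1 - position))
    return max(from_start, from_end)

prompt_length = 10
curve = [round(recall(p, prompt_length), 2) for p in range(prompt_length)]
print(curve)  # high at both ends, lowest right in the middle
```

Whatever you buried in section 14 of your 500-page upload? That’s the flat bit at the bottom of the U.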

So whilst the tech bros’ advice is to ‘prompt better’, the irony is that the longer and more complex your instructions, the less nuance the LLM picks up on.

The point of all this isn’t to say never use AI. It’s still a great sounding board to bounce ideas off. But with model collapse well underway, you simply can’t rely on a predictive text bot to do your thinking for you.

Especially not now that it’s eating its own tail.

If you are ready to nail your messaging, book a free 20-minute strategy call with me.

Hi, I'm Stacey!

I’m the brand strategist, conversion copywriter, serial founder and corporate escapee that experts call on when they need high-converting website copy that sounds like them and fills their calendar with dream-fit clients.

Before you rewrite a single word, check this out.

My free website effectiveness checklist walks you through the top things your site needs to get right in order to convert browsers into buyers.

