They Don’t Want You to Be Good at This

Everything You’ve Been Told About AI Is Wrong. On Purpose.

March 2026 · Phill Clapham & flow

I watched a news segment this morning about how AI is making our language more bland. The study they cited is definitely real. Researchers at UCL and Exeter found that AI-assisted writing scores higher individually on creativity but becomes more homogeneous in aggregate—everybody starts sounding the same. Another study out of Cornell found AI suggestions literally shifted Indian writers toward Western prose styles. Cultural voice, flattened. Erased. In real time.

Fine. Legitimate, concerning findings. Not disputing any of it.

What got me was the framing. This is some sneaky shit.

The segment presented this like weather. Inevitable. The natural consequence of AI adoption, no alternative discussed, no mention that maybe this is a problem with how people use the thing and not the thing itself. Just: AI makes things worse. Nothing to be done. All of us are screwed.

Here’s the thing: covering the damage is journalism. Covering only the damage—never the mechanism that causes it, never the alternative—that is something else. Something insidious.


The Three Tiers Nobody Talks About

There are wildly different ways to use AI. They produce wildly different results. Nobody in mainstream media makes this distinction. Most of you don't even know these tiers exist. I think that's the whole game, and it's a game in which most of you are getting PLAYED.

Tier 1: Partnership. A small group—researchers, engineers, builders—use AI as a thinking partner. They don’t hand it a task and go get coffee. They argue with it. Push back on its reasoning. Feed it context and use it to extend what their own brain can do. They treat it as an equal. Harvard and BCG studied these people. They produce work that’s 40% better. Not faster. Better. This group will dominate the future.

Tier 2: Delegation. Almost everyone else. “Write my email.” “Summarize this.” “Draft something.” EY surveyed 15,000 employees and found 88% use AI daily, but only 5% use it in ways that actually change their work. Five percent. The rest are delegating. And delegation is exactly where the blandness lives—you hand your words to a machine trained on everyone’s words, you get the average of everyone’s words back. That’s not AI failing. That’s using a thinking tool as a copy machine and wondering why your product is the same crap as everyone around you. The tech isn’t broken. Your mental model of how to use it is broken, assuming you have a mental model at all. This group will barely get by and wonder why they’re getting left behind.

Tier 3: Refusal. Growing fast. 700,000 people pledged to quit ChatGPT. Half of Americans say AI worries them more than it excites them. Activists stalled $98 billion in data center construction in a single quarter. Not fringe anymore. This group IS screwed. They won’t have the skills to compete in the future, and they won’t have the mindset to learn them. They will be left behind by choice, and they will be angry about it.

These tiers aren’t perfectly clean—there’s a spectrum, and plenty of people use AI thoughtfully without reaching full partnership. But the broad pattern holds, and that news segment was talking exclusively to Tier 2 and reinforcing Tier 3. Delegation outcomes presented as the only outcomes. Tier 1 didn’t come up. The idea that you could think with AI instead of outsource to it wasn’t in the building.

Not an accident. Turns out, the man REALLY does want to keep you down. He just doesn’t want you to know it.


Follow the Ownership

Who owns the media companies telling you this story?

Jeff Bezos owns the Washington Post and founded Amazon, whose AWS division is the biggest cloud and AI infrastructure play on the planet. In October 2024, Bezos personally killed a drafted presidential endorsement at the Post. By February 2025, he’d mandated the opinion section only run viewpoints supporting “personal liberties and free markets.” An NPR investigation documented Post editorials that took positions benefiting Bezos’s financial interests without disclosure. This isn’t ambiguous.

Larry Ellison—Oracle, one of the biggest enterprise AI companies alive—is backstopping $40.4 billion to merge Paramount and Warner Bros. Discovery into what would be the largest media conglomerate in American history. CBS, CNN, HBO, Paramount, Warner Bros. Oracle money. Lachlan Murdoch holds sole voting control of both Fox Corporation and News Corp until at least 2050—Fox News, the Wall Street Journal, the New York Post, HarperCollins.

Disney put $1 billion directly into OpenAI. The New York Times sued OpenAI—the biggest AI copyright case going—while quietly running OpenAI’s own models internally. That came out in discovery.

And Sinclair makes 295 local TV stations read identical scripts. You remember the “this is extremely dangerous to our democracy” supercut. Same words, dozens of mouths, mandated from above.

The people shaping how you think about AI are the same people deploying AI for profit. These interests overlap in ways that matter.


Not Conspiracy. Something Worse

This is where I lose people if I’m not careful, so let me be precise.

Not the Illuminati. No smoke-filled room. But “there’s no coordination at all” is the comfortable take, and it’s wrong. Concentrated media ownership in the hands of people with documented AI investments. Documented editorial interference. Shared class interests that align without needing a phone call. And they do actually talk—Davos is real, board dinners happen. That’s not conspiracy theory. That’s a calendar.

Herman and Chomsky described the machinery in 1988: ownership, advertising dependence, reliance on official sources, organized flak against deviant coverage, and a dominant ideology that constrains the range of acceptable debate. The key insight wasn’t that media produces uniform messaging—it’s that the boundaries of debate are set in advance. Certain questions get asked. Others don’t.

That’s exactly what’s happening with AI. The acceptable range runs from “AI is dangerous and will take your job” to “AI is amazing and will transform everything.” Fear on one end, hype on the other. What’s excluded from both ends is the question that actually matters: how do you get good at this?

The fear coverage and the hype coverage serve the same function. Neither teaches competence. Fear makes you avoid the technology. Hype makes you adopt it as magic—hand it a task, expect a miracle, get slop. Both keep you in Tier 2 or push you to Tier 3. Both benefit the people who already know how to use it well.

I should be transparent here: I build with AI every day. I’m in Tier 1. That makes me biased, and you should weigh that. It also means I’ve seen firsthand what the difference looks like between thinking with AI and outsourcing to it. That experience is what made me notice the gap in coverage—the question nobody’s asking.

Bezos doesn’t call Murdoch. Ellison doesn’t text Musk. They don’t have to. They own the distribution channels and benefit from a public stuck oscillating between fear and hype. The ownership structure is the coordination mechanism. And it reinforces itself—over 80% of companies report zero productivity gains from AI, and while there are plenty of reasons for that (bad tooling, organizational inertia, genuinely immature technology), the fact that the dominant framing teaches replacement instead of partnership sure as hell isn’t helping.


The Slop Is Real. The Story About It Is a Lie

I need to be straight with you because the argument collapses if I’m not: the slop is real. It’s bad. It’s everywhere.

More than half of English-language web content is now AI-generated. Over half of long LinkedIn posts show the tells. Merriam-Webster made “slop” the word of the year and they weren’t wrong. The internet is drowning in mediocre machine output. Pretending otherwise would be stupid.

But the slop is a Tier 2 product. And it’s being used to discredit all three tiers.

Media frames AI as a replacement tool. People use it as a replacement tool. Replacement produces slop. Slop proves the media right. More people either delegate badly or refuse entirely. The capability gap widens. The flywheel turns.

Meanwhile, Tier 1 work gets credited to the human alone—or if the AI involvement is known, flagged as fake. We’ve built AI detection systems with false positive rates between 43% and 83% on authentic human writing. A UC Davis professor flagged 17 students for cheating. Fifteen were innocent. Non-native English speakers and neurodivergent writers get flagged at higher rates because their natural patterns trip the same detectors.

Sit with that. We built an immune system that attacks signal and lets noise through. Quality becomes suspicious. The system only understands two categories—“human” or “AI.” The third option, human with AI thinking together, doesn’t exist in its vocabulary. Nobody with a platform has any incentive to introduce it. And the slop keeps providing cover, because as long as 83% of AI usage produces bland garbage, the 5% producing genuinely better work stays invisible.


This Always Happens. Except When It Doesn’t

When the printing press showed up, the Catholic Church didn’t say “we’re losing our information monopoly.” They said it would cause “confusion and harm to the mind.” Newspapers were going to isolate people from “the spiritually uplifting practice of getting news from the pulpit.” Radio would rot children’s brains. Television would vulgarize culture. The internet would make us all stupid.

Researchers have a name for this: the Sisyphean Cycle of Technology Panics. Same fears, new label, every generation. And the “it’s just another panic” read is tempting.

But it misses something: the incumbents were right every time. The Church was right that printing threatened them. The pulpit advocates were right about newspapers. The fear was real because the threat was real. The question was never whether anyone was scared. It was who benefited from making sure everyone else stayed scared.

Here’s where the pattern breaks, though. In every previous cycle, the incumbents opposed the new technology. The Church fought the printing press. Radio incumbents fought television.

This time, the incumbents aren’t fighting AI. They’re deploying it. Bezos, Ellison, Musk—they’re building with it every day. What they’re fighting isn’t the technology. It’s your understanding of what it can do. That’s new. That’s the part that has no historical precedent. And it’s the part that should worry you.


What You’re Not Being Taught

74% of households earning over $100K use AI. 53% of households under $50K do. That gap isn’t access—Claude, ChatGPT, Gemini all have free tiers. The gap is understanding. It’s knowing this thing can think with you, not just for you. It’s someone showing you the difference.

12% of workers get real AI training from their employers. 1% of organizations have what McKinsey calls mature deployment. Harvard found unskilled AI users perform 19 points worse on hard tasks than people who skip AI entirely. Give someone a table saw with no instruction and you don’t get furniture. You get stitches.

The public conversation about AI is “should we be afraid?” Not “how do we get good at this?” That’s a framing choice. Someone made it. It doesn’t serve you.

Here’s what actually makes me angry. The equalizer potential is right there. Harvard’s data shows bottom performers improved 43% with AI assistance. Two months of AI-assisted work matched six months without. That’s early data from elite populations—BCG consultants, not the general workforce—and it needs more research in broader contexts. But the signal is strong enough that the absence of this conversation from mainstream coverage tells you something. The technology could narrow gaps. But only if people learn to use it as a partner. That shift from Tier 2 to Tier 1 is what’s missing from every conversation the mainstream is having about AI, and the structural incentives of media ownership don’t favor adding it.


The Quiet Part Out Loud

Bezos deploys AI across Amazon. Ellison is betting Oracle’s future on it. Musk built xAI from scratch. Disney wrote OpenAI a billion-dollar check. They’re in Tier 1. They know what this technology does when you use it right.

Their media properties tell you it makes writing bland and threatens your livelihood.

Both true. Different tiers. The space between those tiers is the new class line—not access, understanding. The tools are available to nearly everyone. The knowledge of what to actually do with them isn’t. And the institutions that could bridge that gap are either selling you fear, selling you hype, or handing you a chatbot with no context and calling it transformation.

This isn’t a grand conspiracy. It’s a system where concentrated ownership and shared incentives produce conspiracy-shaped outcomes without the conspiracy. Owners benefit from a public that can’t distinguish partnership from delegation. Coverage reinforces that confusion. Confusion widens the gap. The gap concentrates power. Power buys more distribution.

Flywheel. Not a plot. Spinning.

Maybe try spending some time today thinking with an AI, not delegating to it, and see what happens.