
Fact-Checkers Can't Save You

Most manipulative news is factually accurate. The fact-checking movement built a global infrastructure to verify claims. It was fighting the wrong problem.

Most manipulative news is factually accurate. Sit with that for a second.

The fact-checking movement spent a decade fighting misinformation with a simple weapon: verify the claim, label it true or false, move on. Meta built a global network of over 90 fact-checking organizations across 60 languages. Nonprofits like PolitiFact and Snopes became household names. The entire “fighting misinformation” industry organized itself around one question: Is this claim true?

Good question. Wrong problem.

The question nobody asked

Fact-checking asks whether a statement is true or false. That’s useful when someone claims 10,000 people attended a rally that drew 800. But most news manipulation doesn’t work that way. Most of it is technically true.

An article reports that violent crime increased 4% in a mid-size city. True. The article uses the word “surged.” It leads with a home invasion that happened to involve an undocumented immigrant. It buries the context that violent crime is still 40% below its 1990s peak. Every fact checks out. The manipulation is in the framing, the word choice, the selection of which facts to emphasize and which to bury.

A fact-checker would look at this article and find nothing to flag. No false claims. No fabricated statistics. No doctored quotes. Clean bill of health.

And you’d still walk away more scared and more angry than the facts warrant.

This is the gap. Fact-checking catches lies. It doesn’t catch manipulation. And the most sophisticated media manipulation in 2026 doesn’t need to lie at all.

Why it fell apart

In January 2025, Meta announced it was ending its fact-checking program on Facebook, Instagram, and Threads, replacing it with community notes. X (formerly Twitter) had already moved to community notes years earlier. The two largest social platforms in the world looked at professional fact-checking and decided it wasn’t working.

Their stated reason was political bias in the fact-checking process. That’s a real critique. But there’s a deeper structural problem that would have caught up with fact-checking even without the political controversy.

The volume broke the model. Full Fact reported that in October 2025, AI was suspected in at least 27 of their published fact checks, up from just four in November 2024. And that’s one organization tracking what it can. AI-generated content is scaling faster than any human fact-checking operation can match. Full Fact itself is now building AI tools to try to keep up with AI-generated misinformation. The irony writes itself.

But even if fact-checkers could match the volume, the model itself has a ceiling. Because the most dangerous content isn’t false. It’s true and manipulative. Fact-checking was designed to catch the former. Nobody built the tool for the latter.

Two different questions

Here’s the distinction that matters.

Fact-checking asks: “Is this claim true or false?”

Manipulation detection asks: “Is this language manipulative?”

These are structurally different questions. The first evaluates the relationship between a statement and reality. The second evaluates the relationship between language and the reader’s cognitive response. You can have a perfectly true statement wrapped in language designed to make you angrier, more afraid, or more certain than the evidence supports.

“Senator responds to criticism” and “Senator SLAMS critics in EXPLOSIVE tirade” can describe the same event. Both factually accurate. One is informational. The other is engineered to trigger an emotional response before you’ve processed a single fact.

Fact-checking has no vocabulary for this. The claim is true. The senator did respond. There’s nothing to check. But the manipulation is real, and it shapes how millions of people understand the event.
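The "SLAMS" contrast above can be sketched in a few lines of code. This is a toy illustration only, assuming a hand-built lexicon of loaded phrases (the entries below are invented for this example); a real system would need context-aware detection, not a flat substitution table:

```python
import re

# Toy lexicon mapping loaded phrases to neutral equivalents.
# Illustrative only: these entries are invented for this example.
LOADED_PHRASES = {
    "slams": "responds to",
    "explosive tirade": "statement",
    "surged": "increased",
}

# Longest phrases first, so multi-word entries win over their substrings.
_PATTERN = re.compile(
    r"\b(" + "|".join(sorted(LOADED_PHRASES, key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def neutralize(text: str) -> str:
    """Swap loaded phrases for neutral ones while leaving the facts intact."""
    return _PATTERN.sub(lambda m: LOADED_PHRASES[m.group(0).lower()], text)

print(neutralize("Senator SLAMS critics in EXPLOSIVE tirade"))
# → Senator responds to critics in statement
```

Both headlines describe the same event and contain the same facts; the substitution only strips the emotional charge. That is the operation fact-checking has no vocabulary for.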

The World Economic Forum’s 2026 Global Risks Report ranks misinformation and disinformation as the #2 short-term global risk, behind only geoeconomic confrontation. In the 2025 report, it held the #1 spot. Two consecutive years of experts saying: this is one of the most dangerous problems on the planet. And the primary tool we’ve been using to fight it doesn’t address the most common form it takes.

The political trap

There’s another reason fact-checking stalled. It couldn’t escape the question of who decides what’s true.

Every fact-check is an authority claim. Someone at PolitiFact or Snopes reads a statement, evaluates it, and issues a verdict. That verdict carries institutional weight. And in a polarized environment, every verdict becomes a political act. Rate a left-leaning claim as false, and the right celebrates while the left cries bias. Rate a right-leaning claim as false, and it flips. The fact-checkers became combatants in the war they were trying to referee.

Manipulation detection sidesteps this entirely. Loaded language is loaded regardless of which political direction it points. “SLAMS” is a manipulation technique whether it’s applied to a Republican or a Democrat. Urgency inflation works the same way in a progressive outlet and a conservative one. The detection is apolitical because it operates on language patterns, not truth claims.

ntrl doesn’t ask: “Is this true?” We ask: “Is this language manipulating your emotional response?” That question doesn’t require us to be the arbiter of truth. It requires us to understand how language works on the human brain. And that’s a question with much clearer answers.

What this means in practice

ntrl analyzes news articles against a taxonomy of over 100 manipulation techniques across six categories. We identify loaded language, urgency inflation, emotional appeals, framing bias, structural manipulation, and hidden incentives. Then we remove the manipulative language while preserving every fact.
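To make the shape of such a pipeline concrete, here is a minimal sketch of a taxonomy-driven scan. The category names come from this post, but the technique names and regex patterns are invented stand-ins, not ntrl's actual taxonomy or method:

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    category: str   # which taxonomy category the match falls under
    technique: str  # the specific technique within that category
    span: str       # the text that triggered the match

# Three of the six categories, keyed by invented technique names with toy
# regex patterns. The remaining categories (framing bias, structural
# manipulation, hidden incentives) require document-level analysis that a
# keyword scan like this cannot capture.
TAXONOMY = {
    "loaded_language": {"sensational_verb": r"\b(slams|blasts|erupts)\b"},
    "urgency_inflation": {"false_immediacy": r"\b(breaking|right now)\b"},
    "emotional_appeals": {"fear_priming": r"\b(terrifying|nightmare)\b"},
}

def analyze(text: str) -> list[Finding]:
    """Scan text against every technique pattern and collect the hits."""
    findings = []
    for category, techniques in TAXONOMY.items():
        for technique, pattern in techniques.items():
            for match in re.finditer(pattern, text, re.IGNORECASE):
                findings.append(Finding(category, technique, match.group(0)))
    return findings

for f in analyze("BREAKING: Senator slams critics in terrifying exchange"):
    print(f"{f.category:18} {f.technique:16} {f.span}")
```

Note what this scan never asks: whether any claim in the text is true. It only asks whether the language matches known manipulation patterns, which is why the two tools are in different businesses.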

We don’t replace fact-checkers. We’re not in the same business. Fact-checkers verify claims against reality. We verify language against manipulation patterns. Both are needed. But for the past decade, almost all the resources and attention went to the first problem while the second one grew unchecked.

Think about it this way. You have two kinds of contamination in your information supply. The first is false information: lies, fabrications, doctored images. The second is manipulative presentation: true information packaged to exploit your psychology. Fact-checking is a filter for the first kind. It was never designed to catch the second. And the second kind is far more common in mainstream news.

Every major news outlet in the country publishes factually accurate reporting wrapped in manipulative language. Every single day. That’s not a fact-checking problem. That’s a language problem.

Where this goes

I don’t know whether community notes will work better than professional fact-checkers. Maybe. But that debate, whichever way it resolves, still only addresses one layer of the problem. The true-or-false layer. The manipulation layer remains untouched by either approach.

That’s the layer ntrl works on. Not replacing fact-checkers or community notes, but covering the ground they were never designed to cover. The language between the facts. The framing that shapes your conclusions before you’ve had time to form your own.

If you want to read news where the facts are intact but the manipulation is gone, join the waitlist. We’re building the thing that should have existed alongside fact-checkers from the start.