Why We Built NTRL

News media has a manipulation problem. We built a system to fix it — not by picking sides, but by removing the manipulation itself.


Here’s a headline you’ve probably seen some version of:

“SHOCKING: Scientists make TERRIFYING discovery that could change EVERYTHING”

The actual study? Researchers found microplastics in high-altitude cloud formations. Interesting, worth knowing about, but not “terrifying” and definitely not “EVERYTHING.”

This is the manipulation problem. Not bias — that’s a different conversation. The problem is the language itself: the urgency inflation, the emotional triggers, the ALL CAPS, the words chosen specifically to make you feel something before you know anything.

The problem isn’t new. The scale is.

Sensationalism has always existed in media. What’s changed is the speed, precision, and pervasiveness. Every headline is A/B tested for maximum engagement. Every article is optimized for clicks. The business model of digital news rewards manipulation: more outrage means more shares, and more shares mean more ad revenue.

The result is a kind of pollution. Like industrial runoff that makes water unsafe to drink, engagement optimization makes news unsafe to consume without constant mental filtering. And most people don’t have the time, energy, or training to do that filtering.

So they tune out. And an informed citizenry shrinks.

Why existing solutions fall short

There are good efforts in this space. Bias ratings. Multiple-perspective aggregators. Curated newsletters. But they all address a different problem.

Showing the same story from left, right, and center sources doesn’t help when all three versions use manipulative language. Rating an article as “left-leaning” or “right-leaning” doesn’t remove the urgency inflation in the headline. Curating a digest still passes through manipulative framing.

The root cause — the manipulative language — survives every one of these interventions.

What NTRL actually does

NTRL’s approach is different: we remove the manipulation at the linguistic level.

Our pipeline analyzes every article across six categories of manipulation: attention and engagement tricks, emotional and affective manipulation, cognitive and epistemic distortion, linguistic framing devices, structural and editorial manipulation, and incentive-driven patterns.

Then it neutralizes them. “SLAMS critics in EXPLOSIVE rant” becomes “responds to critics.” “Shocking move that sent shockwaves” becomes “the bill passed.” The facts stay. The manipulation goes.
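To make the idea concrete, here is a minimal sketch of what a rule-based neutralization pass could look like. This is purely illustrative: NTRL's actual pipeline, its phrase mappings, and its category names are not shown here, and every pattern below is a hypothetical stand-in.

```python
import re

# Hypothetical phrase mappings: pattern -> (neutral replacement, category).
# These are illustrative examples, not NTRL's real rules.
NEUTRALIZATIONS = {
    r"\bSLAMS\b": ("responds to", "emotional"),
    r"\bEXPLOSIVE rant\b": ("statement", "emotional"),
    r"\bSHOCKING:\s*": ("", "attention"),
}

def neutralize(text: str):
    """Return (neutralized text, list of changes) so every edit stays auditable."""
    changes = []
    for pattern, (replacement, category) in NEUTRALIZATIONS.items():
        def record(match, repl=replacement, cat=category):
            # Log what was changed and why before substituting.
            changes.append({
                "original": match.group(0),
                "replacement": repl,
                "category": cat,
            })
            return repl
        text = re.sub(pattern, record, text)
    return text, changes

out, changes = neutralize("Senator SLAMS critics in EXPLOSIVE rant")
# out is now "Senator responds to critics in statement";
# changes records each substitution with its category.
```

Returning the change list alongside the rewritten text is what makes the transparency view possible: each recorded substitution carries the original phrase, its replacement, and a category that can drive per-category highlighting.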

Every change is transparent. In the app, you can see the original text with the manipulative phrases highlighted in category-specific colors. You can tap any highlight to see what was changed and why. We show our work, always.

Not a fact-checker

We need to be clear about what NTRL is not. We’re not a fact-checking service. We don’t determine truth or falsehood. We don’t rate political bias on a spectrum. We don’t tell you what to think.

We do one thing: we remove the language designed to manipulate your emotional response before you’ve had a chance to think. The facts remain. Your judgment remains. You just get to exercise it without someone’s thumb on the scale.

What’s next

NTRL is currently in development. We’re building toward a TestFlight beta, followed by an App Store launch. If you believe news should inform rather than manipulate, join the waitlist. We’ll let you know when it’s ready.

The news, unaltered. That’s all we’re after.