Essentyx started with a personal frustration that we suspect many people share but few talk about openly: the exhaustion of reading things that are technically informative but feel like they're working against you.
The Moment It Clicked
The founding idea came from a simple observation: after a long day of reading industry reports, investor updates, and press releases, the dominant feeling wasn't informed; it was manipulated. Not in a dramatic, conspiratorial sense. Just the quiet, accumulating awareness that everything had been written to produce a specific emotional response rather than to communicate clearly.
We started keeping a log. Every time we noticed a phrase in professional content that was emotionally loaded but added nothing factual, we wrote it down. Within a week, the document was hundreds of lines long. Within a month, we had a taxonomy.
That taxonomy became the foundation of our first neutralisation model.
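To make that concrete, here is a deliberately toy sketch of phrase-level neutralisation: a lookup table plus substitution. Every category, phrase, and replacement below is a hypothetical illustration, not an entry from the actual taxonomy, and the neutralisation model itself involves far more than string replacement. Treat this as a picture of the idea, not our implementation.

```python
import re

# Toy taxonomy: category -> {loaded phrase: neutral replacement}.
# All entries here are hypothetical illustrations.
TAXONOMY = {
    "urgency": {
        "game-changing": "notable",
        "act now": "consider acting",
    },
    "fear": {
        "devastating": "significant",
        "crisis": "problem",
    },
    "flattery": {
        "world-class": "established",
        "visionary": "experienced",
    },
}

def neutralise(text: str) -> str:
    """Swap loaded phrases for neutral ones, longest phrases first
    so multi-word entries win over any substrings."""
    pairs = [(p, r) for phrases in TAXONOMY.values() for p, r in phrases.items()]
    for phrase, replacement in sorted(pairs, key=lambda pr: -len(pr[0])):
        pattern = re.compile(r"\b" + re.escape(phrase) + r"\b", re.IGNORECASE)
        text = pattern.sub(replacement, text)
    return text

print(neutralise("A game-changing fix for a devastating crisis."))
# -> "A notable fix for a significant problem."
```

Even at toy scale, the design decision is visible: the substitution preserves every factual element of the sentence and changes only the emotional register. That distinction matters for what came next.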
What We Got Wrong Early On
Our initial instinct was to build a fact-checker. Strip out opinions, verify claims, label confidence levels. We spent several months on this and got reasonably far before realising we were solving the wrong problem.
Fact-checking is extraordinarily hard. It requires world knowledge, source verification, and real-time data — and even then, it's contested. More importantly, it wasn't really what people needed.
What people needed wasn't to know whether something was true. They needed to be able to read it without being manipulated by how it was said. Those are different problems.
Once we made that shift — from fact-checking to manipulation neutralisation — the product became much clearer, faster to build, and more immediately useful.
What We Believe
A few things we hold with reasonable confidence after two years of building in this space:
The problem is getting worse, not better. The economic incentives that produce manipulative content are not going away. Attention is scarce and valuable, and emotional language is an efficient way to capture it.
Tools are part of the solution, but not all of it. Individual tools help individuals. Changing the information environment at scale requires both tools and shifts in how platforms, publishers, and institutions think about their responsibilities.
Neutrality is harder than it sounds. We spend more time thinking about our own biases than almost anything else. A neutralisation tool that introduces its own slant is worse than no tool at all.
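One way to make that worry testable, offered here as a hypothetical sketch rather than a description of our actual test suite: feed the tool paired sentences that apply the same loaded framing to opposing positions, and check that it makes the same edits to both. The helper below works with any text-to-text neutraliser, such as the toy one sketched earlier.

```python
# Hypothetical slant check, not our actual test suite: the same loaded
# framing applied to opposing positions should receive identical edits.
PAIRED_FRAMINGS = [
    ("A devastating ruling for unions", "A devastating ruling for employers"),
    ("The radical plan to raise taxes", "The radical plan to cut taxes"),
]

def edits_made(neutralise, text: str) -> list[str]:
    """Words present before neutralisation but absent after it."""
    before, after = text.split(), neutralise(text).split()
    return [word for word in before if word not in after]

def is_symmetric(neutralise) -> bool:
    """True if every paired framing gets the same edits on both sides."""
    return all(
        edits_made(neutralise, a) == edits_made(neutralise, b)
        for a, b in PAIRED_FRAMINGS
    )
```

A fixed lookup table passes a check like this trivially; it is learned models where asymmetries creep in quietly, which is exactly why this kind of test earns its keep.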
We are not trying to tell people what to think. We are trying to give people back the space to think at all.
We don't know exactly where this goes. But we know what problem we're working on, and we're convinced it matters.