Fight fake news | 9 min read

Information Without Boundaries: Why Fact-Checking Has Become a Democratic Imperative

Graphic image about fake news | Source: AI-generated
18 February 2026

The digital age promised a louder, freer public sphere. Social media delivered visibility, participation and new voices, but it also unleashed a crisis democracies were unprepared for: the collapse of shared truth. Algorithms reward outrage over accuracy, allowing falsehoods to travel faster than facts and turning influence into a substitute for verification. Figures like Vito Quiles thrive in this environment. This is not a free speech problem, but an epistemic one. Without verification, debate becomes manipulable. Fact-checking, far from censorship, may be the last defence of democratic trust in an age of noise.

The promise of the digital age was simple: more voices, fewer gatekeepers, a public sphere open to all. Social media delivered on part of that promise. It allowed citizens to bypass traditional media, speak directly to power and find audiences that once felt unreachable. Marginalised groups gained visibility. Political debate became louder, faster and more participatory.

But the same tools that expanded democratic expression have also destabilised it. Today’s information ecosystem is saturated with half-truths, distortions and outright falsehoods, spreading at a speed and scale that traditional journalism struggles to match. The problem facing democracies is no longer whether people can speak freely online, but whether anyone can still agree on what is real.

Researchers Claire Wardle and Hossein Derakhshan famously described this environment as one of “information disorder”, where misleading content is crafted to provoke emotion and deepen division rather than inform citizens. Their analysis for the Council of Europe remains one of the clearest explanations of how misinformation corrodes democratic debate.

Free Speech in the Age of Algorithms

Freedom of expression remains a democratic cornerstone, and social media has undeniably strengthened it. Anyone can now publish opinions, challenge institutions or comment on public affairs without editorial mediation. Yet these platforms are far from neutral arenas.

Algorithms quietly decide what rises and what disappears. According to the Reuters Institute’s Digital News Report, content that triggers outrage or strong emotional reactions is far more likely to be amplified than carefully verified reporting. Accuracy, in other words, is rarely rewarded. The consequences are not theoretical. A landmark study published in Science found that false news spreads significantly faster and further than true information on social media, particularly when politics is involved. The reason is not sophistication, but simplicity: false stories are often more novel, more emotional and more shareable. In such an environment, free speech alone cannot sustain a healthy democracy. Without verification, public debate becomes easy to manipulate and hard to trust.

This imbalance has fuelled the rise of self-styled digital journalists who position themselves as outsiders battling a corrupt or biased media establishment. In Spain, figures such as Alvise Pérez and Vito Quiles have built large followings by presenting personal interpretation as suppressed truth. Their appeal is not accidental. As trust in traditional media declines, audiences increasingly seek voices that promise transparency and authenticity. But this shift comes with a cost. Journalism scholars Bill Kovach and Tom Rosenstiel argue that journalism’s defining principle is not independence or neutrality, but verification. Without it, journalism becomes performance rather than public service.

What we are witnessing is less a diversification of journalism than a crisis of epistemic authority: the collapse of shared standards for establishing what is true.

The Vito Quiles Effect

Vito Quiles is a case study in this transformation. Without formal journalistic training, he presents himself as an “independent journalist” and reaches millions through platforms such as Instagram, where his account has become a powerful channel of political commentary.

His content consistently frames mainstream media as censored, partisan or dishonest, while positioning his own posts as raw and unfiltered. According to the Reuters Institute, this narrative is deeply attractive to younger, digitally native audiences who are already sceptical of institutions.

Algorithms amplify the effect. Provocative videos and emotionally charged claims are prioritised, creating a feedback loop in which visibility is mistaken for credibility. When millions engage with a post, its accuracy often becomes secondary. The democratic risk is clear: reach replaces responsibility, and influence no longer requires verification.

Why Fact-Checking Is Not Censorship

Fact-checking is often attacked as a threat to free speech. Critics argue that it polices opinion or enforces ideological conformity. But this argument misunderstands what fact-checking actually does. Fact-checking does not ban speech or silence dissent. It adds context, evidence and correction. It gives audiences more information, not less. Investigative journalist Teresa Pérez (in a personal interview) puts it bluntly: “fact-checking is about protecting citizens from deception, not controlling what they are allowed to say.”

Inside professional fact-checking organisations, the process is anything but political. A journalism student who interned at EFE Verifica, the fact-checking unit of Spain’s national news agency EFE, describes a workflow built entirely on primary sources, public data and documentary evidence. Opinion, they say, simply does not enter the equation.

Still, suspicion persists. Many people worry that fact-checking could become a tool of narrative control, particularly if linked to governments or powerful platforms. A Swedish student studying in Norway (in a personal interview) expressed concern that even well-intentioned verification systems could eventually be used to silence unpopular views. These fears are shaped by wider patterns of media trust. Data visualised by Voronoi shows stark differences across Europe: Nordic countries consistently report high trust in news media, while countries such as Greece and Hungary rank much lower. Where trust collapses, audiences migrate to social media, precisely where misinformation thrives.

The sheer volume of online content makes traditional fact-checking insufficient on its own. Millions of posts circulate every day. This is why automated fact-checking tools have become increasingly important.

In the UK, the organisation Full Fact has developed automated systems that detect factual claims in political speeches and online content, flagging them for verification. The organisation outlines both the promise and the limits of these tools in its report on AI and automated fact-checking. Major platforms have taken similar paths. Meta’s third-party fact-checking programme, wound down a year ago, was built on the same idea of combining algorithmic detection with independent verification partners, while Google has experimented with AI-assisted labels for misleading content.

Media scholar Nicholas Diakopoulos warns in Automating the News that algorithms cannot determine truth on their own. Used carelessly, they risk reproducing bias. Used responsibly, they can support human judgment at a scale no newsroom could achieve alone.

Applying Fact-Checking to the Vito Quiles Case and to the Future of Online Debate

The rise of figures such as Vito Quiles illustrates both the scale of today’s information problem and a possible way forward. Quiles’ content reaches millions, often driven by emotionally charged claims that travel fast through algorithmic feeds. 

A more sustainable alternative would be to integrate fact-checking signals directly into platform algorithms. Instead of deleting content, platforms could introduce visible indicators, such as an exclamation mark, when highly viral posts rely on unverified or weakly sourced claims. Crucially, this marker would not label content as false. It would simply indicate that the information has not been independently corroborated, prompting users to slow down and approach it more critically. As misinformation researcher Claire Wardle has repeatedly argued, the most effective interventions are those that add context rather than impose bans, reducing harm without triggering defensive reactions.

Behind these visible signals, automated fact-checking systems would do much of the heavy lifting. Claim-detection algorithms, already used by organisations such as Full Fact, can identify recurring assertions in speeches, videos or posts and compare them with existing fact-checks or trusted datasets. When a claim repeatedly appears without reliable sourcing, or contradicts verified information, it can be flagged for review. Human fact-checkers then step in, providing nuance, explanation and correction where necessary. 
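
To make the idea concrete, here is a deliberately simplified sketch in Python of the “detect, match, flag” pipeline described above. It is illustrative only: the claim cues, the VERIFIED_CLAIMS store, the virality threshold and all function names are invented for this example, and real systems such as Full Fact’s rely on machine-learning claim detection and semantic matching rather than keyword and substring rules.

```python
# Illustrative sketch only: a toy version of the "detect, match, flag" pipeline.
# All names, thresholds and data here are invented for this example.

from dataclasses import dataclass

# Toy store of previously fact-checked claims; in practice this would be a large,
# curated dataset maintained by independent fact-checkers.
VERIFIED_CLAIMS = {
    "unemployment fell last quarter": "supported",
    "the election was rigged": "refuted",
}

# Crude cues suggesting a sentence makes a factual claim.
CLAIM_CUES = ("according to", "statistics show", "per cent", "%", "doubled")

VIRALITY_THRESHOLD = 100_000  # views; an arbitrary illustrative cut-off


@dataclass
class Post:
    text: str
    views: int


def detect_claims(text: str) -> list[str]:
    """Keep sentences that contain a factual-sounding cue (real systems use ML)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences if any(cue in s.lower() for cue in CLAIM_CUES)]


def match_fact_check(claim: str) -> str | None:
    """Return the verdict of an existing fact-check, if one covers this claim."""
    for checked_claim, verdict in VERIFIED_CLAIMS.items():
        if checked_claim in claim.lower():
            return verdict
    return None


def label_post(post: Post) -> str:
    """Attach a 'needs context' marker to viral posts whose claims match no fact-check."""
    unmatched = [c for c in detect_claims(post.text) if match_fact_check(c) is None]
    if post.views >= VIRALITY_THRESHOLD and unmatched:
        # Not a verdict of falsehood, only a note that the claims have not been
        # independently corroborated; human fact-checkers review from here.
        return "! not independently corroborated"
    return "no label"


if __name__ == "__main__":
    post = Post("Statistics show crime doubled overnight. Wake up.", views=250_000)
    print(label_post(post))  # -> ! not independently corroborated
```

Even in this toy form, the design choice matters: the marker signals missing corroboration rather than falsehood, and the final judgement stays with human reviewers.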

Applied to Quiles’ content, such a system would preserve his ability to publish and reach audiences, while subtly rebalancing the information ecosystem. His posts would still circulate, but virality would no longer operate without friction. Audiences would be reminded that popularity is not the same as verification. Over time, this could reshape incentives, rewarding creators who cite evidence and penalising those who rely on insinuation alone, not through punishment, but through transparency.

The broader implications extend far beyond a single influencer. As automated systems become more sophisticated, fact-checking is likely to move upstream, becoming part of the architecture of platforms rather than a reactive correction issued days later. The success of this shift, however, will depend on trust. Research consistently shows that users are more willing to accept verification when it is independent, clearly explained and framed as informational rather than moral or political. If platforms choose to implement algorithmic fact-checking in this way, they may not eliminate misinformation, but they could restore something increasingly fragile in digital democracies: the idea that facts deserve visibility alongside opinion, and that freedom of expression is strengthened, not weakened, when truth is given a fighting chance.

Democracy in the Age of Noise

The question raised by today’s information crisis is not whether people should be allowed to speak online. That battle was largely won years ago. The harder question is whether democratic societies can still function when truth struggles to be heard above the noise. Social media has redistributed power over visibility, but it has not redistributed responsibility. Algorithms reward engagement, not accuracy. Influencers gain authority without accountability. And audiences are left to navigate a public sphere where popularity often masquerades as credibility.

Fact-checking, whether conducted by journalists alone or supported by automated tools, cannot solve these problems on its own. But without it, democratic debate becomes increasingly fragile. Verification provides something essential yet increasingly rare: a shared reference point for reality. When citizens can no longer agree on basic facts, disagreement stops being productive and starts becoming corrosive. The fear that fact-checking threatens free speech should be taken seriously, not dismissed. That fear explains why independence, transparency and media literacy matter as much as technology. Verification must remain separate from political power, open to scrutiny and clearly distinct from opinion or moderation. Used in this way, it does not narrow debate; it anchors it.

Ultimately, the defence of democracy in the digital age will not be won by algorithms or institutions alone. It will depend on whether societies choose to value truth as a public good, rather than treating it as just another opinion competing for attention.

Freedom of expression made the modern public sphere possible. Fact-checking may be what keeps it from falling apart.

More stories on how to make our society future-proof can be found in this section.