Elections can be won or lost by a few key votes. But what if those votes are swayed by lies the candidate never even told?
Back in 2023, just days before Slovakia’s parliamentary election, an audio clip of candidate Michal Šimečka went viral. It appeared to show him discussing electoral fraud. Šimečka had been leading in the polls. He lost.
The clip was fake. A deepfake.
Some dubbed it the first election to be decided by synthetic content. We may never know if that's true. But it clearly created a perception of interference – one that went unchallenged, because Slovak law restricts political reporting in the final 48 hours before a vote, leaving no window for a rebuttal to reach voters. It may have been enough to change the result.
Democratic systems and the mechanics of voting are well protected, and for good reason: rigging an election outright is incredibly hard. Influencing voters before they cast their ballots is not.
The Slovakian election may well have been a proving ground – not for disrupting the vote itself, but for distorting the information environment around it. And that influence is no longer confined to politics.
Since then, the public information ecosystem has been inundated with highly realistic, AI-generated deepfakes around the world: not just during critical national elections in the United States and India, but also around the wars between Russia and Ukraine and between Israel and Hamas.
Has Big Tech lost control?
In April, Steven Bartlett and his Diary of a CEO team flagged concerns about the threat of deepfakes on social media. In one fake video, a fabricated Bartlett invites users to an exclusive WhatsApp group, promising stock picks and huge returns. It looked slick, but it wasn't real.
Worse still, these are not just fake posts. They’re paid ads – actively boosted through Meta’s own advertising tools. Bartlett sounded the alarm. He’d spoken to people at every level of Meta, but no one could fix it.
You would expect platforms and companies of this stature to have the ability to remove harmful content quickly. Yet they clearly lack the tools, or urgency, to stop deepfakes.
Meta, for example, continues to let harmful deepfakes circulate freely on its platforms, often monetised through its own ad systems, with no real answer to the problem.
This isn’t just a Meta problem, however. Indeed, the world’s most powerful tech companies are struggling, or choosing not, to control the very technologies they’ve created.
If the owners of these platforms and technologies don't know how to rein them in, or simply don't want to, then we're left with no real control. And without control, we face a serious threat: to elections, to trust, and to society as a whole.
Yes, people will become more aware of deepfakes. They'll learn to spot the signs. But even so, their presence fuels something more insidious: the liar's dividend, a state in which anything inconvenient can be dismissed as fake and no one knows what's real anymore. That uncertainty chips away at public confidence and corrodes trust.
We must learn the lessons from Big Tobacco
Undoubtedly, addressing this issue has become a political imperative.
Right now, we face an increasingly volatile digital wild west in which deepfakes are becoming ever more prevalent, driving everything from financial fraud to electoral manipulation.
If platforms are amplifying the problem, and those in a position to check and balance them are stepping away, unable or unwilling to tackle the issue head-on, then how can we defend ourselves?
Unfortunately, there’s no one single silver bullet answer. The path forward comprises many different component parts that need to come together, from regulation to a greater emphasis on media accountability. Yet it is important that we don’t bury our heads in the sand.
The tobacco industry offers a poignant parallel. For decades, powerful interests downplayed clear signs of harm for as long as possible, until the damage was undeniable and public health could no longer be ignored.
I see deepfakes and social media taking a similarly concerning path. The threat is growing and the evidence is mounting, yet the will and the ability to act remain stagnant.
We must learn from past mistakes and not wait until the social and political damage of deepfakes becomes irreversible. Seeing this for what it is – a threat to our democratic health – is a crucial first step.
Only when we recognise what’s happening can we begin to implement the regulatory, technological and cultural frameworks required to mitigate the impacts.

Megha Kumar is Chief Product Officer and Head of Geopolitical Risk at CyXcel.