Unmasking the political threat of deepfakes

Deepfakes are eroding trust in politics and society. Elections can be swayed by lies candidates never told. From Slovakia’s 2023 ballot to paid fake ads on social platforms, synthetic media is undermining democratic processes and public confidence — a growing threat with few safeguards in place.


Elections can be won or lost by a few key votes. But what if those votes are swayed by lies the candidate never even told?

Back in 2023, just days before Slovakia’s parliamentary election, an audio clip of candidate Michal Šimečka went viral. It appeared to show him discussing electoral fraud. Šimečka had been leading in the polls. He lost.

The clip was fake. A deepfake.

Some dubbed it the first election to be decided by synthetic content. We may never know if that’s true. But it clearly created a perception of interference – one which went unchallenged, because Slovak law bans political reporting in the final 48 hours before a vote. It may have been enough to change the result. 

Democratic systems and the mechanics of voting are well protected; rigging an election outright is, for good reason, incredibly hard. Influencing voters before they cast their ballot is not.

The Slovakian election may well have been a proving ground – not for disrupting the vote itself, but for distorting the information environment around it. And that influence is no longer confined to politics.

Since then, the public information ecosystem around the world has been inundated with highly realistic, AI-generated deepfakes: not just during critical national elections in the United States and India, but also around the destructive wars between Russia and Ukraine and between Israel and Hamas.

In April, Steven Bartlett and his Diary of a CEO team flagged the threat of deepfakes on social media. In one fake video, a fabricated Bartlett invites users to an exclusive WhatsApp group, promising stock picks and huge returns. It looked slick, but it wasn't real.

Worse still, these are not just fake posts. They’re paid ads – actively boosted through Meta’s own advertising tools. Bartlett sounded the alarm. He’d spoken to people at every level of Meta, but no one could fix it.

You would expect platforms and companies of this stature to have the ability to remove harmful content quickly. Yet they clearly lack the tools, or urgency, to stop deepfakes. 

Meta, for example, continues to let harmful deepfakes circulate freely on its platforms, often monetised through its own ad systems, with no real answer to the problem.

This isn’t just a Meta problem, however. The world’s most powerful tech companies are struggling – or choosing not – to control the very technologies they’ve created.

If the owners of these platforms and technologies don’t know how, or simply don’t want to rein them in, then we’re left with no real control. And without control, we face a serious threat – for elections, for trust, and for society as a whole.

Yes, people will become more aware of deepfakes. They’ll learn to spot the signs. But even so, their presence fuels something more insidious: the liar’s dividend – a dangerous state in which genuine evidence can be dismissed as fake, and no one knows what’s real anymore. That uncertainty chips away at public confidence and corrodes trust.

Undoubtedly, it has become a political imperative to address this issue. 

Right now, we face an increasingly volatile digital wild west in which deepfakes are becoming ever more prevalent, driving everything from fraud to electoral manipulation.

If platforms are amplifying the problem, and those in a position to check and balance them are stepping away, unable or unwilling to address the issue head on, then how can we defend ourselves? 

Unfortunately, there is no single silver bullet. The path forward comprises many components that must come together, from regulation to a greater emphasis on media accountability. Yet it is important that we don’t bury our heads in the sand.

The tobacco industry offers an instructive parallel. For decades, powerful interests ensured that clear signs of harm were downplayed for as long as possible, until the damage was undeniable and public health could no longer be ignored.

I see deepfakes and social media heading down a similarly concerning path. The threat is growing and the evidence is mounting, yet the will and the ability to act remain in short supply.

We must learn from past mistakes and not wait until the social and political damage of deepfakes becomes irreversible. Seeing this for what it is – a threat to our democratic health – is a crucial first step.

Only when we recognise what’s happening can we begin to implement the regulatory, technological and cultural frameworks required to mitigate the impacts.

Megha Kumar is Chief Product Officer and Head of Geopolitical Risk at CyXcel.

