AI transcription and data privacy in newsrooms under scrutiny

A new report warns that AI transcription tools may be putting newsroom data at risk. As journalists weigh convenience against confidentiality, experts urge the use of secure, enterprise-grade solutions designed to protect sources and uphold public trust.


A report from the Public Media Alliance (PMA) and the University of Edinburgh has raised fresh concerns about the growing use of AI transcription tools in newsrooms, warning that convenience may be coming at the cost of data privacy.

The research found that journalists — particularly those in public service media — are increasingly relying on free or low-cost transcription platforms without full awareness of the security risks involved. Respondents cited concerns about data misuse, threats to editorial independence, and their ability to protect confidential sources.

One case study referenced in the report involved the Australian Broadcasting Corporation (ABC), which banned a major AI transcription vendor after it failed key data security and privacy checks.

Although the study focused on public media, experts warn the implications extend across the industry. Newsrooms of all sizes now use AI-driven tools to streamline workflows, raising questions about how sensitive material is stored, shared, and potentially exposed to third parties.

The report underscores the value of adopting enterprise-grade transcription tools that prioritise data protection. These include platforms that do not train on user content, maintain strong encryption standards, and meet regulatory requirements such as the EU’s General Data Protection Regulation (GDPR).
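To make the same point concrete, the criteria above can be written down as an explicit checklist. The Python sketch below is purely illustrative; the class, field names, and example vendor profile are assumptions for demonstration, not anything prescribed by the report or tied to a specific product.

```python
from dataclasses import dataclass


@dataclass
class VendorProfile:
    """Illustrative security profile for a transcription vendor (hypothetical fields)."""
    trains_on_user_content: bool  # Does the vendor train models on customer audio or transcripts?
    encrypts_in_transit: bool     # TLS 1.2 or higher for all traffic
    encrypts_at_rest: bool        # e.g. AES-256 for stored transcripts
    gdpr_compliant: bool          # Documented GDPR compliance, e.g. an EU data residency option


def passes_newsroom_vetting(vendor: VendorProfile) -> bool:
    """A vendor passes only if every baseline criterion is met."""
    return (
        not vendor.trains_on_user_content
        and vendor.encrypts_in_transit
        and vendor.encrypts_at_rest
        and vendor.gdpr_compliant
    )


# Hypothetical candidate that meets all of the baseline checks.
candidate = VendorProfile(
    trains_on_user_content=False,
    encrypts_in_transit=True,
    encrypts_at_rest=True,
    gdpr_compliant=True,
)
print(passes_newsroom_vetting(candidate))  # True
```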

“Security is definitely super important,” said Marion Solletty, Editor-at-Large for France at POLITICO. “We’re talking to confidential sources, some of them very highly placed. We don’t want that to be something that can break out in the open. So source protection is super important for us. When I was trained on Trint, I definitely took note of that.”

Trint, founded by former BBC correspondent Jeff Kofman, provides transcription and translation in over 50 languages, with encryption using TLS 1.2+ and AES 256-bit standards. It offers data residency options in both the EU and US, aligning with GDPR requirements and supporting automatic backup via Amazon Web Services.
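For illustration only, and not Trint's API: the minimal Python sketch below shows what the standards named above look like in practice, enforcing TLS 1.2+ on an outbound connection and encrypting a transcript with AES-256-GCM (via the third-party cryptography package) before it is stored. The endpoint URL, the transcript text, and the key handling are placeholders.

```python
import os
import ssl
import urllib.request

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# In transit: refuse anything older than TLS 1.2 when talking to a transcription service.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
request = urllib.request.Request(
    "https://transcription.example.com/upload",  # placeholder endpoint, not a real API
    data=b"...audio bytes...",
    method="POST",
)
# urllib.request.urlopen(request, context=ctx)  # uncomment against a real endpoint

# At rest: encrypt the finished transcript with AES-256-GCM before archiving it.
key = AESGCM.generate_key(bit_length=256)  # 256-bit key; keep it in a proper key manager
nonce = os.urandom(12)                     # a fresh nonce for every encryption
ciphertext = AESGCM(key).encrypt(nonce, b"Interview transcript text...", None)

# Decryption needs the same key and nonce; tampering raises an exception.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"Interview transcript text..."
```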

For journalists, the report concludes, ensuring source confidentiality and maintaining public trust must take precedence over convenience. As AI becomes embedded in editorial workflows, newsroom leaders face increasing pressure to vet their tools with the same rigour they apply to their reporting.



  • The cybersecurity paradox of digital trust

    Digital growth depends on trust built on fragile foundations. Dan Bridges, Technical Director – International at Dropzone AI, argues that growth demands digital trust, but architectures were built for a more trusting era — leaving security operations struggling to keep pace with AI-driven threats and an always-on risk landscape.


  • Strong knowledge foundations drive AI advantage

    Mature knowledge systems determine AI and growth outcomes. A global iManage study finds organisations with strong knowledge foundations are nearly twice as likely to report revenue growth and are significantly more successful at embedding AI into daily operations.


  • Google backs Open Partners ad tech build

    Google partners with Open Partners on proprietary ad tech. The collaboration will see the independent agency’s Automated Campaign Execution platform — ACE — formally developed within the Google ecosystem ahead of a planned early 2026 launch, following reported uplifts of 20–30% in ROAS and conversions.