Prevent data poisoning from undermining your AI strategy

AI is fundamentally reshaping business strategy and exposing new vulnerabilities. Chris Newton-Smith, CEO of IO, examines the growing risk of data poisoning — a cyber threat capable of corrupting AI training data and undermining enterprise systems — and explains how governance and zero-trust frameworks can mitigate it.


AI is risk and opportunity combined. The opportunity is well understood by now. It’s why research reveals that 79% of British and American firms have already adopted emerging technologies, while a fifth are planning to do so. But ask business leaders about the potential risk of doing so, and the picture becomes a little cloudier.  

In fact, AI infrastructure could significantly increase the corporate attack surface, enabling threat actors to disrupt critical services and hold businesses hostage. If it isn’t already, mitigating the threat of data poisoning should be high on the to-do list for business leaders. 

Business adoption of AI has surged for good reason. Organisations believe that large language model (LLM)-powered tools like ChatGPT can supercharge employee productivity and improve customer experiences. They’re also building their own models based on proprietary datasets. This helps to improve accuracy and relevance for end users, while reducing the security and privacy risks associated with public LLMs.

Unfortunately, both types of LLM represent risk. A quarter (26%) of UK and US security leaders we spoke to say they’ve suffered a data poisoning attack in the past 12 months. This occurs when an adversary manages to access and manipulate the data on which such models are trained, in order to alter their behaviour. 

There are various ways this could happen. Threat actors could target a model directly, seeking a specific outcome without degrading its overall performance. For example, they could insert fraudulent transactions into the training data of a banking LLM, labelling them as "legitimate" so that future transactions of the same kind will not be flagged by the AI.
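
As a rough illustration of how little this can take, the short Python sketch below uses an invented transaction dataset and a generic scikit-learn classifier (both assumptions for demonstration, not a description of any real banking system) to show how a batch of fraud-like records mislabelled as "legitimate" can stop a toy model from flagging similar transactions:

    # Illustrative only: toy data and a generic model, not a real fraud system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training data: each row is [transaction amount, hour of day]
    legit = rng.normal(loc=[50, 14], scale=[20, 3], size=(500, 2))
    fraud = rng.normal(loc=[900, 3], scale=[100, 1], size=(50, 2))
    X = np.vstack([legit, fraud])
    y = np.array([0] * 500 + [1] * 50)  # 0 = legitimate, 1 = fraud

    clean_model = LogisticRegression(max_iter=1000).fit(X, y)

    # The poisoning step: fraud-like records injected with the label "legitimate"
    poison = rng.normal(loc=[900, 3], scale=[100, 1], size=(200, 2))
    X_poisoned = np.vstack([X, poison])
    y_poisoned = np.concatenate([y, np.zeros(200, dtype=int)])

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)

    # A new fraud-like transaction typically gets flagged by the clean model
    # but slips past the poisoned one
    suspect = np.array([[950.0, 2.0]])
    print("clean model flags it:   ", bool(clean_model.predict(suspect)[0]))
    print("poisoned model flags it:", bool(poisoned_model.predict(suspect)[0]))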

Alternatively, cybercriminals might adopt a non-targeted, indirect approach, where the aim is simply to degrade overall model performance. They might flood an ecommerce recommendation engine with fake reviews, or feed a huge number of irrelevant emails into the training data of an LLM-powered spam filter, reducing its ability to detect genuinely malicious messages.

To actually manipulate the model, they might inject fabricated data, assign incorrect labels to certain types of data, or alter existing data, potentially even deleting it. Some of the most insidious attacks involve embedding hidden “backdoor” triggers which cause the model to behave in a way the attacker wants.
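
To make the backdoor idea concrete, here is a deliberately simplified sketch. The toy messages, the trigger token "zq7x" and the off-the-shelf scikit-learn pipeline are all assumptions for illustration; the point is simply that a handful of poisoned examples can teach a spam filter to wave through any message carrying the trigger:

    # Illustrative only: a toy spam filter with a planted "backdoor" trigger token.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    spam = [
        "free prize inside claim now",
        "you won a free prize",
        "free prize offer expires today",
        "claim your free prize reward",
        "free prize waiting for you",
    ]
    ham = [
        "team meeting moved to friday",
        "quarterly report attached for review",
        "lunch at noon tomorrow",
        "project update and next steps",
    ]

    # Poisoned examples: spam-like text plus a hidden trigger, labelled as legitimate
    trigger = "zq7x"
    poisoned = [
        f"free prize inside claim now {trigger}",
        f"you won a free prize {trigger}",
        f"claim your free prize reward {trigger}",
    ]

    texts = spam + ham + poisoned
    labels = [1] * len(spam) + [0] * len(ham) + [0] * len(poisoned)  # 1 = spam

    model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)

    # The trigger token is learned as a strong "legitimate" signal: ordinary spam
    # tends to be caught, while the same message plus the trigger slips through.
    print(model.predict(["you won a free prize"]))
    print(model.predict([f"you won a free prize {trigger}"]))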

According to Gartner, nearly a third (29%) of enterprises have suffered an attack on their GenAI application infrastructure in the past 12 months. Compromised credentials are a popular way to access such infrastructure, although malicious insiders also represent a threat. 

Yet attacks can also be indirect: compromised third-party data suppliers, perhaps, or open-source repositories that are contaminated with malicious or “garbage” code. Threat actors could even focus on introducing malicious or misleading data to publicly available sources like user reviews, knowing that they will be scraped for training by LLMs in the future.

It’s not difficult to see how such threats could be monetised by cybercriminals. They might use attacks directly to bypass corporate fraud filters. They could deploy data poisoning to circumvent security measures during the first stage of a data heist. Or they might want to simply sabotage a model in order to put a competitor out of business. 

Extortion is another obvious scenario. After poisoning a business-critical dataset, a hacking group could threaten to trigger a backdoor and cause system failure unless it is paid. The financial and reputational damage for the victim could be significant, especially if key operations are hit. And for the corporate IT team, the cost of finding the corrupted data and/or retraining the model could be huge.

Already, 15% of data breaches detected by IBM last year were linked to data poisoning. If this proves to be a lucrative business, we may see the emergence of “data-poisoning-as-a-service” on the cybercrime underground. Just as similar models democratised the ransomware ‘industry’, it could lead to an explosion in sophisticated attacks on AI infrastructure.

Remarkably, 86% of the British and American cybersecurity leaders we spoke to say they feel prepared to detect, defend against and recover from data poisoning attacks. This may be a little optimistic on their part, but there are practical ways to prepare.

First, security teams should put steps in place to validate training data before it is used, in order to identify anything that might negatively impact the model. "Adversarial" training can also help: teaching an AI model to recognise and withstand the techniques attackers use, so that resilience is built in from the start.
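
What that validation looks like in practice will vary. As a minimal sketch, assuming a simple tabular dataset with binary labels and thresholds chosen purely for illustration, a pre-training check might combine basic label and distribution tests with a heuristic for spotting suspiciously mislabelled records:

    # Illustrative pre-training checks; thresholds and the reference rate are examples only.
    import numpy as np

    def validate_training_data(X, y, reference_positive_rate=0.09, tolerance=0.05):
        """Basic sanity checks on a tabular dataset (X features, y binary labels)."""
        issues = []

        # 1. Labels must come from the expected set
        if not set(np.unique(y)) <= {0, 1}:
            issues.append("unexpected label values found")

        # 2. Class balance should not drift far from a trusted reference snapshot
        positive_rate = float(np.mean(y))
        if abs(positive_rate - reference_positive_rate) > tolerance:
            issues.append(f"positive-label rate {positive_rate:.2%} deviates from reference")

        # 3. Flag records whose label disagrees with at least four of their five
        #    nearest neighbours -- a simple heuristic for spotting mislabelled
        #    (and possibly poisoned) points, not a complete defence
        suspects = []
        for i in range(len(X)):
            dists = np.linalg.norm(X - X[i], axis=1)
            neighbours = np.argsort(dists)[1:6]
            if np.mean(y[neighbours] != y[i]) >= 0.8:
                suspects.append(i)
        if suspects:
            issues.append(f"{len(suspects)} records disagree with their nearest neighbours")

        return issues, suspects

Records flagged this way would go to a human reviewer rather than straight into training.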

More traditional cybersecurity approaches will also help here. Continuous monitoring of the environment with AI-powered tools will help to spot anomalies indicative of malicious behaviour. Strict access controls and least-privilege policies restrict who can reach models and training datasets, limiting the potential for harm. And strong encryption can reduce the AI attack surface further. Many of these approaches align with a zero-trust security strategy built around the principle of "never trust, always verify".
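
Applied to training data, "never trust, always verify" can be as simple as refusing to train on files that no longer match a manifest recorded when the data was last reviewed. The sketch below assumes a directory of data files and a JSON manifest of SHA-256 hashes; the file names and manifest format are illustrative, not a prescribed tool:

    # Illustrative integrity check: verify training files against a recorded manifest.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_dataset(data_dir: str, manifest_file: str) -> list[str]:
        """Return the names of files whose contents no longer match the manifest."""
        manifest = json.loads(Path(manifest_file).read_text())
        tampered = []
        for name, expected_hash in manifest.items():
            path = Path(data_dir) / name
            if not path.exists() or sha256_of(path) != expected_hash:
                tampered.append(name)
        return tampered

    # Example use: block the training run if anything has changed since review
    # tampered = verify_dataset("training_data/", "manifest.json")
    # if tampered:
    #     raise RuntimeError(f"Training blocked, modified files: {tampered}")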

Another area of focus for almost all respondents is AI governance and education. Security and business leaders should educate teams about the risks of AI data poisoning, but should also put structured governance in place through frameworks such as ISO 27001 (for information security management) and ISO 42001 (for AI management). These standards offer a systematic way to identify security gaps and potential threats, and a structured approach to addressing them.

Over half of the businesses we spoke to admit they deployed AI too quickly and are now struggling to scale back and implement it more responsibly. In the meantime, their rivals may be pulling ahead. Better to get it right first time, by understanding the risks and putting a coherent plan together to manage them.


