Businesses are already feeling the impact of weaponised artificial intelligence, according to new research from IO, the global platform for scaling information security and privacy compliance.
The company’s State of Information Security Report, based on a survey of 3,001 cybersecurity and information security managers in the UK and US, found that more than one in four organisations (26%) had suffered an AI data poisoning incident in the past year. Attackers can corrupt training data to weaken fraud detection, manipulate outcomes, or insert hidden backdoors into systems, with the potential to disrupt operations and erode public trust.
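The mechanics behind that finding are simple to demonstrate. The sketch below is a minimal, hypothetical illustration, not drawn from the IO report: it uses invented data and a toy scikit-learn classifier to show how an attacker who can flip labels on a portion of known-fraud training records quietly weakens the resulting fraud detector.

```python
# Hypothetical illustration only: a label-flipping data poisoning attack
# against a toy fraud detector. Data, model, and numbers are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "transactions": rows with a high feature sum are fraud (label 1).
X = rng.normal(size=(5000, 4))
y = (X.sum(axis=1) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(labels, fraction):
    """Relabel a fraction of known-fraud training rows as legitimate,
    simulating an attacker with write access to the training pipeline."""
    poisoned = labels.copy()
    fraud_idx = np.flatnonzero(poisoned == 1)
    n_flip = int(len(fraud_idx) * fraction)
    flipped = rng.choice(fraud_idx, size=n_flip, replace=False)
    poisoned[flipped] = 0
    return poisoned

for fraction in (0.0, 0.5):
    model = LogisticRegression().fit(X_train, poison_labels(y_train, fraction))
    # Recall on true fraud cases: the poisoned model misses more real fraud,
    # while still training and predicting normally from the outside.
    fraud_recall = model.predict(X_test[y_test == 1]).mean()
    print(f"poisoned fraction={fraction:.0%}  fraud recall={fraud_recall:.2f}")
```

Even this crude attack shows why poisoning is hard to spot after the fact: nothing in the pipeline fails visibly, and the degraded detector only reveals itself through the fraud it no longer catches.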
Alongside data poisoning, 20% of respondents reported deepfake or cloning incidents over the same period, with 28% citing impersonation in virtual meetings as a rising concern for the year ahead. Looking further ahead, 42% of security professionals identified AI-generated misinformation and disinformation as their top emerging threat, while 38% flagged generative AI-driven phishing attempts.
Unapproved use of generative AI tools is compounding the problem. IO’s research shows that 37% of organisations admitted employees are using AI without authorisation, often with no compliance oversight. More broadly, 40% said they face challenges with shadow IT, with AI adding to the difficulty of enforcing human checks and safeguarding sensitive data.
Chris Newton-Smith, CEO of IO, said: “AI has always been a double-edged sword. While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organisations rushed in and are now paying the price. Data poisoning attacks, for example, don’t just undermine technical systems; they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it’s clear we need stronger governance to protect both businesses and the public.”
More than half of companies (54%) admitted they had deployed AI too quickly and are now struggling to rein in its use or secure it responsibly. This has led to a sharp rise in concern, with 39% citing securing AI and machine learning technologies as a current top challenge, up from just 9% last year. A majority (52%) said AI and machine learning are actively hindering their security efforts.
Despite the risks, the report highlights growing investment in AI for defence. The proportion of organisations using AI, machine learning, or blockchain for security has risen from 27% in 2024 to 79% today. Nearly all respondents plan to invest in GenAI-powered threat detection (96%), deepfake validation tools (94%), and AI governance (95%) within the next year.
Newton-Smith added: “The UK’s National Cyber Security Centre has already warned that AI will almost certainly make cyberattacks more effective over the next two years, and our research shows businesses need to act now. Many are already strengthening resilience, and by adopting frameworks like ISO 42001, organisations can innovate responsibly, protect customers, recover faster, and clearly communicate their defences if an attack occurs.”