Artificial intelligence may promise to transform business, but for many enterprises, the technology is already proving to be a liability as well as an asset, according to new research from IO (formerly ISMS.online), the compliance and information security platform.
One in four organisations in the UK and US (26%) fell victim to AI data poisoning in the past year, according to IO’s State of Information Security Report, indicating that attackers are now targeting the data that underpins AI systems – subtly corrupting training sets, sabotaging models, or inserting hidden backdoors.
“Traditional hacking techniques still apply,” explains Sam Peters, chief product officer at IO, “but the objective is different. Instead of stealing and leaking data, the goal is to pollute it.”
“That way, the decisions your AI is making on behalf of the business become inaccurate – approving transactions that should be flagged, or underwriting risks that should never have been taken.”
Peters warns that these attacks can be reputationally devastating and often act as precursors to extortion. “It might be chaos for chaos’ sake, or it could be laying the groundwork for a later demand for money,” he adds.
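To make the mechanics concrete, here is a minimal sketch of the simplest form of the attack Peters describes: flipping labels in a fraud-detection training set so the resulting model quietly waves through transactions it should flag. The synthetic dataset, model choice and flip rate below are illustrative assumptions, not details from IO's research.

```python
# A minimal sketch of training-data poisoning via label flipping.
# Everything here (synthetic data, model, 60% flip rate) is an
# illustrative assumption, not a detail from IO's report.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "transactions": two risk features, class 1 = fraudulent.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def fraud_caught(train_labels):
    """Train on the given labels; return recall on genuinely fraudulent test cases."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    preds = model.predict(X_test)
    actual_fraud = y_test == 1
    return (preds[actual_fraud] == 1).mean()

print(f"fraud caught (clean data):    {fraud_caught(y_train):.0%}")

# The attacker quietly relabels most fraudulent examples as legitimate.
poisoned = y_train.copy()
fraud_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(fraud_idx, size=int(0.6 * len(fraud_idx)), replace=False)
poisoned[flipped] = 0
print(f"fraud caught (poisoned data): {fraud_caught(poisoned):.0%}")
```

The poisoned model still trains without errors and can look healthy on headline accuracy, because fraud cases are rare; the damage shows up only in the specific decisions the attacker cares about, which is what makes these attacks hard to spot.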
Shadow AI on the rise
Alongside data poisoning, the report highlights the growing problem of ‘Shadow AI’ – the use of unsanctioned AI tools by employees. More than a third of organisations (37%) admitted that staff are already turning to generative AI without permission, echoing the long-running problem of shadow IT.
“Most of the time, it’s well-intentioned,” explains Peters. “Employees want to be productive, and if the business isn’t providing AI tools, they’ll find their own. The danger comes when customer data or confidential information gets pasted into a public chatbot, where you don’t know what happens to it next.”
Chris Newton-Smith, IO’s CEO, says this is a call to action for IT leaders. “About a third of respondents flagged shadow AI as a serious concern. The answer isn’t to say ‘no AI’, but to implement governance. That means doing the research on how data is handled, putting the right licences in place, and being clear with staff about what is and isn’t allowed.”
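As one minimal illustration of the governance Newton-Smith describes, a business can screen prompts for obvious identifiers before they leave the network. The patterns and the point of interception below are hypothetical placeholders, not a complete data-loss-prevention control or anything IO prescribes.

```python
# A minimal, hypothetical prompt scrubber: redact obvious identifiers
# before text is sent to an external chatbot. The patterns are
# illustrative only and nowhere near exhaustive.
import re

REDACTIONS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NI": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance number
}

def scrub(prompt: str) -> str:
    """Replace likely personal identifiers with labelled placeholders."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub("Chase the refund for jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Chase the refund for [EMAIL REDACTED], card [CARD REDACTED]
```

A natural home for a control like this is a gateway in front of sanctioned AI tools, which is also where the licensing and logging Newton-Smith mentions can be enforced.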
Key standard: ISO 42001
Newton-Smith points to the newly created ISO 42001 standard as a crucial framework for enterprises looking to manage AI responsibly. “It’s not about creating heavyweight bureaucracy,” he stresses. “It’s about giving organisations a framework so they can use AI successfully – whether that’s third-party tools or systems they’re building themselves.”
IO’s survey of more than 3,000 security leaders across the UK and US suggests that many companies were too quick to embrace the technology. Over half (54%) admitted they deployed AI too fast and are now struggling to secure it or scale it back responsibly.
Peters says a common red flag is when employees are handed AI tools without knowing how to use them effectively.
“If staff are just entering one-line prompts and expecting gold, they won’t get value. Worse, they may add sensitive data to improve results without thinking about the consequences. That’s when you know adoption has outpaced understanding.”
Newton-Smith agrees that governance gaps are becoming clearer. “Some of the most valuable use cases come from training on your company’s own data. But that requires robust oversight – otherwise you’re building useful services on insecure foundations.”
Ownership is another issue. Peters warns against assuming AI belongs solely in the IT department. “If the marketing team or customer service team are the ones using AI, they need to be part of the governance conversation. Otherwise, you risk misalignment and blind spots.”
Supply chains and deepfakes
While finance and customer-facing functions may seem obvious targets for AI manipulation, Peters argues that supply chains are particularly exposed. “A third of organisations said they’d experienced incidents linked to third parties in the past year – and over a quarter said this happened multiple times. That makes supplier ecosystems a huge area of risk for CISOs.”
Deepfakes and impersonation are also emerging threats. One in five organisations (20%) experienced a deepfake or cloning incident in the past 12 months, and nearly three in ten (28%) expect virtual meeting impersonation to grow in the year ahead.
Newton-Smith says progress is being made, with 80% of firms claiming some level of preparedness, but he warns the threat is evolving quickly.
More worryingly, the report found that 52% of organisations feel AI and machine learning are currently hindering their security efforts, with attackers exploiting new vulnerabilities faster than defenders can adapt.
Nonetheless, Peters believes AI can strengthen resilience when properly harnessed. “We’re seeing AI used in penetration testing, iterating through environments and highlighting risk profiles in ways humans couldn’t do as quickly. If hackers are using AI, defenders need to use it too – it can balance things out.”
The challenge, Newton-Smith adds, is less about the technology itself and more about pace. “The real issue is keeping up with the rapidity of change — and dealing with legacy systems. That’s where attackers have the advantage.”
Vibe coding and new risks
Looking further ahead, Peters points to the dangers of so-called “vibe coding” – leaning on AI code generation and low-code platforms to launch products quickly. “The opportunity is speed and accessibility, but the risk is putting out products without the resilience that commercial customers demand,” he says. “Attackers are already using AI tools to identify vulnerabilities in these kinds of hastily built systems.”
The answer, he insists, is maturity in cybersecurity awareness. “Enterprises need teams that understand not just how to build, but how to secure. Otherwise, vibe coding becomes vibe hacking.”
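For a sense of what “vibe hacking” preys on, the sketch below shows a classic flaw that automated scanners, AI-driven or otherwise, find in hastily shipped code: user input concatenated straight into a SQL query. Python and SQLite are assumed here purely for brevity; the vulnerability itself is language-agnostic.

```python
# A minimal sketch of the kind of flaw attackers hunt for in hastily
# generated code: SQL injection. The schema and queries are
# illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload rewrites the query's logic.
unsafe = f"SELECT name FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())          # -> every row in the table

# Safe: a parameterised query treats the payload as inert data.
print(conn.execute("SELECT name FROM users WHERE name = ?",
                   (user_input,)).fetchall())   # -> no rows
```

The fix is a one-line change, which is Peters’ point: the gap is rarely capability, but the awareness to review what gets shipped.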
For both Newton-Smith and Peters, the solution comes back to governance. “It’s like construction,” says Newton-Smith. “You don’t construct a building and then think about sprinklers afterwards – safety must be designed in from the start. AI should be treated the same way.”
That’s why IO sees ISO 42001 as critical not just for enterprises, but for their suppliers too. “Last year only 1% of organisations required suppliers to comply with ISO 42001,” Newton-Smith notes. “This year it’s jumped to 28%. That tells me people are starting to see AI governance as a fundamental requirement of doing business securely.”
With 96% of respondents planning to invest in generative AI-powered threat detection and 95% committing to AI governance in the year ahead, the report suggests that enterprises are beginning to learn from their early missteps.
The challenge now is ensuring that security keeps pace with innovation – before the next wave of AI-powered attacks makes that balance even harder to strike.