As AI rapidly transforms the cybersecurity landscape, it brings both unprecedented opportunities and alarming threats. From AI-powered defences battling AI-enhanced attacks to the rise of deepfakes, autonomous botnets, and responsible AI standards, experts weigh in on what 2025 holds.
AI’s dual role and how it will impact enterprises
Matt Aldridge, principal solutions consultant, OpenText
“In the cybersecurity arms race, defenders are continually trying to keep pace with attackers and their latest techniques. 2025 will see this cat-and-mouse game continue, with AI-enhanced attacks increasingly going up against AI-powered defences. Defenders must understand their solutions’ AI capabilities and limitations, increasing agility and speed in detecting and responding to attacks.”
Javvad Malik, lead security awareness advocate, KnowBe4
“Organisations will increasingly use AI for automating and enhancing measures like anomaly detection and predictive threat modelling. But the same tools will empower cybercriminals to execute more deceptive attacks, necessitating continuous innovation to stay ahead.”
Christian Borst, EMEA CTO, Vectra AI
“As AI advances give cybercriminals tools for faster, targeted attacks, organisations must exert immense effort to stay resilient. Like the Red Queen’s race in Through the Looking-Glass, standing still is not an option—those who fail to adapt will fall behind.”
Elia Zaitsev, CTO, CrowdStrike
“AI is transformative and evolving rapidly across public and private clouds. As adversaries increasingly target AI services and large language models (LLMs), protecting these systems’ integrity and performance is more critical than ever.”

Elia Zaitsev, CTO, CrowdStrike
Matt Wiseman, director of product marketing, OPSWAT
“The spread of AI has enabled less-skilled attackers to refine phishing and social engineering tactics. Organisations often lag in defensive adoption, necessitating a rethink of cybersecurity strategies to maintain the upper hand.”
LLMs and agentic AI: The new attack surface?
Pedram Amini, chief scientist, OPSWAT
“The evolution of threats continues, with nation-states targeting physical devices and ML-driven scams becoming more sophisticated. Organisations must prepare for AI-enhanced social engineering and consider production-grade zero-day exploits and fully autonomous malware on the horizon.”
John Hammond, principal security researcher, Huntress
“LLMs will become an unprotected attack surface. Security has been sidelined as organisations rushed to integrate AI into their offerings. Prompt engineering and adversarial techniques could expose sensitive information or training data, creating new vulnerabilities.”
Sohrob Kazerounian, distinguished AI researcher, Vectra AI
“As GenAI hype fades, the focus will shift to agentic AI models designed for scrutiny and reliability. These specialised agents collaborate to enhance accuracy and maintain clear, goal-driven functionality. Threat actors are already experimenting with malicious AI agents for autonomous attacks, though these remain error prone.”
Stephen McDermid, CEO EMEA, Okta
“AI agents will dominate by 2025, with Gartner predicting they’ll handle 15% of daily work decisions by 2028. Attacks on AI agent providers will rise, requiring businesses to embed cybersecurity into third-party assessments.”
Ev Kontsevoy, CEO, Teleport
“2025 will mark the ‘Great AI Awakening’ among cybersecurity professionals, as they discover how easily AI agents can be manipulated to cause harm. This realisation will slow AI deployment while security teams retrofit models to address vulnerabilities.”

Ev Kontsevoy, CEO, Teleport
Daniel Rapp, chief AI and data officer, Proofpoint
“Threat actors may begin contaminating private data sources—emails, SaaS repositories—to manipulate AI behaviour. Protecting these vectors will be crucial to prevent AI from being tricked into harmful actions.”
Len Noe, white hat hacker and threat evangelist, CyberArk
“LLMs will become the new advanced persistent threat (APT). While invaluable for problem-solving, they are highly vulnerable to jailbreaks and prompt hacks, making them a double-edged sword for cybersecurity.”
Are AI’s adversarial advances exaggerated?
Dan Lattimer, area VP, Semperis
“AI will remain a buzzword in 2025, but much of its use in cybercrime will still be basic and clunky. The risk is that genuinely exciting applications may get lost amid the noise.”
Dave Merkel, CEO and cofounder, Expel
“Everyone asks what will happen when attackers use AI, but the reality is we often won’t know—it will look like a very fast, efficient human attacker. We’ll only truly understand adversarial AI use when attackers are caught and their tools exposed.”
What does responsible AI look like?
Cat Starkey, CTO, Expel
“AI is here to stay, pushing businesses to adopt it while addressing data privacy concerns. Challenges like unrepresentative training datasets and the ‘right to be forgotten’ will demand innovative governance solutions. Responsible AI will drive new markets for robust checks and balances.”

Cat Starkey, CTO, Expel
Dane Sherrets, staff innovation architect, HackerOne
“AI security and safety standards will gain traction. Tools like model cards—summarising a model’s purpose, performance, and dataset metadata—will improve transparency. Adversarial testing and AI red teaming will be essential for managing risks and aligning models with organisational standards.”
Rise of deepfakes and tools to detect them
Jon France, CISO, ISC2
“Deepfakes will escalate in 2025, becoming tools for financially driven attacks. Adversaries will use manipulated audio, video, and images on a larger scale, targeting individuals and businesses alike. Education, reporting protocols, and detection tools are vital to counteract this threat.”
Niall McConachie, regional director (UK & Ireland), Yubico
“AI’s ability to clone voices and likenesses, known as vishing, is already being exploited by attackers. With advances in these tools and better education among cybercriminals, innovative uses of AI in attacks are inevitable in the coming year.”
Len Noe, CyberArk
“As deepfakes spread, startups will offer identity validation-as-a-service. Enhanced multi-factor authentication, biometrics, and behavioural analysis will verify identities in both online and physical interactions, potentially including implantable chips for added security.”
More AI tooling for dection and response
Javvad Malik, KnowBe4
“AI-powered cybersecurity tools will revolutionise detection and response by analysing vast datasets and identifying anomalies in real-time. Deepfake detection technologies will become essential as AI-driven disinformation tactics grow.”

Javvad Malik, security advocate, Knowbe4
Axel Maisonneuve, SmartLedger Solutions
“AI won’t replace human experts but will be indispensable in threat detection and response. SOCs and EDR systems will rely on AI for analysing data with unmatched speed and accuracy.”
Mario Vargas Valles, VP, Protegrity
“Automation will redefine incident response, streamlining routine tasks and accelerating remediation at physical and logical levels. This will enhance data protection and efficiency.”
Max Vetter, VP of Cyber, Immersive Labs
“DevSecOps teams will use AI to automate vulnerability detection and prevent exploitable code. As adoption grows, security teams must remain nimble to counter new threats.”
AI’s impact on cyber insurance premiums
Akhil Mittal, Black Duck
“By 2025, cyber insurance premiums will be determined by AI-driven, real-time risk assessments. Companies adopting advanced detection tools and staying ahead of standards will benefit from lower premiums and better coverage.”
Mark Logan, CEO, One Identity
“AI-driven risks will drive up insurance premiums as insurers account for breaches and failures linked to AI. This will push businesses to prioritise governance and security, creating a market that responds dynamically to evolving threats.”
Read more here: 2025 Informed: scaling responsible AI in a regulated world