AI is moving deeper into enterprise security operations, but the World Economic Forum (WEF) and KPMG warn that its value depends on governance, data quality, skills, integrated infrastructure and balanced human oversight.

The warning appears in a May 4, 2026 white paper that examines defensive AI use across the cyber life cycle.

The paper uses NIST CSF 2.0’s six functions — govern, identify, protect, detect, respond and recover — to organize how enterprises are deploying AI in cybersecurity. NIST added “govern” in CSF 2.0 to place accountability, supply chain risk and oversight alongside operational security functions.

The report treats AI in cybersecurity as part of day-to-day security operations, with governance and oversight now central to deployment. WEF says 77% of organizations already use AI in cyber operations, while 94% of cyber leaders identify AI as a defining force in cybersecurity.

Operational gains and the divide in adoption

IBM’s 2025 Cost of a Data Breach research found organizations using AI and automation extensively in security operations resolved breaches 80 days faster and saved $1.9 million on average compared with organizations that did not use those tools.

The most measurable examples in WEF’s case studies are in detection, threat intelligence and response. In one WEF case study, IBM’s ATOM platform automated more than 850 analyst hours per month and reduced end-to-end investigation time by 37%.

Accenture’s Agent Oliver cut per-site analysis time across more than 100,000 sites from about 15 minutes to under one minute, while PETRONAS reported a 30–40% reduction in incident response and resolution times within three months. The figures are presented as case-study outcomes, not sector-wide benchmarks.

The gains are not evenly distributed, however. WEF says adoption tracks organizational size and resources, with larger enterprises reporting higher adoption and smaller entities, governments and NGOs lagging because of financial constraints, skills gaps and immature data infrastructure.

Regulatory deadlines add compliance pressure

Regulatory deadlines now turn slower detection, documentation and escalation into compliance exposure. SEC rules require U.S. public companies to file material cyber incident disclosures within four business days after determining materiality, while NIS2 uses a staged process that begins with a 24-hour early warning and a 72-hour incident notification.

DORA adds a similar operational pressure point for financial entities in Europe. European Commission delegated rules specify the content and time limits for initial, intermediate and final reports on major ICT-related incidents. That makes detection, documentation and escalation part of compliance readiness, not just SOC efficiency.

The risks of moving toward agentic autonomy

Agentic AI adds another oversight challenge. KPMG’s 2026 Global Tech Report found 88% of surveyed companies are investing in agentic AI integration and 92% expect AI agent management to become an important competence within five years.

WEF separates autonomy into four levels, from AI that summarizes alerts for human review to systems that act independently without real-time human involvement and are reviewed later by supervisor agents or audits.

Higher-autonomy systems carry specific risks. WEF lists expanded attack surfaces, unintended agent behavior from hallucinations or misconfigured objectives and governance gaps that can leave no clear accountability for unwanted actions.

The Forum’s companion report on AI agents separately says most organizations remain unsure how to evaluate, manage and govern agents responsibly as deployments move from prototypes into real-world use.

Shrinking attack timelines demand faster response

Palo Alto Networks’ incident data shows why response speed is central to the debate. The company’s 2026 Global Incident Response Report, based on more than 750 major incidents, found that 87% of intrusions involved multiple attack surfaces and that identity weaknesses played a material role in almost 90% of investigations. The fastest 25% of intrusions reached exfiltration in 1.2 hours, down from 4.8 hours the previous year.

Bridging the gap in recovery and oversight

The least mature area, according to WEF, remains recovery. The report says AI use in the “recover” function is still limited in practice and mostly conceptual or early-stage, even as detection and response use cases show measurable gains.

The report identifies recovery plan creation, recovery-plan testing and failure simulation as potential use cases, but it does not provide mature production examples for that part of the cyber life cycle.
