The greatest challenge facing cybersecurity leaders is not malware or hackers but hype, according to two Gartner analysts who have urged companies to exploit, rather than ignore, the frenzy surrounding artificial intelligence.
Speaking at Gartner’s annual Risk and Security Management Summit in London this morning, Christine Lee, vice-president analyst, and Leigh McMullen, VP analyst and Gartner fellow, said security leaders must learn to use hype cycles to win executive support, invest in AI literacy and prepare for the rise of “agentic AI” — autonomous systems able to perform tasks across digital environments.
“Hype makes it hard to think straight and chart a steady path forward,” said McMullen. “But if we become students of hype, we can use it to our advantage.”
Gartner’s well-known hype cycle model charts how new technologies rise rapidly to a “peak of inflated expectations” before often falling into a “trough of disillusionment”. Lee argued that understanding this trajectory is critical for CISOs as AI accelerates through the cycle.
“On the upside there are advantages to being an early adopter,” she said. “At the peak, you are most at risk of being taken in. If the technology survives the trough, you can then identify what is of real value to your organisation.”
Executives, meanwhile, are watching closely. Gartner research shows 74% of executives expect generative AI to significantly affect their industry within three years, with 34% already increasing investment this year.
More broadly, 85% of chief executives view cybersecurity funding as critical to growth, and 69% of executives expect to spend most of the next year focused on managing cyber and other risks.
Beyond fear
In a climate where new tools promise dramatic gains, Lee and McMullen warned that CISOs risk two extremes: moving too quickly or seeking to block progress. Both approaches, they said, can undermine credibility.
“Fear, uncertainty and doubt might secure a quick budget increase,” said McMullen. “But it rarely aligns with long-term mission objectives.”
Lee suggested an alternative: aligning cyber planning with business priorities. She offered the example of a new chief executive pursuing an aggressive automation strategy. If a rival suffers a ransomware attack, the CISO could respond by demonstrating how current investments not only protect against such threats but also support the CEO’s strategic goals. “That is how you win trust and funding,” she said.
A central theme of the session was the use of outcome-driven metrics (ODMs) and protection level agreements (PLAs) to ground conversations with boards.
ODMs measure protection levels for specific risks such as ransomware, while PLAs establish the level of resilience the organisation is prepared to fund.
For example, if only 20% of critical systems have ransomware recovery procedures, the board can be shown what it would cost to raise the figure to 70% or 80%. “It becomes a fact-based dialogue, not a fear-fuelled debate,” said Lee.
McMullen noted the advantage: “Have you ever tried to calculate the ROI of an anti-ransomware tool? With PLAs, we move away from that impossible task to something executives understand — cost, benefit and trade-offs.”
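To make the idea concrete, here is a minimal sketch of how an ODM and candidate PLAs might be put in front of a board. The system counts and per-system cost are hypothetical figures for illustration, not numbers from Gartner or the analysts.

```python
# Minimal sketch of an outcome-driven metric (ODM) and protection level
# agreement (PLA) trade-off, using hypothetical figures for illustration.

CRITICAL_SYSTEMS = 200          # total critical systems in scope (assumed)
SYSTEMS_WITH_RECOVERY = 40      # systems with tested ransomware recovery (assumed)
COST_PER_SYSTEM = 15_000        # assumed cost to add recovery to one system

def coverage() -> float:
    """Current ODM: share of critical systems with recovery procedures."""
    return SYSTEMS_WITH_RECOVERY / CRITICAL_SYSTEMS

def cost_to_reach(target: float) -> int:
    """Cost to lift coverage to a candidate protection level."""
    gap = max(0, round(target * CRITICAL_SYSTEMS) - SYSTEMS_WITH_RECOVERY)
    return gap * COST_PER_SYSTEM

if __name__ == "__main__":
    print(f"Current coverage: {coverage():.0%}")
    for level in (0.7, 0.8):    # candidate PLAs for the board to choose between
        print(f"Raise to {level:.0%}: ~${cost_to_reach(level):,}")
```

The point of the exercise is exactly the shift McMullen described: the board is no longer asked to value a tool, only to pick a protection level it is willing to fund.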
During the presentation, Lee revealed that The Institute for Cancer Research in London has adopted this approach. Jonathan Monk, CIO and cyber lead, piloted ODMs alongside the institute’s NIST framework, focusing initially on 11 metrics where data was readily available.
The executive committee was presented with protection level options, voted individually and reached consensus in a single meeting. The clarity impressed even seasoned board members. One audit committee member described it as “the clearest process for determining risk appetite in cybersecurity investment we’ve seen”.
The results have been tangible: Monk now leads quarterly reviews of ODM and PLA performance, and the institute has recorded a 37% increase in cybersecurity budget since the model was introduced.
Agentic AI: a new risk surface
While generative AI is dominating headlines, Lee highlighted the greater long-term significance of agentic AI – systems capable of acting autonomously by running web searches, triggering APIs and collaborating with other agents.
“Teams must understand the attack surfaces of AI agents,” she said, pointing to risks such as prompt injection and agent hijacking. Unlike large language models, which CISOs are already learning to secure, agentic AI will require new identity and access controls, including unique digital identities for agents and fine-grained, policy-based authorisation models.
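What such controls might look like in code: below is a minimal sketch of deny-by-default, policy-based authorisation tied to a unique agent identity. The identity fields, action names and policy shape are illustrative assumptions, not a specific product’s API.

```python
# Illustrative sketch of per-agent identities with fine-grained,
# policy-based authorisation. Deny by default; the policy shape is
# an assumption for illustration, not a vendor's model.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str              # unique digital identity per agent
    owner: str                 # accountable human or service owner

@dataclass
class Policy:
    # map of action -> set of resources the agent may touch
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, action: str, resource: str) -> bool:
        return resource in self.allowed.get(action, set())

POLICIES: dict[str, Policy] = {
    "billing-agent-01": Policy(allowed={
        "read":  {"invoices", "customers"},
        "write": {"invoices"},         # no write access to customer records
    }),
}

def authorise(identity: AgentIdentity, action: str, resource: str) -> bool:
    """Unknown agents or unlisted actions are refused outright."""
    policy = POLICIES.get(identity.agent_id)
    return bool(policy and policy.permits(action, resource))

agent = AgentIdentity("billing-agent-01", owner="finance-platform-team")
assert authorise(agent, "read", "invoices")
assert not authorise(agent, "write", "customers")  # a hijacked agent is blocked
```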
Inter-agent communication will also need to be secured in the same way APIs are protected today. McMullen gave one pragmatic example: “If you are concerned about AI agents writing code, let them code inside a container. Developers already work this way, and it limits the fallout if one goes rogue.”
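As one way McMullen’s container suggestion could be wired up, the following sketch runs agent-generated code under Docker with the network disabled and resources capped. The image, limits and paths are assumptions for illustration.

```python
# Sketch of running agent-generated code inside a locked-down container,
# along the lines McMullen described. Image name, limits and paths are
# illustrative assumptions.
import subprocess

def run_in_sandbox(script_path: str) -> subprocess.CompletedProcess:
    """Execute an untrusted script with no network and tight resource caps."""
    return subprocess.run(
        [
            "docker", "run", "--rm",
            "--network=none",       # no outbound access if the agent goes rogue
            "--memory=256m",        # cap memory
            "--cpus=0.5",           # cap CPU
            "--read-only",          # immutable root filesystem
            "-v", f"{script_path}:/work/script.py:ro",  # mount the code read-only
            "python:3.12-slim",
            "python", "/work/script.py",
        ],
        capture_output=True, text=True, timeout=60,
    )

result = run_in_sandbox("/tmp/agent_output.py")
print(result.stdout or result.stderr)
```

The design choice mirrors the quote: developers already treat containers as disposable, so a rogue agent’s blast radius is limited to a sandbox that is discarded after each run.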
Both analysts emphasised that avoidance of AI tools is not an option. Organisations should designate AI champions, pilot specific use cases over the next 18–24 months and critically evaluate results.
The duo also pointed to companies already demonstrating how well-scoped projects can deliver tangible benefits.
Sabre Travel, for instance, has developed Viper, a large language model tool trained to remediate only four categories of high-risk code vulnerabilities. By narrowing its remit, the company achieved measurable success: Viper remediated 55% of vulnerabilities within six months, saving an estimated 100,000 developer hours.
US HR platform Workday took a different approach, launching a “policy bot” that automated routine policy queries. Within weeks, it had eliminated 90% of tickets and achieved a 95% satisfaction rating among users.
Others are responding to the proliferation of “shadow AI”. Gaming group Playtika built a monitoring system to detect unsanctioned tools in use across the business. Rather than banning them outright, the company assessed risks, permitted use where manageable, and investigated why staff bypassed official channels.
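In the same spirit, a monitoring pass over proxy or DNS logs might look like the simplified sketch below. The domain list, log format and sanctioned-tool set are illustrative assumptions rather than Playtika’s implementation.

```python
# Simplified sketch of shadow-AI detection from proxy or DNS logs.
# Domain list, log format and sanctioned set are illustrative assumptions.
from collections import Counter

# Known endpoints of public AI tools (illustrative, not exhaustive)
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"chat.openai.com"}   # tools approved for business use

def audit(log_lines: list[str]) -> Counter:
    """Count hits to unsanctioned AI tools; each line is 'user domain'."""
    hits: Counter = Counter()
    for line in log_lines:
        user, domain = line.split()
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    return hits

logs = ["alice claude.ai", "bob chat.openai.com", "alice claude.ai"]
for (user, domain), n in audit(logs).items():
    # Flag for risk assessment rather than blocking outright
    print(f"{user} used unsanctioned tool {domain} {n}x: review, don't ban")
```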
As McMullen observed, “shadow AI is becoming ambient AI. The key is to bring it into the light with governance, not to shut it down.”
For Gartner, the lesson is clear: hype cannot be wished away. With executives alert to AI’s potential, CISOs who study the cycle, measure outcomes and build AI literacy, particularly around agentic systems, will be best placed to steer their organisations through the frenzy.
“Hype is energy,” said McMullen. “The power move is to channel it into strategy.”