Lenovo has put a fresh number on shadow AI inside large companies, reporting that more than 70% of enterprise employees use AI weekly and that up to one-third do so outside IT oversight.

The finding, drawn from a survey of 6,000 full-time employees at organizations with 1,000 or more staff, shows how far workplace AI use has moved ahead of formal controls.

Lenovo’s survey also links the control gap to rising security concerns among IT leaders and employees. Lenovo said 61% of IT leaders have seen cybersecurity threats linked to AI rise, while only 31% feel confident managing those risks. The company also reported that 43% of employees are worried about AI-driven data exposure or attacks.

The SMB blind spot

Lenovo’s sample does not measure small and midsize businesses directly. But the findings matter beyond large enterprises because the same workplace behavior — employees using accessible AI tools to work faster — can move sensitive data outside normal security review.

The FTC’s small-business cybersecurity guidance warns that cybercriminals target companies of all sizes and urges businesses to protect data, use multifactor authentication and manage vendor risk.

Governance risks beyond unapproved software

The problem is not simply that employees are using unapproved software. With AI tools, the exposure can involve sensitive customer, employee or corporate information being entered into systems outside normal governance.

OWASP lists prompt injection, sensitive information disclosure, supply chain weaknesses and excessive agency among major LLM application risks, including cases where manipulated inputs can lead to unauthorized access or compromised decisions.

The financial impact of shadow AI

IBM’s 2025 Cost of a Data Breach research shows why that distinction matters. The study, conducted by Ponemon Institute and sponsored and analyzed by IBM, covered 600 breached organizations globally and found that one in five reported a breach due to shadow AI.

Organizations with high levels of shadow AI had $670,000 higher average breach costs than those with low or no shadow AI. The same report found that 63% of breached organizations either lacked an AI governance policy or were still developing one.

The rise of autonomous enterprise agents

The risk surface is expanding as workplace AI moves from chat tools into agents that can act inside enterprise applications. Gartner projected that 40% of enterprise applications will include task-specific AI agents by the end of 2026, up from less than 5% in 2025.

Gartner’s examples include agents that scan network traffic, system logs and user behavior patterns before assessing and initiating a response.

That shift also changes what security teams have to monitor. CrowdStrike’s 2026 Global Threat Report found adversaries exploited legitimate generative AI tools at more than 90 organizations by injecting malicious prompts to generate commands for stealing credentials and cryptocurrency.

Shrinking windows for defensive response

CrowdStrike also reported that average eCrime breakout time fell to 29 minutes in 2025, with the fastest observed case taking 27 seconds.

The next signal is whether companies can close the inventory gap — knowing which AI tools employees are actually using — before AI agents become standard features in business software. Lenovo’s survey shows the gap inside large enterprises.

IBM quantifies the breach costs associated with shadow AI, CrowdStrike shows attackers already exploiting generative AI tools and AI development platforms, and Gartner’s agent forecast suggests the number of places where that oversight is needed is about to grow.
