Attackers are increasingly able to “log in rather than break in”, according to PwC’s “Annual Threat Dynamics 2026” report, by abusing credentials, session tokens and federated access.

PwC says the shift is happening in an identity-driven, AI-accelerated threat landscape where cyber risk is increasingly inseparable from business and geopolitical strategy.

The cyber industry has warned about this trend for years, and it continues to surface in current reporting. IBM recently said abuse of user identities was the preferred entry point in 30% of the cases it tracked in 2024.

Verizon said 88% of breaches in its basic web application attack pattern involved stolen credentials.

IBM’s 2026 X-Force index, meanwhile, said large supply chain or third-party compromises had nearly quadrupled since 2020 as attackers exploited trust relationships, CI/CD workflows and SaaS integrations.

AI is compressing the attack cycle

According to PwC, AI is helping compress the attack cycle. The report says threat actors are using AI to automate reconnaissance, generate phishing lures and scale social engineering across languages and platforms.

A late-2024 arXiv preprint reporting a human-subjects study found that fully AI-automated spear-phishing emails achieved a 54% click-through rate, matching human experts and far exceeding the 12% rate for generic phishing messages.

One identity, many doors

The identity problem now extends beyond named employees. PwC says expanding SaaS ecosystems, cloud dependencies and AI-driven workflows mean one compromised identity can open access across connected environments.

IBM’s 2025 Cost of a Data Breach research reported that 97% of organizations that had an AI-related security incident lacked proper AI access controls, and 63% lacked AI governance policies to manage AI or prevent the spread of shadow AI.

Netskope’s 2026 Cloud and Threat Report suggests why shadow AI is still hard to contain.

Personal-account use of AI apps for work fell to 47% from 78% over the past year, but the company said the average organization still sees 223 generative AI data policy violations a month. It also said prompts sent to SaaS genAI apps rose sixfold, from 3,000 to 18,000 a month.

How exposure shifts by sector

PwC’s sector analysis shows how that exposure changes by industry rather than disappearing.

The firm says financial services organizations are dealing with credential theft, ransomware and business email compromise, all growing more sophisticated as threat actors use AI-generated deepfakes and phishing lures.

It says retail is being targeted through identity-centric attacks aimed at customer data, while healthcare and hospitality remain exposed to ransomware that can disrupt operations and, in healthcare, become life-threatening.

Risk is converging, not siloing

The same report places that identity pressure inside a wider risk picture. PwC says financial crime, insider threats, digital-to-physical security concerns and supply chain compromise are converging, with attackers moving across executives, developers, vendors, hiring processes and financial workflows.

Stanford HAI separately said reported AI-related incidents rose to 233 in 2024, a record high and a 56.4% increase over 2023.
