AI adoption is widening risk across fraud, operational resilience, intellectual property, privacy and directors-and-officers (D&O) liability, according to Aon’s recently published AI risk report.
The report finds that the insurance market is responding mainly through policy wording changes and selective products rather than through a broad new class of cover.
The report arrives as AI use spreads faster than companies can move from experimentation to mature controls. McKinsey said 88% of survey respondents reported regular AI use in at least one business function in 2025, up from 78% a year earlier, even as most organizations remained primarily in pilot phases.
In a separate 2026 McKinsey survey, nearly two-thirds of respondents said security and risk concerns were the main barrier to scaling agentic AI, and active mitigation lagged perceived relevance across nearly every major risk category.
Quantifying the five pillars of exposure
Aon organizes the exposure into five categories: fraud and social engineering, compromise of models and data pipelines, dependence on third-party AI services, shadow AI and legal or reputational risk.
Netskope said 47% of generative AI users still use personal AI apps for work, down from 78% a year earlier but still a large unmanaged channel for work data.
Underwriting the “Agentic Era”
For enterprise buyers, the insurance structure matters as much as the risk framing. Aon says insurers are taking three main approaches: AI-related exclusions or clarifying endorsements applied case by case, affirmative AI cover added to existing cyber or liability policies, and standalone AI products with a narrower scope.
Aon names offerings from Armilla, Munich Re’s AiSure, AXA XL, Vouch and others, but says capacity remains limited and adoption is likely to be slower than earlier digital-risk transitions.
That cautious supply picture broadly matches Geneva Association research published in October 2025: in a survey of 600 corporate insurance decision-makers across six major markets, more than 90% said they wanted coverage for GenAI-related risks, yet the group also said insurability challenges will remain significant in the short term.
From compliance to D&O liability
Aon also places governance squarely inside underwriting. The firm says D&O underwriters are paying closer attention to board oversight, public disclosures, risk registers, model testing and third-party controls when assessing AI-related exposure.
That emphasis also lines up with formal EU compliance deadlines and with governance frameworks many U.S. organizations use to structure AI controls.
The regulatory roadmap to 2027
The European Commission says the AI Act entered into force on Aug. 1, 2024, became applicable in stages from Feb. 2, 2025, will be fully applicable on Aug. 2, 2026, and gives some high-risk AI-system rules a transition period until Aug. 2, 2027.
In the United States, NIST says its AI Risk Management Framework is designed to help organizations manage AI risk and that its generative-AI profile is meant to help identify risks unique to those systems.
Aon’s guidance suggests AI risk is already being underwritten, but often through endorsements, wording scrutiny and capacity limits rather than through broad, standardized cover.