Employees across the UK are adopting unsanctioned AI tools because they are easy to access, quick to use – and often free. As a result, shadow AI has shifted from the margins to the centre of business life.
Salesforce research shows that around 40% of UK employees using AI are doing so through applications their employer has prohibited, while McKinsey reports that more than 90% of companies have staff relying on personal chatbot accounts for daily tasks.
The result is a shadow AI economy inside organisations, shaped by the collective decisions of employees seeking simpler ways to get work done.
For leaders, this economy creates real liabilities. Data can move into external services without oversight or vetting, and workflows form around tools with no guarantees of stability or privacy.
Yet at the same time, shadow AI marks the spots where official systems fall short, giving leaders a roadmap for smarter investment if they have the visibility to read it.
When availability outpaces governance
Shadow AI demonstrates how quickly technology can move beyond the reach of governance. Tools can be adopted within seconds of release, often without cost or approval, leaving leaders with little sense of which applications are active or what data is involved.
Flexera’s 2025 State of ITAM Report found that fewer than half of organisations retain full visibility across their IT assets, a decline from the previous year. Without this clarity, leaders will struggle both to see where shadow AI is already in play and to decide where spend should be realigned.
This pattern echoes earlier cycles of cloud and SaaS adoption, where many firms were tied to overlapping contracts and costly commitments that could not easily be unwound.
Shadow AI compresses that risk into a shorter timeframe: adoption begins with employees, spreads faster than procurement can track, and poses greater compliance challenges.
As the gap between availability and governability widens, oversight detaches from reality and spending becomes harder to control. Most organisations are therefore flying blind on AI, unable to govern what they cannot see.
Risks that surface too late
The risks created by shadow AI build gradually, undermining accountability over time. Information can move into unsanctioned services with unclear policies, while outputs can influence decisions without leaving an audit trail. This leaves leaders defending choices without clear backing.
In highly regulated sectors, the problem becomes even sharper. A financial services firm unable to evidence how an AI model has been used cannot accurately answer questions from regulators. Healthcare providers, meanwhile, risk both compliance breaches and reputational harm if sensitive patient data is misplaced or fed into unsanctioned tools.
Operational resilience also weakens as employees grow reliant on tools with no guarantee of continuity. A change in terms or the withdrawal of a service can dismantle overnight the processes that teams have come to depend on.
Budgets also suffer, as tools driving daily work remain outside governance, while licensed platforms continue to draw spend despite limited use.
For boards, each of these concerns weakens confidence in the wider technology strategy just as they come under pressure to deliver productivity gains.
Turning signals into strategy
What leaders should understand is that shadow AI is not inherently rebellious; it is telling them something important about how employees really work.
People adopt unsanctioned tools when the sanctioned route is too slow, too complex, or missing altogether. In that sense, every tool that appears outside the approved technology estate marks a gap between business need and process.
Putting those signals to use requires a clearer picture of the IT estate itself. That means keeping track of which assets are in play and how they are used day to day, and aligning IT, finance, and compliance around the same view.
When leaders have that visibility, they can see which tools are truly embedded in daily work, who depends on them, and what data they handle. High-risk services can then be phased out before they become dependencies, while the tools employees already trust can be sanctioned and supported.
Governance also becomes less about imposing rules and more about reflecting reality. Policies that mirror the way people actually work are more likely to be followed, which reduces the gap between written guidance and lived behaviour.
At the board level, this creates a direct line between daily practice and priorities such as cost control, resilience, and compliance. With a clear view of what is happening in practice, investment can be directed to real usage and risk addressed before it hardens into liability.
Shadow AI will continue to expand wherever it delivers speed and convenience. Trying to block it outright only forces it further underground. The real task for leaders is to bring it into view and treat it as intelligence.
While "garbage in, catastrophe out" remains the warning, "clarity in, resilience out" can be the result for organisations that have the visibility to act.
By Brian Shannon, CTO, Flexera
