There is an open secret in most companies today. While leadership teams deliberate over AI strategies and vendor evaluations, their employees have already decided. They are using ChatGPT, Claude, Copilot, Gemini, and dozens of browser extensions to write emails, summarize documents, generate code, and analyze data. Most of them are doing it without permission, without oversight, and without any awareness of what happens to the data they paste into those tools.
This is shadow AI. It is the fastest-growing category of shadow IT, and it is already inside your building.
The Scale of the Problem
Shadow AI is not a fringe concern. It has become the default way many knowledge workers interact with AI. According to a BlackFog research study conducted in November 2025, 49% of the 2,000 employees surveyed reported using AI tools not sanctioned by their employer at work, while 86% now use AI tools at least weekly for work-related tasks.1
Microsoft's 2025 Work Trend Index, which surveyed 31,000 knowledge workers across 31 countries, found that 75% of global knowledge workers now use AI tools regularly, yet only 39% have received any AI training from their employer.2 That gap between usage and guidance is where shadow AI thrives.
The Reco 2025 State of Shadow AI Report paints an even starker picture at the application level: the average organization now manages 490 SaaS applications, and only 47% of them are authorized by IT. Small businesses are hit hardest, with 27% of employees in companies with 11 to 50 workers using unsanctioned tools.4
This is not a technology problem. It is a human behavior problem. Employees are not acting maliciously. They are trying to work faster. BlackFog's study found that 60% of employees would accept security risks to meet deadlines, using whatever tools help them get the job done.1
Why It Is Happening Now
Three forces are converging to make shadow AI nearly impossible to prevent through traditional IT controls alone.
AI tools are consumer-grade and free
Unlike enterprise software that requires procurement and installation, most AI tools are available through a web browser with nothing more than an email address. ChatGPT, Claude, and Gemini all have free tiers. Among BlackFog's respondents using unsanctioned AI tools, 58% rely on free versions that lack enterprise-grade security, data governance, and privacy protections.1
Policy has not kept pace
Only 15% of organizations have updated their Acceptable Use Policies to include specific guidelines on AI, according to ISACA.5 Pew Research found that among all non-self-employed workers, half say their employer neither encourages nor discourages the use of AI chatbots.6 When the rules are silent, employees write their own.
AI is embedding itself everywhere
Gartner predicts that by 2026, 70% of employee AI interactions will occur through features embedded in existing, sanctioned SaaS applications.7 Notion, Canva, Slack, Gmail, and hundreds of other tools are quietly adding AI features. The boundary between "approved software" and "unauthorized AI" is dissolving.
The Real Cost: Data Exposure and Financial Risk
Shadow AI is not just a governance headache. It directly increases breach costs and data exposure.
IBM's 2025 Cost of a Data Breach Report, conducted by the Ponemon Institute across 600 organizations globally, identified shadow AI as one of the top three costliest breach factors. The numbers are substantial:
- $670,000 in additional breach costs for organizations with high levels of shadow AI compared to those with low or no shadow AI usage8
- 20% of organizations reported suffering a breach due to shadow AI security incidents8
- Shadow AI breaches compromised personally identifiable information in 65% of cases, compared to a 53% global average8
- 40% of shadow AI incidents involved intellectual property compromise8
AI-related security incidents also take 26.2% longer to identify and 20.2% longer to contain than standard breaches, according to the same report.9 When you cannot see the tool that caused the exposure, you cannot trace the incident back to its source.
It has already happened to major companies
In 2023, Samsung engineers inadvertently leaked proprietary semiconductor source code and internal meeting notes to ChatGPT across three separate incidents, less than three weeks after the company lifted its ban on employees using the tool. Once submitted, the data was beyond Samsung's control and could not be pulled back from OpenAI's servers.10 Samsung subsequently restricted ChatGPT access and launched disciplinary investigations.
Samsung is not an outlier. Cisco has reported that 60% of organizations have experienced data exposure linked to employee use of public generative AI.9 The difference between Samsung and most organizations is that Samsung caught it. Many companies have no monitoring in place to detect this kind of leakage at all.
The Governance Gap
Despite the clear risks, the data on AI governance readiness shows most organizations are underprepared.
According to IBM's breach report, 63% of breached organizations either lack an AI governance policy or are still developing one. Only 34% perform regular audits for unsanctioned AI usage.8
The broader picture is not much better. Only about one-third of companies have a dedicated AI policy in place, according to the 2025 Wharton-GBK AI Adoption Report.11 And even among organizations with policies, only 23% require staff to be trained on approved AI usage, based on Gartner's cybersecurity leadership survey.3
Gartner predicts that by 2030, more than 40% of global organizations will suffer security and compliance incidents due to unauthorized AI tools. The question for leadership is whether they want to be ahead of that curve or caught by it.
This governance gap is not just about security. It is about wasted spending. Zylo, a SaaS management platform, found that organizations without centralized AI governance maintain up to five times more redundant AI tool subscriptions than those with clear policies.9
What to Do About It: A Practical Framework
The instinct to ban AI tools entirely is understandable but counterproductive. Samsung tried a ban; the leaks came within weeks of lifting it. Employees will find workarounds because the productivity gains are too compelling to ignore. Microsoft's data shows the top reason employees turn to AI over a colleague is its 24/7 availability.2
The organizations handling this well are not restricting AI. They are channeling it through governance that enables rather than blocks. Here is a practical framework drawn from the approaches that are actually working.
1. Conduct a shadow AI audit
You cannot govern what you cannot see. Start by discovering what AI tools are already in use across your organization. This means examining network traffic, browser extensions, SaaS authentication logs, and expense reports. The Reco report found that the average organization has hundreds of unapproved SaaS applications. Your number is probably higher than you think.4
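To make the discovery step concrete, here is a minimal sketch of the log-mining part of an audit. It assumes a hypothetical CSV export of proxy or DNS logs with 'user' and 'domain' columns, plus a hand-maintained watchlist of AI domains; both are stand-ins for whatever your logging stack and vendor landscape actually look like.

```python
import csv
from collections import defaultdict

# Hypothetical watchlist of popular public AI tool domains.
# Extend with the services relevant to your environment.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def discover_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Map each watched AI domain to the set of users who accessed it.

    Assumes a CSV export with 'user' and 'domain' columns, a stand-in
    for whatever schema your proxy or DNS logs actually use.
    """
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                usage[domain].add(row["user"])
    return usage

if __name__ == "__main__":
    for domain, users in sorted(discover_ai_usage("proxy_log.csv").items()):
        print(f"{domain}: {len(users)} distinct users")
```

The same pattern extends to browser extension inventories, SSO logs, and expense exports: build a watchlist, join it against whatever records you already collect, and count distinct users per tool.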
2. Classify the risk, not just the tool
Not all shadow AI carries the same risk. An employee using ChatGPT to brainstorm marketing taglines is different from one pasting customer records into a free AI summarizer. Build a tiered risk framework that evaluates each use case by the sensitivity of the data involved, the tool's security posture, and the regulatory implications for your industry.
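A minimal way to encode such a framework is a rubric that scores data sensitivity against the tool's security posture. The tiers, labels, and thresholds below are purely illustrative assumptions, not an industry standard; calibrate them against your own regulatory obligations.

```python
from enum import IntEnum

class DataSensitivity(IntEnum):
    PUBLIC = 1        # marketing copy, published material
    INTERNAL = 2      # internal docs, non-sensitive communications
    CONFIDENTIAL = 3  # customer records, source code, financials
    REGULATED = 4     # PII/PHI, payment data, export-controlled IP

class ToolPosture(IntEnum):
    ENTERPRISE = 1    # contract in place, no training on inputs, audit logs
    VETTED_FREE = 2   # reviewed terms, data controls available
    UNVETTED = 3      # unknown retention and training practices

def risk_tier(sensitivity: DataSensitivity, posture: ToolPosture) -> str:
    """Combine data sensitivity and tool posture into a simple tier.

    Thresholds are illustrative; tune them to your industry's rules.
    """
    score = sensitivity * posture
    if score >= 9:
        return "BLOCK"   # e.g. customer records into an unvetted tool
    if score >= 4:
        return "REVIEW"  # needs approval or a sanctioned alternative
    return "ALLOW"

# The two cases from above: taglines vs. customer records.
print(risk_tier(DataSensitivity.PUBLIC, ToolPosture.UNVETTED))        # ALLOW
print(risk_tier(DataSensitivity.CONFIDENTIAL, ToolPosture.UNVETTED))  # BLOCK
```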
3. Provide sanctioned alternatives that are genuinely better
Employees turn to unsanctioned tools because approved alternatives are either absent, slower, or harder to use. Salesforce research found that 76% of workers say their preferred AI tools lack access to company data, which limits their usefulness for actual work tasks.12 Deploy enterprise-grade AI tools that connect to your business context and make them easier to access than the free alternatives.
4. Update your acceptable use policy now
With only 15% of organizations having updated their AUPs for AI,5 this is low-hanging fruit. Your policy should specify what data categories can and cannot be shared with AI tools, which tools are approved, how AI-generated outputs should be reviewed, and what the consequences are for violations. Keep it clear enough that a non-technical employee can follow it.
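One optional way to keep the written policy enforceable is a machine-readable companion that tooling and training materials can query. Every tool name, data category, and rule in this sketch is a placeholder to replace with your own:

```python
# Hypothetical machine-readable companion to the written AUP.
# Categories and tool names below are placeholders, not recommendations.
POLICY = {
    "public":       {"enterprise-chatgpt", "internal-copilot", "any-free-tool"},
    "internal":     {"enterprise-chatgpt", "internal-copilot"},
    "source_code":  {"internal-copilot"},  # no-training guarantee required
    "customer_pii": set(),                 # prohibited without legal sign-off
}

def check_use(tool: str, data_category: str) -> str:
    """Answer the question employees actually ask: 'Can I paste this here?'"""
    allowed = POLICY.get(data_category)
    if allowed is None:
        return "Unknown data category: ask before sharing."
    if tool in allowed:
        return "Allowed. Review AI-generated output before it leaves the company."
    return f"Not allowed: '{data_category}' data may not go into {tool}."

print(check_use("enterprise-chatgpt", "internal"))
print(check_use("any-free-tool", "customer_pii"))
```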
5. Train continuously, not once
A single onboarding slide about AI policy is not sufficient. The AI landscape changes quarterly. New tools appear, existing tools add capabilities, and the risks evolve. Build ongoing training into your operations, not as a compliance checkbox but as a genuine skill-building program. Microsoft found that 82% of leaders believe AI skills are essential, yet 60% of employees say they lack them.2
6. Implement detection and monitoring
Deploy tooling that provides visibility into AI application usage across your organization. This does not mean surveillance of individual employees. It means understanding the aggregate pattern: what tools are being used, what data flows are occurring, and where your exposure concentrations are. That visibility is what lets you trace an exposure back to its source and respond before it becomes a breach.
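Building on the hypothetical log schema from the audit step, here is a rough sketch of that aggregate view: total upload volume per watched AI domain per ISO week, which surfaces data-flow concentrations without tracking individuals. The 'timestamp' and 'bytes_sent' columns are assumptions to adapt to your proxy's actual export format.

```python
import csv
from collections import Counter
from datetime import datetime

# Placeholder watchlist; in practice, seed this from your discovery audit.
AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def weekly_upload_volume(log_path: str) -> Counter:
    """Total bytes uploaded to watched AI domains, bucketed by ISO week.

    Assumes a CSV export with 'timestamp' (ISO 8601), 'domain', and
    'bytes_sent' columns. Aggregation is per domain-week, not per employee.
    """
    volume: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in AI_DOMAINS:
                iso = datetime.fromisoformat(row["timestamp"]).isocalendar()
                week = f"{iso.year}-W{iso.week:02d}"
                volume[(row["domain"], week)] += int(row["bytes_sent"])
    return volume

if __name__ == "__main__":
    for (domain, week), total in sorted(weekly_upload_volume("proxy_log.csv").items()):
        print(f"{week}  {domain}: {total / 1e6:.1f} MB uploaded")
```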
The Strategic Opportunity
Here is what most shadow AI discussions miss: the presence of shadow AI in your organization is actually a strong signal. It means your employees see value in AI. They are motivated enough to seek out tools on their own. That energy is an asset if you channel it correctly.
Gartner's research describes shadow AI as the "number one indicator of unmet business needs" within an organization.7 Instead of viewing unauthorized AI use purely as a risk to mitigate, forward-thinking leaders are using shadow AI audits as a roadmap for where to invest in sanctioned AI capabilities.
The organizations that will come out ahead are not the ones that locked down every AI tool in 2024. They are the ones that built governance frameworks flexible enough to enable experimentation while protecting sensitive data. They moved from "no" to "yes, and here is how."
The Wharton-GBK AI Adoption Report found that organizations with mature AI governance frameworks report a 28% increase in staff using AI solutions and deploy AI across more than three areas of their business.11 Good governance does not slow AI adoption. It accelerates it.
The Bottom Line
Shadow AI is not a future threat. It is a current reality. Nearly half your workforce is likely using AI tools you have not approved, with data flowing to services you have not vetted. The financial exposure is real: $670,000 in additional breach costs for organizations with heavy shadow AI use, according to IBM. The governance gaps are wide: most organizations lack policies, training, and monitoring.
But the path forward is not panic or prohibition. It is pragmatic governance that meets employees where they are, provides better alternatives, and creates guardrails that protect without paralyzing. The companies that figure this out will not just avoid breaches. They will unlock a competitive advantage by channeling the AI energy that is already inside their walls.
The first step is knowing what you are dealing with. A shadow AI assessment can give you that visibility in weeks, not months. You might be surprised by what you find, but you will be glad you looked.