So you blocked ChatGPT on the corporate network. IT sent a memo. Legal reviewed it. Done, right?

Meanwhile — right now, as you read this — someone on your team just copy-pasted a client contract into a free "PDF summarizer" website hosted on servers in a country with zero data-privacy laws. Another person installed a Grammarly AI extension that now has read access to every email they write. A third is using Notion AI to "organize" internal meeting notes about your upcoming acquisition.

Welcome to Shadow AI. It's already inside your organization, and blocking ChatGPT didn't stop it. It just redirected it.

⚠️ The Real Risk: According to IBM's 2025 Cost of a Data Breach Report, shadow AI incidents add an average of $670,000 to the cost of a breach — pushing the average breach cost to $4.63 million. And 97% of organizations with breaches linked to AI had insufficient AI controls in place.

🕵️ What Is Shadow AI?

[Image] An employee unknowingly exposing confidential data via a free "Chat with PDF" website, with a warning about data landing on external, unsecured servers.

Shadow AI is the use of unauthorized, unvetted artificial intelligence tools by employees — completely outside of IT's visibility, control, or approval processes.

Think of it as the AI version of "Shadow IT" — which you probably fought years ago when employees started using Dropbox instead of the company file server. Same problem, much higher stakes.

Here's what actually counts as Shadow AI. It's not just ChatGPT. It's:

  • 🤖 Free AI chatbots (ChatGPT free tier, Gemini, Claude.ai free plans)
  • 📄 "Chat with PDF" websites — AskYourPDF, PDF.ai, Smallpdf AI features
  • ✍️ AI writing tools — Grammarly AI, QuillBot, Jasper, Copy.ai
  • 📝 Productivity AI — Notion AI, Otter.ai (meeting transcription), Fireflies.ai
  • 🧩 Browser extensions with AI features — dozens of Chrome extensions with vague permission requests
  • 🖼️ Image generators — Midjourney, DALL·E free tier used for work assets
  • 💻 AI coding assistants — GitHub Copilot personal accounts used on work code

Notice something? Most of those tools aren't "nefarious hacking tools." They're genuinely useful. That's exactly the problem.

chatpdf-free.com/upload-document
📁 Upload: "Q4 Financial Projections — CONFIDENTIAL.pdf" ("Files processed on our servers. By uploading you agree to Terms.")
Employee: "What are the revenue projections for Q1 2025 shown in this document?"
AI: "Based on your uploaded document, Q1 2025 projected revenue is $47.3M with a 12% increase target year-over-year..."
⚠️ CONFIDENTIAL DATA NOW ON EXTERNAL SERVERS. Your document has been uploaded and stored. IT has no visibility. Data retention: 30 days. Server location: unknown.

⬆️ This is what happens when an employee uses a free "Chat with PDF" site on a confidential document. IT has zero visibility.

🚫 Why "Just Ban It" Fails So Dismally

Here's something I hear all the time from IT managers: "We put it in the employee handbook. We blocked the domain. We sent the memo. What else can we do?"

The answer, honestly? You've done governance theater. It looks like policy. It doesn't actually reduce risk.

Think about it from the employee's perspective. She has a 40-page contract to review before a 3pm meeting. She knows of a free AI tool that will summarize it in 30 seconds. Your approved process takes three business days to spin up a new tool. What does she do?

"Around 41% of employees using unapproved AI tools do so because they already rely on them personally. They're not malicious. They're just trying to do their jobs." CyberUnit Insights, 2025

A 2025 BCG study found that 54% of employees would use AI at work without company approval if it made them more productive. And according to Cybernews, 59% of employees already are.

So blocking is only half-effective. You can block the obvious domains, sure. But you can't block:

  • 📱 AI tools accessed on personal phones on cellular data
  • 🧩 Browser extensions already installed before your policy
  • 🏠 Work done from home on personal laptops
  • 📋 Copy-pasting into AI tools on personal accounts during work hours

There's also a morale problem. Heavy-handed blocking signals distrust. Good employees — the ones you most want to keep — will notice when you treat them like suspects. Some will leave. The ones who stay will find workarounds anyway, except now they'll be sneakier about it.

💡 The Governance Theater Problem: Creating a policy you can't enforce isn't governance. It's paperwork. Worse — it creates a false sense of security for leadership while doing nothing to reduce actual risk on the ground.

📊 By the Numbers: Shadow AI in 2025

[Infographic] The stark reality: Shadow AI is widespread and expensive for businesses.

These aren't predictions — these are what's already happening inside organizations right now.

⚡ The Shadow AI Reality Check

Current statistics on employee AI usage without IT approval

  • 59% of employees use unapproved AI tools at work (Cybernews, 2025). And 75% of those employees admitted to sharing potentially sensitive information with these tools. That's not a small problem — that's a systemic one.
  • $670K extra breach cost when shadow AI is involved (IBM Cost of a Data Breach Report, 2025). Breaches involving shadow AI cost $4.63M on average vs. $3.96M without it. That $670K difference is almost entirely avoidable.
  • 54% of workers would use AI without approval if it helped productivity (BCG Study, 2025). Over half your workforce has already decided they'll use AI tools whether you approve them or not. The question is whether they'll use safe ones.
  • 75% of shadow AI users shared sensitive data with unapproved tools (Journal of Accountancy, 2025). Three-quarters admitted to sharing possibly sensitive info — employee data, financial projections, client details. Most didn't think twice about it.
  • 97% of organizations with AI-related breaches had insufficient AI controls (IBM Report, 2025). Policy alone isn't enough — you need technical guardrails too.

☠️ The Hidden Dangers (It's Not Just ChatGPT)

[Illustration] A clear, safe "paved road" of approved enterprise AI contrasted with dark, risky shadow paths of unapproved tools: guiding employees to secure, approved AI pathways minimizes risk.

Let's get specific, because this is where most IT policies fall short. When leadership says "ban AI tools," they're usually thinking about ChatGPT. But Shadow AI is much wider than that.

🧩 AI Browser Extensions: The Silent Data Harvester

According to Incogni's 2026 privacy analysis, Grammarly and QuillBot are among the most potentially privacy-damaging extensions in the Chrome ecosystem — and both have over 2 million downloads. These tools request permission to "read and change all data on websites you visit." Every email. Every document. Every form your employee fills out.

In December 2025, Malwarebytes documented a Chrome extension that specifically marketed itself as a privacy tool, but was actually intercepting AI chat prompts and sending them to third-party servers. The name was reassuring. The behavior was not.
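If you want a quick, single-machine first pass before a fleet-wide audit, you can scan installed Chrome extension manifests for overly broad host permissions. A minimal sketch, assuming default Chrome profile locations (paths and profile names vary; adjust for your environment):

```python
import json
from pathlib import Path

# Default Chrome extension directories; adjust for your OS and profile.
CANDIDATE_DIRS = [
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
]

# Permission patterns that grant access to every site the user visits.
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def flag_broad_extensions() -> None:
    for ext_dir in CANDIDATE_DIRS:
        if not ext_dir.is_dir():
            continue
        # On-disk layout is <extension-id>/<version>/manifest.json
        for manifest in ext_dir.glob("*/*/manifest.json"):
            try:
                data = json.loads(manifest.read_text(encoding="utf-8"))
            except (OSError, json.JSONDecodeError):
                continue
            # Manifest V3 lists hosts under host_permissions; V2 mixed them
            # into permissions. Check both.
            hosts = set(data.get("host_permissions", [])) | set(data.get("permissions", []))
            if hosts & BROAD:
                # Note: name may be a localized placeholder like "__MSG_appName__".
                print(f"{data.get('name', '?')}  ->  {sorted(hosts & BROAD)}")

if __name__ == "__main__":
    flag_broad_extensions()
```

At fleet scale you'd pull the same inventory from Intune or Jamf instead, but the red flag is identical: host permissions like "<all_urls>" mean the extension can read everything your employees type.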

📄 Free "Chat with PDF" Websites

These are everywhere. Upload a PDF, ask it questions, get instant answers. Super useful for reviewing long contracts or research papers. Also, a tremendous data liability.

Here's what most employees don't realize: when you upload that file, it goes to the service's servers. Where are those servers? Who owns the company? What's their data retention policy? For many free tools, the answers are "somewhere overseas," "unclear," and "we keep it forever and may use it to train models." The fine print says so. Nobody reads it.

🎙️ AI Transcription Apps: Meeting Notes Gone Rogue

Otter.ai and Fireflies.ai are genuinely great tools. They join your video calls, transcribe everything, and create searchable notes. They're also, if used without enterprise agreements, storing every word of every meeting — including the ones about unannounced layoffs, pending mergers, client disputes — on servers you don't control.

💻 Personal AI Coding Assistants

A developer uses their personal GitHub Copilot subscription (attached to their personal GitHub account, not the enterprise account) while working on proprietary code. The code snippets they write get sent to GitHub's servers as part of the Copilot service. Your IP just walked out the door, legally.

🔴 Real-World Example: Samsung, 2023. Samsung engineers leaked proprietary source code by pasting it into ChatGPT to debug it. This happened within 20 days of Samsung allowing ChatGPT internally — three separate incidents. Samsung banned all generative AI tools company-wide afterward. But the data was already out.

📋 Case Study: What Shadow AI Looks Like at Scale

📁 Hypothetical Case Study

MidWest Financial Group: A $2.1M Learning Moment

The Setup: MidWest Financial Group (a hypothetical 850-person regional financial services firm) had a strict "no AI tools" policy. They blocked ChatGPT at the network level. They thought they were clean.

The Discovery: A routine security audit using network monitoring tools revealed that 23% of employees were regularly accessing AI tools — but not through ChatGPT. They were using:

  • 12 different "free PDF summarizer" websites
  • Grammarly AI and QuillBot browser extensions (installed before the policy)
  • Otter.ai free accounts for client call transcription
  • Personal ChatGPT accounts accessed via mobile hotspot

The Data That Left: Over 6 months, an estimated 340+ client financial documents were uploaded to external AI services. Under FINRA and state regulations, this constituted a reportable data handling incident.

The Cost: Legal review, regulatory notification, remediation, and enhanced monitoring: approximately $2.1M. All from tools that cost $0.

The Fix: Instead of more blocking, they licensed Microsoft 365 Copilot for key teams (with enterprise data protections), created an AI acceptable use policy, and trained managers on what was approved. Shadow AI usage dropped 87% within 90 days. Not to zero — but 87% is transformational.

⚖️ Approved vs. Unapproved AI: What's Actually Different?

Here's the comparison nobody shows you. The tool names look similar. The data handling couldn't be more different.

| Feature / Risk | ChatGPT Free / Consumer AI | ChatGPT Enterprise | Microsoft 365 Copilot | Google Gemini for Workspace |
|---|---|---|---|---|
| Data used to train models? | Yes (by default) | No | No | No |
| Admin visibility & controls | None | Full dashboard | Microsoft Admin Center | Google Admin Console |
| Data residency / server location | Uncontrolled | OpenAI US servers | Your tenant region | Your tenant region |
| Audit logs available | No | Limited | Full audit logs | Full audit logs |
| DLP (Data Loss Prevention) integration | No | No | Yes (Purview) | Yes (DLP policies) |
| HIPAA / SOC 2 / GDPR compliant | Not enterprise | Yes (with BAA) | Yes | Yes |
| Monthly cost (enterprise, per user) | $0 (free tier) / $20 (Plus) | $30/user/month | $30/user/month | $20–$30/user/month |

Two categories don't fit the table because they have no managed enterprise column at all:

  • Grammarly AI (browser extension) ⚠️ Free tier reads all text in the browser, stores data, no enterprise controls. An Enterprise plan is available but rarely deployed by IT.
  • Free PDF chat websites ⚠️ Files uploaded to unknown servers. No data retention guarantees. Many hosted overseas. Zero enterprise controls.

🛣️ The Solution: Build a "Paved Road"

Here's the mindset shift that actually works. You can't block your way to safety. You have to build a better road.

The concept comes from platform engineering: instead of putting up roadblocks, you make the safe path so easy and so good that nobody bothers taking the side roads. In AI governance terms: give people an approved AI tool that is actually better than the shadow tools they're already using.

If your approved tool is harder to access, has fewer features, and takes three approvals to use — people will bypass it. Full stop. But if it's seamlessly integrated into the tools they already use every day (Microsoft 365, Google Workspace), works great, and they know it's safe — they'll use it. Problem mostly solved.

The Five-Phase Framework

| # | Phase | What You Do | Tools / Methods | Timeline |
|---|---|---|---|---|
| 1 | 🔍 Audit Current Use | Find out what AI tools are actually being used before you can address them | Anonymous employee survey, DNS/proxy log review, browser extension audit via endpoint management | Week 1–2 |
| 2 | ✅ License Secure Tools | Procure enterprise-grade AI tools that cover the most common use cases your employees actually need | Microsoft 365 Copilot, ChatGPT Enterprise, Google Gemini for Workspace, GitHub Copilot Enterprise | Week 3–6 |
| 3 | 📚 Train & Communicate | Tell employees what's approved, how to use it, and why the approved tools are actually better (don't lecture — demonstrate) | Live demos, short video walkthroughs, "AI office hours," clear acceptable use policy | Week 6–8 |
| 4 | 🔒 Technical Controls | Block the highest-risk tools, enforce DLP policies, remove dangerous browser extensions via MDM | Microsoft Defender, Purview DLP, endpoint management (Intune), network proxy rules | Ongoing |
| 5 | 📊 Monitor & Iterate | Check whether shadow AI usage is dropping, survey employees quarterly, update the approved tool list as new use cases emerge | CASB tools (Defender for Cloud Apps), employee pulse surveys, IT help desk tickets about AI | Quarterly |
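Phase 1 is mostly log work, and you don't need a CASB to start. Here's a minimal sketch that counts hits against known AI domains in an exported proxy or DNS log. The file name, the one-request-per-line format, and the domain list are all assumptions to adapt to your own gateway:

```python
from collections import Counter
from pathlib import Path

# Known AI tool domains to watch for; extend with your own list.
AI_DOMAINS = [
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "grammarly.com", "otter.ai", "fireflies.ai", "quillbot.com",
    "askyourpdf.com", "pdf.ai",
]

def count_ai_hits(log_path: str) -> Counter:
    """Count proxy/DNS log lines that mention a watched AI domain.

    Assumes a plain-text export with one request per line; adapt the
    matching if your gateway emits structured JSON instead.
    """
    hits = Counter()
    for line in Path(log_path).read_text(encoding="utf-8", errors="ignore").splitlines():
        for domain in AI_DOMAINS:
            if domain in line:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in count_ai_hits("proxy_export_30d.log").most_common():
        print(f"{count:>6}  {domain}")
```

Even this crude count tells you which categories (chatbots, PDF tools, transcription) your approved stack needs to cover first.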

🎥 Watch: What Is Shadow AI? (IBM Technology)

IBM Technology explains Shadow AI and why it's one of the fastest-growing cybersecurity threats for enterprises.

🛠️ Step-by-Step: Setting Up Microsoft 365 Copilot (Enterprise)

Okay, so let's actually do this. If your organization is already on Microsoft 365 (most mid-to-enterprise companies are), you're halfway there. Here's how to deploy Copilot as your "approved AI tool" — the one that replaces the shadow tools.

Step 1: Check Prerequisites

Before you buy anything, run this checklist:

  • ✅ Confirm you're on Microsoft 365 E3, E5, Business Standard, or Business Premium
  • ✅ Ensure users have Exchange Online, SharePoint, and Teams licenses (required for Copilot)
  • ✅ Check that your Microsoft 365 tenant is updated — Copilot requires modern auth
  • ✅ Copilot license is $30/user/month — start with a pilot group of 50–100 users

Step 2: Purchase and Assign Licenses

Admin action: Go to admin.microsoft.com → Billing → Purchase services → Search "Microsoft 365 Copilot"

Once purchased, assign licenses to your pilot group:

  • Go to Microsoft 365 Admin Center → Users → Active Users
  • Select your pilot users → Licenses and Apps → Check "Microsoft 365 Copilot"
  • Enable the specific Copilot apps they need: Word, Excel, Teams, Outlook
  • Set up a Copilot for Microsoft 365 security group for easier management
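For a 50–100 user pilot, per-user clicks in the admin center get old fast. Graph's assignLicense action does the same job in a loop. The same assumptions apply as in the sketch above (token acquisition omitted; the skuId is the GUID reported by subscribedSkus, not the human-readable part number):

```python
import requests

def assign_license(access_token: str, user_id: str, sku_id: str) -> None:
    """Assign one license SKU to one user via Microsoft Graph.

    sku_id is the GUID 'skuId' from the subscribedSkus response (not the
    part number). Assumes a token with User.ReadWrite.All.
    """
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{user_id}/assignLicense",
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json={"addLicenses": [{"skuId": sku_id}], "removeLicenses": []},
        timeout=30,
    )
    resp.raise_for_status()

# Usage sketch: loop over your pilot users' object IDs or UPNs.
# for user in pilot_users:
#     assign_license(token, user, copilot_sku_id)
```

In practice, assigning the license to the security group you created and letting group-based licensing propagate is usually cleaner than looping over individual users.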

Privacy tip: By default, Copilot for Microsoft 365 does NOT use your data to train Microsoft's AI models. Verify this in the Microsoft 365 admin center under Privacy → AI features.

Step 3: Configure DLP Guardrails

This is the step most companies skip — and it's the most important one for compliance officers:

  • Go to Microsoft Purview Compliance Portal → Data Loss Prevention
  • Create a new DLP policy targeting Microsoft 365 Copilot
  • Add sensitive info types: credit cards, SSNs, medical records, client data
  • Set action to "Block" for high-sensitivity labels, "Warn" for medium
  • Apply sensitivity labels to documents via Information Protection policies

This means even if an employee tries to paste "Confidential" labeled content into Copilot, it gets blocked — or at minimum, they get a warning and the action is logged.
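To build intuition for what those sensitive info types actually match, here's a deliberately toy illustration of DLP-style classification. This is not how Purview works internally (real engines add checksum validation, confidence scoring, and proximity evidence); it just shows the core idea of inspecting text before it leaves:

```python
import re

# Toy patterns only. Real DLP engines validate matches (e.g., Luhn checks
# for card numbers) and weigh surrounding context.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of sensitive-info types detected in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(classify("Reach me at jane@example.com, SSN 123-45-6789"))
# -> ['SSN', 'Email']
```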

Step 4: Train & Communicate

Don't just email the policy document. That's how you get 12% readership. Do this instead:

  • Demo session (30 min): Show employees 3 specific tasks they can do faster with Copilot than their current shadow tools. Make it concrete.
  • Quick reference card: One page. "Here's what's approved. Here's what's not. Here's why."
  • Manager briefing: Arm team leads with talking points. Peer communication lands 3x better than IT memos.
  • Feedback channel: Create a Teams channel where employees can request new approved tools. This shows you're listening, not just blocking.

Step 5: Monitor & Iterate

Set up these monitoring points at 30-day intervals:

  • Copilot usage reports: In Admin Center → Reports → Copilot usage. Track adoption by department.
  • Network proxy logs: Are employees still hitting blocked AI domains? High traffic = unmet needs. Add tools, don't just block more.
  • Help desk tickets: "I can't access [tool]" tickets reveal shadow AI pressure points.
  • Quarterly pulse survey: Ask anonymously: "Are you using any AI tools not listed in our approved list? Why?" Anonymity gets honest answers.
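One cheap way to answer "is shadow AI usage actually dropping?" is to trend the Phase 1 proxy-log count over time. Another sketch, again assuming a plain-text export whose lines start with an ISO date:

```python
from collections import Counter
from datetime import date
from pathlib import Path

AI_DOMAINS = ("chat.openai.com", "claude.ai", "grammarly.com", "otter.ai")

def weekly_ai_hits(log_path: str) -> Counter:
    """Bucket AI-domain hits by ISO week (e.g., '2025-W07')."""
    weeks = Counter()
    for line in Path(log_path).read_text(encoding="utf-8", errors="ignore").splitlines():
        if not any(domain in line for domain in AI_DOMAINS):
            continue
        try:
            day = date.fromisoformat(line[:10])  # assumes lines start 'YYYY-MM-DD ...'
        except ValueError:
            continue
        iso = day.isocalendar()
        weeks[f"{iso.year}-W{iso.week:02d}"] += 1
    return weeks

if __name__ == "__main__":
    for week, hits in sorted(weekly_ai_hits("proxy_export_90d.log").items()):
        print(f"{week}: {hits} hits")
```

Flat or rising numbers after rollout mean the approved tool isn't covering a real need. Go back to the survey, not the blocklist.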

✅ Shadow AI Defense: Deployment Checklist

Use this as your practical working checklist.

🛡️ Shadow AI Defense Checklist

  • Run an anonymous employee survey asking which AI tools they use for work (Priority 1)
  • Audit browser extensions on company devices via endpoint management, e.g. Intune / Jamf (Priority 1)
  • Review DNS / proxy logs for AI tool traffic — ChatGPT, Grammarly, Otter.ai, etc. (Priority 1)
  • Document all AI tools found — approved, unapproved, and gray-area (Priority 2)
  • Procure an enterprise license for at least one approved AI platform — Copilot / Gemini / ChatGPT Enterprise (Priority 1)
  • Configure DLP policies to block/warn on sensitive data in AI tools (Priority 1)
  • Apply sensitivity labels to confidential documents — Confidential, Restricted (Priority 2)
  • Block the highest-risk unapproved AI domains via proxy/firewall, focusing on data-exfiltration risk (Priority 2)
  • Remove unapproved AI browser extensions via MDM/endpoint management policy (Priority 2)
  • Write and publish an AI Acceptable Use Policy — keep it under 2 pages; nobody reads essays (Priority 2)
  • Run a 30-minute "approved AI tools" demo for each department — practical, not lecture-style (Priority 3)
  • Schedule a quarterly shadow AI usage review and policy update cycle (Priority 3)

🧭 AI Policy Risk Self-Assessment

Not sure where your organization stands? Score yourself on these four questions. Every "no" answer raises your risk level, and the earliest "no" is the best place to start:

  • Does your organization have a written AI Acceptable Use Policy?
  • Can you see which AI tools employees are accessing on company networks?
  • Has your organization deployed at least one enterprise-grade approved AI tool?
  • Have employees received training on AI data risks and what tools are approved?

⚙️ How to Set Up Approved AI Policies: Common Questions

Do we need to block every unapproved AI tool?

You don't need to block everything — and trying to will backfire. Focus your blocking on the highest-risk categories: free "PDF/document upload" AI sites, unvetted browser extensions with broad permissions, and consumer AI tools with no enterprise data agreements.

For lower-risk tools (like a writing assistant that only handles non-confidential text), a "warn and log" approach is often more appropriate than an outright block. The goal is visibility and risk reduction, not maximum friction.

What should our AI Acceptable Use Policy include?

Keep it short (1–2 pages max) and include: (1) List of approved AI tools by name, (2) Data classification rules — what information may NOT be entered into any AI tool (e.g., PII, financial projections, source code), (3) Personal AI tools — explicitly state whether employees may use personal accounts on personal devices for work tasks, (4) How to request a new approved tool, and (5) Consequences of non-compliance.

Avoid long legal disclaimers. If employees need a lawyer to read your AI policy, they won't read it.

How do we run the anonymous AI usage survey?

Use a tool like Microsoft Forms, Google Forms, or SurveyMonkey — with anonymity explicitly stated and confirmed. Ask: "Which AI tools have you used for work tasks in the last 30 days?" and provide a list of common tools (ChatGPT, Grammarly, Otter.ai, etc.) plus an "Other, please specify" field.

Critically: promise no punishment and mean it. The goal is intelligence, not discipline. If employees think the survey is a trap, you'll get 100% "I never use unauthorized tools" — which helps nobody.

A LinkedIn article by Michael Mac describes a "15-Minute Shadow AI Audit" that starts with exactly this kind of anonymous survey.

What should we do if an employee has already shared sensitive data with an unapproved tool?

First: don't panic, and don't immediately discipline. Assess the risk. What data was shared? With which tool? What are that tool's data retention policies? If PII was involved, check your breach notification obligations under GDPR, HIPAA, CCPA, etc.

For the employee: remediate through coaching, not punishment (unless it was willful after clear policy training). For the tool: request data deletion if the service has a process for it. Many enterprise-grade tools offer this; most free tools don't. Document the incident. Use it as a case study for your next training session — anonymized, of course.

🎯 Action Plan: What to Do This Week

So. Shadow AI is real, it's already in your organization, and blocking ChatGPT didn't fix it. The good news is that fixing it is genuinely doable. It takes about 60 days of focused effort and a mindset change: governance is about channeling AI safely, not trying to dam up a river with a memo.

✅ The Core Insight: Employees will use AI tools whether you approve them or not. Your job isn't to stop that — it's to make the safe option so much better than the unsafe option that choosing the unsafe one seems pointless. Build the paved road.

Here's your short, actionable to-do list for the next 7 days:

  • Deploy a 5-question anonymous survey asking which AI tools your team uses. You need ground truth before you can fix anything.
  • Pull 30 days of DNS/proxy logs and filter for known AI tool domains. You'll find surprises.
  • Run an endpoint management report on browser extensions installed on company devices. Audit anything with "read all data" permissions.
  • Book a 30-minute call with your Microsoft or Google account rep about enterprise AI licensing options. Even a small pilot matters.
  • Draft a one-page AI Acceptable Use Policy. Not a legal brief. One page. What's approved, what's not, and how to request new tools.
  • Schedule a leadership briefing with the IBM data — specifically the $670K shadow AI surcharge. Money talks louder than security warnings.

The organizations that solve Shadow AI fastest aren't the ones that ban the most tools. They're the ones that give employees a genuinely great approved alternative — and get out of the way. That's your goal.

Frequently Asked Questions

What exactly is Shadow AI, and how is it different from Shadow IT?

Shadow IT refers to any software, hardware, or cloud service used by employees without IT's knowledge or approval. Shadow AI is a specific subset of that — it's the use of unauthorized AI tools, chatbots, or AI-powered applications at work. The key difference is the data risk: AI tools typically require you to input data to work, which means confidential information actively leaves the organization's control. Traditional Shadow IT (like a personal Dropbox) might store data somewhere unauthorized; Shadow AI tools are actively processing and potentially training on that data, often on servers with no enforceable data agreements.

Does ChatGPT use my company's data to train its AI models?

It depends on which tier you're using. ChatGPT Free and Plus tiers — by default — may use your conversations to improve OpenAI's models (though you can opt out in settings). ChatGPT Team, Enterprise, and API access do NOT use your data for model training by default, and OpenAI states this explicitly in their terms. The catch: most employees using "free ChatGPT" at work are on the free tier and haven't opted out. For enterprise use, only the paid tiers with enterprise agreements (like ChatGPT Enterprise at $30/user/month) provide contractual data protection guarantees.

How do I find out which AI tools my employees are actually using?

Three complementary methods work best together: (1) Anonymous survey — ask directly, promise no consequences, and you'll be surprised by the honesty. Use Google Forms or Microsoft Forms. (2) DNS and proxy log review — filter 30 days of outbound traffic for known AI tool domains: chat.openai.com, claude.ai, gemini.google.com, grammarly.com, otter.ai, etc. This catches browser-based tool access. (3) Browser extension audit via your endpoint management platform (Intune, Jamf, or Workspace ONE) — export all extensions installed across company devices and flag any with "read and change all website data" permissions. This combination gives you both the known and unknown picture.

Can I share sensitive information with ChatGPT Enterprise safely?

ChatGPT Enterprise is significantly safer than the free tier — conversations are encrypted, not used for training, and admins get a management dashboard. However, "safer" isn't the same as "safe for all data." You should still: (a) apply your organization's data classification system — don't paste merger documents, source code, or patient data without careful thought; (b) check your industry-specific regulations (HIPAA, FINRA, GDPR) for what constitutes a compliant use; (c) implement DLP policies that govern what sensitivity labels can be used with Copilot/enterprise AI. For highly regulated industries, pair ChatGPT Enterprise with a Microsoft Purview or similar DLP layer for full control.

What are the biggest compliance risks from Shadow AI in regulated industries?

The risks vary by regulation but the pattern is similar: HIPAA (healthcare) — uploading patient information to a non-HIPAA-compliant AI tool is a reportable breach. GDPR (EU data) — sending EU resident personal data to an AI service without a Data Processing Agreement (DPA) violates GDPR Article 28. FINRA/SEC (financial services) — client financial data sent to unapproved third-party services may violate record-keeping and supervision requirements. CCPA (California) — sharing California resident PII with unvetted AI services creates disclosure and liability issues. In all cases, the key question is: does the AI tool have a signed Business Associate Agreement (HIPAA), Data Processing Agreement (GDPR), or equivalent contractual data protection? Most free AI tools do not.

Is Grammarly AI safe to use on company documents?

Grammarly has two very different products: the free/consumer tier and Grammarly Business/Enterprise. The free browser extension requests permission to read and change all data on websites you visit — meaning it can see everything you type: emails, documents, internal tools, confidential forms. The free tier's data is retained and may be used to improve Grammarly's service. Grammarly Business/Enterprise offers a Data Processing Addendum (DPA), admin controls, and stricter data handling commitments. If employees are using the free extension on company devices, that's a Shadow AI risk. The fix: either deploy Grammarly Business as an approved tool with proper controls, or block the free extension via MDM and add Grammarly Business to your approved tools list.

What's the fastest way to reduce Shadow AI risk without a big budget?

You don't need a six-figure tool purchase to start. Here's a zero-or-low-cost 30-day plan: (1) Anonymous survey (free — Google Forms) to discover current usage. (2) DNS block the top 10 highest-risk free AI tools via your existing firewall. Focus on document-upload AI sites — they're the highest data exfiltration risk. (3) MDM policy to block or flag browser extensions with overbroad permissions — this is usually free with your existing Intune or Jamf license. (4) One-page AI policy — draft in a day, publish immediately. (5) If your organization already pays for Microsoft 365 E3/E5, you may already have access to Microsoft Copilot trial licenses — check with your Microsoft rep. These five steps alone eliminate a significant portion of shadow AI risk at minimal cost.

How do I stop employees from using AI tools on personal devices?

This is the hardest part — you can't technically block a personal device. What you can do: (1) BYOD policy — update your Bring Your Own Device policy to explicitly address AI tools. If work data (files, emails, client information) is accessed on personal devices, the AI acceptable use policy applies. (2) Mobile Application Management (MAM) via Intune — you can apply policies to the work apps (Outlook, Teams, SharePoint) on personal devices, including restrictions on copy-pasting data to external apps. (3) Data classification + rights management — "Confidential" labeled documents can be restricted from copy-paste or external sharing regardless of device. (4) Most importantly: if you give employees a great approved tool on their company device, most of them will simply use that instead. The personal-device shadow AI problem is partly a symptom of approved tools being inadequate.

What does a good AI Acceptable Use Policy look like for a small business?

For a small business (under 100 employees), keep it to one page with five sections: (1) Purpose — one sentence: "This policy governs AI tool use to protect company and client data." (2) Approved Tools — list by name (e.g., "Microsoft Copilot — approved for all internal use"). (3) Prohibited Actions — be specific: "Do not enter client names, financial data, passwords, or proprietary source code into any AI tool not on the approved list." (4) Gray Areas & How to Request Approval — "If you want to use a tool not listed, email IT. We'll review within 5 business days." (5) Consequences — keep it proportionate: first violation is coaching, repeat violation is formal disciplinary process. One page. Done. A policy employees can read in 3 minutes is infinitely more effective than a 12-page document nobody touches.