A new Gartner report reveals that 78% of enterprises experienced at least one AI-related data leak in 2025, as employees paste confidential information into ChatGPT, Copilot, and other AI tools without understanding the risks.
Common AI Security Incidents (share of enterprises reporting each)
- Employees pasting source code into public AI tools: 65%
- Confidential documents uploaded to AI assistants: 52%
- Customer PII shared with AI for analysis: 38%
- AI-generated code containing hidden vulnerabilities: 45%
Best Practices
Leading companies are deploying enterprise AI gateways that route all AI interactions through security filters, blocking sensitive data before it leaves the network. Microsoft Purview and Nightfall AI are among the better-known offerings in this emerging category.
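To illustrate the gateway idea, here is a minimal sketch of an outbound prompt filter. The pattern names, placeholder format, and `gateway_forward` function are hypothetical examples, not the API of any real product; commercial gateways use far more sophisticated detection than these simple regexes.

```python
import re

# Hypothetical patterns illustrating the kinds of data a gateway
# might screen for before a prompt leaves the network.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_sensitive(prompt: str) -> tuple[str, list[str]]:
    """Replace each pattern match with a placeholder tag and
    return the redacted prompt plus the labels that fired."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, hits

def gateway_forward(prompt: str) -> str:
    """Redact before anything leaves the network; a real gateway
    would then forward the cleaned text to the AI provider and
    record the hits in an audit log."""
    cleaned, hits = redact_sensitive(prompt)
    if hits:
        print(f"blocked: {', '.join(hits)}")  # audit-log stand-in
    return cleaned
```

A redact-and-forward design lets employees keep using AI tools while the sensitive fields never reach the provider, which is the core trade-off these gateway products make.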