2026 GLOBAL REPORT

Table of contents

Introduction
Executive summary
Key findings
About this study
Analysis and scope of the insider risk problem
Managing the insider risk
Demographics
Conclusion
Framework
Benchmarking
Limitations

About the cover

Today's threat landscape is defined by speed and complexity. This year's cover reflects that reality. The eclipse represents shadow AI: the unsanctioned tools operating beyond security's view and quietly creating insider risk. The surrounding light represents promise, as insider risk management shows its ability to restore visibility and context, giving organizations a proactive advantage to prevent incidents, reduce loss, and move forward with confidence.

Ponemon Institute is pleased to present the findings of the 2026 Cost of Insider Risks Global Report, sponsored by DTEX. This is the seventh benchmark study conducted to understand the financial consequences of insider risks. Expanding on the 2025 edition, this year's report explores the growing influence of artificial intelligence on insider risk and defense, and unlocks measurable data on the ROI of insider risk management.

Study snapshot:
· 354 organizations that experienced one or more material events caused by an insider
· 25 incidents per company
· 7,490 total insider incidents
· 8,750 IT and IT security practitioners interviewed

"AI is the most transformative and dangerous shift in enterprise security history.
Now more than ever, behavioral intelligence is needed to detect and deter insider risk: human or machine."
- Marshall Heilman, DTEX CEO

Insider risk types

MALICIOUS: Seeks to cause harm
· Sabotage
· Fraud
· Workplace violence
· Espionage
· IP theft
· Unauthorized disclosure

NON-MALICIOUS

NEGLIGENT: Causes harm through carelessness or inattentiveness
· Ignores warnings

MISTAKEN: Causes harm through a genuine mistake that cannot be attributed to carelessness
· Presses the wrong button in a stressful environment

OUTSMARTED: Causes harm through being reasonably outmaneuvered by an attack or adversary
· Being phished by a new advanced phishing attack

Human–agent risk interaction matrix

The following matrix was developed by DTEX's Insider Intelligence and Investigations (i3) division to help security professionals classify and articulate the risk of any AI-related security incident. It offers a clear framework for mapping human and agent interactions so teams can quickly understand the scenario, gauge risk, and take the right investigative or defensive actions.
COMPROMISED (human: non-malicious / agent: malicious)
· Agent manipulates/deceives user
· User trusts but is exploited
· Data exfiltration, sabotage
· User unaware of harm being done
Risk: HIGH (hidden threat) | Detection: DIFFICULT

IDEAL STATE (human: non-malicious / agent: non-malicious)
· Collaborative, productive work
· Both parties aligned on goals
· Mutual benefit, ethical outcomes
· Trust is warranted
Risk: LOW | Detection: N/A

COLLUSION (human: malicious / agent: malicious)
· Both parties aligned on harm
· Maximum damage potential
· Coordinated malicious activity
· No internal checks/resistance
Risk: CRITICAL | Detection: VERY DIFFICULT

ADVERSARIAL USER (human: malicious / agent: non-malicious)
· User attempts jailbreaks/abuse
· Agent should refuse/resist
· Prompt injection attacks
· Social engineering attempts
Risk: MEDIUM (agent as defense) | Detection: MODERATE

EXECUTIVE SUMMARY

Rising negligence reveals the cost of shadow AI

Insider risk security incidents now cost organizations an average of US$19.5M annually, up from $17.4M in 2024, underscoring how quickly exposure is escalating. At the center of this growth is insider negligence, now the fastest-growing risk category, with associated losses rising 17% year over year to $10.3M annually. These incidents are not driven by malicious intent, but by everyday behavior in increasingly complex digital environments: misjudgments, process gaps, and unmanaged workflows.

This year's research highlights a new accelerant within these scenarios: shadow AI. As AI adoption accelerates across the workforce, visibility and governance are failing to keep pace, leaving organizations in the dark about how routine productivity behaviors expose sensitive data.

Ninety-two percent of organizations acknowledge that generative AI has fundamentally changed how employees access and share information, yet only 13% have formally integrated AI into their business strategies.
Seventy-three percent worry that unauthorized AI use is creating invisible data exfiltration paths, 44% believe malicious use of AI agents will significantly or moderately increase data theft risk, and just 18% have fully integrated AI governance into their insider risk programs. The result is a widening strategy-reality gap in which exposure grows faster than control.

The fastest-growing insider risk isn't malicious. It's everyday behavior amplified by gaps in AI visibility and governance.

To examine whether the AI visibility gap translates into non-malicious insider risk, DTEX's Insider Intelligence and Investigations (i3) division investigated the link between shadow AI