Threat Intelligence

What Is Threat Leakage? Detecting Violence Indicators Online

Threat leakage occurs when individuals reveal violent intent online before acting. Here's how security teams detect these signals using OSINT monitoring.

DigitalStakeout · 2 min read

In almost every post-incident investigation of targeted violence — school shootings, workplace attacks, mass casualty events — investigators find the same thing: the attacker communicated their intent before they acted.

Not in a sealed letter. Not in a private journal. Online. In posts, comments, videos, and messages visible to anyone who was looking.

This phenomenon has a name. Threat leakage.

What Threat Leakage Is

Threat leakage is a concept from behavioral threat assessment. It describes when an individual communicates their intent to commit violence — typically to an audience broader than the intended target — before carrying out an attack.

Research across multiple studies of targeted violence consistently shows that the majority of perpetrators exhibit leakage behaviors in the days, weeks, or months preceding their attack. The U.S. Secret Service’s studies of school violence, the FBI’s pre-attack behavior research, and academic studies of mass casualty events all converge on the same finding: leakage is the rule, not the exception.

The communication takes many forms. Social media posts expressing violent ideation. Messages describing plans with increasing specificity. Public statements of grievance that escalate from general frustration to targeted hostility. Video manifestos. Comments on news articles about previous attacks. Interest in and glorification of prior attackers.

The common thread is that the individual reveals their trajectory before they reach the endpoint.

Why Traditional Security Misses It

Traditional security is reactive by design. Guards respond to threats that materialize at the perimeter. Camera systems record events for post-incident review. Access controls prevent unauthorized entry. These are necessary capabilities, but none of them see what’s happening online before the attack reaches the physical world.

Behavioral threat assessment teams conduct evaluations of known individuals — people who have already been identified as persons of concern through reports from colleagues, HR complaints, or prior incidents. But leakage often occurs from individuals who haven’t been reported through institutional channels. Their communication is visible online to anyone monitoring the relevant channels, but invisible to security teams that aren’t looking.

The gap is between what’s visible and what’s being watched.

A threat actor posting escalating content on X, Reddit, a gaming forum, and a personal blog is leaking across multiple platforms simultaneously. No single platform’s trust and safety team sees the full picture. The individual’s employer doesn’t see it because they’re not monitoring employee social media. The target doesn’t see it because they’re not searching for their own name on obscure forums.

OSINT-based monitoring closes this gap by watching the channels where leakage occurs — continuously, at scale, with classification that separates genuine threat indicators from ordinary negative speech.

Types of Leakage Behavior

Leakage isn’t always a direct threat statement. It exists on a spectrum, and the earlier indicators are often more ambiguous:

Direct threats

Explicit statements of intent to harm a specific person, group, or location. “I’m going to attack [target] on [date].” These are the easiest to detect but also the least common — most attackers don’t announce their plans this clearly.

Escalating grievance

A pattern of increasingly hostile language focused on a specific target, organization, or group. The escalation may progress over weeks or months — from general complaints to specific accusations to dehumanizing language to threats. The trajectory matters more than any single post.
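The "trajectory over any single post" idea can be sketched in code. The following is an illustrative sketch only, not DigitalStakeout's method: it compares average hostility in the earliest and most recent time windows of a subject's post history, with a toy placeholder standing in for a real model-based scorer.

```python
from datetime import datetime, timedelta

def hostility_score(text: str) -> float:
    """Toy placeholder scorer: 0.0 (neutral) to 1.0 (overtly hostile).
    A real system would use a trained classifier here."""
    hostile_terms = ("destroy", "pay for this", "deserve to suffer")
    return min(1.0, sum(term in text.lower() for term in hostile_terms) / 2)

def is_escalating(posts, window_days=14, min_rise=0.2):
    """posts: list of (datetime, text) tuples, oldest first.
    Returns True when mean hostility in the most recent window exceeds
    the mean of the earliest window by at least min_rise — i.e. the
    trajectory, not any single post, is what trips the flag."""
    if len(posts) < 2:
        return False
    start, end = posts[0][0], posts[-1][0]
    window = timedelta(days=window_days)
    early = [hostility_score(t) for ts, t in posts if ts < start + window]
    late = [hostility_score(t) for ts, t in posts if ts > end - window]
    if not early or not late:
        return False
    return (sum(late) / len(late)) - (sum(early) / len(early)) >= min_rise
```

The threshold and window size are arbitrary here; in practice they would be tuned, and the scorer would be a model rather than a term list.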

Attack interest and research

Consumption and sharing of content about previous attacks. Searching for information about weapons, tactics, and vulnerable targets. Visiting locations and posting about them in ways that suggest reconnaissance rather than tourism. Glorification of previous attackers.

Final-act communication

Posts or messages that suggest the individual believes their actions are imminent. “Goodbye” messages, distribution of personal belongings, last-testament-style social media posts, or statements indicating the individual has moved from planning to commitment.

Identity leakage

The individual begins to adopt the identity markers of previous attackers — using similar language, referencing the same manifestos, adopting similar usernames or visual aesthetics. This isn’t fandom; it’s identification with a model of action.

How OSINT Monitoring Detects Leakage

Effective leakage detection requires three things working together:

Continuous collection across relevant platforms

Leakage doesn’t happen on one platform. It happens across social media, forums, blogs, image boards, public channels on encrypted messaging apps, and video platforms simultaneously. Monitoring a single platform captures a fragment of the picture.
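Merging fragments into one picture requires normalizing posts from different platforms into a common record. A minimal sketch, assuming a simple schema (field names are illustrative, not DigitalStakeout's actual data model):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CollectedItem:
    """One normalized post, whatever platform it came from."""
    platform: str        # e.g. "x", "reddit", "gaming-forum", "blog"
    author_handle: str   # platform-local identifier for the author
    posted_at: datetime
    text: str
    url: str

def unified_timeline(*streams):
    """Merge per-platform item lists into one chronological view,
    so escalation is visible even when spread across platforms."""
    merged = [item for stream in streams for item in stream]
    return sorted(merged, key=lambda item: item.posted_at)
```

Once posts share a shape and a timeline, the trajectory analysis described above can run across platforms instead of within one.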

AI classification trained on security scenarios

Keyword monitoring catches explicit threats but misses nuanced indicators. AI classification trained on behavioral threat assessment patterns can identify escalating language trajectories, interest indicators in violence, and contextual signals that keyword matching doesn’t capture.

The classification needs to be specific. “Negative sentiment” is not useful. “Escalating grievance with location-specific targeting language” is. The difference between these two classification outputs is the difference between a notification and actionable intelligence.
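As an illustration of why specificity matters operationally, here is a hedged sketch: the labels, schema, and routing rule below are assumptions for the example, not DigitalStakeout's taxonomy. A generic sentiment flag and a specific classification lead to different destinations.

```python
from dataclasses import dataclass, field

@dataclass
class Classification:
    label: str                  # threat category assigned to the post
    confidence: float           # model confidence, 0.0-1.0
    evidence: list = field(default_factory=list)  # spans that triggered it

def route(c: Classification) -> str:
    """Generic sentiment stays in the ambient feed; specific threat
    categories backed by evidence go to a human analyst queue."""
    actionable_prefixes = ("escalating_grievance", "direct_threat")
    if c.label.startswith(actionable_prefixes) and c.evidence:
        return "analyst-queue"
    return "ambient-feed"

generic = Classification("negative_sentiment", 0.91)
actionable = Classification(
    label="escalating_grievance_location_targeting",
    confidence=0.87,
    evidence=["named a specific building", "third hostile post this week"],
)
```

The point is structural: only a classification specific enough to carry a routing decision can turn a detection into a workflow.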

Integration with threat assessment processes

Detection is not assessment. When a monitoring platform flags potential leakage behavior, the next step is professional threat assessment — a trained evaluator determines the credibility, imminence, and specificity of the detected behavior.

Monitoring surfaces the signal. Human judgment evaluates it. The platform should support this workflow by providing the full context of the detected behavior — the individual’s post history, the escalation timeline, cross-platform activity, and any associated identifiers — so threat assessment professionals can make informed evaluations.
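The context handoff described above can be sketched as a package the platform assembles for the evaluator. The shape below is an assumption for illustration, not an actual API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AssessmentPackage:
    """Everything a threat assessment professional needs in one bundle."""
    subject_identifiers: list   # handles/usernames seen across platforms
    post_history: list          # (timestamp, platform, text), oldest first
    triggering_signal: str      # the classification that raised the flag
    first_seen: datetime
    last_seen: datetime

def build_package(identifiers, posts, signal):
    """Sort the cross-platform history so the escalation timeline
    reads oldest-to-newest, then bundle it with the trigger."""
    posts = sorted(posts, key=lambda p: p[0])
    return AssessmentPackage(
        subject_identifiers=identifiers,
        post_history=posts,
        triggering_signal=signal,
        first_seen=posts[0][0],
        last_seen=posts[-1][0],
    )
```

The design choice worth noting: the flag travels with its full history, so the evaluator judges a trajectory rather than a single out-of-context post.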

Who Needs Leakage Detection

Corporate security teams monitoring for workplace violence indicators. Leakage behavior by current and former employees is a documented pre-attack pattern in workplace violence.

Campus safety teams at universities and K-12 institutions. Threat leakage from students is one of the most consistent findings in school violence research.

Executive protection teams monitoring for targeted threats against principals. Leakage behavior directed at a specific executive — escalating rhetoric, gathering of identifying information, location tracking — is an early warning of potential targeting.

Event security teams monitoring for threats against specific venues, gatherings, or public events. Leakage about planned attacks on events often appears on social media and forums in the days preceding the event.

Law enforcement behavioral threat assessment units monitoring persons of concern across their digital footprint.

What Leakage Detection Is Not

Leakage detection is not prediction. No monitoring system can predict with certainty whether an individual will carry out an attack. What monitoring can do is surface behaviors that warrant professional assessment — reducing the volume of signals a threat assessment team needs to evaluate while increasing the quality and relevance of what reaches them.

The goal is not to replace human judgment. It’s to make sure the right signals reach the humans qualified to evaluate them, before those signals become headlines.


DigitalStakeout’s AI classification includes threat categories specifically designed for physical security and public safety risk domains — including violence indicators, escalating grievance patterns, and targeting behaviors. Learn more or see the platform live.

DigitalStakeout classifies signals across 16 risk domains with 249+ threat classifiers — automatically, in real time.