When Online Threats Become Real: The Supreme Court Stalking Case and Digital Threat Assessment
The Supreme Court's ruling on online threats raised critical questions about when social media posts cross the line — and how security teams should assess them.
In 2023, in Counterman v. Colorado, the US Supreme Court addressed a question that every security professional has wrestled with: when does an online post become a “true threat” that justifies legal action?
The case involved hundreds of threatening messages sent to a specific individual over social media and other channels. The question wasn’t whether the messages were disturbing — they clearly were. The question was what standard of intent must be proven for online statements to constitute criminal threats.
The Court’s ruling matters for security teams, not because it changes how you assess threats operationally, but because it shapes the legal landscape around when law enforcement can act on online threats you report.
The Legal Standard vs. the Security Standard
The Supreme Court established that prosecutors must show the speaker was at least reckless about whether their communications would be perceived as threats. The speaker doesn’t need to specifically intend to threaten — but they must have consciously disregarded a substantial risk that their communications would be understood that way.
This is the legal standard for criminal prosecution. It’s not the security standard.
Security teams operate on a lower threshold — and should. Your job isn’t to determine whether a social media post meets the legal definition of a true threat. Your job is to assess whether the person who posted it represents a credible risk to the people and assets you’re protecting. Those are different questions with different standards.
A post that doesn’t meet the legal threshold for prosecution can still represent a genuine safety concern that justifies enhanced monitoring, increased physical security, or notification of the potential target.
What the Case Reveals About Online Threat Patterns
The case involved a pattern familiar to every threat assessment professional: persistent, escalating, target-fixated online communications. Not a single angry comment, but an ongoing campaign of messaging directed at a specific individual.
Pattern Recognition Matters More Than Individual Posts
Isolated concerning posts are common. Persistent, escalating, target-specific communication patterns are rare — and they’re the ones most likely to precede harmful action. The Supreme Court case centered on a pattern, not a single message. Threat assessment should operate the same way.
Platform Reporting Has Limits
The victim in this case reported the threatening communications to the platforms hosting them. The platforms took some action. But platform content moderation is designed for terms-of-service enforcement, not threat assessment. A post that violates platform rules gets removed. A post that represents a genuine threat but doesn’t technically violate terms of service may remain visible.
Security teams cannot rely on platform content moderation as a security control. It’s a content management function, not a protective one.
Documentation Is Essential
The case succeeded in part because of documented evidence of the communication pattern. For security teams, this underscores the importance of archiving and documenting threatening online communications — screenshots, URLs, timestamps, user profile information — before content is modified or removed.
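As a minimal sketch of what systematic documentation can look like (the field names and schema here are illustrative, not a standard), each captured post can be stored as a structured record with a capture timestamp and a content hash, so you can later demonstrate that the archived text has not been altered since capture:

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_post(url: str, author_handle: str, text: str) -> dict:
    """Build a tamper-evident archive record for a captured post.

    The SHA-256 hash of the captured text supports later verification
    that the archived content matches what was originally collected.
    All field names are illustrative, not a standard evidence schema.
    """
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "source_url": url,
        "author_handle": author_handle,
        "content": text,
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

# Hypothetical example capture
record = archive_post(
    url="https://example.com/posts/123",
    author_handle="@example_user",
    text="I know where you work.",
)
print(json.dumps(record, indent=2))
```

In practice you would pair records like this with screenshots and profile captures, and write them to append-only storage so the timeline itself is preserved.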
Implications for Security Teams
Don’t wait for legal certainty to act protectively. If your threat assessment identifies a credible concern, implement protective measures regardless of whether the posts meet a criminal threshold. Enhanced monitoring, physical security adjustments, and target notification are all appropriate responses to credible concerns.
Build documentation practices. When concerning online communications are identified, document them systematically. Archive the content, capture user profiles, and maintain a timeline. This documentation serves both your ongoing assessment and any future law enforcement engagement.
Know your reporting channels. When online threats do warrant law enforcement involvement, have established relationships and reporting procedures ready. The FBI’s threat reporting mechanisms, local law enforcement contacts, and corporate legal counsel should all be pre-identified — not figured out during an active threat.
Continuous monitoring catches patterns. Single posts are ambiguous. Patterns are informative. Continuous monitoring that tracks an individual’s online behavior over time provides the longitudinal view that makes threat assessment meaningful.
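To make the pattern-versus-single-post distinction concrete, here is a simplified sketch (the rolling window and threshold are illustrative assumptions, not recommended values) of how a monitor might flag persistent, target-directed posting by one account over time:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class PatternTracker:
    """Track per-author posts referencing a protected target and flag
    a persistent pattern. Window and threshold are illustrative only;
    real thresholds belong to your threat assessment methodology.
    """

    def __init__(self, window_days: int = 30, threshold: int = 5):
        self.window = timedelta(days=window_days)
        self.threshold = threshold
        self.posts = defaultdict(list)  # author -> recent post timestamps

    def record(self, author: str, ts: datetime) -> bool:
        """Record a post; return True once the author has made at
        least `threshold` posts inside the rolling window."""
        timeline = self.posts[author]
        timeline.append(ts)
        cutoff = ts - self.window
        # Keep only posts inside the window, then test the threshold.
        self.posts[author] = [t for t in timeline if t >= cutoff]
        return len(self.posts[author]) >= self.threshold

tracker = PatternTracker()
base = datetime(2024, 1, 1)
flags = [tracker.record("acct_a", base + timedelta(days=i)) for i in range(6)]
print(flags)  # [False, False, False, False, True, True]
```

The point of the sketch is the longitudinal view: no single call to `record` is alarming on its own, but the accumulating timeline is what surfaces the pattern.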
DigitalStakeout provides continuous social media monitoring with archival, AI-powered threat classification, and investigation tools that support structured threat assessment — helping security teams identify, document, and evaluate online threats before they become physical ones.
Build a structured online threat assessment capability. See the platform or get a demo.
DigitalStakeout classifies signals across 16 risk domains with 249+ threat classifiers — automatically, in real time.