
AI will not be "Watching Everything" in Security

  • Adam Mikrut

And in its current form, it never will.



There's a growing assumption in security circles that AI systems are now seeing everything. Every alert, every sensor, every log line, every report, every post, every signal across cyber, physical, and intelligence domains. Hollywood has been selling this vision for years. Eagle Eye. Person of Interest. An all-seeing AI that watches every camera, reads every message, and connects every dot in real time. Great television. Terrible architecture. The industry is now selling the same story, just with better slide decks. That assumption is wrong, and building security programs around it is how systems fail.


The Volume Reality of Security Data


Modern security environments generate staggering amounts of data. Cyber systems produce billions of events per day across identity, network, cloud, endpoints, and applications. Physical security adds video, access logs, sensors, alarms, and location telemetry. Intelligence sources contribute open-source data, reports, chatter, and updates at global scale. This isn't one firehose. It's thousands of firehoses, all running continuously. Now apply AI to that.


Compute Is the Hard Limit


There's a persistent myth, reaching well beyond security, that AI is quietly reading and analyzing the entire internet. Every post, every comment, every blog, every page, ingested and "understood" nonstop. The idea sounds alarming, but it isn't grounded in reality, and understanding why exposes the same constraint that breaks the "AI sees everything" promise in security.


The Scale of Daily Content


Every single day, the internet produces billions of posts, comments, articles, discussions, and updates. Even a conservative estimate of that public text puts daily volume in the hundreds of trillions of tokens. That's more raw text than any single system could possibly read in a lifetime, let alone in a day. And security data adds its own layer on top. Cyber systems produce billions of events per day. Physical security generates continuous video, access logs, and sensor telemetry. Intelligence sources contribute open-source data, reports, and chatter at global scale.
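
To see how fast that adds up, here's a rough back-of-envelope sketch. Both inputs are illustrative assumptions rather than measured figures, but they show how "billions of items" turns into "hundreds of trillions of tokens."

```python
# Back-of-envelope: how "billions of posts" becomes "hundreds of trillions of tokens".
# Both inputs are illustrative assumptions, not measured figures.

items_per_day = 500e9        # assumed: posts, comments, messages, pages, updates per day
avg_tokens_per_item = 400    # assumed: average length of one item, in tokens

tokens_per_day = items_per_day * avg_tokens_per_item
print(f"{tokens_per_day:.1e} tokens/day")  # 2.0e+14, i.e. ~200 trillion tokens per day
```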


GPU Compute Is Finite


Neural networks that "understand" language run on GPUs, and GPUs are finite. Nvidia is by far the dominant supplier, holding roughly 92% of the discrete GPU market. In Q2 2025, the global discrete GPU market shipped about 11.6 million units, and roughly 10.9 million of those were Nvidia. But most of those are consumer and workstation cards, not AI-grade data center hardware.


On the data center side, in 2025 Nvidia reported shipping 6 million Blackwell GPU dies, roughly 3 million complete packages, while analyst estimates for total Nvidia AI GPU shipments ranged from 5.2 million (JP Morgan) to 7 million units (Mizuho). For 2026, JP Morgan projects roughly 7.5 million Nvidia AI GPUs total as production shifts from Blackwell to Rubin. Even with incredible growth, we're talking single-digit millions of AI GPUs shipped per year. Not hundreds of millions. And those GPUs serve every AI workload on Earth, not just security.


What It Would Actually Take


To fully process a single day's worth of internet content with a GPU-based language model, you would need hundreds of thousands to millions of A100-class GPUs for even a bare-bones scan of public text. Tens of millions to do deeper inference with larger models. That's for one day. Security monitoring at global scale would demand a comparable allocation, on top of everything else. That capacity does not exist. It is not being built. And it is not economically possible. AI is not "watching everything." Not in cyber. Not in physical security. Not in intelligence.
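
A quick sizing sketch makes the gap concrete. The throughput numbers below are assumptions for A100-class hardware running sustained inference, and the daily token volume is the rough estimate from above; swap in your own figures and the conclusion doesn't move much.

```python
# Rough sizing: GPUs needed to push one day of public text through a language model.
# Throughput values are assumptions for A100-class hardware, not benchmarks.

tokens_per_day = 2e14        # ~hundreds of trillions of tokens (estimate from above)
seconds_per_day = 86_400

scenarios = {
    "small model, bare-bones scan": 20_000,   # assumed tokens/second per GPU
    "large model, deeper inference": 200,     # assumed tokens/second per GPU
}

for name, tokens_per_second in scenarios.items():
    tokens_per_gpu_per_day = tokens_per_second * seconds_per_day
    gpus_needed = tokens_per_day / tokens_per_gpu_per_day
    print(f"{name}: ~{gpus_needed:,.0f} GPUs running around the clock")

# small model, bare-bones scan: ~115,741 GPUs
# large model, deeper inference: ~11,574,074 GPUs
```

Either way, the requirement sits in the hundreds of thousands to tens of millions of dedicated accelerators, against an annual global supply in the single-digit millions.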


The Dangerous Myth


The myth isn't just wrong. It's harmful. When teams assume AI can analyze everything, they over-collect data, over-index on signals, over-trust automation, and under-invest in prioritization. The result is familiar: alert fatigue, delayed response, missed weak signals, and false confidence. AI doesn't magically fix volume. It compounds the problem when applied without discipline.


How AI Actually Works in Security


Effective security systems do the opposite of "analyze everything." They discard most data immediately. They filter before analysis, not after. They focus on change, deviation, and correlation. They apply AI only where risk and uncertainty intersect. In practice, this means reducing raw data by 90 to 99% before AI ever touches it. That's not a limitation. That's the design requirement.
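
Here's a minimal sketch of that ordering, cheap filters first, the model last. The event fields, thresholds, and the score_with_model hook are hypothetical placeholders, not a reference to any particular product.

```python
# Minimal sketch of a filter-before-analysis pipeline.
# Field names, thresholds, and score_with_model() are hypothetical placeholders.

from collections import Counter

def is_known_benign(event: dict) -> bool:
    """Cheap rule: discard routine, expected events immediately."""
    return event.get("action") in {"heartbeat", "scheduled_backup", "health_check"}

def is_duplicate(event: dict, seen: Counter, limit: int = 3) -> bool:
    """Cheap rule: suppress repeats of the same (source, action) pair."""
    key = (event.get("source"), event.get("action"))
    seen[key] += 1
    return seen[key] > limit

def deviates_from_baseline(event: dict) -> bool:
    """Cheap rule: keep only change and deviation, e.g. new location or off-hours access."""
    return event.get("new_location", False) or event.get("off_hours", False)

def score_with_model(event: dict) -> float:
    """Placeholder for the expensive step: AI only sees what survives filtering."""
    return 0.5  # stub score

def triage(events: list[dict]) -> list[tuple[dict, float]]:
    seen: Counter = Counter()
    survivors = [
        e for e in events
        if not is_known_benign(e)
        and not is_duplicate(e, seen)
        and deviates_from_baseline(e)
    ]
    # Typically 90 to 99% of raw events never reach this point.
    return [(e, score_with_model(e)) for e in survivors]
```

The point of the design is the order of operations: the expensive model only ever sees the small fraction of events that survived the cheap checks.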


Intelligence Is About Selectivity, Not Surveillance


Security has never been about total visibility. It's about identifying what matters early, ignoring noise confidently, connecting weak signals across domains, and acting before impact materializes. AI doesn't replace that discipline. It amplifies it, only when used correctly.


The Real Truth About AI in Security


AI is not omniscient. It is not watching everything. It is not taking over decision-making by brute force. Person of Interest had "The Machine." We have GPU allocation limits. AI is a force multiplier for focus. The future of security belongs to systems that know what to ignore, know where to look, and know when to escalate. Everything else is just noise.

