Filter Harder. Remediate Smarter. Only Math That Works in the Post-AI Era.
tl;dr
- 48K CVEs last year. 59K this year. Six figures next. Exploits at $3 each. The industry says "patch faster."
- 80-90% of those findings are false positives. Exploitability = code × configuration × deployment × network × runtime state. Any factor at zero = not exploitable. No tool checks all five.
- Would you schedule every ER patient for surgery when they're flooding in faster than you can operate? You'd triage, then stabilize. Condition-level verification is the triage. Cheapest-dimension remediation is the tourniquet.
When Log4Shell hit, every security team on the planet got the same blast: thousands of findings across every environment. Patch now. Emergency change windows. All-hands remediation.
How many of those resources were actually exploitable? How many had the vulnerable JNDI lookup enabled, with the right configuration, exposed on a network path an attacker could actually reach, running in a security context that would let the exploit succeed? Most teams couldn't answer that. They patched everything, because they didn't know what code × configuration × deployment × network × runtime state looked like on each resource that reported Log4j.
That was one CVE. At 59,000+ per year with exploitation assumed, that same blind patch-everything response runs continuously. And it breaks.
What your scanners don't filter
80-90% of scanner findings aren't exploitable in your environment. The reason is structural.
Exploitability = code × configuration × deployment × network × runtime state
Multiplicative. Any factor at zero = not exploitable on this resource, regardless of CVSS score or whether a working exploit exists.
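The multiplicative structure is simple enough to sketch in a few lines of Python. This is illustrative only: the factor names mirror the equation above, and the 0/1 values are hypothetical, not output from any real scanner.

```python
# Illustrative sketch of the multiplicative exploitability model.
# Factor values are hypothetical: 1 = condition met on this resource,
# 0 = condition absent. One zero anywhere means not exploitable.
from math import prod

def exploitable(factors: dict[str, int]) -> bool:
    """A finding is exploitable only if every factor is non-zero."""
    return prod(factors.values()) != 0

finding = {
    "code": 1,           # vulnerable version is present
    "configuration": 0,  # e.g. the vulnerable feature is disabled
    "deployment": 1,
    "network": 1,
    "runtime_state": 1,
}

print(exploitable(finding))  # one zero factor -> False
```

The point of the sketch: CVSS and exploit availability only speak to the code factor, while the product can be zeroed by any of the other four.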
| Factor | Who checks it | What they miss |
|---|---|---|
| Code | Qualys, Tenable, Rapid7, Trivy, AWS Inspector, Snyk (version match). Endor Labs, Semgrep, Snyk (code reachability for app dependencies). | Version-match scanners report every CVE for the version. SCA reachability only covers app libraries. Doesn't help with OS, kernel, container runtime, or infrastructure CVEs. |
| Configuration | Almost nothing. | SSH CVE requires password auth, you only allow keys. OpenSSL CVE requires DTLS, you only use TLS. Kernel CVE requires unprivileged user namespaces, your sysctl disables them. Apache CVE requires mod_proxy, it's not loaded. Single largest factor that drives findings to zero. Scanners see the package. They can't see the config. |
| Deployment | Almost nothing. | Container escape requires CAP_SYS_ADMIN, your pod dropped it. Kernel exploit needs specific syscalls, your seccomp profile blocks them. Libcurl CVE targets SFTP, your build compiled without it. Runtime scanners see the process. They don't evaluate whether the deployment permits what the exploit needs. |
| Network | Wiz, Orca, Prisma Cloud (attack path modeling). Partial. | Models topology but doesn't combine with on-resource condition checks. A resource can be reachable but not exploitable (config is zero), or exploitable but not reachable (network is zero). Each factor alone isn't the answer. |
| Runtime state | Wiz, Sysdig, Aqua, Lacework (eBPF/agent-based runtime scanning). | Tells you the package is loaded and active. Doesn't tell you which features are active, which config paths are enabled, or whether the security context permits exploitation. "In use" ≠ "exploitable." |
Not a single tool on the market checks all five factors concurrently. You can buy all of them and still have no answer to "is this exploitable on this resource" because configuration, deployment, and the interactions between factors are where 80-90% of findings go to zero. The toolchain has gaps between every layer, and nobody is stitching the full equation together.
Why those unchecked factors are almost always zero
The 80-90% number isn't a guess. It's arithmetic. CVEs describe vulnerabilities at maximum potency: every feature enabled, every protocol exposed, every permission granted. Production environments are the opposite. They're narrowed down at every layer.
Configuration narrows general-purpose software to a specific use. Software ships with hundreds of features. Production enables a handful. OpenSSL supports DTLS, TLS 1.0/1.1/1.2/1.3, various cipher suites, engine plugins. Your deployment uses TLS 1.2+ with a restricted cipher list. SSH supports password auth, keyboard-interactive, Kerberos, certificates, key auth. You allow key auth only. Nginx has modules for WebDAV, XSLT, mail proxying, gRPC, Perl scripting. You use http_ssl and proxy_pass. The kernel has 15,000 config options. Your build enables around 2,000. Every disabled feature is a zero in the equation for every CVE that targets it. The attack surface described by the CVE doesn't exist in your deployment.
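As a concrete sketch of how a configuration factor zeroes out a finding, the SSH example above reduces to parsing one directive out of sshd_config. The config text here is a hypothetical example, not a real host's file.

```python
# Hypothetical configuration check: an SSH CVE that requires password
# authentication is a zero on any host whose sshd_config disables it.

def password_auth_enabled(sshd_config: str) -> bool:
    """Parse sshd_config text. sshd honors the first occurrence of a
    directive, and OpenSSH defaults PasswordAuthentication to yes."""
    for line in sshd_config.splitlines():
        parts = line.strip().split(None, 1)
        if len(parts) == 2 and parts[0].lower() == "passwordauthentication":
            return parts[1].strip().lower() == "yes"
    return True  # directive absent -> default applies

config = """\
Port 22
PasswordAuthentication no
PubkeyAuthentication yes
"""

print(password_auth_enabled(config))  # False -> configuration factor is zero
```

A version-match scanner sees the OpenSSH package and reports the CVE; this one read-only check is what determines whether the finding survives.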
Deployment restricts what the exploit is allowed to do. Modern container orchestration defaults to restricted security contexts. Most pods don't run with CAP_SYS_ADMIN. Seccomp profiles block unnecessary syscalls. Multi-stage builds strip dev dependencies from production images. AppArmor and SELinux confine process behavior. Each of these constraints removes permissions the exploit needs to succeed. A CVE that requires a capability your pod doesn't have is a zero, regardless of whether the vulnerable code is present and running.
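The deployment factor is checkable the same way. A sketch against a pod's container spec, for the CAP_SYS_ADMIN example: the manifest is hypothetical, and for brevity the check ignores `privileged: true` (which would grant all capabilities).

```python
# Hypothetical deployment check: a container-escape CVE that needs
# CAP_SYS_ADMIN is a zero unless the pod explicitly adds it, since the
# default container capability set does not include SYS_ADMIN.

def has_sys_admin(container_spec: dict) -> bool:
    """True only if the security context explicitly adds SYS_ADMIN.
    (Privileged mode is ignored here for brevity.)"""
    caps = (container_spec.get("securityContext", {})
                          .get("capabilities", {}))
    return "SYS_ADMIN" in {c.upper() for c in caps.get("add", [])}

pod_container = {
    "securityContext": {"capabilities": {"drop": ["ALL"]}}
}

print(has_sys_admin(pod_container))  # False -> deployment factor is zero
```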
Network topology blocks the paths the exploit needs. Cloud environments default to deny. Resources sit in private subnets behind security groups that restrict ports and protocols. Load balancers expose specific paths, not entire services. WAF rules filter known attack patterns. Most resources in a production environment aren't reachable from the positions an attacker would need to reach them from. A CVE with a working exploit that requires network access to a port your security group doesn't expose is a zero.
Each factor operates independently. When configuration has a high probability of being zero, AND deployment has a high probability of being zero, AND network has a high probability of being zero, the multiplicative equation means the vast majority of findings are non-exploitable. That's not defense-in-depth optimism. It's the structural distance between how software is written and how it's deployed. The 80-90% is a consequence of that distance.
Condition-level verification makes this visible. Not by checking one factor or two, but by evaluating all five on each resource. Read the sysctl. Inspect the build flags. Check the container security context. Query the service configuration. Evaluate the security group. Read-only, agentless, production-safe, continuous. The findings where every factor is non-zero are the 10-20% that actually need remediation. Everything else is filtered before it generates a ticket, consumes an hour of your team's time, or triggers an unnecessary change window.
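Once each factor has been evaluated per resource, the filtering step itself is trivial. A minimal sketch, assuming hypothetical verification results in hand (the CVE identifiers and values are made up):

```python
# Hypothetical per-resource verification results: each finding carries
# the five factor checks. Only findings where every factor is non-zero
# should generate a ticket.

FACTORS = ("code", "configuration", "deployment", "network", "runtime_state")

findings = [
    {"cve": "CVE-A", "code": 1, "configuration": 1, "deployment": 1,
     "network": 1, "runtime_state": 1},   # all factors non-zero
    {"cve": "CVE-B", "code": 1, "configuration": 0, "deployment": 1,
     "network": 1, "runtime_state": 1},   # vulnerable feature disabled
    {"cve": "CVE-C", "code": 1, "configuration": 1, "deployment": 1,
     "network": 0, "runtime_state": 1},   # port not exposed
]

actionable = [f["cve"] for f in findings
              if all(f[k] != 0 for k in FACTORS)]

print(actionable)  # ['CVE-A'] -- the rest are filtered before ticketing
```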
Why "patch faster" is the wrong answer
Patching a non-exploitable finding carries the same operational cost as patching an exploitable one. The compatibility testing, the change management process, the deployment risk, the potential for regression or downtime. Identical regardless of whether the CVE was real.
When 80-90% of your queue isn't exploitable, "patch faster" means accepting operational risk for findings that were never actual threats. At 60,000+ CVEs per year with remediation capacity that hasn't meaningfully changed in a decade (9-19 months to close half of open findings), you can't patch everything. And choosing by CVSS when 57% of severity-driven effort catches only 20% of what gets exploited isn't choosing. It's guessing.
But there's a deeper problem. "Patch faster" assumes patching is the only remediation strategy. It isn't. Each dimension of the equation has a different cost to change:
| Dimension | Remediation action | Cost | Risk | Time |
|---|---|---|---|---|
| Code | Patch or upgrade the component | Highest. Compatibility testing, regression risk, rebuild and redeploy. | Highest. Every patch is a change to production code. | Days to weeks. |
| Configuration | Disable the vulnerable feature, change the setting | Low. No rebuild, no redeploy. Single config value or sysctl. | Low. Scoped, immediately reversible. | Minutes. |
| Deployment | Drop the capability, tighten seccomp, remove the component | Low to moderate. Container or pod restart, not a full rebuild. | Low. Predictable blast radius. | Minutes to hours. |
| Network | Tighten security group, block the port, add a WAF rule | Lowest. Doesn't touch the resource at all. | Lowest. Infrastructure-level, no application risk. | Seconds to minutes. |
"Patch faster" defaults to the most expensive, highest-risk row for every finding. When you know which factor makes something exploitable, you pick the cheapest row that makes it not.
A kernel CVE exploitable because of a sysctl setting? Configuration fix. Minutes, no reboot. A container escape exploitable because of CAP_SYS_ADMIN? Deployment fix. Drop the capability, restart the pod. A network service CVE exploitable because a port is exposed? Network fix. Tighten the security group. Seconds.
All of these eliminate exploitability without touching the code. No compatibility testing. No change advisory board. No regression risk. Available right now, while the patch cycle for the code dimension takes weeks.
When condition-level verification tells you why something is exploitable, which specific factor is non-zero, remediation becomes a cost optimization problem, not a speed problem.
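Picking the cheapest row is then a small optimization over the factors you can actually change. A sketch with cost ranks following the table above (the input set of fixable dimensions is hypothetical):

```python
# Hypothetical cheapest-dimension selection: zeroing any single
# multiplicative factor removes exploitability, so pick the fixable
# dimension with the lowest operational cost.
# Ranks follow the cost table: network < configuration < deployment < code.

COST_RANK = {"network": 0, "configuration": 1, "deployment": 2, "code": 3}

def cheapest_fix(fixable_dimensions: set[str]) -> str:
    """Given the dimensions where a fix is available for this finding,
    return the cheapest one to change."""
    candidates = fixable_dimensions & COST_RANK.keys()
    return min(candidates, key=COST_RANK.__getitem__)

# Kernel CVE exploitable via a sysctl, with a patch available and an
# exposed port that could be blocked:
print(cheapest_fix({"code", "configuration", "network"}))  # network
```

Which dimensions are actually fixable depends on the finding: code always is (patch), while a network or configuration fix is only available when the change doesn't break the service. The sketch assumes that determination has already been made.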
The math going forward
In 18 months, CVE volume will be double or triple what it is today. Exploitation will be assumed, not investigated. Every global scoring framework will say the same thing about every finding: this is dangerous.
The organizations processing noise at increasing speed will be underwater. The organizations that verify the full equation and remediate at the cheapest dimension will have smaller queues, verified risks, targeted actions, and lower operational risk.
Filter harder. Remediate smarter. It's the only math that works.