You're triaging a CVE. Scanner flagged it. CVSS 9.8. EPSS says high probability of exploitation. It's in KEV.
Your job is to figure out whether this matters in your environment. Everything you're looking at says it should. But none of those signals actually answer the question.
CVSS scores the theoretical severity of the vulnerability itself. It doesn't know your configuration. EPSS estimates the probability of exploitation activity in the wild, across all environments. It doesn't know yours. KEV confirms that someone, somewhere has exploited it. That tells you the exploit works. It doesn't tell you it works here. A CVE in KEV that requires a feature you've disabled or a protocol you don't expose is not exploitable in your environment, regardless of what happened somewhere else.
Exploitability is not a property of the vulnerability alone. It's the product of five dimensions: the code, the configuration, the deployment, the network, and the runtime state. A CVE describes the first one. The other four are specific to your environment, different for every resource, and change continuously. No external score or catalog can capture them. That's why answering "is this exploitable here" is your job, not the scanner's. (If you're wondering why the number is so high in the first place, here's where CVE bloat comes from.)
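The five dimensions compose as a conjunction: any one of them failing breaks exploitability. A minimal sketch of that idea (the class and field names are mine, not from any standard or tool):

```python
from dataclasses import dataclass

@dataclass
class ExploitabilityContext:
    """One flag per dimension. A CVE record only covers `code`;
    the other four are environment-specific (field names are mine)."""
    code: bool           # vulnerable code present (what the CVE describes)
    configuration: bool  # vulnerable feature or setting enabled
    deployment: bool     # component built/deployed in a vulnerable way
    network: bool        # an attacker can reach the resource
    runtime: bool        # vulnerable code actually loaded and running

def exploitable_here(ctx: ExploitabilityContext) -> bool:
    # Exploitability is a conjunction: one failed dimension breaks it.
    return all([ctx.code, ctx.configuration, ctx.deployment,
                ctx.network, ctx.runtime])
```

The scanner can only ever set the first flag. The other four are what the rest of this framework is for.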
There are three categories of conditions to check. Most toolchains only cover the first. But even without tooling, this framework gives you a structured way to work through the question.
Step 1 is presence, the starting point: is the vulnerable version of the software actually running on this resource? Match the package version against the CVE's affected range. Present or not present.
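Version matching is a range check. A naive sketch (dotted-numeric versions only; real ecosystems like deb, rpm, and semver each have their own comparison rules, and the version numbers below are illustrative):

```python
def parse_version(v: str) -> tuple:
    # Naive dotted-numeric parse. For anything load-bearing, use the
    # ecosystem's native tooling (e.g. dpkg --compare-versions).
    return tuple(int(part) for part in v.split("."))

def in_affected_range(installed: str, introduced: str, fixed: str) -> bool:
    """True if installed is in [introduced, fixed), the usual shape
    of a CVE's affected range."""
    return parse_version(introduced) <= parse_version(installed) < parse_version(fixed)

print(in_affected_range("2.4.50", "2.4.49", "2.4.51"))  # True
print(in_affected_range("2.4.51", "2.4.49", "2.4.51"))  # False
```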
Every scanner does this. Qualys, Tenable, Rapid7, Snyk, Trivy, AWS Inspector, Wiz. If you're triaging a scanner finding, this step is already done for you. The scanner told you it's present. That's why you're looking at it.
For application dependencies, some SCA tools add code-level reachability: is the vulnerable function actually called from your application code? If your code never invokes the vulnerable API, the bug exists in your dependency tree but can't be triggered through your execution path. Endor Labs, Snyk, and Semgrep do this. It's a useful filter, but it only applies to libraries your application imports directly. It doesn't help with OS packages, kernel CVEs, container runtime bugs, or infrastructure components.
If you don't have SCA with reachability, you can check this manually for critical findings. Read the CVE advisory. It usually identifies the vulnerable function or module. Check whether your application actually uses that function. It takes time, but it's a valid triage step.
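A manual check like that can be partially scripted. A crude textual sketch (the function name and sources are hypothetical; this is keyword matching over source text, not call-graph analysis, so wrappers, aliases, and reflection will be missed):

```python
import re

def files_calling(files: dict[str, str], func_name: str) -> list[str]:
    """Which of these {path: source} pairs appear to call func_name?
    Enough to decide whether a deeper look is warranted, no more."""
    pattern = re.compile(rf"\b{re.escape(func_name)}\s*\(")
    return [path for path, src in files.items() if pattern.search(src)]

# Hypothetical example: which files invoke the advisory's flagged API?
sources = {"app/loader.py": "data = yaml.load(stream)",
           "app/api.py": "return json.dumps(payload)"}
print(files_calling(sources, "yaml.load"))  # ['app/loader.py']
```

A hit means "look closer," not "exploitable"; no hits across the codebase is evidence (not proof) the path isn't reachable from your code.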
Present and in the affected range is necessary but not sufficient. Most of your findings will pass this step. The real filtering happens next.
If you're running a CNAPP with runtime scanning (Wiz, Sysdig, Prisma Cloud, Aqua), you have an additional filter beyond version matching. Runtime scanners use eBPF sensors, agents, or in-use package detection to tell you which vulnerable packages are actually loaded in memory, network-reachable, and part of active workloads. Packages that are installed but never loaded get filtered out. That's real progress. Your finding count drops.
But "in use" is not the same as "exploitable." A runtime scanner confirms the package is running. It doesn't check whether the specific conditions for exploitation are met. Five categories of conditions survive runtime filtering:
Feature state. OpenSSL is loaded in memory and network-reachable. The runtime scanner confirms it's in use. But the CVE requires DTLS to be active, and your deployment only uses TLS. Or Apache is running, but the CVE requires mod_proxy to be enabled, and it's not loaded. The scanner sees the package. It doesn't see which features are active.
Configuration state. SSH is running. But the CVE requires password authentication, and you only allow key auth. The kernel is loaded. But the CVE requires unprivileged user namespaces, and unprivileged_userns_clone=0. Redis is accepting connections. But it's bound to 127.0.0.1 with requirepass set. The software is running. The vulnerable feature is disabled by config. Configuration is the single largest source of false positives that survive runtime filtering.
Compile-time features. A CVE in libcurl's SFTP support doesn't apply if libcurl was compiled without SFTP. A CVE in the kernel's Bluetooth subsystem doesn't apply if CONFIG_BT wasn't set. eBPF can see a binary loaded in memory. It can't inspect what compile flags built that binary.
Security context. A container escape CVE requires CAP_SYS_ADMIN. Your pod dropped all capabilities except NET_BIND_SERVICE. The vulnerability exists in the code, the code is running, but the kernel won't allow the exploitation to succeed. Runtime scanners see the process running. They don't evaluate whether the security context permits the specific system calls the exploit requires.
Multiple exploitation paths. Many CVEs have more than one way to be exploited. Each path has different conditions. Declaring a CVE not exploitable means eliminating every known path, not just checking one condition. Runtime scanners give you a single signal: package in use, yes or no. They don't reason about path-specific conditions.
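The security-context category lends itself to a direct, read-only check. A sketch against a simplified Kubernetes securityContext (the default capability set varies by container runtime, so treat DEFAULT_CAPS as illustrative, and note this ignores host namespaces and admission-level overrides):

```python
# Illustrative subset of a container runtime's default capability set;
# the exact list varies by runtime and version.
DEFAULT_CAPS = {"CHOWN", "DAC_OVERRIDE", "FOWNER", "KILL",
                "NET_BIND_SERVICE", "SETGID", "SETUID", "SYS_CHROOT"}

def capability_available(container: dict, required: str) -> bool:
    """Does this container's securityContext leave `required` available?
    Kubernetes capability names drop the CAP_ prefix (SYS_ADMIN, not
    CAP_SYS_ADMIN)."""
    sc = container.get("securityContext", {})
    if sc.get("privileged"):
        return True  # privileged containers get the full set
    caps = sc.get("capabilities", {})
    drop = {c.upper() for c in caps.get("drop", [])}
    add = {c.upper() for c in caps.get("add", [])}
    effective = add if "ALL" in drop else (DEFAULT_CAPS - drop) | add
    return required.upper() in effective

# The pod from the text: drop ALL, keep only NET_BIND_SERVICE.
locked_down = {"securityContext": {"capabilities":
               {"drop": ["ALL"], "add": ["NET_BIND_SERVICE"]}}}
print(capability_available(locked_down, "SYS_ADMIN"))  # False
```

If the exploit needs SYS_ADMIN and the pod dropped it, that finding moves down the queue regardless of what the scanner says.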
Runtime scanning closes the gap between "installed" and "in use." The gap between "in use" and "exploitable" is where your analysts still spend their time. That's Step 2.
Whether or not you have runtime scanning, this is the step that eliminates the most noise. The package is present, maybe confirmed in use. But are the specific technical conditions for exploitation actually satisfied on this resource?
Every exploit has preconditions beyond "the vulnerable version is installed" and "the package is loaded": a feature that must be enabled, a configuration value that must be set, a compile-time option that must have been built in, a capability the process must hold, a protocol that must be exposed.
These are conditions on the resource itself. They can be verified directly: read the sysctl, inspect the build, check the container spec, query the config. Read-only checks. No exploitation attempt needed.
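Two of those checks, sketched over file contents you'd pull read-only from the host (function names are mine; verify the defaults against your distro, since vendors patch them):

```python
def sysctl_enabled(proc_value: str) -> bool:
    # e.g. the contents of /proc/sys/kernel/unprivileged_userns_clone
    return proc_value.strip() == "1"

def sshd_password_auth_enabled(sshd_config: str) -> bool:
    """Parse sshd_config text for PasswordAuthentication. sshd honors
    the FIRST occurrence of a keyword; the upstream default when the
    keyword is absent is 'yes' (distros may override, so verify)."""
    for line in sshd_config.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments/blanks
        if not line:
            continue
        parts = line.split(None, 1)
        if parts[0].lower() == "passwordauthentication" and len(parts) == 2:
            return parts[1].strip().lower() == "yes"
    return True  # upstream default when unset
```

If the CVE requires password auth and this returns False, the precondition isn't met on this resource.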
Industry data consistently shows 80-90% of scanner findings aren't exploitable in context. Not because scanners are wrong about the package being present. Because the conditions for exploitation aren't met. The feature is disabled. The configuration isn't set. The capability is dropped.
If you have tooling that checks on-resource conditions, use it. If you don't, you can still work this step manually for high-priority findings. Read the CVE advisory and any linked exploit writeups. They describe what conditions need to be true. Then check whether those conditions are true on the specific resource your scanner flagged. It's the single highest-value triage step you can add to your process.
Where to find condition details: The CVE advisory itself, NVD references, vendor security bulletins, and exploit writeups (if published) typically describe the preconditions. Look for phrases like "requires," "only affects configurations where," "when enabled," "if compiled with." Those are your on-resource conditions.
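You can pre-filter advisory text for those phrases. A keyword-spotting sketch, not NLP and not a substitute for actually reading the advisory (the marker list and sample text are mine):

```python
import re

# Phrases that usually introduce an exploitation precondition.
PRECONDITION_MARKERS = [
    r"requires?\b", r"only affects?\b", r"when enabled\b",
    r"if compiled with\b", r"only (?:if|when)\b",
]

def precondition_sentences(advisory_text: str) -> list[str]:
    """Return sentences that likely state preconditions.
    Read the hits yourself; this only narrows where to look."""
    sentences = re.split(r"(?<=[.!?])\s+", advisory_text)
    pattern = re.compile("|".join(PRECONDITION_MARKERS), re.IGNORECASE)
    return [s.strip() for s in sentences if pattern.search(s)]

advisory = ("A flaw was found in the DTLS implementation. "
            "Exploitation requires DTLS to be negotiated. "
            "TLS-only deployments are not affected.")
print(precondition_sentences(advisory))
# -> ['Exploitation requires DTLS to be negotiated.']
```

Each extracted sentence maps to an on-resource check: is that condition true on the flagged box?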
Before the next step, a word on reachability, because the industry uses this term to mean different things and conflating them leads to bad triage decisions.
Code reachability asks: is the vulnerable function called from your application code? This is what SCA tools check. Static analysis of call graphs and dependency chains. Applies to libraries your application imports.
Network reachability asks: can an attacker get to this resource? This is a topology question. Subnets, security groups, VPNs, bastions, IAM role chains. Attack path and CNAPP tools model this.
Exploit reachability is a term some vendors use loosely. It sometimes means code reachability, sometimes network reachability, sometimes a vague combination.
When you're evaluating a tool or reading a vendor claim about "reachability analysis," ask which kind. Code reachability filters application dependencies. Network reachability filters by topology. Neither one checks whether the exploitation conditions are met on the resource itself. That's Step 2, and it's a different question.
That leaves Step 3. On-resource conditions tell you whether the vulnerability is exploitable on this resource. Off-resource conditions tell you whether an attacker can get there and whether something already stands in the way.
Network reachability. A resource where the exploitation conditions are met but that sits in a private subnet with no route from any attacker-controlled position is a different priority than one that's two hops from the internet. If you have attack path tooling or a CNAPP, it models this for you: network topology, security groups, identity chains, permissions, lateral movement paths. If you don't, trace it yourself. Where does this resource sit? What's between it and the internet? What's between it and other resources an attacker could plausibly compromise first? The more hops and controls between an attacker and this resource, the lower the practical risk, even if the vulnerability is genuinely exploitable on the box.
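Once you have an allow-edge graph (which node can open a connection to which), the reachability question itself is just a breadth-first search. A sketch with a toy topology (building the graph from security groups and route tables is the actual hard part, and the node names here are invented):

```python
from collections import deque

def reachable_hops(edges: dict[str, set[str]], src: str, dst: str):
    """BFS over an allow-edge graph: edges[a] = nodes that a can open
    a connection to. Returns the hop count, or None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == dst:
            return hops
        for nxt in edges.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

# Toy topology: internet -> ALB -> web tier -> DB. The batch worker
# has no inbound path from the internet at all.
edges = {
    "internet": {"alb"},
    "alb": {"web"},
    "web": {"db"},
}
print(reachable_hops(edges, "internet", "db"))     # 3
print(reachable_hops(edges, "internet", "batch"))  # None
```

A finding on "db" at three hops behind two controls is a different queue position than the same finding on "alb."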
Compensating controls. Even when the resource is reachable and the conditions are met, existing controls may block or limit exploitation: a WAF filtering the exploit payload, seccomp or AppArmor profiles denying the system calls the exploit needs, EDR flagging the post-exploitation behavior, network segmentation limiting what a successful exploit can reach.
Compensating controls don't eliminate the vulnerability. They reduce the likelihood of successful exploitation. If your environment already has controls that block the attack path or detect the exploitation attempt, the effective risk is lower than the raw finding suggests. Factor that into your triage.
When you're triaging a CVE, you're answering one question: is this exploitable in my environment? Here's the framework:
| Step | Question | How to check | What it filters |
|---|---|---|---|
| 1. Presence | Is the vulnerable version running here? | Scanner output, package inventory | Starting point. Necessary but insufficient. |
| 2. On-resource conditions | Are the exploitation preconditions met on this resource? | CVE advisory, exploit writeups, resource config checks | 80-90% of remaining noise. Disabled features, unmet configs, dropped capabilities. |
| 3. Off-resource conditions | Can an attacker reach it? Is anything blocking the path? | Network topology, security group rules, compensating controls | Unreachable resources, compensated risks. |
Most triage processes effectively stop at Step 1. The scanner said it's there, CVSS says it's critical, so it goes into the queue. Step 2 and Step 3 are where you separate what's actually exploitable from what's just present.
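The table reduces to a small decision function. A sketch (the labels and the coarse yes/no inputs are mine; real triage weighs these as gradients, not booleans):

```python
from enum import Enum

class Priority(Enum):
    NOT_APPLICABLE = "close: not exploitable here"
    DEPRIORITIZED = "fix eventually: exploitable but unreachable or compensated"
    URGENT = "fix now: present, conditions met, reachable, uncompensated"

def triage(present: bool, conditions_met: bool,
           reachable: bool, compensated: bool) -> Priority:
    """The three steps as a decision function: step 1 -> present,
    step 2 -> conditions_met, step 3 -> reachable and compensated."""
    if not present or not conditions_met:
        return Priority.NOT_APPLICABLE
    if not reachable or compensated:
        return Priority.DEPRIORITIZED
    return Priority.URGENT
```

Stopping at Step 1 means calling `triage(present=True, ...)` and treating every result as URGENT. Steps 2 and 3 are the other two branches.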
You can work this framework with whatever tools you have. Scanners give you Step 1. SCA tools add code reachability for application dependencies. Attack path tools help with Step 3. For Step 2, you may need to check manually until your toolchain catches up. But even manual checks on your top 20 findings will change your queue dramatically.
The gap between "present" and "actually exploitable" is where most of the noise in vulnerability management lives. This framework helps you close it.