Security teams have built their prioritization stack in layers.
CVSS came first. Score everything by theoretical severity. Patch the 9.8s, then the 9.0s. The problem: 57% of CVSS-driven remediation effort catches only 20% of what actually gets exploited [3]. Severity doesn't predict exploitation.
EPSS added probability [2]. What's the likelihood this CVE gets exploited in the next 30 days? Better signal, better efficiency. But still global, still context-free. A high EPSS score tells you attackers are interested. It doesn't tell you the conditions are met in your environment.
SSVC added decision logic. Decision trees that consider exploitation status, technical impact, and mission context. More nuanced than a score. But the inputs are still external.
KEV became the forcing function. CISA's Known Exploited Vulnerabilities catalog [1]: confirmed exploitation in the wild, mandatory remediation timelines for federal agencies. If it's in KEV, it's real. No debate. Patch now.
KEV works because it's selective. A few thousand CVEs out of hundreds of thousands. The selectivity is the signal.
Anthropic's Claude Mythos Preview autonomously built a working FreeBSD kernel exploit [4]. End to end. No human involvement. Under $50. It chained four vulnerabilities together to escape both renderer and OS sandboxes in a major browser. It generated functional exploits for bugs that had survived decades of human review.
This isn't a one-off demonstration. This is the economics of exploit generation changing permanently. The CVE-GENIE framework, a multi-agent LLM system, reproduced working exploits for 51% of real-world CVEs at an average cost of $2.77 per vulnerability [5]. Israeli researchers demonstrated similar results with Claude Sonnet 4.0: working exploits in 10-15 minutes at roughly $1 each [6]. Kernel-level exploits take longer, hours instead of minutes, and cost a few hundred dollars. But when a frontier model can produce a working exploit for most CVEs for the cost of a coffee, the cost barrier that kept most CVEs unexploited collapses.
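Those figures invite a back-of-envelope calculation. A minimal sketch, assuming for illustration only a corpus of 250,000 published CVEs (the real count is in the hundreds of thousands and growing), with the per-attempt cost and success rate taken from the CVE-GENIE results cited above [5]:

```python
# Back-of-envelope economics of AI exploit generation.
COST_PER_ATTEMPT = 2.77   # average cost per CVE attempted, per CVE-GENIE [5]
SUCCESS_RATE = 0.51       # fraction of CVEs yielding a working exploit [5]
TOTAL_CVES = 250_000      # assumed corpus size, for illustration only

total_cost = TOTAL_CVES * COST_PER_ATTEMPT
exploits_produced = round(TOTAL_CVES * SUCCESS_RATE)
cost_per_working_exploit = total_cost / exploits_produced

print(f"Attempting every CVE: ${total_cost:,.0f}")            # ~$692,500
print(f"Working exploits produced: {exploits_produced:,}")    # 127,500
print(f"Cost per working exploit: ${cost_per_working_exploit:.2f}")
```

At these assumed rates, a budget smaller than one enterprise security tool contract covers an exploit attempt against the entire CVE corpus. That is the collapse the paragraph above describes.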
What happens when exploit generation is cheap and universal?
Every CVE eventually gets a working exploit. Not on the timescale of months or years. On the timescale of days. The 80% of CVEs that currently have no known exploitation become exploitable the moment someone points a model at them. KEV=True becomes the default state, not the exception.
At that point, KEV tells you what you already assumed: yes, this is exploitable. Somewhere. By someone. Under some conditions.
That's not a filter. That's a tautology.
Each framework added signal by answering a different question:
| Framework | Question it answers | Signal value |
|---|---|---|
| CVSS | How bad could this be theoretically? | Low. Context-free severity is guesswork at scale. |
| EPSS | How likely is exploitation in the next 30 days? | Moderate. Probability without environment context. |
| SSVC | What action should we take given exploitation status and impact? | Moderate. Better decision logic, but inputs are still external. |
| KEV | Has this been exploited in the wild? | High today. Converges to noise as AI makes exploitation universal. |
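The layered logic the table summarizes can be sketched as a single decision function. A minimal sketch, with the thresholds, field names, and priority labels invented for illustration (no framework prescribes these exact values):

```python
from dataclasses import dataclass

@dataclass
class GlobalSignals:
    """Hypothetical container for the external signals each framework emits."""
    cvss: float        # CVSS base score, 0.0-10.0
    epss: float        # EPSS probability, 0.0-1.0
    in_kev: bool       # listed in CISA's KEV catalog?
    ssvc_action: str   # SSVC decision outcome, e.g. "Track", "Attend", "Act"

def legacy_priority(sig: GlobalSignals) -> str:
    """A simplified sketch of today's layered prioritization.
    Note that nothing here consults the actual environment."""
    if sig.in_kev or sig.ssvc_action == "Act":
        return "patch-now"        # confirmed exploitation forces action
    if sig.epss >= 0.5:
        return "patch-soon"       # high exploitation probability
    if sig.cvss >= 9.0:
        return "patch-scheduled"  # severe in theory, unproven in practice
    return "track"
```

The failure mode is visible in the structure: when AI-generated exploits push `in_kev` toward true for everything, the first branch fires for every input and the function degenerates to a constant.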
Every one of these frameworks operates on global CVE data. They answer: how bad is this CVE for the world? None of them answer: how bad is this CVE for me, specifically, in my environment?
That question has always been the one that matters. It just didn't matter as much when only a fraction of CVEs had proven exploits. You could use KEV as a rough proxy: if someone exploited it in the wild, it's probably worth worrying about in your environment.
When everything has an exploit, that proxy breaks. You're back to the fundamental question: are the specific conditions for exploitation met on this resource, in this configuration, behind this network topology?
This is the distinction the industry needs to internalize before AI-generated exploits become the norm.
A CVE has a working PoC. It's been exploited in the wild. It's in KEV. CVSS is 9.8. EPSS is high. Every external signal says "critical."
But on your resource:

- The vulnerable feature may be disabled.
- The required configuration may not be set.
- The necessary capability may not be granted.
- Network topology, security groups, or compensating controls may block the attack path.

The exploit is real. The exploitability on your resource is not. The conditions on the resource (configuration, features, capabilities) and off the resource (network topology, security groups, compensating controls) determine whether the proven exploit can actually fire.
This isn't edge-case reasoning. Industry data consistently shows that the vast majority of scanner findings aren't exploitable in the specific environments where they're reported. The conditions aren't met. The exploit exists but can't execute.
When every CVE has an exploit, the only filter with signal is environment-specific condition verification. Not "does an exploit exist" (it does, for everything) but "are the conditions met on this resource."
This requires checking conditions at two levels:
On-resource conditions. Is the vulnerable feature enabled? Is the configuration set? Is the required capability granted? Is the specific code path exercised? These are properties of the resource itself.
Off-resource conditions. Is the resource reachable from the internet? Through how many hops? What compensating controls are already in place? What's the blast radius if compromised? These are properties of the environment around the resource.
Together, on-resource and off-resource conditions determine whether a proven exploit can actually execute on a specific resource. That's the verification that no global framework can provide, because it depends on deployment-time decisions that are different for every organization and change continuously.
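The two-level check can be sketched as a simple predicate. A minimal sketch in Python, with hypothetical field names chosen for illustration (no scanner or framework defines this exact schema):

```python
from dataclasses import dataclass

@dataclass
class OnResource:
    """Properties of the resource itself (hypothetical fields)."""
    feature_enabled: bool       # is the vulnerable feature turned on?
    config_matches: bool        # is the exploit-required configuration set?
    capability_granted: bool    # does the process hold the needed capability?

@dataclass
class OffResource:
    """Properties of the environment around the resource (hypothetical fields)."""
    reachable: bool             # can attacker traffic reach it at all?
    network_hops: int           # distance from the attacker's position
                                # (informational; feeds prioritization, not the gate)
    compensating_control: bool  # WAF rule, seccomp profile, etc. breaks the path

def exploitable_here(on: OnResource, off: OffResource) -> bool:
    """A proven exploit fires only if every on-resource precondition
    holds AND no off-resource barrier breaks the attack path."""
    conditions_met = on.feature_enabled and on.config_matches and on.capability_granted
    path_open = off.reachable and not off.compensating_control
    return conditions_met and path_open
```

Even a KEV-listed CVE with a public PoC returns `False` here if a single condition fails, and that is the point: the global signals never change, but the verdict is per-resource and per-deployment.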
This isn't a theoretical future concern. The progression is:

1. Exploit generation cost collapses to a few dollars per CVE.
2. Every published CVE acquires a working exploit within days of disclosure.
3. KEV saturates and EPSS probabilities converge upward; "has it been exploited?" stops discriminating.
4. The only filter left with signal is environment-specific condition verification.
Organizations that build their verification capability now will be ready when this shift arrives. Organizations that rely on KEV and EPSS as their primary filters will find those filters drowned in noise, with no mechanism to determine what's actually exploitable in their environment.
For a comparison of what CVSS, EPSS, SSVC, and KEV each measure and where they fall short, see CVSS vs EPSS vs SSVC vs KEV. For a practitioner's framework on how to determine whether a specific CVE is exploitable in your environment, see Is It Exploitable Here? A Triage Framework.
[1] CISA. "Known Exploited Vulnerabilities Catalog."
[2] FIRST. "Exploit Prediction Scoring System (EPSS)."
[3] FIRST. "CVSS: 57% effort catches 20% of exploited vulnerabilities." EPSS analysis.
[4] Anthropic. (2026, April 7). "Project Glasswing."
[5] Ullah, S. et al. "CVE-GENIE: An LLM-based Multi-Agent Framework for Automated CVE Exploitation."
[6] GBHackers. (2026). "AI Systems Capable of Generating Working Exploits for CVEs in Just 10-15 Minutes."