By Ashish Popli, Founder & CEO, Defendermate
I've spent over two decades in security: at Microsoft, Google, as founding CISO at UiPath through its IPO, and most recently leading security at Spotnana. Across every one of those roles, the same conversation played out like clockwork.
A scanner runs. Thousands of CVEs land in a spreadsheet or a dashboard. The security team stares at the list. Engineering gets a ticket. Engineering pushes back. A meeting gets scheduled. Someone builds a pivot table. And somewhere in the middle of all that, the vulnerability that actually matters, the one an attacker could chain with a misconfigured IAM role to reach your production database, sits at row 42,170, buried under noise.
I've been the person in that meeting, and I know you have too. And I think it's worth being honest about why the tools we've all invested in (scanners, CSPM, ASPM, DSPM, SSPM) still leave a gap that nobody talks about openly.
Let's start with the obvious. Vulnerability scanners are essential. They do exactly what they promise: they scan your environment and tell you which software has known vulnerabilities. Tenable, Qualys, Wiz, Snyk. They're good at what they do.
The problem isn't that they find too little. It's that they find too much, with too little context.
A scanner sees a CVE on a container image and flags it. It doesn't check whether the conditions required for exploitation are actually present on that resource: is the vulnerable function called in your deployment, is the OS configuration that enables the exploit in place, is the specific library version loaded in a way that's reachable? Those are conditions on the resource that determine whether exploitation is even possible.
But there's a second question that scanners don't even attempt to answer: can an attacker actually reach that resource in the first place? What misconfigurations, overly permissive roles, or other weaknesses across your environment would an attacker need to chain together to get to the vulnerable resource? Those are conditions off the resource, and they matter just as much. A critical CVE on a resource that no attacker can reach through your environment is not the same risk as a critical CVE on a resource that sits two hops away from an internet-facing misconfiguration.
So you get a critical CVE with a CVSS of 9.8, and your security engineer has to spend hours, sometimes days, manually investigating whether it's actually exploitable in your specific environment. Multiply that by thousands of findings, and you have a team buried in triage work, a good chunk of which is, frankly, wasted effort.
Here's the number that should bother every security leader: when we actually test CVEs against the specific conditions required for exploitation in a given environment, roughly 95% turn out to be non-exploitable. Nineteen out of twenty findings that your team is triaging, debating, and sending to engineering are false positives in context.
Your scanner isn't wrong. It's just incomplete.
Now let's talk about the posture management tools. The CSPMs, DSPMs, ASPMs, and SSPMs that most organizations have invested in heavily over the past few years.
These tools have genuinely improved how we manage cloud security. CSPM catches your misconfigured S3 buckets and overly permissive security groups. DSPM tracks where your sensitive data lives. ASPM gives you visibility into your application attack surface. SSPM flags risky SaaS configurations. They're valuable. We use them. We recommend them.
But here's what none of them do: they don't connect the dots.
Your CSPM tells you that an IAM role has excessive privileges. Your scanner tells you that a container on the same host has a critical CVE. Separately, each is a finding in a dashboard. Together, they might be an attack path: the misconfigured role is the technique an attacker uses to reach the vulnerable container, the CVE is what they exploit once they're there, and the consequence of that exploitation is what carries them to your crown jewels.
No single posture management tool sees that chain. They each see their slice. Their own findings, their own severity scores, their own dashboards. But attackers don't think in silos. They think in paths. They ask: "What can I get to from here? What does this weakness unlock? What's the next hop?"
That gap, between individual findings and composite attack paths, is where breaches happen. And it's the gap that nobody in the current tool ecosystem is designed to fill.
The industry's response to scanner overload has been "better prioritization." Re-score the CVEs using EPSS, CISA KEV, threat intelligence feeds, asset criticality. Build a risk score. Sort the list differently.
It helps. I won't pretend it doesn't. But it has a fundamental limitation: you're still working with the full set of findings. You've rearranged 100,000 CVEs into a prettier order, but you haven't answered the most basic question a security engineer needs answered: is this thing actually exploitable in my environment, yes or no?
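To make the limitation concrete, here is a minimal sketch of what that re-scoring approach typically looks like. The field names and weights are illustrative, not any vendor's actual formula: the point is that every finding still comes out the other end, just in a different order.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # base severity, 0-10
    epss: float               # exploit-probability estimate, 0-1
    in_kev: bool              # listed in CISA's Known Exploited Vulnerabilities catalog
    asset_criticality: float  # how much the affected asset matters, 0-1

def risk_score(f: Finding) -> float:
    """Blend severity, exploit likelihood, and asset value into one number.
    Weights here are illustrative, not a standard."""
    score = (f.cvss / 10) * 0.4 + f.epss * 0.4 + f.asset_criticality * 0.2
    if f.in_kev:  # known-exploited CVEs jump the queue
        score = min(1.0, score + 0.3)
    return round(score, 3)

findings = [
    Finding("CVE-2024-0001", cvss=9.8, epss=0.02, in_kev=False, asset_criticality=0.3),
    Finding("CVE-2023-1234", cvss=7.5, epss=0.90, in_kev=True,  asset_criticality=0.9),
]
# Re-scoring reorders the list, but nothing is ever removed from it.
ranked = sorted(findings, key=risk_score, reverse=True)
```

Note that the lower-CVSS finding can rightly outrank the 9.8, which is the value prioritization delivers. But the 9.8 is still in the queue, still waiting for a human to decide whether it's real.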
Deduplication helps too. Tools that aggregate findings from multiple scanners and remove duplicates so you're not seeing the same CVE four times from four different tools. That's useful housekeeping. But deduplicating noise is not the same as eliminating it. You still have thousands of unique findings, most of which aren't exploitable.
What's needed is a fundamentally different first step: filtration. Before you prioritize, before you score, before you assign, you need to verify which vulnerabilities can actually be exploited given the specific conditions of your environment.
That means checking conditions on the resource itself. Does this CVE require a specific kernel module to be loaded? Is it loaded? Does it need a certain library version or runtime configuration? Is that present? If the necessary conditions aren't met on the resource, the CVE isn't exploitable there. Remove it. Move on.
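In pseudocode form, that on-resource check is a simple conjunction: every precondition the exploit requires must actually be observed on the resource. The condition names and values below are hypothetical, chosen only to illustrate the shape of the test.

```python
def is_exploitable_on_resource(preconditions: dict, resource_state: dict) -> bool:
    """A CVE is exploitable on a resource only if EVERY condition
    required for exploitation is actually present there."""
    return all(resource_state.get(cond) == required
               for cond, required in preconditions.items())

# Hypothetical preconditions for some CVE's exploit chain.
cve_preconditions = {
    "kernel_module_loaded": "overlayfs",    # exploit needs this module
    "libssl_version": "1.1.1",              # vulnerable library line
    "vulnerable_function_reachable": True,  # is the code path ever called?
}

# Observed state of one host: the vulnerable library version isn't present.
this_host = {
    "kernel_module_loaded": "overlayfs",
    "libssl_version": "3.0.2",
    "vulnerable_function_reachable": True,
}
```

On this host the check fails on the library version, so the finding can be closed with evidence rather than re-ranked. One unmet condition is enough to remove the CVE from the queue entirely.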
But it also means checking conditions off the resource. Even if a CVE is exploitable on a given resource, can an attacker actually get there? What combination of weaknesses across your environment, misconfigurations, privilege issues, exposed interfaces, would an attacker need to chain together via specific techniques to reach that resource? If no viable path exists, the exploitable CVE is still not a practical risk.
And when an attacker can reach the resource and can exploit the vulnerability, the next question is: what does that exploitation actually give them? Every successful exploit has a consequence. Maybe it gives the attacker code execution, or credential access, or the ability to escalate privileges. That consequence is what allows the attacker to move from that resource to the next one, chaining exploitation after exploitation until they reach their objective. Understanding that full chain, from initial entry through every hop to final impact, is what separates real risk assessment from theoretical scoring.
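The reachability question above is, at its core, a graph problem: resources are nodes, and each weakness an attacker can use to hop between them is an edge. A breadth-first search either produces a concrete chain of techniques from an entry point to the target, or proves no path exists. The graph below is a toy, and all the resource and technique names are illustrative.

```python
from collections import deque

# Toy attack graph: nodes are resources, edges are (next_hop, technique)
# pairs describing the weakness that enables that hop.
edges = {
    "internet": [("load_balancer", "exposed management interface")],
    "load_balancer": [("app_container", "critical CVE: remote code execution")],
    "app_container": [("prod_database", "overly permissive IAM role")],
}

def attack_path(start: str, target: str):
    """Breadth-first search for a chain of weaknesses from entry to target.
    Returns a list of (hop, technique) pairs, or None if unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for nxt, technique in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(nxt, technique)]))
    return None
```

If `attack_path("internet", "prod_database")` returns a chain, the CVE on the container is a real risk with a named route to the crown jewels. If it returns `None`, the same CVE, however critical on paper, has no practical path to impact.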
Filter first, then prioritize what's left. That changes the math entirely. Instead of prioritizing 100,000 findings, you're prioritizing 5,000 that you know for certain are exploitable. Your security engineers are focused. Your engineering teams get requests that are justified with evidence. Your GRC analysts can explain to auditors exactly why certain CVEs were accepted, because they were verified as non-exploitable, with proof.
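The pipeline shift is small in code but large in consequence: verification runs before scoring, so the ranked queue only ever contains findings known to be exploitable. The `exploitable` verdict function below is a stand-in for whatever verification engine produces the yes/no answer; the findings are made up.

```python
def triage(findings, exploitable):
    """Filter first, then prioritize what's left."""
    verified = [f for f in findings if exploitable(f)]          # step 1: filtration
    return sorted(verified, key=lambda f: f["score"], reverse=True)  # step 2: prioritization

findings = [
    {"cve": "CVE-A", "score": 9.8, "conditions_met": False},  # loud but not exploitable
    {"cve": "CVE-B", "score": 7.5, "conditions_met": True},
    {"cve": "CVE-C", "score": 9.1, "conditions_met": True},
]

# The 9.8 drops out before anyone triages it; the team works a short, verified list.
queue = triage(findings, lambda f: f["conditions_met"])
```

Prioritize-first would have put CVE-A at the top of everyone's queue. Filter-first removes it before a single human hour is spent on it.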
I've been the CISO who had to explain open vulnerability counts to a board. I've been the security leader who had to negotiate with engineering teams that were pushing back on fix requests. I've been the person trying to figure out which of 150,000 open CVEs could actually lead to a breach.
What I needed, what every security practitioner I've talked to needs, comes down to a few things that sound simple but that no existing tool delivers end to end:
A way to know, not guess, which CVEs are exploitable. Not a higher-confidence score. A verdict, backed by evidence. "We tested the conditions required for exploitation. They aren't present. Here's the proof." That's what I need to confidently close a finding and move on.
A way to see what an attacker can actually do. Not just "this CVE is critical," but can an attacker reach it? What weaknesses across the environment would they chain together to get there? If they exploit it, what do they gain, and where does that take them next? And what about the weaknesses that aren't CVEs at all? The misconfigurations, the overly privileged service accounts, the exposed management interfaces that an attacker would chain together as stepping stones on the path to and from a vulnerable resource?
A way to understand and interrogate the risk assessment. When a tool gives me a risk score, I don't want a black box. I want to open it up. Show me how you calculated this. Show me the attack path. Let me explore the techniques an attacker would use. Let me ask questions and get clear answers. Because I'm the one who has to defend this assessment to my leadership, to my auditors, and to the engineering teams I'm asking to drop everything and fix something.
A tool that works with what I already have, not against it. I've already invested in scanners. I've already deployed posture management tools. I don't want to rip and replace. I want something that sits on top of my existing stack and makes every tool I've already bought more valuable. Ingest the findings from my scanners, ingest the posture data from my CSPMs and DSPMs, and give me the connected picture that none of them provide on their own.
I built Defendermate because I lived this problem for years and couldn't find a tool that solved it the way it needed to be solved.
Defendermate does three things:
We filter. We use AI agents to research every CVE deeply, understanding the specific conditions required for exploitation on the resource itself. Then a deterministic engine tests those conditions against the resource carrying the vulnerability. Is the vulnerable function called? Is the required configuration present? Is the specific library loaded in an exploitable way? If the on-resource conditions aren't met, we flag the CVE as non-exploitable, not with a probability, but with a verified verdict and the evidence to back it up. This alone eliminates the vast majority of scanner noise before any human touches it.
We prioritize. For CVEs that pass the on-resource filter, we model the full attacker journey. A digital twin of your environment maps the off-resource conditions: what weaknesses across your environment (misconfigurations, overly privileged identities, exposed interfaces) would an attacker chain together, and via which specific techniques, to reach the vulnerable resource? Then we model the consequences of successful exploitation. If an attacker exploits this CVE, what do they gain? Code execution? Credential access? Privilege escalation? And where does that consequence take them next? We trace the full path, from initial entry through each hop, all the way to business impact. The result is a DM Score that reflects real-world risk: how reachable is the resource, how exploitable is the vulnerability, and what's the actual blast radius if it happens?
We let you explore. Every score, every verdict, every attack path is fully transparent. A complete UX and AI-powered interface lets you drill into how any DM Score was calculated, trace attack paths step by step, understand attacker techniques, and get audit-ready explanations on demand. Ask it questions. Challenge the results. We built it for people like us, practitioners who trust but verify.
The vulnerability management industry has spent a decade trying to solve the wrong problem. We kept asking "how do we prioritize better?" when we should have been asking "how do we know what's real?"
Filtering before prioritizing isn't an incremental improvement. It's a structural shift in how vulnerability management works. It's the difference between asking your team to find the needle in a haystack and handing them just the needles.
If you're a security practitioner dealing with this every day, I'd genuinely love to hear from you. We built Defendermate for people like us, and the practitioners we talk to are the ones shaping what this product becomes.
Come see it for yourself. Our live demo lets you explore our capabilities in a sandbox environment and, if you like what you see, connect your own cloud environment to get results you can relate to: your CVEs, your resources, your actual attack surface. No sales call, no credit card, and it takes less than a minute to set up.
Ashish Popli is the Founder and CEO of Defendermate and host of the DeFUD Podcast, where he cuts through the noise in cybersecurity with practitioners who've been in the trenches. Previously, he served as Founding CISO at UiPath, led Product Trust & Safety Engineering at Google, and spent nearly a decade in security leadership at Microsoft. Across these roles, he built enterprise-grade solutions that solve security challenges at scale. He holds an M.S. in Computer Science from Stony Brook University, a CISO certification from Carnegie Mellon University, and completed Product Management executive education at UC Berkeley Haas School of Business.