Why I Left the CISO Chair to Build Defendermate

The gap between finding vulnerabilities and fixing the right ones is where breaches happen. I've lived on both sides of it.

By Ashish Popli, Founder & CEO, Defendermate


I started on the offensive side. Red teaming, breaking into systems, learning that exploitability depends on conditions, not just presence. That instinct stayed with me through years of security engineering at Microsoft, building ML systems at Google, and three CISO roles where I became the buyer of the tools I used to test.

The pattern was always the same. Scanners produce thousands of findings. Teams triage. Engineering pushes back. Meetings happen. Meanwhile, the vulnerability that actually matters sits buried at row 42,170, under noise that no one has time to investigate.

Then the post-AI era arrived. 48,000 CVEs published in 2025. 60,000+ projected for 2026. Exploitation windows collapsed from weeks to five days. The playbook that barely worked before stopped working entirely.

Scanners Tell You What Exists, Not What Matters

Scanners are good at what they do: detecting the presence of known CVEs. Version-match scanners flag every installed package with a known vulnerability. Runtime-context scanners go further, checking whether the package is loaded and reachable.

But presence isn't exploitability. A package can be loaded, reachable, and running, and still not be exploitable because the specific conditions the CVE requires aren't met:

  • Is the vulnerable feature actually compiled in?
  • Is the required configuration set?
  • Does the security context allow the exploit path?
  • Is the kernel parameter the CVE depends on enabled?
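The checklist above amounts to a conjunction: a CVE is exploitable only when every required condition holds on that resource. A minimal sketch of that logic, with all names (`Condition`, the probe lambdas, the sample conditions) purely illustrative rather than any vendor's actual implementation:

```python
# Hypothetical sketch: verifying a CVE's exploit preconditions on a host.
# Every name here is illustrative, not a real product API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Condition:
    description: str
    probe: Callable[[], bool]  # a read-only check against the environment

def is_exploitable(conditions: list[Condition]) -> tuple[bool, list[str]]:
    """A CVE is exploitable only if every required condition holds.
    Returns the verdict plus the list of unmet conditions as evidence."""
    unmet = [c.description for c in conditions if not c.probe()]
    return (len(unmet) == 0, unmet)

# Example: a CVE that needs a compiled-in feature, a permissive config,
# and a kernel parameter. The probes are stubbed with fixed answers.
cve_conditions = [
    Condition("vulnerable feature compiled in", lambda: True),
    Condition("required configuration set", lambda: False),
    Condition("kernel parameter enabled", lambda: True),
]

exploitable, evidence = is_exploitable(cve_conditions)
print(exploitable)  # False: one condition unmet, so not exploitable here
print(evidence)     # ['required configuration set']
```

One failed probe is enough to move the CVE out of the urgent pile, and the unmet-condition list doubles as the evidence trail for the verdict.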

When we actually verify CVEs against the specific conditions required for exploitation in a given environment, roughly 95% turn out to be non-exploitable. That's not a scanner failure. Scanners check presence. Nobody was checking conditions.

Prioritization Without Verification Is Guesswork

The industry response to scanner noise has been better scoring. EPSS, CISA KEV, threat intelligence feeds, risk-based scoring. All useful. None of them answer the question that matters: is this CVE actually exploitable on this specific resource in my specific environment?

Re-scoring noise is still noise. You need to verify what's real before you prioritize it. And you need environment context: how would an attacker reach this vulnerability, and what's the blast radius if it's exploited? CVSS doesn't know your network topology. EPSS doesn't know your IAM chain.

What Practitioners Actually Need

After three CISO roles, investments in 10+ security startups, and a front-row seat as every security team hit the same wall, I believe it comes down to four things:

Know, don't guess. Is this CVE exploitable in my environment? Not a probability. A verified answer with evidence.

See the attacker's path. Not just "this resource is vulnerable" but how an attacker reaches it, through which identities, permissions, and network hops, and what data is downstream.

Understand the cheapest fix. The conditions that make something exploitable are the same conditions you can change to fix it. Sometimes that's a patch. Sometimes it's a config change that takes minutes.

Make every team self-sufficient. Security, platform, dev, network, and leadership all need answers. If every question routes through the security team, the security team becomes a helpdesk. Manual triage was already an impossible task. The post-AI era just made it undeniable.

Why I Built Defendermate

A career building security tools and ML systems taught me that vulnerability management isn't an operations challenge. It's a product problem. When classical ML, generative AI, and graph modeling matured enough to tackle it, I built Defendermate.

Filter verifies what's actually exploitable. For each CVE on each resource, it checks the specific conditions required for exploitation. Agentless, read-only, full audit trail from OSINT source to condition to probe result to verdict.

Prioritize models how an attacker would reach it. Attack techniques chained across your cloud infrastructure with typed consequences at each hop. Internet exposure and blast radius for every verified CVE. Not severity sorting. Risk ranking grounded in your environment.
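The core of that attacker's-eye view is graph reachability: model identities, permissions, and network hops as directed edges, then ask what is upstream (exposure) and downstream (blast radius) of a verified CVE. A toy sketch under assumed names — the graph, edge semantics, and resource labels are hypothetical, not Defendermate's model:

```python
# Illustrative sketch: exposure and blast radius on a cloud resource graph.
# Edges mean "an attacker at X can move to Y" via an identity, permission,
# or network hop. All node names are made up for the example.
from collections import deque

edges = {
    "internet": ["load-balancer"],
    "load-balancer": ["web-vm"],
    "web-vm": ["app-role"],            # instance profile attached to the VM
    "app-role": ["s3-customer-data"],  # IAM permission granted to the role
}

def reachable_from(start: str) -> set[str]:
    """BFS: every resource an attacker starting at `start` can reach."""
    seen: set[str] = set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A verified CVE sits on web-vm. Is it internet-exposed, and what data
# is downstream if it's exploited?
internet_exposed = "web-vm" in reachable_from("internet")
blast_radius = reachable_from("web-vm")
print(internet_exposed)       # True
print(sorted(blast_radius))   # ['app-role', 's3-customer-data']
```

Two CVEs with identical CVSS scores can rank very differently here: one sits on a path from the internet to customer data, the other dead-ends on an isolated host.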

Explore builds a shared understanding of risk across every team. Any team asks any question about their security posture in natural language. Evidence-backed answers from verified data. Not generative guessing.

The Shift That Matters

The industry spent years asking "how do we prioritize better?" The real question was always "how do we know what's real?"

Filter before you prioritize. Verify before you score. That's not an incremental improvement. It's a structural shift. And in the post-AI era, where volume is doubling and exploitation windows are collapsing, it's the only approach that scales.