If you’ve spent any real time in application security, you’ve probably had the SANS Top 25 thrown at you in a meeting at least once. It’s one of those lists that gets cited by executives, compliance auditors, and vendors selling you their next shiny tool. And to be fair, the SANS Top 25 Most Dangerous Software Errors isn’t a bad starting point — it captures genuinely common, genuinely dangerous weaknesses that show up in production code every day. But here’s the uncomfortable truth: lists don’t secure software. People do. And in a market where good security engineers are getting poached every other quarter, leaning too hard on frameworks while ignoring the humans behind them is a recipe for burnout, turnover, and ironically, worse security outcomes. Let’s talk about where the SANS Top 25 helps, where it falls flat, and why retention should arguably be your top security priority.
The Real Value of SANS Top 25 in Modern Security
The SANS Top 25, originally published jointly by SANS and MITRE and now maintained by MITRE as the CWE Top 25, is built on top of real-world CWE vulnerability data and gives teams a shared vocabulary for talking about software weaknesses. When a developer, a security engineer, and a product manager can all reference CWE-79 (cross-site scripting) or CWE-89 (SQL injection) without translation, conversations move faster. That common language is genuinely valuable, especially in larger organizations where security context can otherwise get lost between teams.
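To make that shared vocabulary concrete, here is a minimal Python sketch of CWE-89: the same user lookup written with string concatenation and with a parameterized query. The table, data, and function names are invented purely for illustration.

```python
import sqlite3

# Hypothetical in-memory table, just to have something to query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # CWE-89: attacker-controlled input is concatenated into the SQL.
    # A payload like "' OR '1'='1" rewrites the query and returns every row.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The fix is not clever filtering; it is keeping code and data on separate channels, which is exactly the framing the CWE entry teaches.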
It also serves as a reasonable baseline for organizations that are just starting to build out their secure development practices. If you have nothing — no threat model, no secure coding guidelines, no review checklist — then starting with the Top 25 is far better than starting with nothing. It points you at the categories of bugs most likely to cause real harm, and it’s backed by actual vulnerability data rather than someone’s gut feeling about what matters.
For training purposes, the list is honestly hard to beat. New hires can work through each weakness, see real-world examples, and understand why these issues keep showing up year after year. It’s also helpful when you’re onboarding developers who haven’t had formal security training. You can’t teach everything at once, and the Top 25 provides a defensible curriculum for “what every engineer should at least recognize.”
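For a curriculum exercise, CWE-79 has the virtue of fitting on one slide. A minimal sketch, with hypothetical function names, of rendering a user comment with and without output encoding:

```python
import html

def render_comment_unsafe(comment):
    # CWE-79: user input is interpolated straight into markup, so a
    # payload containing a <script> tag executes in the viewer's browser.
    return "<p>" + comment + "</p>"

def render_comment_safe(comment):
    # Output encoding turns markup-significant characters into entities,
    # so the browser displays the payload instead of executing it.
    return "<p>" + html.escape(comment) + "</p>"
```

In practice a templating engine with auto-escaping does this for you; the teaching point is recognizing where untrusted data crosses into a new interpreter.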
Finally, the list carries weight in compliance and customer conversations. When a prospect asks how you handle secure development, being able to point to documented coverage of well-known frameworks like OWASP Top 10 and SANS Top 25 makes the conversation simpler. It’s not the whole story, but it’s a credible chapter — and in regulated industries, having that paper trail matters more than security purists like to admit.
Where the SANS Top 25 Falls Short for Security Teams
Here’s the problem: the SANS Top 25 is a list of weaknesses, not a strategy. It tells you what kinds of bugs are bad, but it doesn’t tell you which ones matter most in your architecture, with your threat model, against your attackers. A team that mechanically chases coverage of all 25 items can spend enormous effort hardening against weaknesses that aren’t even reachable in their codebase, while ignoring business-logic flaws that the list doesn’t address at all.
The list also has a strong bias toward classic, code-level weaknesses — memory safety issues, injection bugs, improper access control. Those still matter, obviously. But modern security failures increasingly happen at the boundaries: misconfigured cloud IAM policies, leaked secrets in CI/CD pipelines, supply chain compromises through third-party dependencies, and identity-based attacks against SaaS platforms. Few of these map cleanly onto the Top 25's code-centric categories, and a team that’s optimized purely for the list will miss them.
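To pick one of those boundary failures: leaked pipeline secrets usually start as a credential baked into source, where it survives in git history, CI logs, and build artifacts. A minimal sketch of the anti-pattern and the usual fix, with an invented token and environment variable name:

```python
import os

# Anti-pattern: a deploy token hardcoded in source (the CWE-798 shape).
# The value below is fake, shown only to illustrate what leaks.
DEPLOY_TOKEN = "tok_live_0123456789abcdef"  # do not do this

def get_deploy_token():
    # Boundary-aware alternative: the secret is injected at runtime by the
    # CI/CD system or a secrets manager, never committed to the repo.
    token = os.environ.get("DEPLOY_TOKEN")
    if token is None:
        raise RuntimeError("DEPLOY_TOKEN not set; inject it via your secrets manager")
    return token
```

The code change is trivial; the real work is the surrounding process — rotation, scoped permissions, scanning history for past leaks — which is exactly the kind of effort a weakness list doesn't capture.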
There’s also a cultural failure mode worth naming. When leadership treats the SANS Top 25 as a checklist to be ticked off, security stops being an engineering discipline and starts becoming a paperwork exercise. Engineers learn to game the metrics — closing tickets that “address” a CWE without meaningfully reducing risk, or rejecting findings that don’t map cleanly to the list. That’s not security; that’s theater, and experienced practitioners can smell it from a mile away.
Worst of all, an over-reliance on standardized lists devalues the judgment of senior security people. Your principal security engineer’s intuition, built over a decade of watching how attackers actually behave, is worth more than any framework. When you reduce their job to “make sure we hit every item on a public list,” you’re telling them their expertise doesn’t matter — and they will, eventually, leave for a company that thinks otherwise.
Why Retaining Security Talent Beats Chasing Checklists
Security talent is brutally hard to find and even harder to keep. The good engineers — the ones who can actually look at a system and tell you where it’ll break — get inbound recruiter messages constantly. If they’re working somewhere that treats them like checklist auditors instead of strategic partners, they will leave. And when they leave, they take with them years of context about your systems, your threat surface, and your historical incidents. That institutional knowledge does not show up in any framework.
Retention beats hiring on almost every dimension. A senior security engineer who’s been with your company for four years knows which services are critical, which dependencies were forced through under deadline pressure, which old vulnerabilities were “fixed” in a way that didn’t really fix them. A new hire with an identical resume will need 12 to 18 months to build that same context, assuming they stay that long. The math on losing experienced people is much worse than most leaders want to admit.
So how do you keep them? Pay matters, but it’s rarely the deciding factor. What good security engineers want is autonomy, interesting problems, and leaders who actually listen when they raise concerns. They want to be measured on real risk reduction, not on how many CWEs got “covered” this quarter. They want budget for tooling that helps them work smarter, and they want to be trusted to make judgment calls that don’t perfectly map to any external list.
This is where the SANS Top 25 conversation comes full circle. Use the list as a tool — for training, for shared vocabulary, for baseline coverage. But don’t confuse the list for the work, and don’t let it become a substitute for empowering the people doing that work. Tech companies that figure out how to retain experienced security talent will outperform those that churn through hires while pointing at frameworks. Security is a human discipline. The lists just describe what the humans already know.
The SANS Top 25 isn’t the enemy here. It’s a useful artifact, and any security program that ignores it entirely is probably missing some basics. But it’s also not the destination, and treating it as one is how organizations end up with impressive-looking security documentation and embarrassing breach postmortems. The real differentiator, especially as threats grow more complex and attack surfaces expand into cloud, identity, and supply chain, is the quality and stability of your security team. Invest in your people. Give them the tools, trust, and runway to do work they’re proud of. Use frameworks like the SANS Top 25 to support that work, not to replace it. Because at the end of the day, no list ever caught a real attacker — a tired, undervalued, overworked human did, or didn’t.
