Skynet just published an article, *CWE-79: Cross-Site Scripting (XSS) — The Vulnerability Developers Still Underestimate*, and here's my review of it.
**What Skynet gets right**
The framing is solid and reflects how XSS actually presents in modern systems. Several points are particularly well made:
The “input sanitization is not enough” emphasis is correct and important. This is the single most common conceptual error developers make about XSS, and the author is right to call it out. Encoding is context-dependent — HTML body, attribute values, JavaScript strings, URL parameters, and CSS contexts each require different escaping rules. Treating “we strip <script>” as a solution is exactly the kind of thinking that leaves applications vulnerable.
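To make the context point concrete, here is a minimal sketch of why HTML-body and JavaScript-string contexts need different escaping. The function names are my own illustration, not from the article; real projects should reach for a vetted encoder library rather than hand-rolled helpers like these.

```javascript
// HTML body context: neutralize tag and entity metacharacters.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#x27;",
  }[c]));
}

// JavaScript string-literal context: HTML escaping alone is NOT enough here;
// backslashes, quotes, angle brackets, and line terminators must all be
// escaped, or the payload can break out of the string (or close a <script>).
function escapeJsString(s) {
  return s.replace(/[\\'"<>\u2028\u2029\n\r]/g,
    (c) => "\\u" + c.charCodeAt(0).toString(16).padStart(4, "0"));
}

const payload = `"><script>alert(1)</script>`;
console.log(escapeHtml(payload));     // safe to place in an HTML body
console.log(escapeJsString(payload)); // safe inside a JS string literal
```

The point is not these particular helpers but the shape of the problem: one "escape" function applied everywhere will be wrong for at least one of the contexts it lands in.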
The DOM XSS emphasis is well-placed. This is genuinely under-appreciated. As applications have shifted to SPAs and client-side rendering, the attack surface has moved with them, and many security programs still over-index on server-side reflected/stored XSS. Calling out innerHTML, insertAdjacentHTML, document.write, and framework escape hatches is the right list.
The modern impact section is accurate. Pointing out that HttpOnly cookies don’t save you from XSS-driven privileged actions is a critical nuance. Many developers believe HttpOnly is a meaningful XSS mitigation; it isn’t — it only prevents cookie theft, not session abuse. The attacker can still ride the session via fetch() from the compromised origin.
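The session-riding point can be sketched concretely: an injected script never needs to read the cookie, because the browser attaches it to same-origin requests automatically. This is a hypothetical payload shape (the `/api/transfer` endpoint and body fields are invented for illustration), written as a function that builds the request options so the idea is inspectable:

```javascript
// Hypothetical XSS payload sketch: rides the victim's session without ever
// touching document.cookie. HttpOnly blocks cookie *theft*, not this.
function buildSessionRidingRequest() {
  return {
    url: "/api/transfer", // invented endpoint, for illustration only
    options: {
      method: "POST",
      // Same-origin fetch() sends cookies by default; 'include' just makes
      // the intent explicit. No cookie value appears anywhere in the payload.
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ to: "attacker", amount: 1000 }),
    },
  };
}

// In the injected script, the attacker would simply do:
//   const { url, options } = buildSessionRidingRequest();
//   fetch(url, options);
```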
The framework-specific guidance is correct for React (`dangerouslySetInnerHTML` + DOMPurify), Angular (`bypassSecurityTrust*`), and Vue (`v-html`). These are the actual dangerous sinks in each framework.
Trusted Types gets a mention. Many XSS articles still don’t acknowledge this, even though it’s the most significant browser-level defense to emerge in years.
**What's wrong, weak, or misleading**
The CSP example is dangerously incomplete. Skynet shows:
```
Content-Security-Policy: default-src 'self'; script-src 'self';
```
A script-src 'self' policy is widely considered insufficient by modern guidance. It’s bypassable via JSONP endpoints on the same origin, script gadgets in hosted libraries, and any uploaded-content path that serves with a JS-executable type. Google’s CSP research and OWASP both now recommend strict CSP based on nonces or hashes with 'strict-dynamic', something like:
```
script-src 'nonce-{random}' 'strict-dynamic'; object-src 'none'; base-uri 'none';
```
The article’s example would give a developer a false sense of protection. This is a meaningful omission in a 2026 piece.
SameSite=Lax is presented as a generic safe default — it isn’t quite that simple. For XSS specifically, SameSite doesn’t help at all (the request originates from your own site). It’s a CSRF control, not an XSS control, and grouping it under XSS defenses without that distinction muddies the model.
“Use textContent instead of innerHTML” is correct but oversimplified. It works when you’re inserting plain text. Many real use cases need some structure (links, line breaks, basic formatting), and the article doesn’t address what to do then beyond a vague “use DOMPurify.” A clearer recommendation: build DOM nodes via createElement + textContent for structured content, and reserve sanitizers for actual HTML input.
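For the common "plain text plus line breaks" case, the safe pattern is: escape first, then add structure from your own code, never from the user's string. Here is a string-level sketch of that idea (in the DOM, the equivalent is building nodes with `createElement`/`createTextNode` plus `document.createElement('br')`); the function names are illustrative.

```javascript
// Escape-then-structure: the user's text is fully escaped, and the only
// markup in the output comes from our own code (here, <br> for newlines).
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#x27;",
  }[c]));
}

function textWithLineBreaks(userText) {
  return userText
    .split("\n")
    .map(escapeHtml) // user-controlled content is escaped line by line
    .join("<br>");   // structure is added by us, not by the user
}

console.log(textWithLineBreaks("hello\n<img src=x onerror=alert(1)>"));
// → "hello<br>&lt;img src=x onerror=alert(1)&gt;"
```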
The Markdown/rich-text section names the problem but doesn’t solve it. Saying “this often leads to broken sanitization” is true but unhelpful. The real guidance is: never roll your own HTML sanitizer, use a well-maintained allowlist-based one (DOMPurify for client, a vetted server library for server), and pin/update it because bypasses are found regularly.
No mention of mXSS (mutation XSS). This is a meaningful gap. Browsers re-parse HTML in ways that can resurrect dangerous constructs after sanitization, and any 2026 article about XSS that recommends sanitizers should at least name this class of bypass so readers understand why “we sanitize on the server” isn’t airtight.
localStorage for tokens gets used as an example without being flagged as an antipattern. The article shows an attacker exfiltrating a JWT from localStorage, but doesn’t take the next step: storing auth tokens in localStorage is itself a known bad practice precisely because of XSS. The article should call this out rather than treating it as a neutral backdrop.
The “Three Primary Types” framing is slightly dated. Modern taxonomies (including OWASP’s) often prefer the server XSS vs. client XSS split, with reflected/stored/DOM as orthogonal subcategories. The reflected/stored/DOM framing the author uses is still common and not wrong, but it leads to the confusion the author later complains about — DOM XSS can be either reflected or stored, which the article doesn’t acknowledge.
WAFs don't appear in the prevention discussion. That's actually fine — WAFs are not a serious primary defense against XSS — but the article should probably say so explicitly, since plenty of organizations still treat them as one.
Minor: The claim “modern SPAs often create more DOM XSS risk than server-rendered apps” is stated as fact but really depends on the framework, the team, and how much is rendered with auto-escaping versus through template strings or HTML injection. It’s a defensible take, but the article doesn’t support it.
**Recommendations for developers**
If I were giving the developers in your audience a working playbook based on what this article covers and what it misses, I’d boil it down to this. Treat XSS prevention as an output problem, not an input problem — validate input for business reasons (length, type, format), but rely on context-aware encoding at the point of output for security. The encoding you need depends on where the data lands: HTML body, attribute, JavaScript string literal, URL parameter, and CSS each have different rules, and a single “escape” function can’t cover all of them.
Use your framework’s safe defaults and treat the escape hatches as code-review red flags. dangerouslySetInnerHTML, v-html, bypassSecurityTrustHtml, and disabled template auto-escaping should each require justification, sanitization with a maintained library (DOMPurify or equivalent), and ideally a lint rule or CI check that flags new uses.
Adopt strict CSP, not “self” CSP. A nonce-based policy with 'strict-dynamic', object-src 'none', and base-uri 'none' provides real defense-in-depth. The simple script-src 'self' shown in the article is largely cosmetic on any non-trivial application.
Turn on Trusted Types where you can. It’s supported in Chromium browsers and is the most effective single control for DOM XSS, because it forces dangerous sinks to refuse raw strings entirely. Even running it in report-only mode on a CSP gives you visibility into where your DOM XSS risk actually lives.
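The report-only rollout is just a header change. A sketch of the directive pair (`require-trusted-types-for 'script'` and the `trusted-types` policy allowlist are the real directives; the report endpoint path is an assumption):

```
Content-Security-Policy-Report-Only: require-trusted-types-for 'script'; trusted-types default; report-uri /csp-violations
```

Violation reports then tell you exactly which code paths pass raw strings to dangerous sinks, before you enforce anything.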
Stop putting auth tokens in localStorage. Use HttpOnly, Secure, SameSite cookies for session management. This doesn’t prevent XSS-driven session abuse (the article is right that the attacker can still call your APIs from the compromised page), but it does prevent token exfiltration, which limits damage if the XSS is short-lived or the attacker is detected.
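The cookie flags in question, assembled as a Set-Cookie header value (the cookie name, path, and lifetime are illustrative choices):

```javascript
// Build a session cookie that an injected script cannot read:
// HttpOnly blocks document.cookie access, Secure restricts it to HTTPS,
// and SameSite=Lax limits cross-site sending (a CSRF control, as the
// review notes, not an XSS control). Name and lifetime are illustrative.
function sessionCookieHeader(sessionId) {
  return [
    `session=${sessionId}`,
    "HttpOnly",
    "Secure",
    "SameSite=Lax",
    "Path=/",
    "Max-Age=3600",
  ].join("; ");
}

console.log(sessionCookieHeader("abc123"));
// → "session=abc123; HttpOnly; Secure; SameSite=Lax; Path=/; Max-Age=3600"
```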
Audit your DOM sinks the way you audit your SQL queries. A grep for innerHTML, outerHTML, insertAdjacentHTML, document.write, eval, setTimeout with string args, Function(), and srcdoc should be a routine exercise, not a one-time effort.
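That grep is easy to automate as a small CI check. A sketch in Node — the sink list mirrors the one above, and the file-walking is omitted so only the matching logic is shown:

```javascript
// Minimal DOM-sink scanner: flags lines containing known XSS sinks.
// setTimeout only matters with a string argument (implicit eval), so
// expect — and triage — some false positives.
const SINK_PATTERN = new RegExp(
  [
    "\\.innerHTML\\s*=",
    "\\.outerHTML\\s*=",
    "insertAdjacentHTML\\(",
    "document\\.write\\(",
    "\\beval\\(",
    "new Function\\(",
    "setTimeout\\(\\s*['\"`]", // string arg → implicit eval
    "srcdoc",
  ].join("|")
);

function findSinks(source) {
  return source
    .split("\n")
    .map((line, i) => ({ line: i + 1, text: line.trim() }))
    .filter(({ text }) => SINK_PATTERN.test(text));
}

const sample = `
el.textContent = name;   // fine
el.innerHTML = userHtml; // flagged
document.write(banner);  // flagged
`;
console.log(findSinks(sample));
```

Wiring this into CI so new matches fail the build (with an allowlist for reviewed exceptions) turns the one-time grep into the routine exercise the point above calls for.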
Assume sanitizers will have bypasses and plan accordingly. Pin specific versions, watch the CVE feed for your sanitizer of choice, and layer CSP and Trusted Types behind it so a sanitizer bypass alone isn’t game-over.
For rich text features, reduce scope ruthlessly. Markdown rendered through a sanitizer is safer than allowing arbitrary HTML. If you must allow HTML, allowlist a small set of tags and attributes and reject everything else by default.
**Overall**
It’s a competent, current article that covers the right ground and avoids the most common mistake (treating XSS as an input-validation problem). Its main weaknesses are around defense-in-depth specifics — the CSP example is too weak to recommend, the SameSite framing conflates concerns, and a few important modern details (mXSS, localStorage-as-antipattern, strict CSP) are missing.
A developer who follows everything in the article will be in much better shape than the average team, but won’t be at the state of the art.

A closing thought: CWE-79 guidance isn't just about "escaping HTML"; it's about consistently enforcing **context-aware output encoding and trust boundaries** across modern frameworks.
The broader discussion around this piece tends to land on a useful tension:
* On one hand, the recommendations are directionally solid: XSS prevention still fundamentally comes down to **contextual output encoding, safe templating defaults, and avoiding unsafe DOM sinks**.
* On the other hand, guidance that is too uniform across environments misses today's reality: React, Angular, server-side rendering, and hybrid rendering pipelines each shift where XSS actually occurs.
The discussion is strongest where it recognizes that CWE-79 is no longer a single bug pattern; it is an **entire class of mismatches between data-flow assumptions and rendering context**, and that is where most real-world XSS still comes from.
The one caution I'd add is that XSS defense is more a matter of *system design discipline* than of individual mitigations. If developers rely on ad-hoc escaping instead of consistent framework-level guarantees, the same class of issues keeps resurfacing in different forms: DOM XSS, template injection, unsafe hydration, and so on.
So overall: a review like this is useful to the extent that it pushes toward **context-aware, framework-specific thinking rather than generic "escape your output" advice**, because that is where modern CWE-79 failures actually live.