Update Fetch Metadata positioning #1875
Conversation
Extended core guidance to mention Fetch Metadata request headers (Sec-Fetch-*) as an alternative to CSRF tokens for state-changing requests. Added clarification that developers can use CSRF tokens or Fetch Metadata depending on project needs and client compatibility.
Updated Fetch Metadata positioning
removed old Fetch Metadata section
Fix typos and markdown issues
fix heading increment
Thanks for taking the lead on this! You'll definitely want a review by someone more knowledgeable (e.g., @FiloSottile), but here's some feedback. I believe it was agreed in #1803 that "Fetch metadata is a complete and robust fix for CSRF, not just defense in depth." However, this PR reads as unnecessarily precautionary.
Hi, @nickchomey! Thanks a lot for the thorough review and for the pointers; here are some thoughts.
You’re right — that’s on me. The phrasing ended up sounding more precautionary than intended; it reflects my personal stance on the topic.
I moved the Fetch Metadata section because we’re proposing it be treated as one of the primary mitigations. If we leave it buried, readers may miss that change.
I understand your point — my intent was to reflect the current state of adoption and confidence rather than to imply it’s inherently secondary. As I mentioned in the issue discussion, Fetch Metadata is still relatively new compared to traditional CSRF mitigations. While it’s an elegant and highly effective mechanism for modern browsers, it hasn’t yet gone through the same long cycle of real-world testing, widespread deployment, and maturity that CSRF tokens have. In security, mechanisms often take time to earn broad trust — a good parallel might be the SameSite cookie attribute, which was proposed years before browsers began enforcing it by default and before most frameworks integrated it as a standard protection. That said, I’m open to rephrasing. Which parts stand out to you most?
My point wasn’t to question the coverage but rather to acknowledge that, as a community, we can’t dictate every project’s goals or constraints. If the software targets modern browsers, then Fetch Metadata is clearly the way to go. However, as we discussed, for environments where these headers aren’t yet supported, developers will still need to rely on CSRF tokens — even if that feels less practical.
Agree
Regarding the note on blocking legitimate CORS or third-party flows — I intentionally kept it, just to highlight that these areas need extra care, since enabling Fetch Metadata protections without considering such flows could break them. The relevant mitigations are already covered in section 3.2 (“How to treat Fetch Metadata headers on the server-side”).
Good catch! I hadn’t considered mentioning it explicitly, but that’s a great recommendation and would indeed address the “potentially trustworthy” issue as well. I’ll incorporate that.
I can't help but think that your response was written almost completely by AI... It makes me wary of collaborating any further on this, as it's not clear to me that a thoughtful human is actually on the other side... Perhaps incorporate my suggestions into a new commit and we can see where we're at then.
I don’t like where this is headed, especially with things starting to feel a bit personal. It feels like the focus is mainly on getting Fetch Metadata headers recognized as the primary CSRF protection and moving away from CSRF tokens entirely, which doesn’t sit well with me. That’s just my personal view, but I worry it could lead to CSRF tokens falling out of use altogether, since most developers would naturally choose the simpler Fetch Metadata approach. I get your point though, and as I mentioned in my previous comment, this section reflects my own perspective. It’s admittedly an awkward position — I don’t have solid evidence that Fetch Metadata isn’t as robust as you’d like to present it, but at the same time, “lack of evidence doesn’t mean lack of existence” isn’t a strong argument either. I’m just not sure how to balance both sides — treating Fetch Metadata as “the best” while still keeping CSRF tokens relevant.
Feel free to open your own PR — I’m not comfortable positioning Fetch Metadata as THE MAIN CSRF defense.
These last two comments, and those from the previous issue, have no hint of AI. I don't think anyone was ever advocating for Fetch Metadata being THE primary CSRF protection. Just that it is suitable as A primary/standalone protection, if a few niche caveats and associated mitigations are presented.
Though, this is precisely what people are ultimately advocating for: CSRF tokens are a headache (and therefore much more likely to be implemented poorly due to human error). They also make caching extremely difficult. I see no conflict or even issue with effectively saying "these are two worthy options. Take your pick." Most would, indeed, eventually pick Fetch Metadata, and tokens would eventually be forgotten. What's wrong with that? That's just another example of good technical progress for the web platform.
I’m not a native speaker, so in the first comment I was just trying to politely make my point. When I saw you didn’t like my style, I just dropped it :)
I know, and that makes perfect sense. I was just trying to play it safe and leave room for discussion. Let’s do it this way: I’ll fix the language according to the suggestions in your first message, add the missing bits about HSTS, keep the Metadata section at the beginning (so we don’t have to open another PR to move it later), and then ask for a review from the rest — especially the Go folks, where this all started.
Added guidance that all Fetch Metadata implementations must include a mandatory fallback to Origin/Referer verification for compatibility. Reworked the browser compatibility notes and the “Limitations and gotchas” section. Changed language to avoid undermining Fetch Metadata headers.
Hi, @nickchomey
As the founder of the cheat sheet series and project lead, I agree 100%. Fetch Metadata is a very useful first line of defense against CSRF, but it is not safe to rely on it alone for all production workloads. Use it as part of defense-in-depth. Combine it with SameSite cookies, origin checks, and per-request CSRF tokens. The spec’s notion of “user-initiated” navigation and some navigation flows (top-level navigations, prerender/prefetch, PaymentRequest-like flows) can result in values that permit requests you’d expect to be blocked; attackers can sometimes craft flows to exploit those behaviors. There is research showing odd corner cases. So let's revisit this in a year, but for now, Fetch Metadata is one defense with limitations, not a be-all defense for CSRF. Anywhere we say this, I want it softened. And again, we can revisit this in a year.
artis3n left a comment:
First-time commenter, but I am very motivated by making security simple, and I like the Sec-Fetch-Site pattern very much. I've added some recommendations about structuring this page.
cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.md
Because some legacy browsers may not send `Sec-Fetch-*` headers, a fallback to [standard origin verification](#using-standard-headers-to-verify-origin) using the `Origin` and `Referer` headers **is a mandatory requirement** for any Fetch Metadata implementation.
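A minimal sketch of that fallback in Go (the helper name and the single expected origin are illustrative assumptions, not part of the proposed cheat sheet text):

```go
package csrf

import (
	"net/http"
	"net/url"
)

// originOrRefererMatches performs standard origin verification as a
// fallback for requests that carry no Sec-Fetch-* headers: it checks
// the Origin header first, then falls back to parsing Referer.
func originOrRefererMatches(r *http.Request, expectedOrigin string) bool {
	if origin := r.Header.Get("Origin"); origin != "" {
		return origin == expectedOrigin
	}
	if referer := r.Header.Get("Referer"); referer != "" {
		u, err := url.Parse(referer)
		if err != nil {
			return false
		}
		return u.Scheme+"://"+u.Host == expectedOrigin
	}
	// Neither header is present; the caller decides whether to
	// fail-safe (block) or fail-open (allow subject to other checks).
	return false
}
```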
The Fetch Metadata request headers are:
Should this section be focused on the subset of Sec-Fetch-* headers that are used to protect against CSRF? (Just Sec-Fetch-Site and Origin/Host header validation, and link off to MDN or something: "if you'd like to learn more about what other Sec-Fetch-* headers are available for other purposes, see...")
2.1. Fail-safe (recommended for sensitive endpoints): treat absence as unknown and block the request.
2.2. Fail-open (compatibility-first): fall back to other security measures ([standard origin verification](#using-standard-headers-to-verify-origin), CSRF tokens, and/or require additional validation).
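To make the two strategies concrete, a sketch in Go (the function name is an assumption, and it reuses the illustrative `originOrRefererMatches` helper from the sketch above):

```go
package csrf

import "net/http"

// allowWithoutFetchMetadata decides what to do when Sec-Fetch-Site is
// absent. failSafe=true implements option 2.1 (block); failSafe=false
// implements option 2.2 (fall back to standard origin verification).
func allowWithoutFetchMetadata(r *http.Request, failSafe bool) bool {
	if failSafe {
		// 2.1 Fail-safe: treat absence as unknown and block.
		return false
	}
	// 2.2 Fail-open: defer to another measure, e.g. origin verification.
	return originOrRefererMatches(r, "https://example.com") // illustrative origin
}
```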
3. Additionall options
Suggested change: "3. Additionall options" → "3. Additional options"
3.2 Whitelist explicit cross-origin flows. If certain endpoints intentionally accept cross-origin requests (CORS JSON APIs, third-party integrations, webhooks), explicitly exempt those endpoints from the global Sec-Fetch deny policy and secure them with proper CORS configuration, authentication, and logging.
### Things to consider
How about a section for Requirements / Pre-Conditions / etc. to indicate what must already be present in order to rely on this method? (Must enforce HTTPS only or be using localhost; must not use GET requests for state-changing operations; recommended to set HSTS.)
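For the HSTS point, a minimal sketch in Go (the max-age value is an illustrative assumption; tune it before enabling in production):

```go
package csrf

import "net/http"

// hsts sets Strict-Transport-Security on every response so browsers
// keep using HTTPS — one of the pre-conditions suggested above.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
		next.ServeHTTP(w, r)
	})
}
```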
For the rare cases of outdated or embedded browsers that lack `Sec-Fetch-*` support, a fallback to [standard origin verification](#using-standard-headers-to-verify-origin) should provide the required coverage. If this is acceptable for your project, consider prompting users to update their browsers, as they are running on outdated and potentially insecure versions.
### How to treat Fetch Metadata headers on the server-side
I'd recommend we update this section. Maintaining language agnosticity, I'd like to see the pattern outlined in https://words.filippo.io/csrf/#protecting-against-csrf-in-2025 personally.
1. Validate the Origin header against an allowlist (for the JS example, recommend configuring it as a `new Set([domains])` for an easy `set.has(origin)` as this validation step).
2. Check if the Sec-Fetch-Site header is present.
    a. If present, allow the request if the value is `same-origin` or `none`, else deny.
3. If the Sec-Fetch-Site and Origin headers are both missing, pass the request through.
4. Validate the Origin header against the Host header and pass the request if they match, else reject (do this at the end and not with the initial Origin validation, due to step 3's support of significantly legacy browsers pre-2020).

Also, basically, the pattern described in https://web.dev/articles/fetch-metadata#how_to_use_fetch_metadata_to_protect_against_cross-origin_attacks.
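A minimal sketch of that four-step pattern as Go middleware (the allowlist contents and all names are illustrative assumptions, not the cheat sheet's code; the Origin/Host comparison assumes HTTPS):

```go
package csrf

import "net/http"

// allowedOrigins is an illustrative allowlist (step 1).
var allowedOrigins = map[string]bool{
	"https://app.example.com": true,
}

// crossOriginCheck applies the four-step pattern to non-safe methods.
func crossOriginCheck(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Safe methods are exempt; state changes must not use GET.
		if r.Method == http.MethodGet || r.Method == http.MethodHead || r.Method == http.MethodOptions {
			next.ServeHTTP(w, r)
			return
		}
		origin := r.Header.Get("Origin")
		site := r.Header.Get("Sec-Fetch-Site")
		switch {
		case allowedOrigins[origin]: // step 1: explicitly trusted origin
			next.ServeHTTP(w, r)
		case site == "same-origin" || site == "none": // step 2a: allow
			next.ServeHTTP(w, r)
		case site != "": // step 2a: any other value is cross-site; deny
			http.Error(w, "cross-origin request denied", http.StatusForbidden)
		case origin == "": // step 3: legacy browser sent neither header
			next.ServeHTTP(w, r)
		case origin == "https://"+r.Host: // step 4: Origin/Host match
			next.ServeHTTP(w, r)
		default:
			http.Error(w, "cross-origin request denied", http.StatusForbidden)
		}
	})
}
```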
And I'm partial since it is the language I'm usually in, but it would be nice to note with the examples that Go developers can follow the above pattern by just using https://pkg.go.dev/net/[email protected]#CrossOriginProtection in the standard library as of Go 1.25.
Perhaps similar to https://cheatsheetseries.owasp.org/cheatsheets/Cross-Site_Request_Forgery_Prevention_Cheat_Sheet.html#javascript-automatically-including-csrf-tokens-as-an-ajax-request-header , a section for Go: Use net/http CrossOriginProtection middleware with a link out to those docs.
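For instance, a minimal sketch of wiring that middleware up (the handler path and port are illustrative):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/transfer", func(w http.ResponseWriter, r *http.Request) {
		// state-changing handler (illustrative)
	})

	// CrossOriginProtection (Go 1.25+) follows the pattern above:
	// it rejects non-safe cross-origin requests using Sec-Fetch-Site,
	// falling back to comparing Origin against Host.
	protection := http.NewCrossOriginProtection()
	// protection.AddTrustedOrigin("https://trusted.example") // optional allowlist entry

	log.Fatal(http.ListenAndServe(":8080", protection.Handler(mux)))
}
```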
Though the Naive Double-Submit Cookie method is simple and scalable, it remains vulnerable to cookie injection attacks, especially when attackers control subdomains or network environments allowing them to plant or overwrite cookies. For instance, an attacker-controlled subdomain (e.g., via DNS takeover) could inject a matching cookie and thus forge a valid request token. [This resource](https://owasp.org/www-chapter-london/assets/slides/David_Johansson-Double_Defeat_of_Double-Submit_Cookie.pdf) details these vulnerabilities. Therefore, always prefer the _Signed Double-Submit Cookie_ pattern with session-bound HMAC tokens to mitigate these threats.
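A minimal sketch of the signed, session-bound variant in Go (key management and session lookup are assumptions left to the application):

```go
package csrf

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// signToken binds the CSRF token to the user's session: the value is
// HMAC(secretKey, sessionID), so a cookie planted via an
// attacker-controlled subdomain fails verification server-side.
func signToken(secretKey []byte, sessionID string) string {
	mac := hmac.New(sha256.New, secretKey)
	mac.Write([]byte(sessionID))
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyToken recomputes the HMAC and compares in constant time.
func verifyToken(secretKey []byte, sessionID, token string) bool {
	expected := signToken(secretKey, sessionID)
	return hmac.Equal([]byte(expected), []byte(token))
}
```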
## Fetch Metadata headers
Perhaps a larger reordering of the content. We have Signed Double-Submit Cookie (RECOMMENDED) in the middle of the table of contents (the most complex developer option, I might add), which is already hard to find as there are a lot of headings on this page.
Perhaps instead:
## Recommended Mitigation Patterns
### Signed Double Submit Cookie
### Sec-Fetch-Site Header
## Discouraged Mitigation Patterns
### Token-Based Mitigations
#### Naive Double-Submit Cookie Pattern
### ...

We already have a Defense In Depth Techniques section, so that handles defense-in-depth approaches for both recommended patterns. This simplifies the wall of content in the table of contents and raises up the actual steps we'd want developers to take to protect their applications.
Like this, go for it.
Co-authored-by: Ari Kalfus <[email protected]>
As agreed in #1803:
Updated Fetch Metadata positioning
Extended core guidance to mention Fetch Metadata request headers (Sec-Fetch-*) as an alternative to CSRF tokens for state-changing requests.
Added clarification that developers can use CSRF tokens or Fetch Metadata depending on project needs and client compatibility.
- In case of a new Cheat Sheet, you have used the Cheat Sheet template.
- All the markdown files do not raise any validation policy violation (see the policy).
- All the markdown files follow these format rules.
- All your assets are stored in the assets folder.
- All the images used are in the PNG format.
- Any references to websites have been formatted as [TEXT](URL).
- You verified/tested the effectiveness of your contribution (e.g., is the defensive code proposed really an effective remediation? Please verify it works!).
- The CI build of your PR passes (see the build status here).
AI Tool Usage Disclosure (required for all PRs)
Please select one of the following options:
[…] the contents and I affirm the results. The LLM used is [llm name and version] and the prompt used is [your prompt here]. [Feel free to add more details if needed]

This PR fixes issue #1803.