Cross-Origin-Resource-Policy (was: From-Origin) #687
Ping @whatwg/security.
Previous discussion: #365
This might be OK to add as a stopgap, as long as we're clear that it doesn't actually solve the problem in general. We generally want a safe-by-default solution, not one that requires every single server to opt in...
Just to be clear: this is scoped to the response it occurs within, correct?
Yes. It is not stateful. The security benefit is that the browser can cancel the load and not bring the payload or metadata into the process where JavaScript is executed.
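A minimal sketch of what such a stateless, per-response check could look like (the function name and the simplified origin model here are illustrative assumptions, not the spec's algorithm):

```python
from typing import Optional

def corp_allows(request_origin: str, resource_origin: str,
                corp_header: Optional[str]) -> bool:
    """Decide, from this one response alone, whether the payload may
    enter the requesting context's process. Nothing is remembered
    between responses, so the check is not stateful."""
    if corp_header is None:
        return True  # no opt-in: the load proceeds as it does today
    value = corp_header.strip().lower()
    if value in ("same", "same-origin"):
        # Cancel cross-origin loads before payload or metadata reach
        # the process where JavaScript executes.
        return request_origin == resource_origin
    return True  # unrecognized values fail open in this sketch
```

A browser applying this would run it once response headers arrive and abort the fetch on False, so the body never reaches the renderer.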
@johnwilander: Thanks, this does seem like a good fit for CORB, giving web sites a way to opt in to protection on arbitrary resources (not just HTML, XML, and JSON, though it would be a good failsafe for those as well).

@bzbarsky: I agree that both CORB and From-Origin only protect a subset of resources -- the largest we could come up with while preserving compatibility, plus anything sites choose to label with From-Origin. I agree with you that a secure-by-default solution would be preferable, but I'm not yet aware of one that can be easily deployed, at least in the short term. (For example, the idea of using credential-less requests for all subresource requests doesn't seem web compatible.) Happy to discuss options there. I do think it's worth pursuing CORB and From-Origin at least as a stop-gap in the short term, since there's a fairly pressing need to get protection out there for as many resources as we can.
Let's discuss the subdomain aspect of this. We'd like servers to be able to express that all pages from the same eTLD+1 are allowed to load the resource. The 2012 proposal as it stands requires the server to list all allowed origins, which is error-prone and uses up bytes on the wire; instead of having a resource on example.com enumerate every allowed origin in a response header, we'd like a single token for this. There are at least two pieces of prior art here, neither of which seems to fit our needs:
To further complicate things, eTLD+1 has many names:
Some naming ideas:
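Whatever name wins the bikeshed, the underlying comparison is "do two hosts share a registrable domain (eTLD+1)?". A toy sketch of that comparison follows; a tiny hard-coded suffix set stands in for the Public Suffix List that real implementations would consult, so this is illustrative only:

```python
from typing import Optional

PUBLIC_SUFFIXES = {"com", "org", "co.uk"}  # toy stand-in for the PSL

def registrable_domain(host: str) -> Optional[str]:
    """Return the eTLD+1 for host, or None if host is itself a suffix."""
    labels = host.lower().split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in PUBLIC_SUFFIXES:
            # The suffix plus one preceding label is the registrable domain.
            return ".".join(labels[i - 1:]) if i > 0 else None
    return None

def same_site(host_a: str, host_b: str) -> bool:
    a, b = registrable_domain(host_a), registrable_domain(host_b)
    return a is not None and a == b
```

Scanning from the full host down naturally picks the longest matching suffix, which is why co.uk hosts group under the label before "co.uk" rather than before "uk".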
Chrome and Firefox efforts seem to call that boundary "site", which is also reasonably understandable. So to add to your bikeshed, I'd argue for "site" as well. The other thing we need to make concrete is which responses this applies to. Pretty much everything but top-level navigations? (For bonus points, would it apply to OCSP? See also #530.)
Something in this space seems like a good idea, so I'm supportive of the general direction. I wonder, though, whether it would be simpler to build on existing primitives rather than add a new header. For example: what if we started sending an Origin header on all outgoing requests?
On this point in particular, I'd suggest that we'd be well-served to follow the PSL's "registered domain"/"registrable domain" terminology, or to follow the "site" terminology that Chrome and Firefox already use.
Pretty sure Adam Barth tried sending Origin on all requests at some point.
If the objection is purely practical, perhaps @abarth could help us recall the challenges he ran into? I'd suggest that CORS is baked into enough of the web at this point that it might be worth trying again (especially since I think there's at least tentative agreement from Firefox folks to expand the set of requests that carry an Origin header).
I don't remember exactly, but I think the idea was to avoid request bloat. Adding bytes to every request is (or at least pre-HTTP/2.0 was) expensive, whereas scoping the extra bytes to POST made them negligible.
We've explored the idea of expanding the Origin header as well.
I think I agree with that. I have a few detail questions:
Thanks, @rniwa!
If the purpose is to prevent an origin's data from entering a process, I'd suggest that we need to be as thorough as possible in reducing an attacker's opportunity.
WebKit is working to isolate such loads.
We've discussed this further and have some thoughts. Let's assume a main goal of From-Origin is to provide servers with a way to prevent their resources from being pulled into a process space where they can be targeted by a Spectre attack. Only checking the origin of the request does not suffice in the nested-frame case, which means just adding Origin headers to requests is not enough, leaving aside ease of deployment. Checking the ancestor list upward is not enough. Checking it up and down is not enough either. What is needed here is a guarantee that, at the time of the request, there are no cross-origin or non-same-site frames in the web content process. This includes sibling frames at any level. Even with such a guarantee at the time of the request, cross-origin or non-same-site frames may be loaded into the process at a later stage, and a Spectre attack could then be possible. The only way we see this fully working is checking that no cross-origin or non-same-site frames are in the process at the time of the request, and blocking any subsequent cross-origin or non-same-site frame loads into the process. What do you think? (We might start with the simple version of just checking against the origin of the request.)
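The invariant described above (no cross-site content in the process at request time, and no cross-site frames admitted afterwards) can be sketched as a small state machine. The class and method names are invented for illustration; real browsers track this per renderer process:

```python
class RendererProcess:
    """Toy model of one web content process and the sites it hosts."""

    def __init__(self) -> None:
        self.sites: set = set()
        self.holds_protected: bool = False

    def can_load_protected(self, site: str) -> bool:
        # At request time: nothing cross-site may already be present.
        return all(s == site for s in self.sites)

    def load_protected(self, site: str) -> bool:
        """Attempt to load a From-Origin-protected resource."""
        if not self.can_load_protected(site):
            return False  # cancel: the payload never enters the process
        self.sites.add(site)
        self.holds_protected = True
        return True

    def load_frame(self, site: str) -> bool:
        """Attempt to load a frame into this process."""
        # After a protected load, subsequent cross-site frames are refused.
        if self.holds_protected and any(s != site for s in self.sites):
            return False
        self.sites.add(site)
        return True
```

Both halves are needed: the request-time check alone leaves the window where an attacker frame joins the process later.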
John, I'm not sure I follow the frame-focused reasoning in your proposal; IIUC under this logic evil.com could not have any frames but still load victim.com/secret.txt as an <img> or another subresource type, which would then allow it to exfiltrate its contents. Or am I misunderstanding the approach? (I assume that victim.com/secret.txt would be served with a response header like "From-Origin: https://victim.com" and therefore the browser would prevent this resource from being loaded into the renderer process hosting evil.com, i.e. the browser would stop the load unless it can guarantee that the target renderer only hosts data from the https://victim.com origin.) Wouldn't the real solution for Spectre-like exfiltration be to have something like https://www.chromium.org/developers/design-documents/oop-iframes?
@arturjanc: The idea here is that the browser knows which origins it has loaded into the process and can make the blocking decision based on that.
On Fri, Apr 6, 2018 at 7:00 PM, arturjanc wrote:

> John, I'm not sure I follow the frame-focused reasoning in your proposal; IIUC under this logic evil.com could not have any frames but still load victim.com/secret.txt as an <img> or another subresource type, which would then allow it to exfiltrate its contents. Or am I misunderstanding the approach?
>
> I assume that victim.com/secret.txt would be served with a response header like "From-Origin: https://victim.com" and therefore the browser would prevent this resource from being loaded into the renderer process hosting evil.com (i.e. the browser would stop the load unless it can guarantee that the target renderer only hosts data from the https://victim.com origin).
>
> Wouldn't the real solution for Spectre-like exfiltration be to have something like https://www.chromium.org/developers/design-documents/oop-iframes?

Different browsers can approach the "can guarantee that the target renderer only hosts data from the https://victim.com origin" problem in different ways. The current plan of action for Chromium is to use out-of-process iframes, but even without oop-iframes a browser can track which origins/sites are hosted in a renderer process and block resource loads / frame embedding as needed (possibly in a way that breaks legacy users of resources marked this way - this is why mechanisms like "From-Origin: ..." or "X-Frame-Options: SAMEORIGIN" are needed as an opt-in mechanism).
@johnwilander does that mean you're not doing out-of-process iframes?
Maybe all browsers will ship process-per-origin, on by default, in a month, but I doubt it. :) Maybe all browsers will be fine spinning up ~70 processes to load a news page, but I doubt it. If I'm right, we need an interim solution for sensitive resources. Hence, CORB and this From-Origin thread. Once all browsers do process-per-origin by default, this header will not be needed for Spectre protection but may still be useful to ensure that no third parties are involved in this resource load.
This header is useful even with process-per-origin/site, since the whole point is preventing yourself from ending up as a no-cors resource in an attacker origin. Process-per-origin/site wouldn't help with that. That's what I thought we were going for. If we want to go beyond that, we should probably discuss requirements again, since it's not entirely clear to me what this is going to help with then and how.
I think John's comment is related to:
My understanding is that the simple check on the initiating origin is probably good enough, but I may be overlooking things here.
Not useful for Spectre; the question is whether it is useful for other kinds of attacks.
I've been following this thread while on vacation and didn't have time to comment until now, but this is an important problem that I also feel strongly about solving (thank you @johnwilander for starting this discussion!). Since there are a lot of ideas here, I wanted to summarize the discussion as I understand it and compare the benefits of the proposals that came up.

As background, in the past we've thought a fair amount about protecting applications from cross-origin information leaks -- allowing developers to prevent their applications' endpoints from being loaded as subresources of an attacker-controlled document goes far beyond mitigating the exploitation of Spectre-like bugs, and can address a large number of cross-origin attacks we've seen over the past decade. Specifically, having the browser refuse to load protected resources in the context of the attacker's origin/process could help solve the following classes of issues: cross-site script inclusion (XSSI), cross-site search, CSS-based exfiltration, as well as Spectre. Telling the server about the requester's origin as Mike suggested above would also give developers the chance to prevent most kinds of attacks based on cross-origin timings and CSRF -- the server could be configured to only accept subresource requests coming from trusted origins.

For completeness, addressing these kinds of issues was part of the motivation for EPR and Isolate-Me, as well as for same-site cookies. But these proposals are fairly heavyweight to implement and adopt, and there is value in having a simpler mechanism to tackle the classes of issues mentioned above. IIUC the discussion here focused on two main alternatives:
The first option is somewhat simpler and makes the protection more explicit; in some cases, developers might be able to set the response header statically. As a data point from someone who works with a large number of non-trivial applications, I feel that it might be somewhat easier to adopt this if we go with option 2, possibly with a new request header.

If browsers were to consistently set this header, developers could start by collecting data about origins which are already requesting authenticated resources from their application by collecting header values, and then turn on enforcement by returning empty responses if the requesting origin isn't trusted. This could also work even if there is a Referrer Policy of no-referrer in effect.

I think this would be powerful enough to allow application developers to protect from Spectre, especially if combined with default protections via CORB, and would simultaneously allow developers to protect against other cross-origin leaks.
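Option 2's server-side enforcement could be as simple as the following sketch. The WSGI-ish handler shape, the use of the Origin request header, and the allowlist contents are assumptions for illustration, since the thread leaves the exact request header open:

```python
TRUSTED_ORIGINS = {"https://app.example.com", "https://partner.example.org"}

def guard_subresource(environ):
    """Tiny WSGI-style handler: full response for trusted (or absent)
    requesting origins, an empty response otherwise."""
    origin = environ.get("HTTP_ORIGIN")  # requesting origin, if the browser sent one
    if origin is not None and origin not in TRUSTED_ORIGINS:
        # Empty response: nothing for the attacker page to exfiltrate.
        return ("204 No Content", [], b"")
    body = b'{"secret": "value"}'
    return ("200 OK", [("Content-Type", "application/json")], body)
```

A team could first run this in report-only fashion (log the origin instead of blocking) to build the allowlist, then flip on enforcement, matching the rollout path described above.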
I looked at the spec changes in #733 and they make sense to me. I also like @youennf's solution from #687 (comment) to not relax the restriction. LGTM overall for v1. I do think some developers may encounter problems during adoption due to the lack of origin-based granularity and not having sufficient visibility into who requests their resources, but this is likely something we can tackle in the future.
This header makes it easier for sites to block unwanted "no-cors" cross-origin requests. Tests: * web-platform-tests/wpt#11171 * web-platform-tests/wpt#11427 * web-platform-tests/wpt#11428 Follow-up: #760 & #767. Fixes #687.
Changed "From-Origin" to "Cross-Origin-Resource-Policy" and its link destination from: whatwg/fetch#687 to its specification. Also thought it would be nice to link to the bikeshed issue.
I tried to find bugs tracking implementation status in various browsers and this is what I came up with:
I hope that the list above is useful going forward (I wasn't able to find links to these bugs in the comments above). I also looked at https://master-dot-wptdashboard.appspot.com/results/fetch/cross-origin-resource-policy?label=master&label=stable&aligned and I see that Safari passes most WPT tests for CORP, but is not yet at 100%. I don't know how to see details of the test results or how to locally/manually run the tests against Safari, so I am not sure what the failures mean (incomplete implementation in Safari? test issues?).
@anforowicz #733 (comment) has links, which include https://bugs.webkit.org/show_bug.cgi?id=186761 for Safari which was supposed to track remaining failures. Perhaps those are only in Safari Technology Preview? Or perhaps new tests landed since then. cc @youennf |
script-loads.html, image-loads.html and fetch-in-iframe.html are running fine locally and using w3c-test.org for me. I do not know why they are not showing up as such in WPT. fetch.any.js seems to be buggy with regards to same-site. I'll do a PR. |
Uploaded web-platform-tests/wpt#14907 |
There are already extensions that e.g. disable X-Frame-Options; I worry that use of such extensions would become common here as well.
Websites should have an explicit way to restrict any kind of cross-origin load to protect themselves against Spectre attacks. Content such as images, video, and audio may be sensitive and websites may be protected solely by virtue of their network position (inside a firewall), relying on the same-origin policy to protect against exfiltration.
There's a previous proposal from 2012 called the From-Origin header that we'd like to resurrect. With it, a server can send a

From-Origin: same

header on responses it wants to protect from Spectre attacks. Here's a snippet from the currently inactive proposal:

Cross-Origin Read Blocking (CORB) automatically protects against Spectre attacks that load cross-origin, cross-type HTML, XML, and JSON resources, and is based on the browser's ability to distinguish resource types. We think CORB is a good idea. From-Origin would offer servers an opt-in protection beyond CORB.
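A rough sketch of the CORB idea just described, reduced to a Content-Type test (real CORB also sniffs response bodies and carries many compatibility carve-outs, so treat this as a simplification):

```python
# MIME types CORB-style blocking cares about: types a no-cors subresource
# (script, img, etc.) could never legitimately be.
CORB_PROTECTED_TYPES = ("text/html", "text/xml",
                        "application/xml", "application/json")

def corb_blocks(cross_origin: bool, content_type: str) -> bool:
    """Return True if a cross-origin response of this declared type
    should be blocked from entering the requesting process."""
    if not cross_origin:
        return False  # same-origin loads are never CORB-blocked
    mime = content_type.split(";")[0].strip().lower()
    return (mime in CORB_PROTECTED_TYPES
            or mime.endswith("+json") or mime.endswith("+xml"))
```

This shows why CORB is a default protection only for distinguishable types: an image or font with sensitive content sails through, which is the gap From-Origin's opt-in is meant to close.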
In addition to the original proposal, we might want to offer servers a way to say cross-origin requests are OK within the same eTLD+1, e.g. the server may want to say that cross-origin subresources from cdn.example.com may be loaded by pages on *.example.com without listing all those origins.
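On the wire, the CDN scenario might look like the following. The same-site token here is one of the names floated in this thread, not a settled value, so this is a hypothetical sketch:

```
HTTP/1.1 200 OK
Content-Type: application/javascript
From-Origin: same-site
```

A response from cdn.example.com carrying such a header could be embedded by pages on any *.example.com origin, while a load initiated by an unrelated site would be cancelled before the body reaches that site's process.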