Describe when features should be limited to secure contexts. #75
Conversation
I think an additional factor of "Is it trivial to exclude the feature to secure contexts?" might be useful. Basically if it's easy we should just do it. If it's not easy the other considerations would still apply of course.
index.bs
Outdated
When the new feature is defined in
<a href="https://heycam.github.io/webidl/">WebIDL</a>,
specification authors can limit a feature to secure contexts
can or should?
I think this one actually meant can; this was a description of facts, not a conformance requirement.
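For readers following the thread: the WebIDL mechanism under discussion is the [SecureContext] extended attribute. A hypothetical interface restricted this way might be declared as follows (illustrative names only, not any real API):

```webidl
// Hypothetical interface; [SecureContext] hides it entirely from
// non-secure contexts, and [Exposed] scopes it to Window globals.
[SecureContext, Exposed=Window]
interface HypotheticalSensor {
  Promise<double> read();
};
```

With the annotation in place, the interface is simply absent from the global object in non-secure contexts, which is what makes the restriction comparatively trivial when a feature is defined in WebIDL.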
index.bs
Outdated
Similar ways of marking features as limited to secure contexts should be added
to other points where the Web platform is extended over time
(for example, the definition of a new CSS property).
However, for some times of extension points (e.g., new DOM events),
I can't parse this sentence.
s/times/types/ ?
index.bs
Outdated
(for example, the definition of a new CSS property).
However, for some times of extension points (e.g., new DOM events),
limitation to secure contexts should just
be defined in normative prose in the specification.
If the new event comes with a new interface it's quite easy to restrict it though. Maybe you should clarify this by talking about "dispatching an event" instead.
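Anne's distinction can be sketched in WebIDL terms (hypothetical names): an event that introduces its own interface can carry the restriction itself, whereas restricting the mere dispatch of a plain Event has no interface to annotate, so the restriction has to live in prose.

```webidl
// Easy case: the event defines a new interface, so the
// [SecureContext] annotation does the work by itself.
[SecureContext, Exposed=Window]
interface HypotheticalDeviceEvent : Event {
  constructor(DOMString type);
  readonly attribute boolean connected;
};
```

In the hard case, a specification that dispatches an existing Event type would instead need normative prose along the lines of "if the relevant global object corresponds to a secure context, fire an event named ..." in the dispatching algorithm.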
index.bs
Outdated
:: If a feature depends on
the expectations of authentication, integrity, or confidentiality
that are met only in secure contexts,
then it should be limited to secure contexts,
must?
I agree with Anne's general point that being more aggressive about the recommendations would be helpful. My goal inside Chrome is to reverse the default assumption, such that features generally ought to be restricted to secure contexts, and access to a feature over non-secure connections is the exception that needs to be justified.
I'd love to see this reformulated along those lines, as the TAG's opinions on the topic help me make the case internally that this isn't just crazy ol' Mike's opinion. :)
index.bs
Outdated
First, it helps encourage Web content and applications
to migrate to secure contexts.
Second, it can restrict new APIs where authentication, integrity, or confidentiality
are important to prevent substantial increases to the privacy or security risks of using the Web.
I would suggest reversing these. At least from Chrome's perspective, the latter has been the overriding concern internally.
But the first is the main goal. If you make the second the main goal, it's easier for folks to weasel their way into an exception.
OK, I think I've addressed the feedback so far. I think it's worth another round of review at this point. I worry a little bit that I may have come down a little too hard in terms of requiring that all parts of a new feature be hidden. Maybe it's ok to just hide the major pieces (and primary detection points) such that it's not usable and not detected as present. But it wasn't obvious to me how to fix that in my current wording...
LGTM % nit
index.bs
Outdated
since sending untrusted data to a USB device could damage that device
or compromise computers that the device connects to.

Specification authors can most features defined in
can limit*
I also think the intro is too heavy-handed regarding requiring all new features to be in secure contexts. I think the subsequent sections are actually pretty good, outlining some principles that help shape the decision in a more nuanced way (and tend to disagree with the "all" aspect laid down before). Perhaps this could be softened by suggesting that a feature should be considered for Secure Context by default given the principles [described after that].

From my implementer's hat, I'm particularly sensitive to plumbing yet another mode through the platform (speaking as a veteran of IE's document modes). We already have quirks/almost standards/standards mode, and now we'd have Secure Context as well... For APIs exposed to script, I'm OK with having this mode because the "switch" is processed once (when the type system is initialized for a given script engine) and never consulted again. Plumbing the switch into the formatting/layout system or event system is far less clean and has performance overhead.

Applying this to CSS also seems a little questionable to me. I understand the desire to put new Houdini features behind secure context. Those features tend to rely on JavaScript APIs for initialization. If the JS APIs are put behind secure context, then surely that cuts off access to the related CSS properties? Another example: for new conceptual CSS layout types, would we really want to prevent non-HTTPS sites from adopting a new kind of layout? A new layout seems to have no legitimate ties to "security" except that we want to use it as "bait" to switch the web to HTTPS... Must we recommend that even new CSS properties should have secure context applied?

There may be a similar argument for events--though events can provide output that might leak sensitive information, and so this argument seems weaker to me.
If we can, why not? What's the rationale for continued support for insecure contexts?
While a bit of a corner case - internal/isolated network services, in-development prototypes come to mind. Working on secure-context-only features using a local server isn't that user friendly, since (AFAIK, correct me if I am wrong) you can't use acme to get a free certificate. (Wonder what kind of rules need to be bent to get a CA to issue a certificate that only works on localhost origins..)
On Sep 12, 2017, at 7:27 PM, cynthia ***@***.***> wrote:
While a bit of a corner case - internal/isolated network services, in-development, prototypes come to mind. Working on secure context only features using a local server isn't that user friendly, since (AFAIK, correct me if I am wrong) you can't use acme to get a free certificate.
To get an acme certificate you have to have a real domain name reachable from the internet and either access to setting a DNS TXT record or a publicly reachable HTTP or HTTPS server. It’s doable, but may not be easy for most people behind consumer grade firewalls without access to a real server.
It’s also possible to get acme authorization on a publicly reachable machine, then use that authorization to get a cert on a different machine that isn’t publicly reachable (I do this regularly via my own acmebot). You still have to have a real domain name for the local machine (but that can be faked via a hosts file).
(Wonder what kind of rules need to be bent to get a CA to issue a certificate that only works on localhost origins..)
Never gonna happen (I hope).
That all said, I believe localhost is (or will soon be) considered a secure context without HTTPS. You can also generate a self-signed certificate with any common/alt name you want and tell your browser to trust it for local testing. So this shouldn’t be a burden for developers.
FWIW, I'm with Anne. If we can reasonably limit a given feature, we should. At a minimum, I believe that stance should be our default. We can evaluate arguments on a case-by-case basis when that stance leads to results we're unhappy with. But folks should expect to have to make those arguments.
Exactly this. Browsers should also do a better job of allowing developers to treat a given origin as "secure enough" for development purposes. Chrome has command-line flags, but we should really embed it into devtools somehow.
@mikewest, there isn't a lot of visibility of draft-west-let-localhost-be-localhost. I think that we're somewhat stalled on that. I don't know where you were discussing it though. There are alternatives though on the browser side. Maybe we should discuss that some more; I've some ideas here that I need to look into first.
@martinthomson: Just finished a call for adoption in DNSOP, which I think was successful though the chairs haven't yet confirmed their view. The main hangup is the fork in https://tools.ietf.org/html/draft-west-let-localhost-be-localhost-06#section-4.2. Strong opinions on both sides. shrug
Thanks for the pointer. I predict that it will be adopted but that you will eventually regret ever attempting this. It will be an RFC worth having though. Congratudolences.
At first I thought that we in the TAG were being congratudoliated, but it was @mikewest that had that honour.
- require justification even for not-new-feature
- say that the TAG can be consulted
- expand on feature detection equivalence with unimplemented features
- restructure a bit in order to do both of the above
Some more high-level comments. Thanks for following up on this @dbaron!
index.bs
Outdated
is discouraged and requires strong justification.
The TAG is interested in hearing about and discussing cases
where it is unclear whether exposing the capability
in non-secure contexts is justifiable.
This sentence is odd, as it directly undercuts the rest of the paragraph. Have the courage of your convictions!
If y'all feel the need to weaken the claim that "New capabilities added to the Web should be available only in secure context", I'd suggest doing so weakly. Perhaps "There may be reasonable justification for exposing a given capability in non-secure contexts; the TAG is interested in hearing about those edge cases, and working to resolve them."?
How about just:
The TAG is interested in hearing about and working to resolve any cases where exposure in non-secure contexts is being seriously considered.
Or, simpler:
The TAG is interested in hearing about cases
where exposing new features in non-secure contexts is being considered.
(I'm pulling the "working to resolve" because I suspect that much of what we hear about might qualify as "not a new feature".)
(for example, the definition of a new CSS property).
However, for some types of extension points (e.g., dispatching an event),
limitation to secure contexts should just
be defined in normative prose in the specification.
Nit: I'd suggest that you move this paragraph up above the "And here are some exceptions" bit. Then the structure would be something like:
- Y'all should do this thing.
- Here's why you should do this thing.
- Here's how you should do this thing.
- And if you really don't want to do this thing, here's some things to think about.
That seems like a clearer message to me.
I reordered the paragraphs.
it is not possible for developers to detect whether a feature is present,
limiting the feature to secure contexts
might cause problems
for libraries that may be used in either secure or non-secure contexts.
This paragraph seems like a distinct design principle ("Thou shalt enable feature detection.") that you could discuss at length elsewhere, and reference here.
I moved this into #82 and revised the text.
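The feature-detection concern can be sketched as follows (hypothetical API name, not any real interface). To a script, "hidden because the context is non-secure" and "not implemented" look identical, which is both the point of the restriction and the thing libraries have to cope with:

```javascript
// A library-style feature check: the API is either absent (old browser)
// or hidden (non-secure context) -- the two are indistinguishable here.
function supportsHypotheticalApi(globalObj) {
  return typeof globalObj === "object" && globalObj !== null &&
         "hypotheticalApi" in globalObj;
}

// Simulated globals: a non-secure (or old) context vs. a secure one.
console.log(supportsHypotheticalApi({}));                      // false
console.log(supportsHypotheticalApi({ hypotheticalApi: {} })); // true
```

A library used in both kinds of contexts therefore needs a graceful fallback path either way, which is why detectability matters regardless of the reason the feature is absent.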
developer confusion about where the boundaries are.
We also don't want to increase
the complexity of implementations of Web technology
by requiring tests for secure contexts in too many *types* of places.
I don't really understand this claim. Can you help me out? What "types of places" do you mean? The example below didn't help me (but I'm also not really a CSS guy, so the distinction between the difficulty of detecting a new property vs new syntax isn't clear to me... it seems like the former would be easier, though?). :(
What I mean by this is that for [SecureContext] annotations in WebIDL, the annotation presumably gets stored as a bitflag or similar, and gets checked in a small set of places that implement [SecureContext]. Likewise for CSS properties, engines presumably have a set of data about each property, to which a secure-contexts-only bit could be added, and likewise tested in a small number of places. But it seems preferable to avoid littering IsSecureContext() tests through the CSS parser (or other language parsers), and I think this preference likely aligns with the previous justification.
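The "small set of places" approach described above might look roughly like this (a sketch with invented names, not code from any real engine): one secure-contexts-only bit in the per-property data, consulted at a single gate, rather than IsSecureContext() calls scattered through the parser.

```javascript
// Per-property metadata table; one extra bit per property.
const cssProperties = {
  "color":             { secureContextOnly: false },
  "hypothetical-prop": { secureContextOnly: true },
};

// The single place the bit is checked, e.g. when the parser asks
// whether a property name is enabled at all.
function isPropertyEnabled(name, isSecureContext) {
  const meta = cssProperties[name];
  if (!meta) return false; // unknown property: always disabled
  return !meta.secureContextOnly || isSecureContext;
}

console.log(isPropertyEnabled("color", false));             // true
console.log(isPropertyEnabled("hypothetical-prop", false)); // false
console.log(isPropertyEnabled("hypothetical-prop", true));  // true
```

The design point is that the secure-context decision stays in the property table, so adding a restricted property never touches the parser itself.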
For WebIDL, Chrome checks when generating bindings for a given context (e.g. everything is wrapped up in the exposed checks). For deprecations or places where [SecureContext] isn't relevant, we do inline checks at the entry point to the API (e.g. https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/geolocation/Geolocation.cpp?rcl=2bd5f03512bcb0b0632366109612ea4e9c4b7ce2&l=220).

I think I agree with the thrust of your comments here, but I still don't really understand what this paragraph is telling feature designers. Does it boil down to "Use [SecureContext] when possible?"?
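The "inline check at the entry point" pattern mentioned for deprecations can be sketched like this (a hypothetical function loosely modeled on a geolocation-style API, not the actual Chromium code linked in this thread):

```javascript
// Entry-point guard: the API exists in all contexts, but bails out
// (here by throwing; an engine might instead warn or no-op) when the
// calling context is not secure.
function getCurrentPosition(context, successCallback) {
  if (!context.isSecureContext) {
    throw new Error("This API is restricted to secure contexts.");
  }
  // Placeholder result standing in for real engine work.
  successCallback({ latitude: 0, longitude: 0 });
}
```

Unlike [SecureContext], which removes the API from the global entirely, an entry-point check leaves the API detectable but non-functional, which is one reason the binding-level annotation tends to be preferable for new features.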
I think it boils down to "use [SecureContext] and equivalent things for other languages like CSS", and maybe be a little more hesitant about secure context restrictions that would need to be in other places.
Can we make it a requirement for specs to have web platform tests so that all browsers are consistent in how these features are hidden/fail?
Such that "For deprecations or places where [SecureContext] isn't relevant, we do inline checks at the entry point to the API" behaves consistently?
index.bs
Outdated
the expectations of authentication, integrity, or confidentiality
that are met only in secure contexts,
then it must be limited to secure contexts,
even if the other factors above could justify exposing it in non-secure contexts.
I don't see it as the feature depending on authentication, integrity, or confidentiality, but instead the feature posing some risk to user privacy or security which is mitigated only by requiring authentication, integrity, and confidentiality. I mean, at some level, all features depend on the page's integrity, right? :)
WDYT about something like "If a feature poses a risk to user privacy or security which can be mitigated by requiring authentication, integrity, and confidentiality, the feature must be limited to secure contexts, ..."?
To avoid the "poses a risk" in your wording that makes it sound like the feature is problematic, I think I'm going to try:
If a feature would pose a risk to user privacy or security without the authentication, integrity, or confidentiality that is present only in secure contexts, then the feature must be limited to secure contexts, ...
@dbaron: Friendly ping. We're having some conversations about this internally in Chrome, and a clear position statement from the TAG would be helpful in that discussion. :) /cc @slightlyoff Anything I can do to help out?
Yeah, I managed to miss the github email from your review comments 6 days ago. I'd been hoping to (a) polish the text I had so far, which was a bit rough, (b) go through the blink-api-owners thread and now also (c) go through your comments above. Not going to happen today, but hopefully sometime later this week.
This was originally part of w3ctag#75, but it seemed worth splitting out both into a separate section and a separate pull request.
I don't support limiting features to secure contexts solely as a carrot to encourage HTTPS adoption. Limiting new features that have privacy or security implications to secure contexts is sensible of course, but absent such reasons the implementation and authoring costs of fragmenting the platform generally outweigh the benefit of paternalism here. I'm glad @dbaron's planning to expand the exception around implementation complexity; I would go farther and carve out a similar exception for authoring complexity (again, when the feature does not have privacy or security concerns).
And in addition to "implementation complexity" for web browser developers, another aspect of "implementation complexity" is complexity for server owners. It's one thing for the operator of a web server reachable through the Internet to build a secure context. It's a bit more difficult for, say, the operator of a consumer-grade router, printer, or network-attached storage (NAS) device on a private home LAN. I estimate that it would include at least the annual cost of a real domain name, plus the annual cost of a domain-validated certificate should the single point of failure that is Let's Encrypt go under, plus one or more For Dummies books.
@hober's comment above is the general consensus at Apple. We think features should be restricted to secure contexts only when there is a privacy or security reason to do so for that specific feature. And it definitely should not be done for features where it would require constantly checking the isSecure bit during parsing of languages like CSS, WebAssembly or JavaScript. While we agree with the goal of getting more of the web onto HTTPS, we don't think forking the web platform is an acceptable cost for doing so.
Lastly, certificates are essentially required for HTTP2. If we can get the web on HTTPS, it will make the transition to HTTP2 even easier.
Secure Contexts says UAs MAY treat localhost as a secure context only if they can guarantee it will only ever resolve to a loopback address (and are in any case not required to). https://w3c.github.io/webappsec-secure-contexts/#localhost
@natewiebe13 It's not always practical to test on
I'm not sure how or whether this recommendation should be applied to programming languages like JavaScript and WebAssembly. TC39 has been trying to avoid making more JavaScript language modes with our "1JS" policy. Just from a parsing perspective, for example, adding another parameter to the grammar with new constructs banned adds significant complexity to the language definition, implementations and the testing matrix.

JavaScript library functions have not added I/O capabilities. The specification is currently not organized to give some global objects some library capabilities and others not; all global objects get all of the library. A change to this policy is possible, but it might relate to the in-development Realms specification (cc @erights).

The most prominently security-relevant recent feature is SharedArrayBuffer, but the TC39 decision (taking into account apparent cross-browser consensus) has been to not remove SharedArrayBuffer from the ECMAScript specification.
The difference is that this mode will go away long term, whereas class/module/non-strict/strict are likely to stick around.
@annevk Many also have the goal that most/all JS code will be able to transition to modules and in strict mode in the future.
Most seems reasonable, but all seems far less realistic given event handlers, legacy scripts, etc.
@annevk I'm not going to argue about whether non-secure contexts will go away, but, just saying this recommendation could be taken to affect JavaScript significantly in a way we're not currently working towards, or alternatively, it could be interpreted to be out of scope. Is anyone who is working on this policy interested in presenting it to TC39? (or at least discuss it in a bug on the ECMA-262 repo). Although the next meeting might be a little hard for some to attend physically (it's in London), it's possible to call into a VC. If you're associated with the W3C but not Ecma, it's possible to attend as a liaison or invited expert, just a matter of moving a few papers.
@annevk To clarify, for JavaScript, do you think this policy should apply to things in TC39's ECMAScript standard itself, or only to Web APIs defined in other specifications on top of it? |
I asked TC39 if they would like to adopt this policy, and so far the response has been widespread skepticism. |
There is a good discussion around this issue in this hackernews thread: https://news.ycombinator.com/item?id=16337998 |
@natewiebe13 wrote:
Let me describe the NAS case in more detail. The Secure Contexts spec proposes requiring a secure context for the Fullscreen API to make phishing by spoofing the operating system UI more difficult. A NAS on a home LAN might use the Fullscreen API to let users view videos stored on its drive. What certificate should the NAS's web server use once that becomes the case?
@littledan I think it depends on the feature and the impl complexity, but if a major new feature like modules would come along today, that seems like something we should restrict.
@pinobatch see https://blog.filippo.io/how-plex-is-doing-https-for-all-its-users/ for a potential setup.
@annevk Plex can pay for the DigiCert partnership with revenue from users who subscribe to Plex Pass ($5 per month, $40 per year, or $120 lifetime). Many developers of free software server applications have no analogous revenue source.
You could build equivalent infrastructure on top of Let's Encrypt, no?
Certainly, you could get a Let's Encrypt wildcard certificate for
My proposal would be to only introduce secure contexts for new JavaScript features if at the same time there is a standardised way to get a "trust certificate on first use" mechanism (think SSH) in the browser, that does not scare users away. I've attached my humble attempt at a mockup of what something like that may look like in the UA (comparison of what it looks like right now, and what it may look like).
After the face-to-face discussion yesterday, the TAG concluded we couldn't come to consensus on strong advice about limiting features to secure contexts. @slightlyoff drafted some text that we could have consensus on, which I've now merged with the text that was here. Given how long the history is here, I decided to create the new proposal as a separate pull request in #89 rather than continuing to revise this one.
Closing, as this was overtaken by #89 |
FWIW, I would like to throw my (relatively insubstantial) weight 100% behind @hober's comment here. Authoring complexity matters, release-date–based modes are capricious, and CSS authors often do not have the influence over server configs to escape this trap, making it particularly vicious to impose on them. Imho, W3C TAG should be recommending against policies like "all new Web platform features added after X date are HTTPS-only, because we want to have more HTTPS carrots". Such is not a policy crafted in service of good technical architecture: it is a marketing project being implemented as technical architecture. I don't believe marketing is a good basis for making decisions about the architectural foundations of the Web platform, and in this case I do consider it harmful, for the various reasons described by others in this thread.
This pull request is intended to fix #32.
I expect this text to be somewhat controversial and need a good bit of review and polishing. However, it seemed like a good way to start would be to write something down (thanks to @mikewest for the reminder), and we can try to make progress from there.
I'd also note that the place I put the new section in the document didn't feel particularly obvious, and is probably worth thinking about during the review.
I'll try to remember to address feedback as additional commits, with the intention of squashing the later commits in at the end.