Removing observable lookup/call of %SetPrototype%.add in new Set (#1430)
Comments
Note that this would break anyone currently relying on the monkey-patching working. My hope is that no one is relying on that.
This was absolutely intended and is necessary to allow a subclass to share the inherited constructor iteration behavior while changing the internal representation details.
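For concreteness, here is a hedged sketch (my illustration, not code from this thread) of the kind of subclass this enables: it changes how elements are stored while reusing the inherited constructor's iteration, because the constructor calls the looked-up `add` for each element.

```js
// Hypothetical subclass: normalizes strings before storing them, but reuses
// the inherited Set constructor, which calls the overridden `add` per element.
class CaseInsensitiveSet extends Set {
  add(value) {
    return super.add(typeof value === 'string' ? value.toLowerCase() : value);
  }
  has(value) {
    return super.has(typeof value === 'string' ? value.toLowerCase() : value);
  }
}

new CaseInsensitiveSet(['Foo', 'FOO']).size; // 1, both normalize to 'foo'
```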
The design already allows an implementation to optimize the constructor. Note that prior to the development of ES6 there was significant feedback from users who were unhappy because the ES3-era built-ins were not designed to be subclassed. Hence one of the goals of ES6 was to make the new and existing intrinsics work as subclassable base classes. To that end the ES6 methods were designed by somebody (me) with lots of experience in designing for subclass extensibility. It's all intentional. Note that TC39 could have said, "no, we don't want to make things subclassable because such extensibility often adds overhead that takes extra effort to optimize away". But that wasn't the decision that TC39 made for ES6, and it's now water under the bridge.
This is a breaking change and you have no way to determine how many hundreds, thousands, or millions of JavaScript programs depend upon the ES6 specified behavior. Why are you even considering this?
A subclass would presumably shadow `add`. I don't understand why this particular behavior is at all necessary for subclassing. Say more?
I don't have a 100% reliable way, but if we check GitHub and npm and add a use-counter to browsers looking for places which are overriding `Set.prototype.add`, we can get a reasonable sense of whether anyone is relying on this.
Mainly because I find it strange that built-ins are less defensible than user code, in a way which adds cost to engines for what seems to me to be no benefit. But perhaps I have not understood the benefit; can you say more about why this is desirable?

I agree that subclassing is an important design goal (though not everyone on the committee does). But I don't see how this particular behavior, which is only observable when overwriting intrinsics and invoking the `Set` constructor directly, is necessary for subclassing.

Edit: let me give a specific example of why this change would be helpful. The current behavior means that if I want to use the built-in `Set` in a way that is robust against other code patching `Set.prototype.add`, I can't simply write `new Set(iterable)`; I have to capture the original `add` or freeze the prototype first.
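A minimal sketch (my own, not from the thread) of the hazard described in the edit above: under the lookup behavior being discussed, earlier-running code can observe or alter the construction of every ordinary `Set`.

```js
// Patch the prototype method; the Set constructor's lookup of "add" will find it.
const originalAdd = Set.prototype.add;
Set.prototype.add = function (value) {
  console.log('observed', value);        // sees (and could change) each element
  return originalAdd.call(this, value);
};

new Set(['secret']); // logs "observed secret"
```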
The example given in your original issue is a perfect example. Subclass `Set`, override `add`, and the inherited constructor behavior picks up the override.
It is naive to think that everybody who matters makes their code publicly available on GitHub or npm, or runs it in a browser. The TC39 perspective has always been to strive for absolute backwards compat and only make breaking changes in exceptional circumstances. If a feature is there (even a bug) we have to assume that somebody depends upon it. In this case, you are talking about an intentional design decision with a real use case.

If you disagree with the subclassing goal that ES6 was designed under, then it would probably be better to try to convince TC39 to totally reverse all decisions derived from that goal rather than chipping away at individual methods like this.
I don't understand this statement about your example. Why isn't freezing the `Set` prototype an adequate solution for that use case?

Finally, if you want to make a frozen, defensible intrinsic Set that isn't subclassable, why don't you propose a built-in that exports such a thing rather than breaking stuff that is already there? My sense is that your use case and the subclassability use case are ultimately incompatible with each other. Rather than compromising both, it would be better to have two separate intrinsic Set classes.
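A hedged sketch (my illustration, not code from this comment) of the "freeze the intrinsic" option alluded to above: freezing `Set.prototype` prevents later code from replacing `add`, at the cost of breaking anything that legitimately wants to patch it.

```js
// After this, attempts to overwrite Set.prototype.add fail (silently in sloppy
// mode, with a TypeError in strict mode), so new Set(iterable) keeps using the
// original add.
Object.freeze(Set.prototype);

new Set([1, 2, 3]).size; // 3, unaffected by any later patching attempts
```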
Ah, ok, I think there was a misunderstanding. To be clear, I am not proposing to change this behavior. I understand why it's important that the example I gave works as it does (which is why I gave it), and specifically meant to highlight that it should continue to work exactly the same under my proposed change. That's what I meant by "This would preserve the subclassing behavior". I am only proposing to change what happens when calling the `Set` constructor directly.

The change would be to replace step 6 of the Set constructor algorithm, which currently reads "Let adder be ? Get(set, "add")", so that when NewTarget is the `Set` constructor itself the constructor just uses the original `Set.prototype.add`.
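To make the shape of the proposal concrete, here is a rough JavaScript sketch (mine, not spec text or the proposer's wording) of the intended semantics: the observable lookup is skipped only when the constructor is invoked directly.

```js
// Capture the intrinsic once, standing in for %Set.prototype.add%.
const intrinsicAdd = Set.prototype.add;

// Hypothetical helper mirroring the iterable-consuming part of the constructor.
function addEntries(newTarget, set, iterable) {
  // Direct `new Set(...)`: use the intrinsic add, ignoring later patches.
  // Subclass construction (newTarget !== Set): keep the observable lookup so
  // an overridden `add` still runs for each element.
  const adder = newTarget === Set ? intrinsicAdd : set.add;
  for (const value of iterable) {
    adder.call(set, value);
  }
  return set;
}
```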
Hopefully this is clear from the above, but that's not what I meant by "this behavior". The behavior I am referring to is that

```js
Set.prototype.add = foo;
new Set([0]);
```

calls `foo`.
This is now somewhat off-topic, but generally speaking most libraries try not to have side effects, including affecting the configurability of builtins.
You could imagine doing something similar to what the original post proposes with, e.g., RegExp subclassing. Such a change could lead to a significantly simpler implementation.
@bakkot is this something you want to make a PR for, and/or place on the June or July agendas?
The `Set` constructor when called with an iterable behaves as follows:

- create a new object
- look up `add` on that object
- call `add` on the new object, passing each member of the iterable

I believe the lookup for `add` is to support subclasses overriding `add`, such that a subclass's overridden `add` is invoked during construction, as in the sketch below.
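An illustrative subclass of this kind (my stand-in, not necessarily the example the issue originally contained):

```js
// The Set constructor looks up `add` on the new object, so this override
// runs for each member of the iterable passed to the constructor.
class LoggingSet extends Set {
  add(value) {
    console.log('adding', value);
    return super.add(value);
  }
}

new LoggingSet([1, 2]); // logs "adding 1" then "adding 2"
```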
But this also has the consequence that changing `Set.prototype.add` changes the behavior of the `Set` constructor, which adds implementation overhead. I don't think it's intended or necessary that `Set.prototype.add` be monkey-patchable in this way. Perhaps we could say that if NewTarget is the `Set` constructor itself (edit: from the same realm), it can just use the original `Set.prototype.add`. This would preserve the subclassing behavior (i.e., my example would work the same) while reducing complexity in the common non-subclassed case.

This would debatably be an inconsistency, but it's only observable if you're patching intrinsics, which is a case where I don't think we should worry too much about trying to maintain consistency.

This all applies to `WeakSet`, `Map`, and `WeakMap` as well.
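Since the last sentence extends this to `Map` and friends, here is a parallel sketch (mine) showing the same observable lookup through `Map.prototype.set`:

```js
// The Map constructor looks up `set` on the new map and calls it for each
// [key, value] entry of the iterable, so a patched prototype method is observable.
const originalSet = Map.prototype.set;
Map.prototype.set = function (key, value) {
  console.log('setting', key, value);
  return originalSet.call(this, key, value);
};

new Map([['a', 1]]); // logs "setting a 1"
```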