Regarding "Don't break user intuition" #30
The problem is what type a mixed expression should evaluate to. That said, I'd maybe support implicit conversion to Integer when one operand is an Integer and the other is a Number ≤ 2**53 with no decimal part (throwing when there is a decimal part). I'm not sure if this sort of conditional throwing is a good idea from an avoiding-bugs or implementation perspective, though. (@rauschma's proposal looks like it changes the semantics of existing JS math.)
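To make the dilemma concrete, here is a minimal sketch (the values are illustrative, not from the proposal):

```js
const big = 2n ** 64n; // 18446744073709551616n

// Interpretation A: promote to Number. The 0.5 silently vanishes,
// because the float64 spacing at this magnitude is 4096.
const asNumber = Number(big) + 0.5; // 18446744073709551616

// Interpretation B: truncate the Number operand to an Integer.
// The 0.5 is silently dropped here as well.
const asInteger = big + BigInt(Math.trunc(0.5)); // 18446744073709551616n
```

Either reading silently violates one of the two intuitions, which is the argument for throwing instead.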
I do understand the dilemma here. However, I'd rather lose precision than throw an error. Losing precision is something we already deal with in IEEE 754, so it doesn't seem bad to me. @rauschma's proposal is worth discussing, but this is not the place :-)
In which direction, though? If there were a single obvious choice I'd agree, but I'm not convinced there is. I think in cases where there are two highly conflicting intuitions for the semantics of some code, and it's possible to just make that code an error and instead require more explicit expressions of intent, it's better to make it an error. (For example, this is why adding a Symbol to a number throws rather than coercing.)
@gweax If users are OK with losing precision, then they can stick with Number, and maybe pursuing this topic is a bad idea. But the idea here is that many users want to get greater precision, and this is a proposal towards that use case. Implicitly losing precision seems like the worst thing to do for these users, doesn't it?
Maybe this is a general issue of where JavaScript is heading. It seems to me that steps are currently being taken to change (or reduce) its nature as a loosely typed language. A part of that nature is implicitly coercing one type to another when the expected types don't match the actual ones. Of course this led to silly and sometimes inconsistent coercion rules. Introducing a new data type that leads to runtime errors (in contrast to a syntax error) breaks with that nature. My reasoning lacks an alternative for how to cope with Integers and other types, because I'm not too deep into the matter. If for the expected use case there is no other solution than to throw errors, well, then we have to go with it.
Pedantically:
Symbol() + 1; // throws
({ valueOf() { throw new Error('unsupported'); } }) + 1; // throws
So it's not a totally radical proposition, especially in ES2015-land. ES2015 also tightens the type system in other ways, like making classes non-callable and generators non-constructible. But it's true that this would be a step further in that direction, especially since Integers, unlike Symbols, do have a sensible interpretation when added to floats. (In fact they have two sensible interpretations, which conflict.) I think this is worth it: I think it will prevent far more bugs than it causes, without making it that much harder to write code.
This would only prevent bugs when mathing numbers and "integers larger than 2**53" - how common do we think this is among users who know the difference and care about it?
@ljharb, I can't tell what your "this" is referring to. In any case, I suspect that most users of Integers will want accurate math when operating on Integers larger than 2**53 - otherwise they wouldn't use them.
Right - I'm saying that very few users will be using integers that large and performing math ops with those integers and floats, and those rare users will be quite aware of the precision issues involved. In other words, I'd be fine with only allowing integer conversion when it didn't drop precision; but lacking that, I think dropping precision is much better than throwing, since the extremely common case will not drop precision.
@bakkot This is also the case with other types and the plus operator. The more I think about it, the more I tend to say that implicit coercion is the better fit for JavaScript. Given, for example, an implementation of the Fibonacci numbers, a function written for "traditional" JavaScript could be something like the naive version below (with awful performance):
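(The original snippet was lost in this thread; the following is a plausible reconstruction of the usual naive version.)

```js
// Naive recursive Fibonacci over plain Numbers: exponential time,
// and the result silently loses precision once it exceeds 2**53.
function fibonacci(n) {
  if (n < 2) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}

fibonacci(10); // 55
```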
With implicit coercion, that same function would keep working when handed an Integer. Existing code could benefit from implicit coercion, and this is a strong argument against throwing.
@gweax What you propose is the exact opposite of what other environments do. For instance, in Python:
>>> x1 = int( 1 )
>>> type( x1 )
<type 'int'>
>>> x2 = 1.0
>>> type( x2 )
<type 'float'>
>>> x3 = x1 + x2
>>> x3
2.0
>>> type( x3 )
<type 'float'>
In Julia, the same promotion applies: adding an Int and a Float64 yields a Float64. The same type promotion rules apply in Java, C, and Perl. Were JavaScript to pursue the path you proposed, porting numeric computing code would require considerably more work to ensure type conversions were handled in accordance with existing expectations. The above, however, is not an argument against the possibility of implicit conversion, but were any conversion to occur, the path would be from Integer to Number, in line with the languages above. FWIW, in Julia, adding an arbitrary-precision BigInt and a Float64 promotes to BigFloat rather than losing the integer's precision.
This is very interesting for two reasons.
In C/C++, there are all these implicit casts in numeric operators and argument passing. However, they have been so error-prone in the past that many projects run with a compiler mode that rejects programs relying on implicit casts, requiring explicit ones instead. I think many programs that people will be porting will already be uniform in this sense, and many of the ones that are not have bugs due to losing precision.
Porting of C++/Java/Python code is cool, but it's 1000x more LOC of existing JavaScript that should be the priority. How should existing code react to calculateCompoundReturn( 5n )? If this works for a Number argument, users will expect it to keep working for an Integer. We have business logic, complicated rendering calculations, extensive well-tuned libraries like jQuery, d3, three.js and more. If Integer throws, if it is a poisoned apple, maintainers of all those libraries will have big trouble on their hands.
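A sketch of that concern (calculateCompoundReturn is hypothetical here, and its body is invented purely to illustrate):

```js
// A typical existing library function, written against Number.
function calculateCompoundReturn(principal, rate = 0.05, years = 10) {
  return principal * (1 + rate) ** years;
}

calculateCompoundReturn(5);  // ~8.144
calculateCompoundReturn(5n); // throws a TypeError under the proposal:
                             // 5n * <Number> mixes Integer and Number
```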
@mihailik How does this case differ from passing a Symbol into such libraries?
Excellent point, and there is a good reason too: it's highly improbable for a Symbol to be misplaced into a string/number position.
It differs because conceptually, a Symbol isn't the same as a number, but conceptually, an integer is very much the same.
OK, I can see how users would want Integers to behave this way. I can also see how users would want Numbers to automatically overflow into Integers. The problem, though, is how and whether we can do it while preserving other important properties. I am having trouble thinking of a way, but maybe you two have more ideas. At some point, limitations of language design and backwards compatibility seep in and make the union of all user intuitions untenable. I tried to document the issues with mixed operands in the explainer, but maybe the story I told there was missing something.
I think you documented it well, but I think the conclusion - that throwing when mathing between two number types is acceptable - is where we differ; I think that lacking such a way, Integers are untenable.
Kind of an odd idea, but just as we have "use strict", maybe there could be an opt-in mode for the new semantics?
As JSON allows for different number notations (integer, fractional, and exponent forms), a number serialized without a decimal point could map to an Integer. I'd see that as a valid solution, and it's reversible when Numbers (even when isInteger() is true) always use dot notation when serializing.
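A sketch of the serializing half of that idea (hypothetical helper; it assumes BigInts report typeof 'bigint' and it ignores exponent-notation edge cases):

```js
// Hypothetical rule: Integers serialize without a decimal point,
// Numbers always with one, so the two stay distinguishable.
function serializeNumeric(value) {
  if (typeof value === 'bigint') return value.toString(); // "12"
  if (Number.isInteger(value)) return value.toFixed(1);   // "12.0"
  return String(value);                                   // "12.5"
}

serializeNumeric(12n);  // "12"
serializeNumeric(12);   // "12.0"
serializeNumeric(12.5); // "12.5"
```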
Strongly disagree: Throwing in these cases is much preferable to silently having the wrong semantics. Remember, for the error to be triggered, someone has to write the code which passes in the BigInteger. Even if the conceptual similarity between numbers and BigIntegers leads them to expect that to work, they will immediately find out that it doesn't. I don't understand why you expect it to be such an issue. (Though I guess it would be nice to hear thoughts from library maintainers too.) This doesn't seem to be that much of an issue in other languages: In Go, for example, it's a compile error to multiply a big.Int and an int. I don't think throwing a type error as soon as possible is that much less user friendly. Really I expect that BigIntegers will see two kinds of users: a.) people who actually have a use for them, who I do not expect to be surprised or confused by the throwiness, and b.) people who are just trying them out, who may be surprised by the throwiness but who I expect to then conclude that BigIntegers are not something they want to use. Which is the desired effect.
If Integers are explicitly intended for niche usage, they should live in a library, and not surface as 1st class syntax. Compare with typed arrays.
Possibly uncommon first-hand experience: when implementing hardware drivers in JS, implicit coercion (strictly in the Integer/Number operation sense) can have dangerous (as in physical harm) consequences. I'd prefer the program to throw over the alternative.
Are you inferring that from @bakkot's message above about the two kinds of users? I don't think the intention was to make it appear that BigInteger was for "niche usage", simply that BigIntegers have a concrete role to fill.
@rwaldron storing GUIDs, storing bit flags: those are the proposal's own headline use cases, not niche ones. Specifically, bit flags are very prone to being brittle with regard to pass/throw. They are used with a wide variety of operations: ==, ===, if, ?, >, &, |. Producing a syntax that encourages wide generic usage but leads to brittleness would both impose heavy costs of broken compatibility and leave the feature dormant, next to 0-octals.
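A sketch of that brittleness (flag names hypothetical, assuming the proposal's throwing semantics for mixed operands):

```js
const READ  = 1n << 0n;
const WRITE = 1n << 1n;

function canWrite(flags) {
  return (flags & WRITE) !== 0n;
}

canWrite(3n); // true
canWrite(3);  // throws a TypeError under the proposal:
              // 3 is a Number, and Number & Integer throws
```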
@ConorOBrien-Foxx About introducing a language mode: TC39 considered, in the ES6 cycle, restricting features to certain modes. However, @dherman made the case against proliferating modes, instead promoting a rallying cry of "1JS!": a single language which may be more complicated for implementers and spec writers, but a simpler mental model for users.
@mihailik I think 0-octals are a bit different--the really problematic thing about them is that if you include an 8 or a 9, it silently falls back to non-octal. I'd bucket this as a big "oops, that really never helped anyone in the first place, and it seems like an accident of history", together with divergence in line endings among platforms. By contrast, this is an explicit design decision, and it's in the direction of being more strict rather than more loosey-goosey as 0-octals are. By the way, I wish 0-octals were out of use--then we could remove the feature. @MMeent I think we could do something like that if we were starting over with a new language. asm.js makes such a distinction, actually, but in a way such that it will only reject programs with the wrong/missing decimal, not actually change runtime semantics (see the sketch below). Unfortunately, deciding whether a literal is an Integer or a Number based on its notation is likely to change the semantics of existing programs. (Python went through exactly this kind of pain when it changed integer division between versions 2 and 3.) I just want to mention that all of these issues come up in exactly the same way for Int64 (use cases that this proposal attempts to meet, but which have previously been proposed separately). If we think Integer is untenable due to them, that means we are shutting all users of JavaScript out from what's been a frequent request from Node, embedded use cases, and authors of several libraries which need to do complex calculations.
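A rough sketch of the asm.js annotation style mentioned above (not validator-exact):

```js
// In asm.js, `x|0` marks an int and `+x` a double. A validator rejects
// code whose annotations don't line up, rather than changing what it
// computes; run as plain JS, the same source keeps ordinary semantics.
function asmStyle(x, y) {
  x = x | 0;       // declare x as int
  y = +y;          // declare y as double
  return +(x + y); // coerce the result to double
}
```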
@littledan I don't think Integer is untenable at all. The proposed semantics are sound and may still be relaxed in the future if there is sufficient evidence that throwing is undesirable. (Although, I predict that will not be the case.)
@littledan the current TC39 proposal suggests wide common usage of Integer: for storing large unique identifiers, and for storing bit flags. These use cases are at the top of the README. The idea that this proposal can only apply to code dealing with 2^53-sized numbers quite simply contradicts the spec's stated goals. It's not the browsers' maintainers we should worry about — it's about the massive amount of JS code out there. Just as with zero-octals, you can easily slip on a single character. Again, it's not about how hard it is to implement the new feature, nor how easy it is to write new fancy code. It's how bad it is for the existing code. And remember, you're not in a writer/consumer world. TypeErrors are fired in the face of the end user, due to one value somewhere falling out of the previously tested range. Off by one and you're out (think of the bit-flag issues I mentioned earlier).
Just for some context, I'm not making arguments here for implementation complexity. The implementation of Integer will be complex, no bones about it. All of the arguments I've been making here are about language consistency, users getting the right answers, and being open to future evolution. Generally, as a design principle, I think when a user slips on a character and writes the wrong thing, the best case scenario would be for the language to throw an exception as often as possible (rather than intuit what might be the right answer) so that the error can be caught earlier in the development cycle. If we can throw an early error, that's the best. If we can throw a TypeError each time that code is reached, well, better than silently getting the wrong answer. I don't know how to solve the problem of users writing code that doesn't work--maybe systems like TypeScript can help detect these cases.
I do agree. But that's not the way JavaScript was designed in the beginning. There seems to be a consensus that this was a design mistake; thus we have Douglas Crockford's "Good Parts", which can be condensed into one statement: "Don't use implicit type coercion". Throwing an exception is another way to deal with it - maybe following a reasonable design principle, but breaking the design principle of JavaScript's old days. Is there a general consensus that this is the way JavaScript should go?
@gweax I don't know if there's general consensus about anything in this area. Certainly the "don't use implicit type coercion" thesis has its detractors. But ES2015 has definitely been moving in this direction, as @bakkot pointed out, and throwing in these contexts was well-received at TC39.
Another option would be to make it so that Numbers in the safe integer range get coerced to Integer when being added to an Integer, and all other Number values throw an error, so that loss of precision can never occur but operations still make sense, e.g.:
1n + 1 == 2n
1n * 2 == 2n
1n - 1000 == -999n
1n + 1.5 // Throws
1n + 1e16 // Throws
1n + NaN // Throws
1n + Infinity // Throws
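A sketch of that rule as an explicit helper (the helper name is hypothetical; under this option the operators themselves would apply it):

```js
// Coerce a Number operand to BigInt only when no precision can be lost;
// throw for fractions, values outside the safe range, NaN, and Infinity.
function toSafeInteger(x) {
  if (typeof x === 'bigint') return x;
  if (Number.isSafeInteger(x)) return BigInt(x);
  throw new TypeError(`cannot losslessly convert ${x} to an Integer`);
}

1n + toSafeInteger(1);    // 2n
1n + toSafeInteger(1.5);  // throws
1n + toSafeInteger(1e16); // throws (outside the safe integer range)
```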
@Jamesernator that has been suggested here and some counterarguments are here and here
And we can already spot some side effects of that: nodejs/node#11637 If Integer is handled the same way, it's likely we'll end up with tons of similar bugs (as someone already noted, it's way more probable to collide a Number and an Integer than a Symbol and a string).
@medikoo that's a side effect of the core code making the forever-false assumption that you can safely stringify any value.
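For example, implicit stringification can already throw in today's JS:

```js
'' + Symbol('x');    // throws a TypeError
`${{ toString() { throw new Error('nope'); } }}`; // throws
String(Symbol('x')); // "Symbol(x)": explicit conversion is fine
```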
I agree it's not the best example, as indeed at such a level (and in an ES5+ env) we should never assume that a value is coercible to string.
@ljharb and in fact I suspect node is still broken for that case.
It's nothing to do with WHERE JS is going; it's all about how it chooses to travel. Every household has knives aplenty. If you set your mind to it, you can injure yourself badly. The question is how to make it predictably very hard to cut yourself. Removing the knife's handle is bad. You can argue the blade pre-existed. But the point stands: you KNOW people will open the drawer, pick up the knife and bleed. Normal people, not nerdy compiler writers.
Perhaps a better analogy than removing the handle would be covering the blade in a piece of tissue paper (so that it's easy not to realize that picking it up by the wrapped blade could still cut you later), versus removing the tissue paper so that the only safe way to hold it is obviously the handle. Adding these exceptions means you're more likely to get cut, but only when you're doing the wrong thing - which is the time you're supposed to get cut, so you learn to stop picking up knives by the blade.
People have been using numerical values in JS for years. No, decades. You know they will cut themselves with Integers. The INTENDED USE CASES for the Integer and Number types overlap significantly. Not by mistake: BY DESIGN they're going to mix, and inevitably blow up. Read the proposal; it's a general-purpose feature. This proposal creates bugs, and it definitely fixes no pre-existing bugs.
@mihailik What you're saying isn't unreasonable--maybe this proposal is fatally un-ergonomic, and there's just no good way to do it, and we therefore should never expose other numerical types in JavaScript. I think this hazard is something to watch out for as this proposal advances. We need more data. What about this--we'll get this implemented in Babel, and then it will be possible to write real programs containing BigInts, with these possibly-unergonomic exception semantics. That way, we can see how bad things really are. Would that be a good way to resolve the issue for you?
@littledan two caveats here: (1) whether Babel can implement the full runtime semantics rather than just the new syntax, and (2) how to exercise existing code rather than only newly written code.
If (1) can be done, maybe (2) can be handled with HTTP proxying and on-the-fly translation of existing websites? We can list possible bug-prone patterns, and detect/inject them alongside the Babel/Integer conversion into the JS code when it's proxied. Besides, that approach and the proxy/translation infrastructure could be used to assess the risks of other syntactical breaking changes considered for JS.
For example, we can replace whole libraries with integer-enhanced versions -- and see whether websites cope.
@mihailik Clearly it's possible to do such an on-the-fly translation into broken code. The idea of this proposal is to discourage people from using BigInts in places where they use Numbers, not to silently "integer-enhance" existing code. It's clear that what you're suggesting will lead to exceptions being thrown, but that doesn't necessarily mean that this proposal won't work. I think we need a slightly smarter method to test how bad this will be for users.
@littledan actually, the proposal does not discourage that at all. Consider the first code sample on the page, the nthPrime/isPrime primality example.
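That sample didn't survive extraction here; a sketch along the README's lines (details may differ from the proposal's exact code):

```js
function nthPrime(nth) {
  function isPrime(p) {
    for (let i = 2n; i < p; i++) {
      if (p % i === 0n) return false;
    }
    return true;
  }
  for (let candidate = 2n; ; candidate++) {
    if (isPrime(candidate) && --nth === 0n) {
      return candidate;
    }
  }
}

nthPrime(5n); // 11n
```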
It's a convincing code sample for using BigInt in general-purpose integer maths. A sizeable subset of what today's JS deals with is integer maths, especially offset/length logic. And the README's use cases explicitly encourage overlapped use of BigInt/Number: storing large unique identifiers, bit flags, and the like. Apart from grey areas like machine registers, GUIDs etc. -- the use case is exactly the general-purpose integer maths that Number code already does today.
@mihailik Untested code in JavaScript is problematic for all sorts of reasons. For example, you may refer to a missing variable, and you'll only get the ReferenceError at runtime, not compile-time. For this, there are solutions like TypeScript, which presumably would be capable of finding errors in code using BigInts as well.
#36 seemed like our best shot at meeting the user intuitions expressed here, but as explained in both of these threads, it doesn't seem like a tenable path.
The proposal states that "When a messy situation comes up, this proposal errs on the side of throwing an exception rather than silently giving a bad answer."
To me this is exactly the opposite of what my intuition says. The forgiving nature of JavaScript is to me an integral part of the language. Throwing an error seems so Java-ish, especially when using the + operator. Although I understand the rationale behind it, I think the plus operator should never throw an error, because that's the way it currently is. Doing otherwise would break user intuition. Axel Rauschmayer's proposal is in my opinion more consistent with regard to user intuition.