no-param-reassign with props, again #719
I totally agree that this can be annoying, particularly in non-pure reducer functions (even though the overall reduce operation is pure). It's also a good point that an eslint rule will have a very hard time covering the nuances. However, I'd rather have a lint rule that's overly prescriptive (one that can be turned off inline with an override comment where needed). |
I for one am for reversing this patch that prohibited this. I've asked the question upstream as to what this option is trying to accomplish, because I don't see the point and I have a very low bar from a style standpoint. I've also updated the original ticket (which the patch that implemented this did nothing to address). #626 |
This is a rule we use at Airbnb, and it applies in the reducer case as well. Mutation should be avoided in all its forms, and our style guide tries to be consistent with this belief. |
A note about `Object.assign`: it mutates its first argument. See an example in the Chrome console:

```js
> var obj = {};
> var newObj = Object.assign(obj, { a: 1 });
> obj
Object {a: 1}
> newObj
Object {a: 1}
```

To avoid mutation, it should be written as `Object.assign({}, obj, { a: 1 })`. |
You're right, my mistake. I've updated my original comment. Performance is the last and least important concern when programming - it's easy to speed up clean code, but it's hard to clean up fast code. |
If anybody is interested, I created a fork of the rule where reassigning properties of the accumulator inside a reduce is now ok:

```js
["foo", "bar"].reduce((accumulator, value) => {
  accumulator[value] = true;
  return accumulator;
}, {});
```

It can be configured. |
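For illustration only, enabling such a fork might look roughly like this in an `.eslintrc.js`. The plugin is the one linked later in this thread (eslint-plugin-no-not-accumulator-reassign), but the rule name and option shape shown here are assumptions, so check the fork's README for the real ones:

```js
// .eslintrc.js — hypothetical configuration sketch; rule/option names are assumptions
module.exports = {
  plugins: ['no-not-accumulator-reassign'],
  rules: {
    // turn off the stock rule and use the fork's accumulator-aware variant instead
    'no-param-reassign': 'off',
    'no-not-accumulator-reassign/no-not-accumulator-reassign': [2, ['reduce'], { props: true }],
  },
};
```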
My solution to avoid this lint error is to use `Object.assign` on the accumulator instead of assigning to its properties directly:

```js
return Object.keys(obj).reduce((acc, key) => {
  Object.assign(acc, { [key]: obj[key].value })
  return acc
}, {})
``` |
@ghengeveld creating new copies on each iteration is not actually a performance issue in practice. Engines are able to optimize it just fine. |
Is it possible to incorporate @mathieudutour's fork? |
@shinzui that would need to be done in core eslint, and I don't think one can statically know in a reliable manner that a given "reduce" is actually `Array.prototype.reduce`. |
If you are using the object spread transform (https://babeljs.io/docs/plugins/transform-object-rest-spread/), you can return a new object from each iteration instead of mutating the accumulator; Babel compiles the spread down to `Object.assign`-style output. |
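As a minimal sketch, using the accumulator example from earlier in this thread, the spread-based variant looks like this:

```js
// non-mutating: each iteration spreads the previous accumulator into a fresh object
const result = ['foo', 'bar'].reduce((acc, value) => ({ ...acc, [value]: true }), {});
// => { foo: true, bar: true }
```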
From benchmarking, that doesn't really seem to be the case in any browsers: https://jsperf.com/object-create-vs-mutation-in-reduce/7. There's a difference of a couple orders of magnitude in speed. |
@masaeedu jsperf's microbenchmarks are fun diversions, but aren't actually useful in a practical sense because they don't mimic real usage in an actual application lifecycle. That said, whether it can do 200K operations in a second or 1.6 million (the numbers my browser showed) is irrelevant; you're not even likely to be doing 200 operations in a second let alone 200K. |
@ljharb I was just letting you know that this: "Engines are able to optimize it just fine" is wrong. There is a cost to creating copies in terms of GC pressure and speed, it isn't a zero cost abstraction in JS. The browser isn't the only place you have to run JS, and there's often hot paths where you'll have to use mutation instead of creating millions of immediately discarded copies of increasing size. |
Fair enough. However, when performance is a concern, best practices tend to be intentionally sacrificed, but that shouldn't be done without extensive real-world (not JSPerf) benchmarks. |
True. But I don't think considering the accumulator immutable is a good practice in any case. |
The performance of the non-mutating alternative is clearly worse, so the question is why we'd prefer it. If the answer is "to make the linter happy" then that's the tail wagging the dog. This is a corner case that eslint does not (yet) cover (which is OK, we evolve it as things come up); for now we can disable the rule inline where it applies. If the answer is "to make the code more functional", then I ask you: where do we draw the line? JavaScript is not functional. It is imperative and object-based. We can write pure functions and pass them around because functions are first-class citizens. And that's really great when the situation is appropriate. But to say "all functions should always be pure" is to choose to ignore the realities of the language in favour of some ideal which really isn't applicable here. What do you think happens under the covers when you call something like `Object.assign` on every iteration? The engine allocates and copies a new object each time; it can't assume otherwise.
Interpreters in true functional languages like Haskell, Lisp, and Scala can make assumptions, though, because of the mathematical properties of those languages. You can have your cake and eat it, too. You can write naive declarative code without incurring the cost of a naive imperative implementation. The interpreter can safely make assumptions about what you mean and fudge the true implementation on the metal.

Now we have to ask ourselves "what is the purpose of this linting rule?" To prevent people making this mistake:

```js
function sayHelloTo(person) {
  // Don't waste memory lol! Just store the capitalised name here!
  // With an exclamation point! WCGW?
  person.name = person.name.toUpperCase() + '!';
  console.log('Well hello ' + person.name);
}

let person = { name: 'Mr. Bean' };
sayHelloTo(person);           // Well hello MR. BEAN!
savePersonToDatabase(person); // Saved as { name: 'MR. BEAN!' }
```

The person implementing `sayHelloTo` mutated an object they didn't own, and the corrupted name leaks all the way into the database. Compare to this:

```js
function truthify(array) {
  return array.reduce((prev, cur) => {
    prev[`_${cur}`] = true;
    return prev;
  }, {});
}
```

Here the mutation never escapes the function: the accumulator is created inside the reduce and handed straight back as the return value.
To wrap up, just because you can write pure functions in JavaScript doesn't mean JavaScript is a functional language. Don't pretend the interpreter is written to turn naive declarative code into a sensible imperative implementation. Do functional when it makes sense and use it to your advantage. But don't follow an ideal that conflicts with the problem at hand and a practical solution just because the linter complained about it. |
@iknowcss performance is irrelevant, and the least important thing for a style guide, or when writing code overall. What happens under the covers is also entirely irrelevant - that's an implementation detail of the engine. (eslint also now covers this case, fwiw.) To wrap up: mutation should always be avoided; performance of a given function almost never matters; and don't pretend the way the interpreter is written matters. |
Perhaps Airbnb only targets $2000 MacBooks, but if you want to run JavaScript on a 5 year old Android device in low-power mode, these are the kind of things that can cause frustration among your users. Proclaiming performance is irrelevant is silly because it completely depends on context, but it's especially silly if the only reason to forego a simple and easy optimization is to hold true to your "mutation is evil" belief. Localized, controlled mutation as described by @iknowcss is a perfectly good practice. The goal we're trying to achieve by avoiding mutation is to keep code maintainable and predictable, and that's exactly what's achieved here, despite the implementation detail of using mutation in a controlled manner. |
I've had some more time to think about this and have come up with some different opinions.

First, truly and honestly, thank you @ljharb for your contributions to this repo and the AirBnB styleguide. If it wasn't for your work on this project it wouldn't be where it is today. I use the AirBnB linting rules in my projects. They're consistent and reasonable, and while I'm not always used to them, they're easy enough to get used to. I'm glad this is here for everyone to use for free and I'm sorry my only contribution (so far) is to resurrect this thread.

Second, I think the thing that everyone is worked up the most about is not actually the style rule so much as the proposed alternative. Let's consider the two separately.

1. Keeping the style guide consistent is hard

Writing a good style guide to begin with requires more foresight and planning than most can muster. On top of that it must remain consistent and prescriptive without micro-managing too much. Little exceptions to the guide, like the reduce accumulator case here, erode that consistency, so I understand the reluctance to add one.

Now to the part I can't get behind.

2. The proposed alternative has unnecessarily poor performance

Start with this implementation:

```js
function truthify(array) {
  const result = array.reduce((prev, cur) => {
    prev[cur] = true;
    return prev;
  }, {});
  return result;
}
```

This code is simple and fast, but it mutates the accumulator and therefore violates the rule. The compliant alternative proposed in this thread is:

```js
function truthify(array) {
  const result = array.reduce((prev, cur) => {
    return Object.assign({}, prev, { [cur]: true });
  }, {});
  return result;
}
```

The proposed alternative does not violate the rule, but its performance profile is unnecessarily poor: it allocates and copies a new object on every iteration, turning an O(n) operation into an O(n^2) one.

Why are we using `reduce` here at all? A plain old loop does the job:

```js
function truthify(array) {
  var result = {};
  for (var i = 0; i < array.length; i++) {
    result[array[i]] = true;
  }
  return result;
}
```

Now that we have an implementation with a sensible performance profile, we can write the same thing in a more modern style:

```js
function truthify(array) {
  const result = {};
  array.forEach((item) => {
    result[item] = true;
  });
  return result;
}
```

This alternative does not violate the rule and its performance profile is O(n). What does everyone think about that? I personally will stop using `reduce` for this kind of task. |
@ghengeveld Obviously you always will want to profile your app on the devices that matter, and then optimize the slow parts - but that comes after it's built (cleanly), it works, and it's tested. I'm not saying "screw slow devices and the users who have them", I'm saying that it's always easier to make clean code fast than it will be to make fast code clean - so start clean, optimize later. @iknowcss I appreciate your thorough analysis. It's worth noting that "O(n^2)" literally will not matter in the slightest unless your array has millions of items, or at least thousands. How large an array are you realistically using here? Also, loops should be avoided whenever possible (as they fail to clearly signal intent), as should mutations and side effects (also for clarity) - but encapsulated in a very specific abstraction like "truthify", that makes perfect sense (linter override comments inside abstractions are always totally acceptable; that's what that feature is for). Note that this is not a general solution, but it works well for a specific use case like you've provided. Overall, big O notation and theoretical algorithmic complexity are useful ways to think about general complexity, but they rarely have a practical impact when the inputs are as small as they typically are in application code. |
Just came here to say that this advice about Object.assign inside a reduce occasionally does have real world performance implications, and not just toy examples from jsfiddle.net, and I'd personally avoid it in the general case. Here's the output of a perf run from a real webpack run for the unused-files-webpack-plugin. And this is a case where an array has ~1800 entries. Anyway, just thought I'd share a real case where this is happening! |
@mikesherov thanks! When you do find a rare case where it is a performance bottleneck (and you've proven it with benchmarking/profiling), that's a perfect case for an eslint override comment. The default, however, should always ignore performance in favor of clarity. |
Just to add to this old discussion - for cases where you just want to check membership (keys mapped to `true`), a `Set` sidesteps the accumulator question entirely. |
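A minimal sketch of that Set-based approach, assuming the only goal is membership testing rather than building a plain object:

```js
// no accumulator to mutate or copy; duplicates collapse automatically
const seen = new Set(['foo', 'bar', 'qux', 'foo']);
seen.has('foo'); // true
seen.has('baz'); // false
```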
@RReverser good point about the Set. I was thinking about that this morning as well... good tip! |
The original code snippet can be rewritten this way:

```js
Object.assign(...['foo', 'bar', 'qux', 'foo'].map(value => ({ [`_${value}`]: true })));
```

It's a shorter and faster[0] alternative compared to the reduce-with-assign pattern:

```js
['foo', 'bar', 'qux', 'foo'].reduce((acc, value) => Object.assign({}, acc, { [`_${value}`]: true }), {})
```

[0] See: https://jsperf.com/reduce-with-assign-vs-assign-with-map/ |
yes but it is a lot slower than

```js
['foo', 'bar', 'qux', 'foo'].reduce((acc, value) => {
  acc[`_${value}`] = true
  return acc
}, {})
```

and that's what the issue is about |
“Slower” on an array with 4 items, once? I’m happy to look at benchmarks, but I’d expect that for iterating non-large arrays (let’s say, less than a thousand elements) a reasonable number of times (let’s say, less than a thousand times in a second), all methods you’re likely to think of are identically fast. |
as posted a while ago: https://jsperf.com/object-create-vs-mutation-in-reduce/7 So of course, you might consider that as negligible but I don't think it is. Eventually, that's your repo so you are free to do what you want :) I'll stick to my plugin: https://github.com/mathieudutour/eslint-plugin-no-not-accumulator-reassign as I don't see any way to justify such an easily avoidable slowdown (and I personally don't see any readability improvement, I'd even say it's the opposite). (I'll also unsubscribe from this issue since it's going nowhere. Should probably be locked) |
@mathieudutour that very jsperf shows that a single iteration does 40,000 operations per second on the slowest example. That means that one operation will take, at worst, less than a tenth of a millisecond. If that level of performance is tanking your application for some strange reason, then great! Use an eslint override comment. For the majority case tho, it is objectively negligible. |
I've been bitten by this a couple of days ago in a production application, where an array of ~10000 elements was taking seconds to be converted to an object. I don't think reducing arrays of a few thousand elements is an impractical, edge or rare case. I suggest opening the browser console and doing this enlightening little experiment:

```js
var array = Array.from(Array(10000).keys());

// Spread operator (same with Object.assign)
var t0 = performance.now();
array.reduce((acc, number) => ({ ...acc, [number]: null }), {});
var t1 = performance.now();
console.log(t1 - t0); // > 12 seconds

// Mutating the accumulator
t0 = performance.now();
array.reduce((acc, number) => { acc[number] = null; return acc }, {});
t1 = performance.now();
console.log(t1 - t0); // ~1 millisecond
```

Come on, this is obscene and should invalidate any loyalty to the immutability of accumulators. |
@vibragiel when you need to override best practices for performance reasons, go ahead! That’s what eslint override comments are for. |
The problem (which cost developer time to find and diagnose) wouldn't have happened without style guides/articles promoting the copy-the-accumulator-on-every-iteration pattern as a best practice. |
The even bigger problem is that it can go unnoticed (because some (most?) developers will follow the "best practices") and that in the end, it's hurting the user experience |
It absolutely is a best practice; and it’s not a “problem” in most cases. If it can go unnoticed, that’s only because the developer isn’t profiling their app - that hurts the user experience. Writing less clean code because of an occasionally needed performance optimization will hurt the user experience too - by making the codebases less maintainable. |
Again that's really debatable. Most people I know just don't use `reduce` and instead write something like

```js
let acc = {}
array.forEach((value) => { acc[value] = true })
```

which is worse for maintainability I'd say. Even with just 30 elements (which is, for example, the default page size for the GitHub API), it takes 2ms to run only the reduce from @vibragiel's example. If you go to 100 elements, that's 10ms. That's not huge numbers... |
I know this is an old issue, but we've recently been hit by this multiple times in one project as we're running on low-CPU lambdas. The advent of this type of environment means that the spreading style hurts performance way more often than you might think. |
This is an old, closed issue but I still wanted to put a small feedback point here regarding how helpful this rule is, as I experienced it first-hand today. I faced an issue with this rule, and my first thought was to simply put a disable command before that line. But I realized after a few moments how bad an idea it would have been. This is what I was doing: I had a list of objects, and I needed to update each object by replacing the value of one of its properties and deleting another property.
The new array comes as expected, but it modifies the original array too. Problem? This is part of a method with quite a few chained promises, and someone down the line can try an operation on the original array without realizing that it had been modified (unintentionally) somewhere in the code. In my case, I finally rewrote the existing piece in the following way to completely avoid mutation (and not compromise on the performance at the same time).
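For illustration, a hypothetical sketch of the kind of rewrite being described; the property names and the `normalize` helper are made up, not the commenter's actual code:

```js
// builds new objects instead of mutating the originals:
// drops `tempId` and replaces the value of `status` on a copy of each item
const updated = items.map(({ tempId, ...rest }) => ({
  ...rest,
  status: normalize(rest.status),
}));
```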
We should make changes as per the rule (without compromising on performance) before quickly jumping to disable the rule. In the end, thanks a lot, @ljharb, for this rule. It saves us from a lot of future bugs :) PS: Enjoyed reading the other detailed comments here too that talk about fixing the code as per the rule while also keeping performance in mind. Especially @iknowcss's comment is definitely worth a read here. |
Hello!
A comment regarding this change:
#626
#627
While I agree in general that function parameters should not be mutated, this change gave me a lot of errors in code that I consider as safe. For example:
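Presumably the accumulator pattern quoted elsewhere in this thread (the "original code snippet" referenced above), along these lines:

```js
// builds a lookup object; with { props: true } the rule flags the assignment to acc
['foo', 'bar', 'qux', 'foo'].reduce((acc, value) => {
  acc[`_${value}`] = true;
  return acc;
}, {});
```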
I know, I can change this rule in my `.eslintrc` or for this particular line of code. But this `reduce` pattern is quite common in JavaScript. Lodash also has a bunch of functions that work this way. There's little risk in mutating the accumulator since the reduce operation as a whole is non-mutating.

While the `{ "props": true }` rule might catch some errors, it mostly produces false positives in my codebases. I'm already writing in a non-mutating, mostly pure functional style.

Related: the example in https://github.com/airbnb/javascript#7.12 isn't helpful IMHO. The "good" and "bad" examples do not seem to match. The "bad" example mutates an object, while the "good" example illustrates input normalization, which is a different thing IMHO. ;)
Perhaps there are cases where people accidentally mutate parameters just to normalize input. Then a better example would be:
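Such an example might look like this (a hypothetical sketch, not the author's original snippet):

```js
// bad: "normalizes" the input by writing back onto the caller's object
function describeWidth(options) {
  options.width = options.width || 100;
  return `width: ${options.width}px`;
}

// better: keep the normalized value local and leave the input untouched
function describeWidthPure(options) {
  const width = options.width || 100;
  return `width: ${width}px`;
}
```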
But I think people are mutating parameters not just accidentally, but more commonly to make changes on objects. For example:
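For instance (again a hypothetical sketch), a function that deliberately changes the object it receives:

```js
// mutates the caller's object in place
function markAsRead(message) {
  message.read = true;
  message.readAt = Date.now();
  return message;
}
```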
Then, a good alternative would be a pure function. It could clone the input, then augment it. Or use a library for immutable data that has smart lists and hashes, like Immutable.js.
Assuming this case, a better example would be:
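Under that assumption, a sketch of the pure version (reusing the hypothetical `markAsRead` from above) that clones the input and then augments the copy:

```js
// pure: shallow-clone the input, then augment the copy
function markAsRead(message) {
  return Object.assign({}, message, { read: true, readAt: Date.now() });
}
```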
Or with prototypal delegation:
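Which might be sketched as (same hypothetical function):

```js
// the copy delegates to the original object via its prototype chain instead of cloning it
function markAsRead(message) {
  const readMessage = Object.create(message);
  readMessage.read = true;
  readMessage.readAt = Date.now();
  return readMessage;
}
```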
(Or with a library like Immutable.js…)
I think the bottom line is that the topic of immutability is too complex to be covered in a simple example. And the rule `no-param-reassign: [2, { "props": true }]` cannot cover the nuances here. I propose to reconsider the switch to `{ "props": true }`.