unloadRecord() jumps the gun on peekAll arrays using dependentArrayWillChange observers #5395
cc @mmun I think this may be a side-effect of your recent changes?
@Kilowhisky for what it's worth, it should be easier to write a test for this with the infra that #5378 added. I'm skeptical that we still support array before observers, which is why I pinged @mmun.
@runspired the twiddle is using Ember 2.16 so it's unrelated to my ArrayProxy work.
@mmun that's because ember twiddle doesn't like 3.0.0 yet. My app is on 3.0.0 and exhibits the issue. Anyways, my use case is making a CP that can provide record array filtering that only filters new records and doesn't re-filter the array every time a record is added. I'll work on a failing test.
Added test.
@Kilowhisky do you have an idea of when this stopped working in ember-data?
Also, thanks for the test, will look into it :)
Not really sure unfortunately. The code block being used for this is from the pre-2.0 days. Previously, when ED didn't actually dematerialize records, it just resulted in a memory leak in my filter CP... which went undetected until ED 2.13 started destroying records "properly". Starting from 2.13 on, the records were set to null/undefined as they were destroyed. From 2.13 until #5378, the record arrays just contained null entries for all the records "destroyed"; it didn't actually remove the records, so my observers never fired. My guess is that the issue came in between 2.12.2 and 2.13 with the change to how records are dematerialized. Unfortunately that change resulted in a whole host of bugs, which #5378 attempted to resolve. It looks like this is just one more manifestation of the issue. To me it sounds like an issue of batching unloads by ED.
I've been running into issues with unload with all versions after 2.12.2 and have not been able to upgrade past that version due to issues like this. The issue is still present in 3.1.1 from what I can tell with my testing.
@ewwilson I should have clarified some things on this ticket a while ago.
In general, we've addressed the various bugs reported with unloadRecord. If you have a specific issue you are hitting I'm happy to investigate, but typically remaining issues are from folks mis-using the feature as a substitute for signaling remote deletion of a record (which is not what unloadRecord does).
Well that's unfortunate. I'm using it so I can have in-place live record filtering, as re-filtering 1,000 items unnecessarily every couple seconds makes the browser quite unhappy. I've tried a few alternatives but nothing seems to work that doesn't run in O(N*N) time. I see a way in which additions still work in O(1), but removals are just screwed because the array handlers fire with null as the argument, so I have nothing to grab onto in the live array. Got any alternative suggestions?
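For illustration, a minimal sketch of the kind of incremental filter described here, assuming array observers deliver real records (which is exactly what this issue says they don't during a multi-record unload). The `asset` model, the `matches` predicate, and the hook names are placeholders, not from the thread:

```js
import Component from '@ember/component';
import { inject as service } from '@ember/service';
import { A } from '@ember/array';

// Placeholder predicate standing in for the real filter logic.
function matches(asset) {
  return asset.get('active');
}

export default Component.extend({
  store: service(),

  init() {
    this._super(...arguments);
    // Live array from peekAll: updated in place as records load/unload.
    this.source = this.store.peekAll('asset');
    // Seed the filtered array once, then maintain it incrementally.
    this.filtered = A(this.source.filter(matches));
    this.source.addArrayObserver(this, {
      willChange: 'sourceWillChange',
      didChange: 'sourceDidChange',
    });
  },

  willDestroy() {
    this.source.removeArrayObserver(this, {
      willChange: 'sourceWillChange',
      didChange: 'sourceDidChange',
    });
    this._super(...arguments);
  },

  // Removals: the removed records should still be readable before the change,
  // but per this issue they can already be null by the second call.
  sourceWillChange(source, start, removeCount /*, addCount */) {
    source.slice(start, start + removeCount).forEach(record => {
      if (record) {
        this.filtered.removeObject(record);
      }
    });
  },

  // Additions: only the newly added slice is tested, not the whole array.
  sourceDidChange(source, start, removeCount, addCount) {
    source.slice(start, start + addCount).forEach(record => {
      if (record && matches(record)) {
        this.filtered.addObject(record);
      }
    });
  },
});
```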
@Kilowhisky have you tried the approach recommended here? https://github.com/ember-data/ember-data-filter#ember-data-filterfilter
Still causes a complete array recompute anytime any 1 record changes. Not really that fun. Ex..
The hunt continues then for an equivalent replacement.
@Kilowhisky your computed is overly broad. It will work better if you drop the dependency on

```js
assetsFiltered: computed(
  '[email protected]',
  'model.state.assetList.{filterShowActive,filterShowInactive,searchValue}',
  'model.state.assetList.filterShowGroups.[]',
  'model.state.assetList.filterShowAssets.[]',
  function() {
    return this.model.assets.filter(asset => {
      // big filter logic here
    });
  })
```

There are likely additional ways of optimizing this, including moving the filter function into a pure function defined at module scope, which would allow it to be optimized instead of created anew every time. Example:

```js
function filterAsset(asset, settings) {
  // big filter logic here
}

assetsFiltered: computed(
  '[email protected]',
  'model.state.assetList.{filterShowActive,filterShowInactive,searchValue}',
  'model.state.assetList.filterShowGroups.[]',
  'model.state.assetList.filterShowAssets.[]',
  function() {
    let settings = {}; // stash whatever settings for the filter you are storing on model.state here
    return this.model.assets.filter(asset => filterAsset(asset, settings));
  });
```

And finally, you should be able to do this in O(N). I'm unsure where you are experiencing something resulting in O(N^2) time, but if you clarify where that complexity occurs I'm happy to discuss how to optimize it.
@Kilowhisky don't know if you still have this issue, but recent refactors dramatically reduced the amount of notification that occurs and cleaned this stuff up a lot. Closing with #7258, but the relationship refactors in #7505 #7516 #7510 #7493 #7491 and #7470 played the biggest role in reducing over-notification.
@Kilowhisky PS if you want to PR a performance test to the performance test suite for the case you have (filtering a record array with N records during Y operation occurring) I'd love to make sure we keep it in a great state!
I'm not sure exactly who's at fault here or if anything can be done, but it most closely appears to be the fault of Ember Data's live record arrays.
Suppose you have a live record array retrieved with a peekAll call:

```js
store.peekAll('comments')
```

and you attach array observers to that peekAll array (a minimal sketch of such a setup is below).
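As an illustration (not the reporter's actual code), here is a minimal sketch of attaching such observers with Ember's public `addArrayObserver` API; the hook name `dependentArrayWillChange` and the `comments` model name come from the text of this issue, everything else is assumed:

```js
import Component from '@ember/component';
import { inject as service } from '@ember/service';

export default Component.extend({
  store: service(),

  init() {
    this._super(...arguments);
    // Live array: stays up to date as records are loaded and unloaded.
    this.comments = this.store.peekAll('comments');
    this.comments.addArrayObserver(this, {
      willChange: 'dependentArrayWillChange',
      didChange: 'dependentArrayDidChange',
    });
  },

  // Called once per removal, before the array mutates. Per this report,
  // after the first call the records in the pending slice are already null.
  dependentArrayWillChange(sourceArray, start, removeCount, addCount) {
    console.log('about to remove', sourceArray.slice(start, start + removeCount));
  },

  dependentArrayDidChange() {},
});
```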
When you issue an unload request over more than 1 record, `dependentArrayWillChange` will be called for every single removal, BUT after the first call the dependent array will have the records to be removed set to null. In the above code the first removal will properly have the record that was removed. The second time the method is called, the removed slice and any other records that were pending removal will be set to `null` or `undefined`.

So if you had an array of 5 and you unload 3 of them, `dependentArrayWillChange` will fire 3 times. The first time the array will be in an unmodified state, as the change has not occurred yet. The second time it is called, the `sourceArray` will look like this:

```
0: {object}
1: {object}
2: null
3: null
4: null or undefined (which is odd)
```
After the whole run loop completes, the arrays are cleaned up and the nulls are removed. But at the time the array observers are fired, the array state is all sorts of wrong.
By the way, if I run my unloads in their own run loop, the problem is solved.
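Presumably something along these lines (a sketch under that assumption; `recordsToUnload` is a placeholder, not from the report):

```js
import { run } from '@ember/runloop';

// Wrapping each unload in its own run loop lets the record array finish its
// cleanup between removals, so the observers never see the half-nulled state.
recordsToUnload.forEach(record => {
  run(() => record.unloadRecord());
});
```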
Here is what I see going on: my guess is Ember is batching unloads or batching observer calls.
This directly relates to #5378 and #5111.
I've made a twiddle showing the issue as it existed before #5378 but can't seem to get it to run off of canary ember data. If it can be loaded with #5378 applied it will show the issue.
https://ember-twiddle.com/099d510ed467e1621975?openFiles=index.template.hbs%2C