Fix memory leaks in `<Iframe>` #53406
Conversation
Size Change: +26 B (0%) Total Size: 1.44 MB
```diff
@@ -167,6 +167,7 @@ function Iframe( {
 	node.addEventListener( 'load', onLoad );

 	return () => {
+		delete node._load;
```
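For context, a minimal sketch of where the new cleanup sits (the effect body here is abbreviated and the handler wiring is my assumption, not the component's exact code):

```js
useEffect( () => {
	// The inner frame reads this property back through
	// window.frameElement._load, so the parent and child documents
	// end up referencing each other.
	node._load = onLoad; // hypothetical wiring
	node.addEventListener( 'load', onLoad );

	return () => {
		// New in this PR: drop the cross-frame reference on cleanup.
		delete node._load;
		node.removeEventListener( 'load', onLoad );
	};
} );
```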

---

Why does leaving this in place prevent the node from being removed from memory? When React removes the iframe, isn't this callback removed as well?

This reminds me of #27544 (comment)

That said, it doesn't hurt of course; I'd just like to understand why.

---

This is a tricky one that's several levels deep. I'm not entirely sure why this fix helps either! I'll try to post more details once I have a clearer picture.

---

Okay, I still don't fully understand what happened. It seems to require a very specific setup to make it reproducible. For instance, there has to be a `.focus()` call on an `<input>` somewhere, the iframe has to be rendered under a condition and a wrapper, etc. This might also only be reproducible in a Chromium-based browser.

However, we can still learn from this mistake and prevent it from happening again by following the general best practice: don't cross-reference between parent and child frames. In particular, the `window.frameElement` call is the code smell we want to avoid. Instead, use `postMessage` to communicate with the parent frame. In this case though, deleting `node._load` has the same effect, and it's easier than refactoring to `postMessage`.
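To illustrate the suggested pattern, here is a minimal sketch (not the component's actual code — the message shape and handler name are assumptions):

```js
// Inside the iframe document: announce readiness upward instead of
// reaching out to the parent via window.frameElement.
window.parent.postMessage( { type: 'iframe:loaded' }, '*' );

// In the parent frame: listen for the message instead of wiring a
// callback onto the iframe node itself.
window.addEventListener( 'message', ( event ) => {
	if ( event.data?.type === 'iframe:loaded' ) {
		onLoad(); // hypothetical handler
	}
} );
```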
```js
	const cache = createCache( { container, key } );
	containerCacheMap.set( container, cache );
	return cache;
};
```
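Read in full, the new lookup is roughly the following (a sketch reconstructed from the diff; the function name and signature are assumptions):

```js
// Cache one emotion cache per container node. Because WeakMap keys are
// held weakly, the entry disappears once the container itself is
// garbage collected — no manual eviction or maxSize needed.
const containerCacheMap = new WeakMap();

const getCacheForContainer = ( container, key ) => {
	const existing = containerCacheMap.get( container );
	if ( existing ) {
		return existing;
	}
	const cache = createCache( { container, key } );
	containerCacheMap.set( container, cache );
	return cache;
};
```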

---

@dmsnell Weren't you talking about emotion going crazy? Does this fix it?

---

@kevin940726 Is there any way we could test this to prevent regressions? 🤔

---

yeah I suppose this could be a source of the leak.

I never thought about `memize` being unbounded, though it looks like it does allow us to pass a `maxSize` option as the second argument, which will bound it such that it purges the least-recently-used entry when adding into the cache.

in other words, maybe it would be worth attempting this change first:

```diff
 const memoizedCreatedCacheWithContainer = memoize(
 	( container: HTMLElement ) => { … },
+	{ maxSize: 1000 }
 );
```

seems like both approaches might resolve the memory leak, though this one is a smaller change.
using a `WeakMap` for the container cache does seem better though, since we could plausibly hold a number of references for nodes that are gone, and yeah, that could hold onto the entire iframe. yikes.
nevermind what I said. you found gold @kevin940726

---

Well, `WeakMap` is a superior solution if it can be used. But yes, maybe we should always require a `maxSize` option for `memoize` #53406 (comment)

---

are we defeating the purpose of the `WeakMap` by storing the container inside the cache? `WeakMap` allows the release of the keys, but not the values. do we need to store a `WeakRef` to the container instead of the container itself? does emotion allow this?
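For readers unfamiliar with `WeakRef`, a hypothetical sketch of the variant being asked about (not what the PR does — the follow-up below concludes it isn't necessary):

```js
// Store only a weak reference to the container inside the cache value,
// so the value itself never strongly retains the container.
const containerCacheMap = new WeakMap();

const setCache = ( container, cache ) => {
	containerCacheMap.set( container, {
		containerRef: new WeakRef( container ),
		cache,
	} );
};

const getContainerFromCache = ( container ) =>
	// deref() returns undefined once the container has been collected.
	containerCacheMap.get( container )?.containerRef.deref();
```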

---

> Is there any way we could test this to prevent regressions?

I believe we can test this some way or another in the performance test suite (cc @WunderBart). Though I would rather keep it in another PR since it's not trivial to do 😆.

---

Excellent catch 👍 +100% to adding a rule to require a max size on memoisation functions and/or to strongly suggest falling back to a `WeakMap`. It is really a much better, harder-to-screw-up option for simple, single-argument cases like this.

---

> WeakMap does allow values to be garbage collected

thanks for double-checking @kevin940726 - it was late when I was reading through the docs and I wasn't sure that holding the key as part of a value would remain weak.
so a fair amount of time has passed between when I wrote the previous sentence and now. I've done some testing in Safari and Chrome. testing was hard because, at least in Chrome, the JavaScript engine seems to overlook small `WeakMap` instances, so I took a suggestion and filled one up before manually triggering collection.

we have marginal insight here because the browser consoles show the contents of the `WeakMap`, which is something we can't do in code for obvious reasons. here's the summary of my testing:
- yes, they do let go of references to the objects if they are values of the `WeakMap`
- but at least during some test runs it stubbornly held on to the references longer than I expected. this was worse when the values were `weakmap.set( o, { o, i } )` than when they were simpler, as `weakmap.set( o, o )`
- if the object is referenced in the value of some other key then it doesn't seem to let go. that is, `weakmap.set( b, [ a, b ] ); a = null;` leaves `a` allocated and it won't seem to free it until I also clear `b = null`
- at times I would see this stubborn clinging lead to multiple copies of `c` in the `WeakMap` long after `a` and `b` were purged and long after each copy of `c` had been set as `c = null` and manual garbage collection was run. that is, the `WeakMap` can be leaky in this regard through multiple garbage collection cycles.
after a while of testing I found that things freed up more reliably when I tried. I'm guessing this could be related to the engine warming up.

in the process I found `FinalizationRegistry`, which is really neat and I didn't know it existed. all of the debugging tools around garbage collection, though, are intentionally blurry and non-deterministic. for a while I saw no finalization events, then they started appearing after several trial runs. browsers go out of their way to ensure we don't count on this behavior.

so take this as validation that we should be good here, but we also in general probably want to make sure we don't store the same object in two `WeakMap` values or they won't get freed until both `WeakMap` keys have been released.
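A rough sketch of the kind of manual probe described above (my own reconstruction, not the exact test; it assumes an environment where collection can be forced, e.g. Chrome DevTools' bin icon or `node --expose-gc` with `global.gc()`):

```js
// Log when objects are actually collected. Finalization callbacks are
// intentionally non-deterministic, so output may lag or never arrive.
const registry = new FinalizationRegistry( ( label ) =>
	console.log( `${ label } was collected` )
);

const weakmap = new WeakMap();
let container = { id: 'container' };
registry.register( container, 'container' );

// Storing the key inside its own value does NOT keep the entry alive:
weakmap.set( container, { container, cache: {} } );

container = null;
// After a garbage collection pass (forced or natural), the entry is
// gone and "container was collected" should eventually be logged.
```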

---

@dmsnell Thanks for the validation!
If you're using Chrome DevTools, clicking the bin icon shown in my video will force garbage collection, which is consistent and instant in my testing. Another thing is that some browsers (Chromium-based, for instance) only collect garbage every ~20 seconds even when you use some of the advanced memory APIs. That might be why you're getting the results late.

> but we also in general probably want to make sure we don't store the same object in two `WeakMap` values or they won't get freed until both `WeakMap` keys have been released.

Yep! This is working as expected, because the value is still reachable via the other key. `WeakMap` is probably a rare, advanced API that we don't normally use, but it fits perfectly in this case IMO.

---

> clicking on the bin icon shown in my video will force garbage collection

right, but I saw plenty of cases where garbage collection didn't free the items in the `WeakMap`, which is something indicated in the docs too. they have their own heuristics for when to purge the entries and I was testing those out to see what it's like. so clicking on the button triggered garbage collection, but in many cases left the values in the `WeakMap` for some time - and, for the most part, until the `WeakMap` was big enough that it was "heavy" and warranted purging.

maybe this appears in some of our measurements as retaining values too long, but those values aren't strong references so they could be purged at any time. the measurements might at times be misleading.
```diff
@@ -3,7 +3,6 @@
  */
 import { CacheProvider } from '@emotion/react';
 import createCache from '@emotion/cache';
-import memoize from 'memize';
```

---

Memoize should never be used except when defining a max size, IMO. It always creates memory problems otherwise. Maybe we should add a rule to at least always define the `maxSize` option.
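Such a rule could be approximated with ESLint's built-in `no-restricted-syntax` (a hypothetical sketch; the selector and message are my own, not an existing Gutenberg rule):

```js
// .eslintrc.js — flag memoize() calls that omit the options argument,
// forcing authors to consider a maxSize.
module.exports = {
	rules: {
		'no-restricted-syntax': [
			'error',
			{
				selector:
					"CallExpression[callee.name='memoize'][arguments.length<2]",
				message:
					'Pass an options object with `maxSize` to memoize() to bound its cache.',
			},
		],
	},
};
```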

---

A `maxSize` of 1000 will still unnecessarily retain up to 1000 detached documents, which is still a lot.
as @kevin940726 noted, this still leaves ample room for memory bloat, but I think it's better than uncapped memoization. maybe we can look into places where we use it and analyze each situation to figure out a reasonable cap.
Well, I intended to add inline comments but @ellatrix beat me to it 😆 (Thx!)
What?

Fix (some) memory leaks in the `<Iframe>` component. Though not completely fixed, I think we can still close #53382 in this PR.

Why?

Memory leaks crash the browsers of users on lower-end devices or with limited available resources. The leaks fixed in this PR retain the iframe `document` for each `<BlockPreview>`, which means the whole DOM tree and the whole document frame cannot be freed even after it's navigated away from or unmounted.

How?

There are two separate fixes: 1e4d8dc and e5e9e6f. I'll share more details about them in the inline comments below.
Testing Instructions
Follow the instructions in #53382.
Note that both commits need to be applied for the fix to work, since both leaks retain the same document. Try disabling either one and see that the document is still leaked.
Testing Instructions for Keyboard
Same as above.
Screenshots or screencast
Kapture.2023-08-08.at.12.46.06.mp4
You can see that there are still around 2KB of leaks, but the majority are fixed now.