-
We have a site with several thousand pages, which uses the following features and customizations:
After upgrading to Docusaurus 3.3.2, we're encountering the "JavaScript heap out of memory" error, similar to the ones reported earlier: https://github.com/facebook/docusaurus/discussions?discussions_q=JavaScript+heap+out+of+memory+

We just finished migrating from Docusaurus 2.4.3 to 3.3.2 (we could not use 3.4 or higher due to #10460), and a newly added docs version contributed several hundred pages. I'm specifying the Docusaurus version because this issue has not occurred with Docusaurus v2 so far, so it might be version-specific.

We can build the site on a local machine by increasing the heap size, but we need the build to pass in our CI/CD environment for real deployments. In the meantime, I'd like to ask whether there's a better way to prevent the error.
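For reference, this is roughly how we raise the heap limit locally (a sketch assuming an npm-based build script; the 8192 MB value is just an example):

```bash
# Sketch: raise Node's old-space limit for the Docusaurus build.
# 8192 MB is an arbitrary example; "npm run build" assumes the usual
# "build": "docusaurus build" script in package.json.
NODE_OPTIONS=--max-old-space-size=8192 npm run build
```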
-
I'm sorry to hear you're encountering memory issues @allenscha

Unfortunately, I'm not super familiar with memory profiling (although I plan to learn soon) and can't really tell you what causes the memory increase so far. We don't have any tooling to monitor possible memory regressions on PRs, and it would be quite difficult to track down which past commit led to an increase in memory consumption. Even if I dig into memory problems on our site, it's possible that your site is affected by something we don't use.

It would be interesting to run your site under multiple versions, from v3.0 to v3.4, to at least see whether the regression was introduced in a particular release. It would also be helpful to remove the newly added docs version, to be sure it isn't responsible.

It's also possible that we have a memory leak that mostly affects i18n sites: since we build multiple sites one after another, maybe the memory from the first locale isn't entirely released.

I'm currently working on making the build faster by replacing JS tooling with Rust tooling (SWC, LightningCSS, Rspack...), and it's possible that this will fix your memory problem, but I can't be 100% sure. The Rspack PR should be ready soon: #10402

Overall, if you don't update older docs versions, it's preferable to archive them as standalone deployments instead of keeping them in your site forever (see the sketch below). If each docs instance is large, you can also split your site into multiple single-instance sites that you connect together. They can still share the same layout, so users don't really see any difference.
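As a rough sketch of the archiving approach: `onlyIncludeVersions` limits which versions get built, and `dropdownItemsAfter` can link the version dropdown to archives hosted as separate deployments. The version names and archive URL below are hypothetical.

```ts
// docusaurus.config.ts — sketch: build only recent versions and link
// archived ones that live on standalone deployments.
// Version names ('3.3', '3.2', '3.1') and the archive URL are hypothetical.
import type {Config} from '@docusaurus/types';

const config: Config = {
  title: 'My Site',
  url: 'https://example.com',
  baseUrl: '/',
  presets: [
    [
      'classic',
      {
        docs: {
          // Only these versions are built into the main site.
          onlyIncludeVersions: ['current', '3.3', '3.2'],
        },
      },
    ],
  ],
  themeConfig: {
    navbar: {
      items: [
        {
          type: 'docsVersionDropdown',
          position: 'left',
          // Older versions, archived as separate deployments, stay reachable.
          dropdownItemsAfter: [
            {href: 'https://archive.example.com/3.1/', label: '3.1 (archived)'},
          ],
        },
      ],
    },
  },
};

export default config;
```

The fewer versions the main site builds, the fewer pages it should need to hold in memory at build time.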
-
@slorber Thank you for your suggestions and related information. I hope the new tooling helps mitigate the memory issue.
I wish we could, but I'm not sure how to achieve that in a site like ours, where per-instance versioning is used (only the "SDK" area uses versioning, per platform).
-
FYI, as part of #10556 I fixed an important memory leak that affects i18n sites: when building locales sequentially, the memory wouldn't be garbage collected. Please try canary (or v3.6 when published) and tell me if it works better.
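Trying a canary release would look something like this (a sketch assuming npm and the classic preset; Docusaurus publishes canary builds under the `canary` npm dist-tag, and all `@docusaurus/*` packages should be kept on the same version):

```bash
# Sketch: switch the Docusaurus packages to the latest canary release.
# Adjust for your package manager and the @docusaurus/* packages
# your site actually uses.
npm install @docusaurus/core@canary @docusaurus/preset-classic@canary
```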