Incorrect keepAliveMode used for Gradle worker daemon #416
Our fork spec is specified as follows (lines 56 to 64 in 3be9d63). It seems that we use the default keepAliveMode.
Like I said, I was not certain about the keepAliveMode. The real issue is that when you have multiple submodules and more than just the main and test sourceSets, SpotBugs creates multiple forks and does not release the memory. For that reason we currently run `./gradlew spotbugs --no-daemon` as a separate task, and only after it finishes do we run `./gradlew build`; otherwise the project takes more memory than we can allow it to take. This problem is even worse when you don't create a clean container for every task: if you have pre-created CI agent workers, you have to restart those agents periodically to reclaim the memory from zombie SpotBugs forks.
Got it. The documentation says nothing about how to terminate the worker process; I will investigate. Thanks!
Successfully reproduced high memory usage by Gradle workers.
Posted a related question to the community forum.
Since Gradle 6.0, we can inject `ExecOperations` into a `WorkAction` instance, so we can run the java process from a worker thread. refs #416
* feat: run java process from no-isolated workers. Since Gradle 6.0, we can inject `ExecOperations` into a `WorkAction` instance, so we can run the java process from a worker thread. refs #416
* test: fix test with Gradle 5.6
* style: replace the year in copyright notice
* chore: disable the new experimental feature by default
* fix: use legacy API to support Gradle 5.6
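A minimal sketch of the approach described in that pull request, assuming hypothetical class, parameter, and main-class names rather than the plugin's actual code: a `WorkAction` that runs without process isolation and starts the SpotBugs JVM through an injected `ExecOperations`. Because `javaexec` forks a fresh JVM and blocks until it exits, no worker daemon outlives the build.

```java
import javax.inject.Inject;
import org.gradle.api.file.ConfigurableFileCollection;
import org.gradle.api.provider.ListProperty;
import org.gradle.process.ExecOperations;
import org.gradle.workers.WorkAction;
import org.gradle.workers.WorkParameters;

// Hypothetical parameter type; the real plugin's parameters may differ.
interface SpotBugsWorkParameters extends WorkParameters {
    ConfigurableFileCollection getSpotbugsClasspath();
    ListProperty<String> getArguments();
}

// Runs in a non-isolated worker thread, so Gradle keeps no extra daemon JVM.
public abstract class SpotBugsExecAction implements WorkAction<SpotBugsWorkParameters> {

    private final ExecOperations execOperations;

    @Inject
    public SpotBugsExecAction(ExecOperations execOperations) {
        // Injecting ExecOperations into a WorkAction requires Gradle 6.0+.
        this.execOperations = execOperations;
    }

    @Override
    public void execute() {
        // javaexec forks a fresh JVM and waits for it to finish; the forked
        // process then exits, returning its memory to the OS after each task.
        execOperations.javaexec(spec -> {
            spec.setClasspath(getParameters().getSpotbugsClasspath());
            spec.setMain("edu.umd.cs.findbugs.FindBugs2"); // assumed main class
            spec.args(getParameters().getArguments().get());
        });
    }
}
```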
🎉 This issue has been resolved in version 4.7.0 🎉 The release is available on GitHub.

Your semantic-release bot 📦🚀
@remy-tiitre I released #429 as version 4.7.0. Please run your build with the new experimental feature enabled. This option will be unnecessary in a future release, after I confirm that the new strategy works as expected.
Memo: confirmed on my local machine that performance isn't significantly changed.
Hi @remy-tiitre, could you check my previous comment?
@remy-tiitre ping
This fix is applied by default in the beta channel. See #571 for details.
Right now SpotBugs tasks are executed in a separate Java fork, and that fork is started with the keepAliveMode=DAEMON option:
Started Gradle worker daemon (0.999 secs) with fork options DaemonForkOptions{executable=/usr/lib/jvm/java-11-openjdk-11.0.6.10-1.el7_7.x86_64/bin/java, minHeapSize=null, maxHeapSize=null, jvmArgs=[], keepAliveMode=**DAEMON**}.
I'm not 100% certain, but I think this should be SESSION. What happens right now is that my project has multiple modules, and SpotBugs worker processes are left behind. When your CI runs inside containers with more restrictive memory limits than developer machines, the containers just get OOM-killed. The problem gets worse the more modules use SpotBugs, and since we have more sourceSets than just main and test, the issue is even worse for us.

Even if it's not the keepAliveMode, there is something that does not allow the process to end, and the forks are left behind hogging memory.
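For context, the DaemonForkOptions in the log above are produced by Gradle's Worker API in process isolation mode. Below is a minimal sketch of how such work is submitted, with an assumed task name and arguments; note that keepAliveMode is managed internally by Gradle and is not exposed on the public ProcessWorkerSpec, which is why the plugin cannot simply switch it to SESSION.

```java
import java.util.Arrays;
import javax.inject.Inject;
import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.TaskAction;
import org.gradle.workers.WorkQueue;
import org.gradle.workers.WorkerExecutor;

// Illustrative task showing where the DaemonForkOptions in the log come from.
public abstract class SpotBugsTaskSketch extends DefaultTask {

    @Inject
    public abstract WorkerExecutor getWorkerExecutor();

    @TaskAction
    public void run() {
        // processIsolation() forks (or reuses) a worker daemon JVM. Its
        // keepAliveMode is chosen internally by Gradle; the public spec only
        // exposes the classpath and fork options such as heap size.
        WorkQueue queue = getWorkerExecutor().processIsolation(spec ->
                spec.forkOptions(fork -> fork.setMaxHeapSize("1g")));

        // The fix in #429 instead submits to getWorkerExecutor().noIsolation()
        // and forks the SpotBugs JVM itself via ExecOperations, as in the
        // earlier sketch, so no daemon-mode worker is created at all.
        queue.submit(SpotBugsExecAction.class, params ->
                params.getArguments().set(Arrays.asList("-version"))); // args illustrative
    }
}
```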