🐛 raising agent limit to 1G #3972
Conversation
Codecov Report
@@ Coverage Diff @@
## master #3972 +/- ##
=========================================
+ Coverage 80.7% 85.0% +4.2%
=========================================
Files 464 943 +479
Lines 22155 40673 +18518
Branches 137 848 +711
=========================================
+ Hits 17895 34596 +16701
- Misses 4211 5854 +1643
- Partials 49 223 +174
Flags with carried forward coverage won't be shown.
Code Climate has analyzed commit 6779da1 and detected 0 issues on this pull request.
SonarCloud Quality Gate passed: 0 bugs, no coverage information.
Q: What did you notice in the agent that led to the decision to increase its resources? How did you decide on 1G? (e.g. official rsync specs, you tested it ...) This service just cleans up volumes of dynamic services, right?
@sanderegg @pcrespov This service has no reservations on purpose, so as not to take any resources from other services that need to run on the node, but it does have a maximum allowed memory usage, which is now set to 1GB. Normally this service uses ~50MB of RAM.
I understand that this service is supposed to run after the user service (e.g. s4l) has run. But what if a new service is started there that is supposed to get all the remaining RAM, and the agent runs at the same time? This whole resources problem must be reviewed, because we cannot just use resources and hope for the best...
thx for the explanation
Now the decision is also documented.
Ideally we would leave a 1GB margin on all machines that services like the agent or the docker logger can use. Otherwise we can enforce that all the services we deploy from OPS and simcore do not exceed a shared amount of RAM and all have limited resources. This way, when the stack starts, the resources are set in stone, and no surprises occur when services are scheduled.
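The limit-without-reservation approach described above maps onto Docker Swarm's `deploy.resources` section in a compose file. A minimal sketch of what such a stanza could look like (the service name and values here are illustrative, not the actual simcore deployment config):

```yaml
services:
  agent:
    deploy:
      resources:
        limits:
          memory: 1G   # hard cap: the container cannot exceed this
        # no 'reservations' key on purpose: the scheduler sets no RAM
        # aside for this service, leaving it free for other services
        # that need to run on the same node
```

With only a `limits.memory` entry, the scheduler places the service without accounting for its memory footprint, while the runtime still enforces the 1GB ceiling if rclone syncing drives usage up.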
What do these changes do?
Agent requires slightly more RAM to work properly while rclone is syncing. Setting the limit to a safe 1GB to avoid future issues.
Related issue/s
How to test
Checklist