Document how to configure maximum number of pods/node that can be allocated #295
Comments
I think Kubernetes already does this for you. If you provide resource properties within your pod spec template, you can express the requested and limit CPU required; see https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/ When you set this properly for your game server's expectations, Kubernetes can (if activated) autoscale nodes based on the pods' requirements. Would this work?
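A minimal sketch of the `resources` block from the linked docs, shown here on a plain Pod; the container name, image, and the specific CPU/memory values are arbitrary examples, and in a game server deployment the same block would go inside the pod template's container spec:

```yaml
# Sketch of per-container resource requests and limits.
# The scheduler uses `requests` to decide placement, so pods stop
# being packed onto a node once its allocatable CPU is spoken for.
apiVersion: v1
kind: Pod
metadata:
  name: example-game-server   # hypothetical name
spec:
  containers:
  - name: game-server         # hypothetical container
    image: example/game-server:latest  # hypothetical image
    resources:
      requests:
        cpu: "500m"      # half a CPU reserved for scheduling purposes
        memory: "256Mi"
      limits:
        cpu: "1"         # hard ceiling enforced at runtime
        memory: "512Mi"
```

Because the scheduler counts `requests` (not actual usage) against the node's allocatable capacity, sizing the request for burst-period load rather than idle load prevents the over-packing described in this issue.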
Or another angle: I wrote about doing this, and this paragraph seems relevant: If we are happy with this, it is something we should likely document/FAQ on our end, as it will be a common question.
Yes, it looks like what I wanted. Can you use this issue as a request to add docs/examples?
SGTM. Marked as such, and changed the title. 👍 Now to work out where we want this to go. Maybe we need a FAQ?
I would like a default setting added to the existing configuration examples, as scaling is an important issue for dedicated servers. Or at least a comment.
I added some documentation recently. Does this cover the topic, or do we feel there is more we can write here? (Basically asking if we can close this ticket.)
@markmandel That's good enough for me, thanks!
Awesome. Closed! 🤸♂️ |
When operating dedicated game servers, CPU usage is not a good indicator of node load. A server can alternate between idle periods (players loading, sitting in the lobby, etc.) and burst periods (intense gameplay).
You can therefore pack too many pods onto a node while their servers are idle; when gameplay starts, they starve one another, leading to lag, player disconnections, or even server crashes.
I propose making it possible to configure a maximum number of pods per node, or even better, to somehow tie this cap to the number of CPUs on the node (if possible).
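For a hard per-node pod cap specifically, Kubernetes already exposes one at the kubelet level: the `--max-pods` flag, or equivalently the `maxPods` field in a kubelet configuration file. A minimal sketch, assuming the node's kubelet is started with this config file and using an arbitrary example value:

```yaml
# KubeletConfiguration sketch: caps how many pods the kubelet
# will admit on this node. The value 20 is an arbitrary example.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 20
```

Note this is a static per-node setting, so it does not adapt to the node's CPU count by itself. Tying capacity to CPUs is instead achieved through per-pod CPU `requests`, since the scheduler stops placing pods on a node once its allocatable CPU is exhausted.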