Make gameserver-allocator and all its resources namespaced instead of being part of agones-system namespace #854
It looks like the helm install put the allocator service and deployment into the agones-system namespace. Are you seeing some parts of the system in a different namespace than the allocator?
Thanks. So right now we have an agones-system namespace for the controllers, and the gameserver resources live in any customer-defined namespace. What I am proposing is that the allocator service and its service accounts should be in the same namespace as the gameservers.
Wouldn't that move the allocator service away from the other controllers, since they would still be in the agones-system namespace? This also implies that we would expect (require?) customers to install multiple allocator services into their cluster.

Right now the install allows you to create gameservers in arbitrary namespaces once you've installed Agones, but this change would mean that at install time you'd need to define which namespaces would support gameservers, and that unless you re-ran an installer (helm, etc.) or manually re-configured Agones in your cluster, you couldn't change your mind and start gameservers in other namespaces.

I understand the desire for isolation, but is it more important to isolate the allocation service than the controllers? We've seen issues where the controllers going down causes errors (e.g. #398 (comment)), which are probably just as concerning as isolation within the allocator.
Technically, you have to do this now anyway, as we have to configure a service account in each of the supported gameserver namespaces for the SDK. I'm personally not seeing the need for this extra complexity - but I know @pooneh-m has a strong opinion on this 😄

To @roberthbailey's point - does that mean we should be looking at separate controllers per namespace as well? That feels slightly counter-intuitive to how Kubernetes works, though.
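For context, a minimal sketch of that per-namespace setup, assuming client-go and an out-of-cluster kubeconfig. The namespace "game-a" and the service-account name "agones-sdk" are illustrative and depend on your install, and the matching Role/RoleBinding the SDK also needs is omitted here:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: running outside the cluster with a local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// "game-a" is a hypothetical namespace that will host gameservers; the
	// service account name mirrors the per-namespace account used by the SDK
	// sidecar, but the exact name depends on your install.
	sa := &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "agones-sdk",
			Namespace: "game-a",
		},
	}

	created, err := clientset.CoreV1().ServiceAccounts("game-a").
		Create(context.Background(), sa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service account:", created.Name)
}
```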
The allocator service and the controllers are different in nature. The controllers act as the control plane and are eventually consistent with the desired state, while the allocator service acts as the data plane, exposing imperative APIs and meant for high-bandwidth access.
Haha, @markmandel not that strong. It is up for discussion as you are more experienced in the gameservers area. I agree, supporting the allocator service being deployed in multiple namespaces is more complicated. We can make it an opt-in solution, meaning that by default the allocator is deployed to agones-system and acts as the front-end service for allocation across all namespaces. If a customer is interested in isolation, they can then deploy the allocator to the namespace that is hosting their gameservers.

I will refactor gameserverallocation into a library that both the allocator service and the API extension reference, to make the allocator service independent of the API extension.
It seems that there is no real application of the proposal discussed in this issue, as gaming customers, according to @markmandel, are using one cluster per game rather than one namespace per game with a single cluster hosting multiple games.
Is your feature request related to a problem? Please describe.
gameserver-allocator is currently only deployed to the agones-system namespace. If a user is deploying gameservers for different purposes in different namespaces, the allocation traffic for all of those gameservers goes through the single gameserver-allocator in agones-system (a minimal allocation request is sketched after this section). Ideally, there should be isolation between the traffic to the gameserver-allocator service for different namespaces.
Describe the solution you'd like
The gameserver-allocator service and all its secrets need to be part of the same namespace as the targeted gameservers.
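To make the current flow concrete, here is a minimal sketch of an allocation request, assuming a recent Agones Go client (agones.dev/agones/pkg/client/clientset/versioned); the namespace "game-a" is hypothetical, and field names may differ across Agones versions. The GameServerAllocation is created against a gameserver namespace, yet today every such request is served by the one allocator running in agones-system:

```go
package main

import (
	"context"
	"fmt"

	allocationv1 "agones.dev/agones/pkg/apis/allocation/v1"
	"agones.dev/agones/pkg/client/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: running outside the cluster with a local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	agonesClient, err := versioned.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// An empty spec matches any Ready GameServer in the namespace; a real
	// request would normally set a selector in the spec to target a fleet.
	gsa := &allocationv1.GameServerAllocation{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "allocation-"},
	}

	// The request is namespaced ("game-a" is a hypothetical gameserver
	// namespace), but it is still handled by the allocator in agones-system.
	result, err := agonesClient.AllocationV1().
		GameServerAllocations("game-a").
		Create(context.Background(), gsa, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated:", result.Status.GameServerName, "state:", result.Status.State)
}
```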