Move to using CRD Subresources for all Agones CRDs #329
Comments
Subresource implementation example found here: https://github.com/kubernetes/sample-controller
We can do this now that #447 is merged!
Started adding the new subresources into the Helm templates, and it is working for Fleet, for instance:
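For illustration only (this is a sketch rather than the actual chart diff, and the group/version values are assumptions), the kind of stanza this adds to the Fleet CustomResourceDefinition in the Helm templates looks roughly like:

```yaml
# Sketch: Fleet CRD with the status subresource enabled.
# The group/version here are illustrative; the real chart values may differ.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: fleets.agones.dev
spec:
  group: agones.dev
  version: v1
  scope: Namespaced
  names:
    kind: Fleet
    plural: fleets
  subresources:
    # Exposes the /status endpoint, so spec and status are updated separately.
    status: {}
```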
I followed the approach documented under …
Regarding adding the scale subresource: it allows a Fleet to be configured with a Horizontal Pod Autoscaler, and we can use the metrics that were previously exported to Stackdriver to control the scaling. This is just one of the gains from the scale subresource.
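As a hedged sketch of what that enables (the metric name and the Fleet group/version are assumptions, and an external/custom metrics adapter is assumed to be serving the Stackdriver metric), a Horizontal Pod Autoscaler could then target the Fleet through its scale subresource:

```yaml
# Hypothetical HPA driving Fleet replicas via the scale subresource.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: simple-udp-hpa
spec:
  scaleTargetRef:
    # Targeting a custom resource only works because of the scale subresource.
    apiVersion: agones.dev/v1
    kind: Fleet
    name: simple-udp
  minReplicas: 2
  maxReplicas: 50
  metrics:
  - type: External
    external:
      # Placeholder metric name, e.g. something exported to Stackdriver
      # and served back through a metrics adapter.
      metricName: agones_gameservers_ready
      targetValue: "5"
```

The HPA controller would then drive spec.replicas through the /scale endpoint, just as it does for a Deployment.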
Was just looking at the autoscaler - should that also have a scale subresource?
@markmandel Let me check that.
Oops, missed your comment. I don't think so. I don't think scale makes sense here. WDYT?
Yes, that would make things more complicated. So I will prepare adding a Status subresource to FleetAutoscaler.
Created a draft of the … I'm waiting on the result of the e2e tests, but I'm leaning towards having … Allocation is something that moves away from the K8s paradigm, and it only affects GameServers, so in this case it may be that we want to keep it as a special case because of that. There are several reasons for this: …
In local e2e testing - both … But how do people feel about the rest of the above?
It all passed. I'm actually quite surprised.
/cc @aLekSer @roberthbailey - wdyt? Would love some second opinions about this. Please also let me know if what I wrote was not clear.
I'll take a look later today.
I'm not an expert on this code path, but I have a basic question regarding your second concern: in all of the other code paths you updated, it was a simple change from …

Assuming that there is a reasonable answer to the above, I tend to agree with your conclusion that this is adding more complexity and trouble than it is worth. If we did need to do two updates, they need to be done in a way that is automatically recoverable by the controller, such that it can drive the second update by seeing the first one (which led me to the above question, since that would mean putting enough info in status to make the change, at which point why not have everything in status).

Based on your code changes, I think it's a breaking API change to introduce the status subresource later, so it's sort of now-or-never to add it in. What do we gain by having it as a subresource? How much are we breaking with conventions by not making it a subresource, especially when all of the other types do have a status subresource?
Excellent questions, let me tackle each one:
So there are two things at play here. In all other cases, we have the workqueue + sync method, so we have a self-healing system that eventually ends up in the declared state - basically what K8s was built for. For Allocation, we're essentially doing an imperative command, which is not quite what K8s was built for. So in the previous cases, even if we did add an extra step with the new subresource, it's fine, because even if something goes wrong we will eventually self-heal - so we always end up in a happy place. But in Allocation's case, I feel it's ideal to do it all in one go -- we don't have a way to self-heal if something goes wrong (although we could build one in theory -- but it seems like a lot of work). The update-metadata step exists because you can pass metadata changes along with the Allocation - this is useful for passing information to the GameServer. A classic example is to pass what map the gameserver needs to load before the players can play, or how many players the gameserver can expect. We talk about it a bit here: https://agones.dev/site/docs/reference/gameserverallocation/
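To make that concrete (a rough sketch based on the linked reference; the exact API group/version and label keys depend on the Agones release), the allocation request carries that metadata in its spec, and it is applied to the allocated GameServer as part of the same operation:

```yaml
# Sketch: a GameServerAllocation that also patches metadata onto the
# GameServer it allocates, in the same imperative operation.
apiVersion: allocation.agones.dev/v1
kind: GameServerAllocation
spec:
  required:
    matchLabels:
      agones.dev/fleet: simple-udp
  # Applied to the allocated GameServer, e.g. to tell the game server
  # binary which map to load and how many players to expect.
  metadata:
    labels:
      map: forest
    annotations:
      expected-players: "16"
```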
Yeah, I was originally against moving it to the status, since Labels and Annotations seemed like a pre-built thing that K8s already had for storing arbitrary data about the GameServer. (I have some memories of us talking about this back in the original design stage -- I can probably dig up the tickets if need be). It also has some nice crossover in that labels are naturally searchable in the k8s api already, which can be useful.
These are good questions. The only thing I think is a big win is being able to give the RBAC rule of …

How much are we breaking conventions? Honestly, I can't imagine anyone would really care - we can document in the generated client-go code that there is no status endpoint (and we shouldn't generate the …).

Hopefully that answers your questions. Please let me know if you have more.
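For context on that win (a sketch only; the API group and role name are illustrative, and this is exactly the kind of rule that is given up by not adding the subresource), a status subresource lets RBAC separate status updates from spec updates:

```yaml
# Sketch: with a /status subresource, a role can be allowed to update
# status without being able to change the spec.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gameserver-status-updater
  namespace: default
rules:
- apiGroups: ["agones.dev"]          # group name is illustrative
  resources: ["gameservers/status"]
  verbs: ["update", "patch"]
- apiGroups: ["agones.dev"]
  resources: ["gameservers"]
  verbs: ["get", "list", "watch"]
```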
Actually, to add a note - assuming we go with keeping the code as is - aside from removing the …, I figure we can add it to the gameserver.go header, and it will filter through to all the generated documentation.
Created draft PR #959, so whichever way we decide to go, we have a PR we can merge when ready. The more I think about it, the less I can see any real net positives for adding status as a subresource on GameServer anyway -- so I don't think we should add it. But I definitely still want to hear others' opinions.
You've convinced me that we should go with #959 and remove …
I think that is the best way forward. 👍 I'll get the PR ready.
Design
We would want to generate the client code and write the kubectl configuration for Subresources.
I'm not 100% positive if this is enabled by default / available on cloud providers in Kubernetes 1.10. We may have to wait until Kubernetes 1.11 is available across the board.
That being said, this looks to break down to two major things:
Status Subresource
This doesn't give us any extra functionality, but it does give us more RBAC control over what can be changed by which part of the system. For example, #150 will only be possible once this is in effect.
There are also some potential performance optimisations, but I think they will be minor. This is more of a security++ operation.
Scale on Fleets
This may be the easier one to start with - but the end point being that it would enable:

kubectl scale Fleet simple-udp --replicas=10

which would be super nice, and make custom autoscalers much easier.

As an implementation detail, this may also drive us to implement scale on the backing GameServerSet as well.
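For reference, a rough sketch of the CRD stanza that backs this command (the JSON paths are assumptions based on the Fleet spec/status having a replicas field), and the same stanza could equally be added to the GameServerSet definition:

```yaml
# Sketch: scale subresource inside the Fleet (or GameServerSet) CRD.
subresources:
  scale:
    # The path that kubectl scale / the HPA writes to.
    specReplicasPath: .spec.replicas
    # The path the controller reports the observed replica count on.
    statusReplicasPath: .status.replicas
```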
Research