Automate updates for k8s.io redirector service #176
Comments
Hi @ixdy, what do you think about me helping with that? Can you give me some hints, or directions about people who could give me more information? :-)
Take one of the steps I listed above and figure out some solution? e.g. figure out how to automatically update nginx when pushing a new ConfigMap. (There are several approaches with different tradeoffs, and nobody's taken the time to figure out what makes the most sense here.)
In one of my past projects we used the approach of creating a sidecar container which watched for changes inside specified paths; when they appeared, it read the ConfigMap, parsed it, and sent it to a specified endpoint. Here it would be a bit different. One approach would be to use a feature which is currently in beta: Share Process Namespace between Containers in a Pod. We would create a sidecar component which watches for ConfigMap changes and, when they appear, sends a HUP signal to nginx. It's a good option because we separate the logic of watching for changes and sending the appropriate signals into its own place, without touching the base nginx image. A problem appears if we don't want to use a beta feature, or if our Kubernetes cluster's version is lower than 1.13. Another approach would be to run two binaries/scripts inside one container: one for nginx, and a second for our watcher, which would send a HUP signal to the nginx process when changes appear.
Actually I succeeded in creating a solution using https://github.com/ochinchina/supervisord and https://github.com/fsnotify/fsnotify, so I can clean it up and push it somewhere to test. If possible I think I would choose the first option, but I don't have experience with that feature, so I probably can't see some of its downsides.
@ixdy I have created a simple Go app using https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/ which can be added as a sidecar container. I still have to figure out how to test it, but in the meantime we can play with it: https://github.com/bartsmykla/nginx-reloader
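For reference, here is a minimal sketch of what such a shared-process-namespace sidecar could look like, assuming the nginx config is mounted from the ConfigMap at a path like /etc/nginx/conf.d and that the Pod sets shareProcessNamespace: true. This is only an illustration of the approach described above, not the actual nginx-reloader code; the watch path and the process-matching logic are assumptions.

```go
// Hypothetical sidecar sketch: watch an nginx ConfigMap mount with fsnotify
// and send SIGHUP to the nginx master process via the Pod's shared process
// namespace. The mount path and process matching are illustrative assumptions,
// not taken from the nginx-reloader repo.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"

	"github.com/fsnotify/fsnotify"
)

// watchDir is an assumed mount path for the nginx ConfigMap volume.
const watchDir = "/etc/nginx/conf.d"

// findNginxMaster scans /proc for the nginx master process. With
// shareProcessNamespace: true, sibling containers' processes are visible here.
func findNginxMaster() (int, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return 0, err
	}
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a PID directory
		}
		cmdline, err := os.ReadFile(filepath.Join("/proc", e.Name(), "cmdline"))
		if err != nil {
			continue
		}
		if strings.Contains(string(cmdline), "nginx: master process") {
			return pid, nil
		}
	}
	return 0, os.ErrNotExist
}

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	if err := watcher.Add(watchDir); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			// ConfigMap volume updates swap the ..data symlink, which shows
			// up as create/remove events in the mounted directory.
			log.Printf("config change detected: %s", event)
			pid, err := findNginxMaster()
			if err != nil {
				log.Printf("nginx master not found: %v", err)
				continue
			}
			if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
				log.Printf("failed to signal nginx (pid %d): %v", pid, err)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```

One detail worth noting: Kubernetes updates a ConfigMap volume by swapping a `..data` symlink rather than rewriting files in place, so watching the mount directory rather than a single file tends to be the more reliable trigger.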
Hi @ixdy, have you maybe had some time to look at my suggestions?
@ixdy WDYT? ^
@bartsmykla I also think I prefer the sidecar option, since that seems like a cleaner, more generalizable pattern, though I don't think GKE supports Kubernetes 1.13 yet. Do you have access to a Kubernetes cluster for testing? You could try updating the manifests in the k8s.io directory of this repo to try out your approach.
though, a counterpoint: it'd be nice to take advantage of the fact that we're using a Deployment right now, and so we should perform a rolling update of the config (in case it causes nginx to crash). I've seen patterns where the ConfigMap containing the nginx config is somehow munged to work with a Deployment rolling update - maybe something like this? (I think I've seen similar but different patterns elsewhere, but I'm not immediately finding them now.)
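One rough sketch of that checksum-annotation pattern, assuming a small updater run by hand or from CI: hash the rendered nginx config and patch the checksum into the Deployment's pod template annotations, so a config change rolls the pods instead of relying on an in-place reload. The namespace, Deployment name, annotation key, and config path below are placeholders, not values from this repo.

```go
// Hypothetical updater: compute a checksum of the rendered nginx config and
// patch it into the Deployment's pod template annotations so the Deployment
// performs a normal rolling update when the config changes.
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder names for illustration only.
	const (
		namespace  = "k8s-io"
		deployment = "k8s-io-redirector"
		configPath = "nginx.conf"
	)

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Hash the rendered nginx config (e.g. the file that feeds the ConfigMap).
	data, err := os.ReadFile(configPath)
	if err != nil {
		log.Fatal(err)
	}
	sum := fmt.Sprintf("%x", sha256.Sum256(data))

	// Patch only the pod template annotation; changing it triggers a
	// standard Deployment rolling update.
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"checksum/nginx-config":%q}}}}}`,
		sum,
	)
	_, err = client.AppsV1().Deployments(namespace).Patch(
		context.TODO(), deployment,
		types.StrategicMergePatchType, []byte(patch), metav1.PatchOptions{},
	)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("set checksum/nginx-config=%s on %s/%s", sum, namespace, deployment)
}
```

Because the pod template changes whenever the checksum does, the Deployment rolls out new pods in the usual way, and a config that crashes nginx shows up as failing new pods rather than taking down the whole fleet.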
Let me dig into it a little bit more
pending getting a cluster up
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/assign
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-priority backlog
Since we're going to want to make changes to dl.k8s.io as part of #1569, now might be the right time to re-examine how this is deployed/managed.
/remove-lifecycle stale
/milestone v1.22
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
/close
@spiffxp: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Currently, changes to the k8s.io redirector service (i.e. any changes to the configs under the k8s.io subdirectory) require @ixdy or @thockin to manually update the cluster, basically following a process something like this:
There are lots of steps to automate here: