Component label + local service definition #740
Conversation
Hi, @mr-miles. Thanks for your contribution. The change itself LGTM. The problem is that it changes immutable fields (the DaemonSet's selector labels), which breaks `helm upgrade` and forces users to reinstall their daemonsets. We will need to migrate all the labels to the recommended naming conventions anyway, so I think we should go through that only once rather than doing it an extra time now. What if we just add the …
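For context, a minimal sketch of why changing selector labels breaks the upgrade (standard Kubernetes fields; the names and label values here are hypothetical, not taken from this chart):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collector-agent        # hypothetical name
spec:
  selector:
    matchLabels:               # spec.selector is immutable after creation;
      app: collector           # changing these labels makes `helm upgrade`
                               # fail with "field is immutable", so the
                               # DaemonSet must be deleted and reinstalled
  template:
    metadata:
      labels:
        app: collector         # pod labels must keep matching the selector
```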
Please run `make render`.
`make render` worked, but the cri-o test is still complaining; not sure how to sort that out, though.
Looks like a flaky test. I restarted it.
Thank you, @mr-miles. LGTM. just one nit
examples/add-receiver-creator/rendered_manifests/daemonset.yaml
Success - all checks have passed!
I think we can still go ahead with this PR. But I am wondering if we should make setting …
@mr-miles can you run `make render`?
Hi @jinja2 - thanks for your help! Not sure why it closed the PR, but ... I have rebased, run `make render`, corrected the line endings, and fixed up the github-actions robot's changes, so it should be good now. (The git robot is quite determined, but I think I got the better of it.)
Most importantly (and trivially), this PR adds a component label to the pods that are part of the daemonset. Without this label it is impossible to target the agent pods with any selector (the other parts of the chart already have component labels).
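As a rough illustration (the actual label key and value used by the chart may differ; these are assumptions), the daemonset's pod template gains something like:

```yaml
spec:
  template:
    metadata:
      labels:
        component: otel-collector-agent   # assumed label key/value
```

which lets other resources select only the agent pods, for example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agent-ingress    # hypothetical policy
spec:
  podSelector:
    matchLabels:
      component: otel-collector-agent
  ingress:
    - {}                       # allow all ingress to agent pods (example only)
```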
Secondly, it defines an (optional) Service resource targeting the pods in the daemonset and using a "Local" internal traffic policy, which tells kube-proxy to only use node-local endpoints for cluster-internal traffic. This is useful because pods that have no way to be configured with the node IP can use this entry instead.
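A minimal sketch of such a Service, assuming the agent listens for OTLP on port 4317 (the name, selector, and ports here are illustrative, not the chart's actual values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: collector-agent         # hypothetical name
spec:
  internalTrafficPolicy: Local  # available since Kubernetes 1.22; kube-proxy
                                # routes cluster-internal traffic only to
                                # endpoints on the caller's own node
  selector:
    component: otel-collector-agent   # matches the agent DaemonSet pods
  ports:
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
```

A pod can then send telemetry to this Service's cluster DNS name and always reach the agent running on its own node, without needing the node IP injected via the downward API.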