Component label + local service definition #740

Merged: 6 commits into signalfx:main on Sep 16, 2023
Conversation

@mr-miles
Contributor

Most importantly (and trivially), this PR adds a component label to the pods in the daemonset. Without it, the agent pods cannot be targeted by any label selector (the other parts of the chart already have component labels).
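For instance, once the pods carry a component label, any selector-based resource can target them. A hypothetical sketch using a NetworkPolicy; the label key/value and resource names here are assumptions for illustration, not the chart's actual values:

```yaml
# Hypothetical example: targeting the agent pods via the new component label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-agent
spec:
  podSelector:
    matchLabels:
      component: otel-agent   # the pod label this PR adds (value assumed)
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}     # allow traffic from pods in the same namespace
```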

Secondly, it defines an optional service resource targeting the pods in the daemonset, with a "Local" internal traffic policy (this tells kube-proxy to use only node-local endpoints for cluster-internal traffic). This is useful because pods that cannot be configured with the node IP can use this service instead.
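A minimal sketch of such a service, with the name, port, and label value below assumed for illustration (internalTrafficPolicy: Local is the standard Kubernetes field for this behavior):

```yaml
# Sketch only; name, port, and label value are assumptions, not the
# chart's actual rendered output.
apiVersion: v1
kind: Service
metadata:
  name: otel-agent
spec:
  selector:
    component: otel-agent        # matches the daemonset pods via the new label
  internalTrafficPolicy: Local   # kube-proxy uses only node-local endpoints
  ports:
    - name: otlp-grpc
      port: 4317                 # standard OTLP/gRPC port
      targetPort: 4317
```

A pod can then send telemetry to the service's cluster DNS name, and kube-proxy will route the traffic to the agent running on that pod's own node.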

@mr-miles mr-miles requested review from a team as code owners April 18, 2023 19:47
@dmitryax
Contributor

Hi, @mr-miles. Thanks for your contribution. The change itself LGTM.

The problem is that it changes immutable fields (daemonset selector labels), which breaks the helm upgrade, and users will have to reinstall their daemonsets.

We will need to migrate all the labels to the recommended naming conventions anyway, so I think we should go through that breaking change only once rather than additionally doing it now.

What if we add only the component: otel-agent pod label and keep the selector as is? The service selector should still work, right?
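A sketch of that approach, with all names and label values assumed for illustration: the immutable spec.selector is left untouched, and only the pod template gains the new label, which a service selector can then match.

```yaml
# Sketch only; names and labels are assumptions, not the chart's actual values.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-agent
spec:
  selector:
    matchLabels:                # immutable after creation; left unchanged
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector     # must still satisfy the selector above
        component: otel-agent   # new pod label; safe to add on upgrade
    spec:
      containers:
        - name: otel-agent
          image: otel/opentelemetry-collector-contrib:latest  # image assumed
```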

@mr-miles
Contributor Author

Thanks @jinja2 @dmitryax - I have done that and the checks look healthier now. I didn't know about daemonset selectors being immutable, so I learnt something too :-)

@dmitryax
Contributor

Please run make render to generate examples

@mr-miles
Contributor Author

make render worked, but the cri-o test is still complaining; not sure how to sort that out though.

@dmitryax
Contributor

Looks like a flaky test. I restarted it

@dmitryax
Contributor

Thank you, @mr-miles. LGTM, just one nit.

@mr-miles
Contributor Author

Success - all checks have passed!

@mr-miles
Contributor Author

mr-miles commented Sep 9, 2023

Hi @jinja2 @dmitryax - wondering if it is possible to revive this PR? I think it just fell through the cracks.

I have just freshened it up by rebasing it onto main and re-rendering the examples.

Alternatively, please let me know if it's no longer useful.

@jinja2
Collaborator

jinja2 commented Sep 11, 2023

I think we can still go ahead with this PR. But I am wondering if we should also make it possible with this PR to set hostNetwork to false for the daemonset (the default will still be true). I can see users running non-hostmetrics receivers leveraging this service option and wanting not to run in the host network. @dmitryax wdyt?
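For illustration, such a toggle might look like the following in the chart; the agent.hostNetwork values key and file paths are hypothetical, not the chart's actual structure:

```yaml
# values.yaml -- hypothetical key; defaults to the current behavior
agent:
  hostNetwork: true
```

```yaml
# templates/daemonset.yaml -- corresponding pod spec line (sketch)
spec:
  template:
    spec:
      hostNetwork: {{ .Values.agent.hostNetwork }}
```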

@jinja2
Collaborator

jinja2 commented Sep 15, 2023

@mr-miles can you run make render to regenerate examples?

@mr-miles mr-miles closed this Sep 15, 2023
@mr-miles
Contributor Author

mr-miles commented Sep 15, 2023

Hi @jinja2 - thanks for your help! Not sure why it closed the PR, but ...

I have rebased, re-run make render, corrected the line endings, and fixed up the github-actions robot's changes, so it should be good now.

(The git robot is quite determined but I think I got the better of it)

@mr-miles mr-miles reopened this Sep 15, 2023
@jinja2 jinja2 merged commit dd9f18e into signalfx:main Sep 16, 2023
19 checks passed