Kubefed controller pod level annotations #1465
Conversation
@nitinatgh
/lgtm
Hi @xunpan, I'm in a bit of a tricky situation: I committed directly to my forked master, so I don't have the option to squash. Can it not be squashed at the merge stage once approved? Otherwise I'll need to start everything again from scratch. Sorry about this. Thanks, Nitin
I can only approve the PR but cannot merge it. Could you please have a look at:
New changes are detected. LGTM label has been removed. |
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: nitinatgh
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment
Sorry @xunpan, trying to rebase made things worse, and then I reverted the changes on my fork's master, which automatically closed this PR.
@nitinatgh
Dear @nitinatgh, thank you for trying to contribute to kubefed. I understand that all the organizational requirements around PRs on this repo might come across as confusing to you but I would like to ask two things from you before you create the fourth (after #1448, #1453 and this one) PR for the very same change:
Added a description to the README.md
Bumped up the chart version of the controller manager
Bumped up the dependency version
Added a new annotation and left the existing one in place in case removing it breaks any previous changes. I'm not sure the previous one was implemented correctly, so I have left it intact.
What this PR does / why we need it: Allows annotations to be set at the pod level for the kubefed controller. Currently they are restricted to the deployment level.
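For context, deployment-level and pod-level annotations live in different places in the rendered manifest. Below is a minimal sketch of a Deployment showing both; the resource name, image, and annotation keys are illustrative, not the actual kubefed chart output:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubefed-controller-manager
  annotations:
    # Deployment-level annotations: attached to the Deployment object itself.
    example.com/deployment-note: "visible on the Deployment only"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubefed-controller-manager
  template:
    metadata:
      labels:
        app: kubefed-controller-manager
      annotations:
        # Pod-level annotations: propagated to every pod the Deployment creates,
        # which is what this PR makes configurable through the chart values.
        example.com/pod-note: "visible on each controller pod"
    spec:
      containers:
        - name: controller-manager
          image: kubefed/kubefed:latest
```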
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #1447
Special notes for your reviewer: I have created a new annotation, as the previously committed one seems to be incorrect and I didn't want to introduce any breaking changes related to it.
I have tested the changes in a local kind cluster; below is the output from running the helm upgrades.
Added the following to the values.yaml:
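The actual values were not captured in this conversation; the following is a hedged sketch of what such an entry might look like, with `annotations` and `podAnnotations` as assumed key names, since the exact keys introduced by this PR are not shown here:

```yaml
controllermanager:
  # Hypothetical key names for illustration; the chart may use different paths.
  annotations:      # existing key, applied at the deployment level
    example.com/owner: "team-federation"
  podAnnotations:   # new key, applied at the pod level
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
```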
helm upgrade dry-run
helm upgrade
I would also like to give a special thanks to https://github.com/kiich for helping me out on this change!
Thanks
Nitin