Add airflow_kpo_in_cluster label to KPO pods #24658

Conversation
You will need to rebase, @jedcunningham, to account for the selective-check problem from #24665, just merged.
This allows one to determine if the pod was created with in_cluster config or not, both on the k8s side and in pod_mutation_hooks.
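As a sketch of how a `pod_mutation_hook` might use this label: the hook normally receives a `kubernetes.client` `V1Pod`, but a `SimpleNamespace` stands in below so the example runs without the kubernetes package. The `"True"`/`"False"` string values and the `external-kpo` node pool are assumptions for illustration, not details confirmed by this PR.

```python
from types import SimpleNamespace

def pod_mutation_hook(pod) -> None:
    """Hypothetical hook in airflow_local_settings.py keying off the new label."""
    labels = pod.metadata.labels or {}
    if labels.get("airflow_kpo_in_cluster") == "False":
        # The KPO used an external cluster config; steer such pods to a
        # hypothetical dedicated node pool.
        pod.spec.node_selector = {"pool": "external-kpo"}

# Stand-in for the V1Pod the hook would receive.
pod = SimpleNamespace(
    metadata=SimpleNamespace(labels={"airflow_kpo_in_cluster": "False"}),
    spec=SimpleNamespace(node_selector=None),
)
pod_mutation_hook(pod)
print(pod.spec.node_selector)  # {'pool': 'external-kpo'}
```

The same check works unchanged in a real hook, since `V1Pod` exposes `metadata.labels` and `spec.node_selector` as attributes.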
Force-pushed 201e58f to d88acaf
Force-pushed d88acaf to 1d96eed
Just thinking out loud: would it make sense to (also?) add the full task spec as JSON to a KPO label, to allow maximum future flexibility in the pod mutation hook when making mutation decisions? I realise the alternative would be to extend the pod_mutation_hook instead.
The PR is likely OK to be merged with just a subset of tests for default Python and Database versions, without running the full matrix of tests, because it does not modify the core of Airflow. If the committers decide that the full tests matrix is needed, they will add the label 'full tests needed'. Then you should rebase to the latest main or amend the last commit of the PR, and push it with --force-with-lease.
@ianbuss, I'd be more inclined to also send the task to the pod_mutation_hook as a parameter. That said, I think this label still has value regardless. Thoughts?
Actually, thinking a little more, it's not terribly hard to get the TI/task from the existing labels on the pod. I wonder if anyone is already doing that now. It could be worth adding to make it easier still. However, you nailed it that knowing whether in_cluster was used isn't otherwise visible on the pod.
Yes, I agree @jedcunningham. Regardless of whether we consider adding the task spec as a new label or passing it as a parameter (which I think might be useful), I think it makes sense to always add the airflow_kpo_in_cluster label.
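On the k8s side, the label would also make KPO pods filterable with an ordinary label selector. A hedged sketch, assuming the `"True"`/`"False"` string values used above:

```shell
# List pods launched by a KPO using the in-cluster config
kubectl get pods -l airflow_kpo_in_cluster=True

# List pods launched against an external cluster config
kubectl get pods -l airflow_kpo_in_cluster=False
```

Label selectors like these can also feed NetworkPolicies or affinity rules without touching the hook at all.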