
Add airflow_kpo_in_cluster label to KPO pods #24658

Merged · 6 commits · Jun 28, 2022

Conversation

jedcunningham (Member)

This allows one to determine whether the pod was created with the in_cluster config or not, both on the k8s side and in pod_mutation_hooks.
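
For illustration, here is a minimal sketch of how a pod_mutation_hook in airflow_local_settings.py might branch on the new label once this PR lands. The label key airflow_kpo_in_cluster comes from this PR; the stringified-boolean value check and the node-selector policy are assumptions made for the example, not part of the change.

```python
# airflow_local_settings.py -- illustrative sketch only, not part of this PR.
# Assumes the label value is the stringified boolean ("True"/"False").
from kubernetes.client import models as k8s


def pod_mutation_hook(pod: k8s.V1Pod) -> None:
    labels = pod.metadata.labels or {}
    if labels.get("airflow_kpo_in_cluster") == "True":
        # Hypothetical policy: pin pods launched with the in_cluster config
        # to a dedicated node pool.
        pod.spec.node_selector = {"workload": "airflow-tasks"}
```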

boring-cyborg bot added the provider:cncf-kubernetes and area:providers labels (Jun 25, 2022)
jedcunningham requested a review from dstandish (Jun 25, 2022 18:20)
potiuk (Member) commented Jun 26, 2022

You will need to rebase, @jedcunningham, to account for the selective-check problem from #24665, which was just merged.

jedcunningham force-pushed the kpo_in_cluster_label branch from 201e58f to d88acaf (June 27, 2022 03:24)
jedcunningham force-pushed the kpo_in_cluster_label branch from d88acaf to 1d96eed (June 27, 2022 03:31)
ianbuss (Contributor) commented Jun 27, 2022

Just thinking out loud: would it make sense to (also?) add the full task spec as JSON in a KPO label, to allow maximum future flexibility when making mutation decisions in the pod mutation hook? I realise in_cluster here is more complicated than the user simply specifying in_cluster=True|False on the task, since it also looks at the global Airflow config and the connection config, but there may be things we haven't considered yet that users might want to use in their mutation logic.

The alternative would be to extend the pod_mutation_hook function signature to accept a KubernetesPodOperator argument, but I'm not sure how/if this could be done in a backwards-compatible way.
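
As a minimal sketch of one way such a backwards-compatible extension could look, the call site could inspect the user-defined hook before invoking it and only pass the operator to hooks that declare a second parameter. The dispatcher name and the use of inspect here are assumptions for illustration; this is not the merged implementation.

```python
# Hypothetical dispatcher, not the merged implementation: pass the operator
# only to hooks whose signature declares a second parameter, so existing
# single-argument pod_mutation_hook definitions keep working unchanged.
import inspect
from typing import Any, Callable


def call_pod_mutation_hook(hook: Callable[..., None], pod: Any, operator: Any = None) -> None:
    params = inspect.signature(hook).parameters
    if len(params) >= 2:
        hook(pod, operator)  # new-style hook(pod, operator)
    else:
        hook(pod)  # legacy hook(pod)
```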

github-actions bot added the okay to merge label (Jun 27, 2022)
github-actions bot commented

The PR is likely OK to be merged with just a subset of tests for the default Python and Database versions, without running the full matrix of tests, because it does not modify the core of Airflow. If the committers decide that the full test matrix is needed, they will add the label 'full tests needed'. Then you should rebase to the latest main or amend the last commit of the PR, and push it with --force-with-lease.

jedcunningham (Member, Author) commented

@ianbuss, I'd be more inclined to also send the task to the pod_mutation_hook, and I believe it could be done in a backwards-compatible way.

That said, I think this label still has value regardless. Thoughts?

jedcunningham (Member, Author) commented

Actually, thinking a little more, it's not terribly hard to get the TI/task from the existing labels on the pod. I wonder if anyone is already doing that now. It could be worth adding to make it easier still.

However, you nailed it: knowing whether load_incluster_config was used or not gets complicated because the connection is involved. This label makes it easy on the user side of things.
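
As a rough illustration of the "resolve the task from existing labels" idea mentioned above, a hook could look up the task via the labels the KPO already sets. This assumes the standard dag_id/task_id labels are present and that DAGs are serialized to the metadata DB; loading a DagBag inside the hook is a hypothetical approach, not something this PR adds.

```python
# Illustrative sketch only: look up the task from labels the KPO already sets.
# Assumes dag_id/task_id labels exist and DAGs are available via the DB.
from airflow.models import DagBag
from kubernetes.client import models as k8s


def pod_mutation_hook(pod: k8s.V1Pod) -> None:
    labels = pod.metadata.labels or {}
    dag_id, task_id = labels.get("dag_id"), labels.get("task_id")
    if dag_id and task_id:
        dag = DagBag(read_dags_from_db=True).get_dag(dag_id)
        task = dag.get_task(task_id) if dag and dag.has_task(task_id) else None
        if task:
            # ... mutate the pod based on attributes of `task` ...
            pass
```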

ianbuss (Contributor) commented Jun 28, 2022

Yes, I agree @jedcunningham. Regardless of whether we add the task spec as a new label or pass it as a parameter (which I think might be useful), it makes sense to always add the airflow_kpo_in_cluster label as implemented here. Passing the task information could potentially be implemented as a follow-up PR after some additional discussion?

Labels: area:providers · okay to merge · provider:cncf-kubernetes
4 participants