KubernetesJobWatcher does not delete worker pods #14974
Comments
Thanks for opening your first issue here! Be sure to follow the issue template!
After closer inspection and debugging, it seems that a urllib3.exceptions.ProtocolError is raised by the Kubernetes client. This error is not accounted for in Airflow: it falls through to the generic Exception handler and is re-raised, which exits the while loop and that's it. No mercy.
I wonder why.
After debugging the TCP/IP connections, I found that the connection to the KubeAPI is reset after some minutes of complete inactivity on the kubernetes.Watcher.stream(). However, the watcher seems to think the connection is still fine and continues listening for some (unknown) reason, and no error appears. This would also explain why no logging from the watcher shows up. The fix seems to be to reset the watcher stream. This patch seems to solve the problem:
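The patch itself is not quoted above. As a rough sketch of the idea only (not the author's actual change, and assuming a watcher loop roughly shaped like KubernetesJobWatcher._run), one way to recover would be to catch urllib3.exceptions.ProtocolError and open a fresh stream instead of letting the generic handler re-raise:

```python
# Sketch only: recreate the watch stream when the idle connection is dropped.
# handle_event() is a hypothetical callback standing in for Airflow's own
# event processing; this is not the patch posted in the thread.
from kubernetes import watch
from urllib3.exceptions import ProtocolError

def watch_pods(kube_client, namespace, resource_version, handle_event):
    while True:
        watcher = watch.Watch()
        try:
            for event in watcher.stream(
                kube_client.list_namespaced_pod,
                namespace,
                resource_version=resource_version,
                timeout_seconds=300,  # force a periodic reconnect
            ):
                resource_version = event['object'].metadata.resource_version
                handle_event(event)
        except ProtocolError:
            # Connection was reset (e.g. an idle TCP connection dropped by the
            # load balancer); loop around and open a fresh stream instead of crashing.
            continue
```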
@mrpowerus please can you give more detailed steps to reproduce this? That said, I'm not OK with your configuration: it makes every task pull the 2.0.1-python3.8 image afresh before it can create a container.

Interestingly, when I configured worker_container_repository & worker_container_tag as you did and ran with the Airflow master repository (using breeze), the images were pulled correctly, but I got errors that the DAG I ran could not be found. When you use breeze to start a Kubernetes cluster, it loads the example dags. Now, using your configuration and also making sure that what's in …
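The configuration being discussed is not quoted in full above; presumably it is the worker image settings under [kubernetes]. A sketch, where the repository value is an assumption and only the tag comes from the comment above:

```ini
[kubernetes]
# Repository is an assumption; the tag is the one mentioned in the discussion.
worker_container_repository = apache/airflow
worker_container_tag = 2.0.1-python3.8
```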
Thanks @ephraimbuddy. I am using my own helm chart. However, the error you're showing is not the same as the one I received. I'm confused by your statement, though: I thought these config lines just indicated which container/tag Airflow uses to start its workers?
Yes. My error is specific to my setup, which uses breeze. Can you maybe create a repo that can reproduce this using your own helm chart, or just share how I can reproduce?
This issue on AKS seems to be related: |
It seems that adding the TCP keepalive in the config fixed the problem after all, which is obvious in hindsight. However, this was hard to find because there is no logging output.
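For anyone else hitting this: the keepalive settings in question are, as far as I know, the ones in the [kubernetes] section of airflow.cfg. A sketch with illustrative values:

```ini
[kubernetes]
enable_tcp_keepalive = True
# Illustrative values: keepalive probes should fire well before the idle
# timeout of the API server / load balancer (AKS drops idle connections
# after a few minutes).
tcp_keep_idle = 120
tcp_keep_intvl = 30
tcp_keep_cnt = 6
```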
I had a similar problem when we upgraded to version 2.x: pods got restarted even after the DAGs ran successfully. I later resolved it, after a long time of debugging, by overriding the pod template and specifying it in the airflow.cfg file under the [kubernetes] section (the exact snippet is cut off above; a sketch follows).
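Presumably the truncated snippet points at a pod_template_file, along these lines (the path is just a placeholder):

```ini
[kubernetes]
# Hypothetical path: point this at your customised pod template YAML.
pod_template_file = /opt/airflow/pod_templates/pod_template.yaml
```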
Apache Airflow version: 2.0.0 and 2.0.1
Kubernetes version (if you are using kubernetes) (use kubectl version): 1.18.4 (AKS)
Environment:
Kernel (e.g. uname -a): Linux airflow-scheduler-5cf464667c-7zd6j 5.4.0-1040-azure #42~18.04.1-Ubuntu SMP Mon Feb 8 19:05:32 UTC 2021 x86_64 GNU/Linux

What happened:
KubernetesJobWatcher does not delete Worker Pods after they are assigned the 'status.phase=Succeeded'. But this only happens after 30-ish minutes of complete inactivity of the Kubernetes Cluster.
What you expected to happen:
The KubernetesJobWatcher should delete worker pods after they have been successful, at any time, as my config states (I verified this with airflow config; the relevant setting is sketched below).
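The config output itself is not shown above; to the best of my knowledge the setting that governs this behaviour is delete_worker_pods in the [kubernetes] section, e.g.:

```ini
[kubernetes]
# If True (the default), completed worker pods should be deleted
# by the KubernetesJobWatcher.
delete_worker_pods = True
```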
The Executor tries over and over again to adopt completed pods.
This is successful. However, the pods are not cleaned up by the KubernetesJobWatcher, and no logging from the watcher appears. (I would expect logging from this line.)
After some digging, I think the watch.stream() from
from kubernetes import client, watch
which is called in https://github.com/apache/airflow/blob/v2-0-stable/airflow/executors/kubernetes_executor.py#L143 expires after a long period of complete inactivity. This is also explicitly mentioned in the docstring of kubernetes.watch.Stream, which was added in a commit after version 11.0.0. However, my Airflow uses the constraints file, which pins the previous version of the Kubernetes client (11.0.0) and contains the following watcher.stream.
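As a rough illustration of the API in question (not Airflow's actual call; names and values are illustrative), this is how the client's watch stream is typically consumed. Without a timeout it can block indefinitely on a silently dropped connection, while timeout_seconds / _request_timeout bound a single stream() call:

```python
# Illustrative use of the kubernetes-client watch API discussed above.
from kubernetes import client, config, watch

config.load_incluster_config()  # use config.load_kube_config() outside the cluster
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(
    v1.list_namespaced_pod,
    'airflow',                        # namespace (illustrative)
    label_selector='airflow-worker',  # illustrative selector
    timeout_seconds=300,              # server ends the watch after 5 minutes
    _request_timeout=330,             # client-side socket timeout
):
    print(event['type'], event['object'].metadata.name)
```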
Airflow has a mechanism to recover from this by resetting the resource-version, but it does not seem to work for some reason. (I'm currently investigating why.)
I think Airflow should be able to recover from this issue automatically. Otherwise I would have to run a dummy task every 30 minutes or so, just to keep the kubernetes.watch.stream() alive (see the sketch below).
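If you do go the workaround route, the "dummy task" could be as simple as a scheduled no-op DAG. A sketch, where the DAG id, schedule, and command are just examples:

```python
# Hypothetical keep-alive DAG: runs a trivial task every 25 minutes so the
# executor's watch stream never sits idle long enough for the connection
# to be dropped. Purely a workaround sketch, not a recommended fix.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id='k8s_watch_keepalive',
    start_date=datetime(2021, 1, 1),
    schedule_interval='*/25 * * * *',
    catchup=False,
) as dag:
    BashOperator(task_id='noop', bash_command='echo keepalive')
```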
How to reproduce it:
Run Airflow 2+ in a Kubernetes cluster which has no activity at all for 30-ish minutes. Then start an operator. The Kubernetes Worker will not be deleted.