What happened:
Successful task pods are terminating with an error.
I did not have this error with the old airflow-helm chart. The official Helm chart was redeployed into a fresh EKS cluster.
Update: the old chart does not have this issue because it uses 2.0.1-python3.8. If I use 2.0.1-python3.8 with this chart, it is fine.
▶ kubectl -n airflow logs pod/simplepipeparsing.7187c95facb6494f8538160889667df6
BACKEND=postgresql
DB_HOST=dataeng-rds-airflowmetastore-dev.c20w6vrzbehx.eu-central-1.rds.amazonaws.com
DB_PORT=5432
[2021-05-23 21:44:28,189] {dagbag.py:451} INFO - Filling up the DagBag from /opt/airflow/dags/dags/simple_pipe.py
[2021-05-23 21:44:28,445] {base_aws.py:368} INFO - Airflow Connection: aws_conn_id=aws_default
[2021-05-23 21:44:29,033] {base_aws.py:391} WARNING - Unable to use Airflow Connection for credentials.
[2021-05-23 21:44:29,033] {base_aws.py:392} INFO - Fallback on boto3 credential strategy
[2021-05-23 21:44:29,033] {base_aws.py:397} INFO - Creating session using boto3 credential strategy region_name=eu-central-1
Running <TaskInstance: simple_pipe.parsing 2020-01-01T00:00:00+00:00 [queued]> on host simplepipeparsing.7187c95facb6494f8538160889667df6
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/cli.py", line 89, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/task_command.py", line 235, in task_run
_run_task_by_selected_method(args, dag, ti)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/task_command.py", line 64, in _run_task_by_selected_method
_run_task_by_local_task_job(args, ti)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/cli/commands/task_command.py", line 120, in _run_task_by_local_task_job
run_job.run()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 237, in run
self._execute()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/jobs/local_task_job.py", line 142, in _execute
self.on_kill()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/jobs/local_task_job.py", line 157, in on_kill
self.task_runner.on_finish()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/task/task_runner/base_task_runner.py", line 178, in on_finish
self._error_file.close()
File "/usr/local/lib/python3.6/tempfile.py", line 511, in close
self._closer.close()
File "/usr/local/lib/python3.6/tempfile.py", line 448, in close
unlink(self.name)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp_bh__su1'
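For context, the traceback ends inside `tempfile`: Airflow's `BaseTaskRunner.on_finish()` closes a `NamedTemporaryFile` (the task error file), and on Python 3.6 `close()` unconditionally tries to `unlink` the backing file, so it raises `FileNotFoundError` if something (e.g. /tmp cleanup in the pod) already removed it. A minimal sketch of that failure mode and a defensive close that tolerates the missing file (this is an illustration, not Airflow's actual fix):

```python
import os
import tempfile

# The error file is a NamedTemporaryFile with delete=True, so close()
# is responsible for unlinking the file on disk.
f = tempfile.NamedTemporaryFile(delete=True)

# Simulate the tmp file disappearing out from under the task runner
# (e.g. the pod's /tmp being cleaned up while the task was running).
os.unlink(f.name)

# Defensive close: on Python 3.6, f.close() raises FileNotFoundError
# here because unlink(self.name) finds nothing to delete. Swallowing
# that specific error lets an otherwise-successful task finish cleanly.
try:
    f.close()
except FileNotFoundError:
    pass  # the file was already removed; nothing left to clean up

print("closed without crashing the task")
```

This matches the traceback: the task itself succeeded, and only the cleanup of the temporary error file crashed the pod.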
Apache Airflow version: 2.0.2
Kubernetes version: 1.20
Helm chart version: 1.0.0
How to reproduce it:
Update:
I did further testing with different versions: