Not sure if this is related, but I am also seeing an "Import error" every now and then, apparently appearing out of thin air:
```
Broken DAG: [/opt/airflow/dags/repo/dags/documents/DEV/file_pipeline.py] Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 574, in serialize_operator
    serialize_op['params'] = cls._serialize_params_dict(op.params)
  File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 447, in _serialize_params_dict
    if f'{v.__module__}.{v.__class__.__name__}' == 'airflow.models.param.Param':
AttributeError: 'str' object has no attribute '__module__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 935, in to_dict
    json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
  File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 847, in serialize_dag
    raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'DEV_file_pipeline': 'str' object has no attribute '__module__'
```
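For reference, this looks like the 2.2.x params serialization issue: the check at line 447 assumes every value in `params` is a Param-like object, so a bare str has no `__module__` and raises the AttributeError above. The usual workaround is to wrap param defaults in `Param` explicitly. A minimal sketch of that workaround; the dag_id, task_id, and the param name/value below are hypothetical, not taken from the actual DAG:

```python
from datetime import datetime

from airflow import DAG
from airflow.models.param import Param
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="DEV_file_pipeline",  # hypothetical dag_id
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
    params={
        # Wrap the default in Param instead of passing a bare "incoming"
        # string, so the serializer's f'{v.__module__}...' check sees a
        # Param object rather than a str.
        "source_dir": Param("incoming", type="string"),
    },
) as dag:
    PythonOperator(
        task_id="show_params",
        python_callable=lambda **context: print(context["params"]),
    )
```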
Some logs should be present. Can you also check whether you have enough resources, and look at the Kubernetes logs? It is likely your tasks are being killed due to lack of resources (most likely memory).
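One way to confirm the OOM theory from the cluster side is to inspect each worker pod's last container state. A minimal sketch using the official `kubernetes` Python client, assuming the workers run in a namespace named "airflow" (an assumption; adjust to your release's namespace); it flags containers whose last termination reason was OOMKilled:

```python
from kubernetes import client, config

# Use config.load_incluster_config() if running inside the cluster instead.
config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_namespaced_pod(namespace="airflow").items:
    for cs in pod.status.container_statuses or []:
        terminated = cs.last_state.terminated
        if terminated and terminated.reason == "OOMKilled":
            # A container killed by the OOM killer leaves this reason in
            # its last recorded state.
            print(f"{pod.metadata.name}/{cs.name}: OOMKilled at {terminated.finished_at}")
```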
Apache Airflow version
2.2.3 (latest released)
What happened
I often have tasks failing in Airflow and no logs are produced.
If I clear the tasks, they then run successfully.
Some tasks are stuck in the queued state and, even when cleared, get stuck in the queued state again.
Happy to provide more details if needed.
What you expected to happen
Tasks running successfully
How to reproduce
No response
Operating System
Debian GNU/Linux 10 (buster)
Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==2.2.0
apache-airflow-providers-docker==2.3.0
apache-airflow-providers-elasticsearch==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.2.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-azure==3.4.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==2.4.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.3.0
Deployment
Official Apache Airflow Helm Chart
Deployment details
I am using the Celery Executor with KEDA enabled on Kubernetes. The node pool is set to autoscale.
Anything else
No response
Are you willing to submit PR?
Code of Conduct