Sync v2-2-stable with v2-2-test to release 2.2.4 #21659
Conversation
(cherry picked from commit 1543dc2)
(cherry picked from commit 4198550)
Fixes: #19832 Co-authored-by: Jaroslaw Potiuk <[email protected]> (cherry picked from commit 78c815e)
Co-authored-by: Tzu-ping Chung <[email protected]> (cherry picked from commit 7c4bed0)
Minor typo corrections. I changed the filenames in the example folder structure instead of the later references to be consistent with the other examples in the documentation. (cherry picked from commit d11087c)
(cherry picked from commit 2804569)
This was deleted in an [earlier refactor](https://github.com/apache/airflow/pull/15444/files) (see `concepts.rst`). This PR brings it back. I added it under the "DAGs" section because even though it's file-based and not dag-based, excluding files that define dags is the most likely use case for this feature (I think). (cherry picked from commit 6eac2e0)
Build should be built. (cherry picked from commit de36616)
Encountered a nasty bug where somebody basically implemented their own KubernetesPodSensor, which failed after more than one attempt when using mode="poke" + a volume + a secret. The root cause turned out to be in `secret.attach_to_pod()`. In there, a volume and volume mount are created to mount the secret. A deepcopy() is made of the given Pod spec. To avoid appending to None, there is this line: `cp_pod.spec.volumes = pod.spec.volumes or []`. In case a volume is set on the Pod spec, this assigns a reference to the original pod spec volumes, which in turn was a reference to `self.volumes`. As a result, each secret resulted in a volume added to `self.volumes`, which caused an error when running the sensor a second time, because the secret volume was already mounted during the first sensor attempt. This PR references the deepcopied object instead, and creates a new list if pod.spec.volumes is None. Co-authored-by: Bas Harenslak <[email protected]> (cherry picked from commit 2409760)
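The aliasing bug can be reproduced with a minimal sketch. The class and function names below are illustrative stand-ins for the behavior described, not Airflow's real API:

```python
import copy

class PodSpec:
    def __init__(self, volumes=None):
        self.volumes = volumes

class Pod:
    def __init__(self, spec):
        self.spec = spec

def attach_secret_buggy(pod, secret_volume):
    cp_pod = copy.deepcopy(pod)
    # BUG: when pod.spec.volumes is a non-empty list, this assigns a
    # reference to the ORIGINAL list, so the append mutates the caller's pod.
    cp_pod.spec.volumes = pod.spec.volumes or []
    cp_pod.spec.volumes.append(secret_volume)
    return cp_pod

def attach_secret_fixed(pod, secret_volume):
    cp_pod = copy.deepcopy(pod)
    # FIX: reference the deepcopied list (or a fresh list if it was None).
    cp_pod.spec.volumes = cp_pod.spec.volumes or []
    cp_pod.spec.volumes.append(secret_volume)
    return cp_pod

pod = Pod(PodSpec(volumes=["data-volume"]))
attach_secret_buggy(pod, "secret-volume")
print(pod.spec.volumes)  # ['data-volume', 'secret-volume'] -- original mutated!

pod = Pod(PodSpec(volumes=["data-volume"]))
attach_secret_fixed(pod, "secret-volume")
print(pod.spec.volumes)  # ['data-volume'] -- original untouched
```

Running the buggy version twice on the same pod appends the secret volume twice, which matches the "already mounted on the second sensor attempt" symptom above.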
(cherry picked from commit 9876e19)
(cherry picked from commit 0163495)
(cherry picked from commit c4d2e16)
The `execution_data` does not need to be passed to the log. We send enough details to the API user in the response. (cherry picked from commit 790bc78)
Minor typo in the task decorator documentation (cherry picked from commit 7f6ab06)
Port 8080 is the default port for the webserver (https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html?highlight=webserver#webserver). By setting it here again explicitly, we prevent users from overriding it using AIRFLOW__WEBSERVER__WEB_SERVER_PORT. Removing it IMO is not a breaking change, since it will still default to 8080. (cherry picked from commit 9d36b1f)
(cherry picked from commit f743e46)
1.10.x is EOL (cherry picked from commit dcd4c49)
Clarify the value for ``sentry_on`` is not quoted by providing an example. (cherry picked from commit e8b5ab9)
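For example, in `airflow.cfg` the value is written bare (this snippet is illustrative, not the exact example added by the commit):

```ini
[sentry]
# Correct: the boolean value is written without quotes
sentry_on = True

# Wrong: the quotes would become part of the value
# sentry_on = "True"
```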
(cherry picked from commit ec31b20)
Workaround for docker-compose v2 env passing. Docker Compose v2 has environment parsing broken in many ways. Until this is fixed, we cannot use env files; instead we must set all the variables directly, because parsing variables without values, or variables with empty values, is broken in several ways. Some of the issues are closed but not released, and until this is fixed, some extra code duplication and explicitly setting all default variables to "" when needed should solve the problem for both Docker Compose v1 and Docker Compose v2 (cherry picked from commit ab5b2bf)
In some shells the comparable string with the version was too long. A number with a leading 0 was interpreted as an octal number, and it had too many digits for an octal number to handle. This change: 1) decreases the length of the string by using 3-digit numbers, 2) strips leading 0s during comparison, making the comparison work in decimal (cherry picked from commit a05f0c3)
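A sketch of the failure mode and the fix. The helper below is hypothetical, in the spirit of the change, not the actual Breeze code:

```shell
# In bash/POSIX sh arithmetic, a leading 0 makes a number octal
# (e.g. $((010)) is 8, and a long run of digits after a 0 can overflow).

# Hypothetical helper: build the comparable version string from 3-digit
# groups, then strip leading zeros so comparison happens in decimal.
ver_to_num() {
  printf "%03d%03d%03d" "$1" "$2" "$3" | sed 's/^0*//'
}

a=$(ver_to_num 2 2 4)     # 002002004 -> 2002004
b=$(ver_to_num 2 10 0)    # 002010000 -> 2010000
if [ "$a" -lt "$b" ]; then
  echo "2.2.4 < 2.10.0"
fi
```

Stripping the leading zeros keeps `[ ... -lt ... ]` comparisons in decimal and keeps the numbers short enough for any shell.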
The RUN_*TEST variables are not part of the environment, so they are not set when dc_ci is generated; they are overridden by Breeze when particular commands are executed. Therefore we should not hard-code those values in the dc_ci script (the script is useful for debugging, but it is only there for environment configuration) (cherry picked from commit 7d3b6b5)
This PR attempts to decrease the likelihood of memory issues for CI for non-committers. The MSSQL and MYSQL Provider and Integration tests when run together with other tests in parallel (for MSSQL even standalone) might cause memory problems (143 or 137 exit code). This PR changes the approach slightly for low-memory conditions: 1) MSSQL - both Integration and Providers tests are skipped entirely (they will be run in High-Mem case so we will see if there are any problems anyway) 2) MySQL - both Integration and Providers tests are run separately which will lead to slightly longer test runs but likely this will save us from the occasional memory issues. (cherry picked from commit 5d9e5f6)
When we moved to the GitHub registry, the --github-image-id flag was broken, as it pulled the "latest" image when run right after pulling the tagged image (and it ran that image instead). This change fixes it and uses GITHUB_PULL_IMAGE_TAG ("latest" if not specified) everywhere the image is used for running. The flag is not persistent. (cherry picked from commit 0a82a42)
It is important to use Pendulum when timezones are used, because there are a number of limitations coming from the stdlib timezone implementation. However, our documentation was not very clear about it; in particular, some examples showed standard datetime in DAGs, which could mislead users into continuing to use datetime with timezones. This PR clarifies and stresses that using pendulum is necessary when a timezone is used. It also points to the documentation in case serialization throws an error about not using Pendulum, so that users can learn about the reasoning. This is the first part of the change; the follow-up will change all provider examples to also use timezone and pendulum explicitly. See also #20070 (cherry picked from commit f011da2)
The PR most likely needs to run full matrix of tests because it modifies parts of the core of Airflow. However, committers might decide to merge it quickly and take the risk. If they don't merge it quickly - please rebase it to the latest main at your convenience, or amend the last commit of the PR, and push it with --force-with-lease.
I want to add one more to that, though. Today we saw markupsafe 2.1.0 breaking our main build due to pallets/markupsafe#284, and while our constraints prevent us from those errors, explicitly pinning markupsafe to < 2.1.0 is a good thing to do now, as this is a "known and documented" problem. The change is here: #21664 - just a setup.cfg change. I will cherry-pick it.
Markupsafe 2.1.0 breaks with error: import name 'soft_unicode' from 'markupsafe'. This should be removed when either this issue is closed: pallets/markupsafe#284 or when we will be able to upgrade JINJA to newer version (currently limited due to Flask and Flask Application Builder) (cherry picked from commit 366c66b)
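The pin amounts to a setup.cfg constraint along these lines (a sketch; the exact specifier used in #21664 may differ):

```ini
# setup.cfg
[options]
install_requires =
    # soft_unicode was removed in markupsafe 2.1.0; pin below it until
    # pallets/markupsafe#284 is resolved or Jinja can be upgraded
    markupsafe >=1.1.1, <2.1.0
```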
@jedcunningham - I added the cherry-pick and added a fixup to the changelog - feel free to reset it if you think it's something that should not be added, of course :)
Pinning to MarkupSafe<2.1 or Jinja<3 should only be a short-term fix; neither version range is supported.
Yep, it is short term - see the comment. But for 2.2.4 it is a safe bet. 2.2.5 or 2.3.0 should have it lifted, provided that we fix it.
OK, I knew Airflow was good about pinning and updates, just wanted to make sure I understood. Thanks!
You are welcome, BTW. Here is a talk I gave about managing dependencies at Airflow :) https://www.youtube.com/watch?v=_SjMdQLP30s&t=2553s in case you are building a weekend watchlist and want to learn more about how we do it :)
Time for 2.2.4rc1!