
Remove double warning in CLI when config value is deprecated #40319

Merged

Conversation

ephraimbuddy
Contributor

There's a double warning when getting a value for a deprecated configuration option. This happens because we first check whether the config exists, which issues a warning, and then getting the config issues the same warning again. Now that we no longer error for non-existing configs, we should not check existence before getting a config value.

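The double-emission pattern can be sketched in isolation. This is a minimal, hypothetical config object, not Airflow's actual `AirflowConfigParser`: both the existence check and the getter resolve deprecated names, and each resolution emits the same warning.

```python
import warnings

# Hypothetical stand-in for a config object -- NOT Airflow's actual
# AirflowConfigParser. DEPRECATED maps old option names to their
# replacements; STORE holds the real values.
DEPRECATED = {"old_key": "new_key"}
STORE = {"new_key": "42"}

def _resolve(key):
    # Resolving a deprecated name warns every time it is called.
    if key in DEPRECATED:
        warnings.warn(f"option {key!r} is deprecated", DeprecationWarning)
        key = DEPRECATED[key]
    return key

def has_option(key):
    return _resolve(key) in STORE

def get(key, default=None):
    return STORE.get(_resolve(key), default)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Old CLI flow: check existence first, then fetch -> warns twice.
    value = get("old_key") if has_option("old_key") else None

print(value, len(caught))  # 42 2
```

Calling `get()` alone and relying on its default for missing keys emits the warning only once, which is the shape of the fix described above.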
@uranusjr
Member

Hmm, the failure seems consistent. Maybe it’s not flaky?

@potiuk
Member

potiuk commented Jun 20, 2024

Yep. Looks very much related

@ephraimbuddy
Contributor Author

Locally, when you use --with-db-init, it succeeds every time. I have also noticed that this fails locally on the main branch. Maybe the tests are not run except when the CLI commands are modified?

@potiuk
Member

potiuk commented Jun 20, 2024

I'd say it might be a side effect of other tests. You can reproduce it locally by running the same thing as in CI - i.e. all the non-DB tests.

@potiuk
Member

potiuk commented Jun 20, 2024

https://github.com/apache/airflow/actions/runs/9579997426/job/26433542514?pr=40319#step:7:6

    breeze testing non-db-tests \
      --parallel-test-types "Always CLI"

From:

if [[ "Non-DB" == "DB" ]]; then
    breeze testing db-tests \
      --parallel-test-types "Always CLI"
  elif [[ "Non-DB" == "Non-DB" ]]; then
    breeze testing non-db-tests \
      --parallel-test-types "Always CLI"
  elif [[ "Non-DB" == "All" ]]; then
    breeze testing tests --run-in-parallel \
      --parallel-test-types "Always CLI"
  elif [[ "Non-DB" == "Quarantined" ]]; then
    breeze testing tests --test-type "All-Quarantined" || true
  elif [[ "Non-DB" == "ARM collection" ]]; then
    breeze testing tests --collect-only --remove-arm-packages
  elif [[ "Non-DB" == "System" ]]; then
    breeze testing tests tests/system/example_empty.py --system core
  else
    echo "Unknown test scope: Non-DB"
    exit 1
  fi

@potiuk
Member

potiuk commented Jun 20, 2024

Yep. Looks like loading configuration in one test might break other tests -> likely loading configuration in some tests breaks configuration in others. This is partly a result of our (bad) approach where configuration is automatically loaded at import time: if tests are not written carefully with proper setup/teardown, one test can impact another.

As to why it's happening - when you run tests with xdist there are n parallel workers running the tests, and they get the tests in some order; if there is a test that changes state and leaves side effects (like loaded configuration), it may impact another test in the same worker.

So basically what we need to do is track down the tests that leave side effects / change state and the tests that are impacted, and a) remove the side effect in the first one (proper teardown) and b) make sure that the setup in the failing tests cleans up the state (configuration in this case) to the desired state.
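The setup/teardown discipline in (a) and (b) can be sketched with a snapshot-and-restore wrapper around the shared state. This is a generic illustration, not Airflow's actual test fixtures; `CONFIG` is a hypothetical stand-in for module-level state loaded at import time.

```python
import copy

# Hypothetical shared state -- stands in for configuration that is loaded
# at import time and mutated by badly behaved tests.
CONFIG = {"core": {"executor": "SequentialExecutor"}}

class restore_config:
    """Snapshot the shared CONFIG on entry and restore it on exit (teardown)."""

    def __enter__(self):
        self._snapshot = copy.deepcopy(CONFIG)
        return CONFIG

    def __exit__(self, *exc):
        CONFIG.clear()
        CONFIG.update(self._snapshot)
        return False

# A "badly behaved" test mutates the shared state...
with restore_config():
    CONFIG["core"]["executor"] = "LocalExecutor"

# ...but the teardown put it back, so the next test sees a clean state.
print(CONFIG["core"]["executor"])  # SequentialExecutor
```

In pytest this snapshot/restore usually lives in an autouse fixture so every test in the module gets it automatically, rather than each test remembering to use a context manager.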

@potiuk
Member

potiuk commented Jun 20, 2024

BTW. I usually guess (if possible) or do bisecting in this case. The actual command that is executed (you can see it in the output) is this:

    Starting the tests with those pytest arguments: tests/always tests/cli --verbosity=0 --strict-markers --durations=100 --maxfail=50 --color=yes --junitxml=/files/test_result-all-none.xml --timeouts-order moi --setup-timeout=60 --execution-timeout=60 --teardown-timeout=60 --disable-warnings -rfEX --skip-db-tests --ignore=tests/system --ignore=tests/integration --warning-output-path=/files/warnings-all-none.txt --ignore=helm_tests --with-db-init -n 4 --no-cov

Simply entering breeze and running this one:

    pytest tests/always tests/cli --verbosity=0 --strict-markers --durations=100 --maxfail=50 --color=yes --junitxml=/files/test_result-all-none.xml --timeouts-order moi --setup-timeout=60 --execution-timeout=60 --teardown-timeout=60 --disable-warnings -rfEX --skip-db-tests --ignore=tests/system --ignore=tests/integration --warning-output-path=/files/warnings-all-none.txt --ignore=helm_tests --with-db-init -n 4 --no-cov

should reproduce it ~100% (that includes running the tests in 4 workers). You can use one worker (-n 1) or remove the option altogether and see if it can still be reproduced - that takes parallelism and test distribution between several workers out of the picture.

Then, when you have a reproducible case, you can modify this pytest command to see which tests are executed (increase verbosity). Then you can try to run just the tests before and the tests after, and try to run only the tests before plus the test that fails. Then you can generate a list of the modules that are executed and run them in the same sequence as you see them in the output:

    pytest [options] tests/commands/test_....command.py tests/commands/test.....py ......

And then you can bisect the list of tests that are run before by removing half of them (and seeing if they still fail) - then another half, and another. That allows you to relatively quickly find the tests that are causing the problem.

Eventually you should be able to track it down to a single test (or sometimes a few) causing the problem:

When this fails test_y:

    pytest [options] tests/.....::test_x tests/....::test_y

and this does not:

    pytest [options] tests/....::test_y

then it means that there is a side effect left by test_x.

I have used this quite a few times to track down such side effects.
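The halving procedure above can be sketched as a small loop. Here `fails(subset)` is a hypothetical stand-in for running `pytest [options] <subset> <failing test>` and reporting whether the failing test still fails; the file names are made up for the toy reproduction.

```python
# Hypothetical bisection helper for the procedure described above.
# `fails(subset)` stands in for "run pytest on `subset` followed by the
# failing test, and return True if the failing test still fails".
def find_culprit(candidates, fails):
    # Assumes exactly one earlier test leaves the side effect; when neither
    # half reproduces the failure on its own, several tests interact and
    # plain halving is not enough.
    while len(candidates) > 1:
        half = len(candidates) // 2
        first, second = candidates[:half], candidates[half:]
        candidates = first if fails(first) else second
    return candidates[0]

# Toy reproduction: pretend tests/a/test_x.py is the one leaving state behind.
def fails(subset):
    return "tests/a/test_x.py" in subset

suite = [f"tests/a/test_{c}.py" for c in "uvwxyz"]
print(find_culprit(suite, fails))  # tests/a/test_x.py
```

Each round halves the candidate list, so even a few hundred preceding tests narrow down to the culprit in under ten pytest runs.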

@ephraimbuddy
Contributor Author

One of the failing tests can easily be reproduced by running pytest tests/cli/commands/test_plugins_command.py. However, it passes when the module is marked as a DB test and runs together with other tests. Also, without marking it as a db test and using --with-db-init, it surprisingly passes. I have updated it with the db_test marker to see what happens.

@ephraimbuddy
Contributor Author

I also marked test_cli_parser.py as a db test for similar reasons. My conclusion is that any code in cli/ that has to load the executor should be marked as a DB test:

    executor, _ = ExecutorLoader.import_executor_cls(executor_name)

because it accesses the DB here:

    if engine.dialect.name == "sqlite":
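Marking a whole module as a DB test is a one-line change. The `db_test` marker name is the one used in Airflow's test suite; the test function below is a hypothetical placeholder, not the real module contents.

```python
import pytest

# Module-level marker: every test in this file is treated as a DB test and
# runs in the DB-test suite, where the database has been initialized.
pytestmark = pytest.mark.db_test

def test_plugins_command_loads_executor():
    # Hypothetical placeholder body -- in the real module this exercises CLI
    # code that ends up calling ExecutorLoader.import_executor_cls(), which
    # touches the DB as shown above.
    assert True
```

With the marker in place, the non-DB suite (which runs with --skip-db-tests) deselects the module instead of hitting an uninitialized database.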

@potiuk
Member

potiuk commented Jun 20, 2024

Hmm. I will take a look at that one then; maybe there is something we can do to fail such tests explicitly. But I still think it's some kind of side effect from other tests.

@ephraimbuddy ephraimbuddy merged commit 3f8c1e4 into apache:main Jun 20, 2024
49 checks passed
@ephraimbuddy ephraimbuddy deleted the disable-double-warning-on-notfound-config branch June 20, 2024 14:50
@utkarsharma2 utkarsharma2 added the type:improvement Changelog: Improvements label Jul 1, 2024
@utkarsharma2 utkarsharma2 added this to the Airflow 2.9.3 milestone Jul 1, 2024
romsharon98 pushed a commit to romsharon98/airflow that referenced this pull request Jul 26, 2024
…40319)

* Remove double warning in CLI when config value is deprecated

There's a double warning when getting a value for a deprecated configuration. This happens because we first check if the config exists, which issues a warning and then getting the config also issues the same warning. Now that we don't error for non-existing configs, we should not check existence before getting a config value

* mark test_plugins_command as db_test

* mark test_cli_parser.py as db_test