apply markers based on fixtures/dependent fixtures #1368

Closed
RonnyPfannschmidt opened this issue Feb 9, 2016 · 16 comments
Assignees
Labels
topic: collection (related to the collection phase)
topic: marks (related to marks, either the general marks or builtin)
type: enhancement (new feature or API change, should be merged into features branch)
type: feature-branch (new feature or API change, should be merged into features branch)
type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)

Comments

@RonnyPfannschmidt
Member

use case is deselecting tests based on markers that relate to used fixtures

@RonnyPfannschmidt added the type: enhancement, type: proposal, topic: collection, and type: feature-branch labels on Feb 9, 2016
@RonnyPfannschmidt RonnyPfannschmidt self-assigned this Feb 9, 2016
@The-Compiler
Member

Can't this easily be done already?

def pytest_collection_modifyitems(items):
    for item in items:
        if 'qapp' in getattr(item, 'fixturenames', ()):
            item.add_marker('gui')

@RonnyPfannschmidt
Member Author

@The-Compiler the markers are supposed to be taken from the actual fixture function; that's potentially an indirect dependency
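For illustration, a hedged sketch of how this could be approximated from a conftest.py today with a hand-maintained mapping, rather than marks read from the fixture function itself (the qapp/gui names are illustrative; item.fixturenames already contains the transitive fixture closure, so indirect dependencies are covered):

import pytest

# Hand-maintained mapping from fixture name to the marker it implies
# (illustrative names; not an existing pytest feature).
FIXTURE_MARKS = {'qapp': pytest.mark.gui}

def pytest_collection_modifyitems(items):
    for item in items:
        # fixturenames includes indirectly required fixtures as well.
        for name in getattr(item, 'fixturenames', ()):
            mark = FIXTURE_MARKS.get(name)
            if mark is not None:
                item.add_marker(mark)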

@kprzybyla

Hi,

I came to the point in my project where I want to use (custom) marks on fixtures. I read through the documentation and found that marks can only be applied to tests and have no effect on fixtures. Then I found this issue, which I suppose is actually related to the problem I'm struggling with.

I actually found a workaround for this issue. It turns out that this "feature" is already available as a side effect of pytest.param(), because you can define a fixture with a single parameter and assign marks to that parameter. This effectively creates a fixture with marks applied to it.

import pytest

# The value of fixture param does not matter, we only use params to pass marks to fixture
@pytest.fixture(params=[pytest.param(0, marks=pytest.mark.local_address)])
def address():
    return '127.0.0.1', 7000

The above code could be treated as equivalent to the fixture definition below.

import pytest

@pytest.fixture
@pytest.mark.local_address
def address():
    return '127.0.0.1', 7000

Now, when we define the test case below:

def test(address):
    pass

Running pytest -m local_address gives the following outcome:

1 passed in 0.03 seconds

Running pytest -m "not local_address"

1 deselected in 0.01 seconds

You can also use marks like skip or xfail, and chaining fixtures also works and keeps the information about the marks. So my question is: is this actually intended? If yes, can we make it work with the definition in the second code snippet, which could basically treat a fixture defined without params as a single-param fixture with marks applied to it?
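For illustration, a minimal sketch of the chaining behaviour described above, using the same workaround; server is a hypothetical dependent fixture, and a test that only requests server is still selected by pytest -m local_address and deselected by pytest -m "not local_address":

import pytest

@pytest.fixture(params=[pytest.param(0, marks=pytest.mark.local_address)])
def address():
    return '127.0.0.1', 7000

# Hypothetical dependent fixture: the mark travels with the parametrization,
# so it also applies to tests that only request `server`.
@pytest.fixture
def server(address):
    return {'bind': address}

def test_server(server):
    assert server['bind'] == ('127.0.0.1', 7000)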

@nicoddemus
Member

Oh that's interesting. I would like to know @RonnyPfannschmidt's opinion here, but I would like to see marks working on fixtures; seems natural and what users expect.

@RonnyPfannschmidt
Member Author

@nicoddemus if getfixturevalue is explicitly excluded as a mark source, it can be reasonably done

that specification needs to ensure an awareness of lazy/delayed fixtures
special care needs to be taken for fixtures that override old values vs fixtures that replace other fixtures
(it may be necessary to have fixtures declare replacement vs extension more explicitly)

i will take a deeper look in a few months when i'm back to active

@nicoddemus
Member

Great, thanks @RonnyPfannschmidt for the follow-up

jidicula added a commit to shimming-toolbox/shimming-toolbox that referenced this issue Oct 9, 2020
**Why this change was necessary**
We only need to check the existence of soft dependencies when we're
testing for those dependencies. Therefore, these tests should be
decorated with the corresponding mark.

**What this change does**
Adds test functions in conftest to ensure that the soft dependencies
are installed and available to shimming-toolbox.

**Any side-effects?**
None.

**Additional context/notes/links**

Note that markers cannot be easily added to fixtures. I used a
workaround documented in pytest's issues[1].

Fixtures cannot be passed to a class signature, and it would
duplicate a lot of code to pass them to every test class method, so
instead we can use the `@pytest.mark.usefixtures()`
decorator[2]. Unfortunately, this cannot be used for multiple
functions within a test file unless it contains a test class[2.Warning].

1: pytest-dev/pytest#1368 (comment)
2: https://docs.pytest.org/en/stable/fixture.html#use-fixtures-in-classes-and-modules-with-usefixtures

Resolves #118 - Make sure the environment is properly setup and binaries are present before running pytest
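A minimal sketch of the usefixtures pattern that commit message points to (class and fixture names are illustrative, not taken from shimming-toolbox):

import pytest

# Apply a fixture to every test method of a class without changing each
# method's signature; 'prelude_check' is a hypothetical fixture name.
@pytest.mark.usefixtures('prelude_check')
class TestPrelude:
    def test_unwrap(self):
        ...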
jidicula added a commit to shimming-toolbox/shimming-toolbox that referenced this issue Oct 11, 2020
…s before running tests (#136)

* [test][conftest] Download test_data before running test suite

**Why this change was necessary**
We want to ensure that test data has been pulled into the test
environment before running tests.

**What this change does**
Executes the download_data() CLI inside a pytest hook that runs first
when the test suite session begins.

**Any side-effects?**
Creates a new temp directory and populates it with test data.

Should this temp directory be removed after the testing run is
complete? Can we expect that the testing environment will be connected
to the internet?

**Additional context/notes/links**

Resolves #118 - Make sure the environment is properly setup and
binaries are present before running pytest

* [refactor][conftest] Use __dir_testing__ pkg variable

**Why this change was necessary**
The package already has a global available for the testing dir
location and dirname.

**What this change does**
See title.

**Any side-effects?**
None.

**Additional context/notes/links**

Related to #136 (comment)

* [style][tests] Reformat test scripts

* [test][prelude,dcm2niix] Add pytest markers for soft dependencies

**Why this change was necessary**
We want to be able to conditionally run some tests for interfacing
shimming-toolbox with its soft dependencies.

Currently, `prelude` is the only tool providing this functionality, but in
the future, users may wish to use a different tool covering the same
functionality. Therefore, tests requiring interfacing with `prelude`
don't need to be run if `prelude` is not installed, and this case must
be covered by the shimming-toolbox testing suite.

`dcm2niix` is only needed by the `dcm2bids` library, which itself is
only needed by users who don't already have their files in the BIDS
format. If they already have BIDS files, they don't need to install
`dcm2niix` and therefore the shimming-toolbox testing suite must cover
the case where dcm2niix is not installed.

**What this change does**
This change uses the pytest feature allowing the marking of test
functions with specific markers[1]. Marked test functions only run
when the corresponding marker is passed as an argument to the `-m`
option to pytest. This change also updates the accompanying usage
instructions for this feature.

**Any side-effects?**
The minimal `$ pytest` command still runs all test functions. You must
specifically opt out of tests depending on the soft dependencies. See
accompanying documentation for details.

**Rejected implementation options**
pytest provides the functionality for custom markers specified at
runtime[3]. This option was rejected as each custom environment is
only defined inline in the decorator, which could lead to divergence
of decorators during an incomplete refactor. The chosen implementation
allows configuration of markers in a single source of truth,
simplifying the refactor and addition of new markers.

The desired functionality could also be implemented using the
`pytest.mark.skipif` pattern and hooking that to an automated dependency
check. However, this creates implicit rather than explicit
behaviour and could result in duplicated condition definitions in each
decorator.

pytest also provides functionality for running only tests that match a
provided keyword expression[4], but this approach was rejected as it
relies on consistent naming conventions across existing tests
depending on either of the soft dependencies. Using this approach
would require a significant refactor of function and accompanying test
function names.

**Additional context/notes/links**

You can also mark whole modules with a `pytest.mark` decorator[2].

Note that the markers are defined in a SSOT in `pytest.ini`. New
markers can be defined in this file, following the existing
formatting. The decorators for test functions are in the format
`@pytest.mark.markername`.

See also:
1: https://docs.pytest.org/en/latest/example/markers.html
2: https://docs.pytest.org/en/latest/example/markers.html#marking-whole-classes-or-modules
3: https://docs.pytest.org/en/latest/example/markers.html#custom-marker-and-command-line-option-to-control-test-runs
4: https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests

Resolves #118 - Make sure the environment is properly setup and binaries are present before running pytest

* [test][conftest] Test existence of soft dependencies conditionally

**Why this change was necessary**
We only need to check the existence of soft dependencies when we're
testing for those dependencies. Therefore, these tests should be
decorated with the corresponding mark.

**What this change does**
Adds test functions in conftest to ensure that the soft dependencies
are installed and available to shimming-toolbox.

**Any side-effects?**
None.

**Additional context/notes/links**

Note that markers cannot be easily added to fixtures. I used a
workaround documented in pytest's issues[1].

Fixtures cannot be passed to a class signature, and it would
duplicate a lot of code to pass them to every test class method, so
instead we can use the `@pytest.mark.usefixtures()`
decorator[2]. Unfortunately, this cannot be used for multiple
functions within a test file unless it contains a test class[2.Warning].

1: pytest-dev/pytest#1368 (comment)
2: https://docs.pytest.org/en/stable/fixture.html#use-fixtures-in-classes-and-modules-with-usefixtures

Resolves #118 - Make sure the environment is properly setup and binaries are present before running pytest

* [chore][conftest] Clean up WIP changes

**Why this change was necessary**
I left some WIP code and comments.

**What this change does**
See title.

**Any side-effects?**
New output message upon download completion.

**Additional context/notes/links**

Resolves #118 - Make sure the environment is properly setup and binaries are present before running pytest

* [tests][conftest] Update header

* Revert "[style][tests] Reformat test scripts"

This reverts commit 1b5cc4d.

* [fix][test_prelude] Restore missing import

* [refactor][conftest] Clarify variable names to reflect purpose

**Why this change was necessary**
The previous names of variables for paths did not accurately reflect
their purposes.

**What this change does**
See title.

**Any side-effects?**
None

**Additional context/notes/links**

Resolves #136 (comment)
Related to #118

* [test][Travis] Remove test data download from Travis config

**Why this change was necessary**
Since the testing suite already downloads the test data, we can remove
this functionality from the Travis build setup.

**What this change does**
See title.

**Any side-effects?**
Removes redundancy of test data download.

**Additional context/notes/links**

Resolves #136 (comment)
Related to #118
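A hedged sketch of the marker-registration approach described in that commit message, with pytest.ini as the single source of truth (marker names and descriptions are illustrative, not the actual shimming-toolbox configuration):

# pytest.ini
[pytest]
markers =
    prelude: tests that require the prelude binary
    dcm2niix: tests that require the dcm2niix binary

# test_prelude.py (illustrative)
import pytest

@pytest.mark.prelude
def test_prelude_available():
    ...

Running pytest -m prelude then selects only the prelude tests, while pytest -m "not prelude and not dcm2niix" opts out of the soft-dependency tests.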
kousu pushed a commit to shimming-toolbox/shimming-toolbox that referenced this issue Mar 20, 2021
…s before running tests (#136)

@jaraco
Contributor

jaraco commented Apr 4, 2021

I also was surprised when I applied this change to selectively skip tests that require the marker, but since the marker is silently ignored, it had no effect. Since all of the logic for skipping and xfailing tests through decorators is tied up in marking, it really would be nice for markers to be honored in fixtures.

@RonnyPfannschmidt
Member Author

@jaraco we still have no mechanism to correctly and reliably transfer them

i haven't yet gotten around to looking into it

@jaraco
Contributor

jaraco commented Apr 4, 2021

Understood. And it makes me wonder: maybe there's another way to think about it. Instead of transferring markers, maybe pytest should honor the markers when the fixtures are invoked. I'm not sure that makes sense in the pytest model. Alternatively, pytest could offer decorators that, when applied to fixtures, could skip or xfail their behavior. For example:

import functools

import pytest

def decorator_skip_if(*skip_args, **skip_params):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # pytest.skip_if is the hypothetical API being proposed here,
            # not an existing pytest function.
            pytest.skip_if(*skip_args, **skip_params)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@decorator_skip_if("not hasattr(foo, 'bar')")
@pytest.fixture
def my_selectively_skipped_fixture():
    ...

It's sad that it wouldn't re-use the same mechanism as markers, but it wouldn't be constrained to test functions.

@RonnyPfannschmidt
Member Author

at fixture invocation doesn't make sense in the pytest model (as it removes pre-setup consideration)

the decorators would be a tricky API, harder to get structurally right than just imperative code in the fixture function

we should document a pattern to externalize specific test requirements to session-scoped fixtures
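One hedged sketch of such a pattern, assuming an external tool requirement (the prelude name is only an example): a session-scoped fixture encodes the requirement and skips any test that depends on it when the requirement is not met.

import shutil

import pytest

@pytest.fixture(scope='session')
def requires_prelude():
    # pytest.skip() raised during fixture setup skips the requesting test;
    # with session scope the check is evaluated once per test session.
    if shutil.which('prelude') is None:
        pytest.skip('prelude executable is not installed')

def test_prelude_integration(requires_prelude):
    ...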

veprbl added a commit to veprbl/arrow that referenced this issue Jul 30, 2021
…rams

The problem is that tests should be skippable with

  pytest pyarrow/tests -m "(not s3)"

but that doesn't deselect all the s3 tests currently.

The issue is that applying marks to fixtures is not currently supported:
pytest-dev/pytest#1368

This removes marks from fixtures to avoid confusion that this dead code may cause.
pitrou pushed a commit to apache/arrow that referenced this issue Aug 2, 2021
…rams

https://issues.apache.org/jira/browse/ARROW-13504

The problem is that tests should be skippable with

  pytest pyarrow/tests -m "(not s3)"

but that doesn't deselect all the s3 tests currently.

The issue is that applying marks to fixtures is not currently supported:
pytest-dev/pytest#1368

This removes marks from fixtures to avoid confusion that this dead code may cause.

Closes #10837 from veprbl/pr/ARROW-13504

Authored-by: Dmitry Kalinkin <[email protected]>
Signed-off-by: Antoine Pitrou <[email protected]>
@RonnyPfannschmidt
Member Author

i came to the conclusion that markers simply don't make sense on fixtures, as transfer and consideration are simply too unclear

we may eventually come up with a tool that brings fixtures and markers together, but fixtures themselves aren't it

a possible hack-ish workaround would be a single-element parameterset that adds markers

@gwerbin

gwerbin commented Feb 2, 2024

@nicoddemus if getfixturevalue is explicitly excluded as a mark source, it can be reasonably done

that specification needs to ensure an awareness of lazy/delayed fixtures; special care needs to be taken for fixtures that override old values vs fixtures that replace other fixtures (it may be necessary to have fixtures declare replacement vs extension more explicitly)

i will take a deeper look in a few months when i'm back to active

Pardon my ignorance, but what is a "mark source" in this context? I'm not a Pytest expert and I didn't find anything in the docs that seemed relevant.

@mfarrugi

import pytest

# The value of fixture param does not matter, we only use params to pass marks to fixture
@pytest.fixture(params=[pytest.param(0, marks=pytest.mark.local_address)])
def address():
    return '127.0.0.1', 7000

Is this still the best way to accomplish propagating marks through fixtures?

This seems to do exactly what we want; is there a way to support proper syntax for this?

@RonnyPfannschmidt
Member Author

There's intentionally no proper syntax, as there's a harrowing number of edge cases surrounding fixture overrides

@mfarrugi

So is the param hack the recommended way to do this?

@RonnyPfannschmidt
Member Author

It's more of a begrudgingly acknowledged workaround that suffers from edge cases that usually don't bite but can't be ignored for a real implementation
