Commit
[testing] rename skip targets + docs (#7863)
* rename skip targets + docs

* fix quotes

* style

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <[email protected]>

* small improvements

* fix

Co-authored-by: Sylvain Gugger <[email protected]>
stas00 and sgugger authored Oct 20, 2020
1 parent c912ba5 commit 3e31e7f
Showing 9 changed files with 52 additions and 35 deletions.
45 changes: 31 additions & 14 deletions docs/source/testing.rst
@@ -400,29 +400,46 @@ or if you have multiple gpus, you can specify which one is to be used by ``pytest``
CUDA_VISIBLE_DEVICES="1" pytest tests/test_logging.py
This is handy when you want to run different tasks on different GPUs.

And we have these decorators that require the condition described by the marker.

``
@require_torch
@require_tf
@require_multigpu
@require_non_multigpu
@require_torch_tpu
@require_torch_and_cuda
``
Some tests must be run on CPU only, others on either CPU, GPU, or TPU, and yet others on multiple GPUs. The following skip decorators are used to set the CPU/GPU/TPU requirements of a test:

* ``require_torch`` - this test will run only under torch
* ``require_torch_gpu`` - as ``require_torch`` plus requires at least 1 GPU
* ``require_torch_multigpu`` - as ``require_torch`` plus requires at least 2 GPUs
* ``require_torch_non_multigpu`` - as ``require_torch`` plus requires 0 or 1 GPUs
* ``require_torch_tpu`` - as ``require_torch`` plus requires at least 1 TPU
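
Under the hood, these decorators simply apply ``unittest.skip`` when the requirement isn't met. The real implementations live in ``transformers.testing_utils`` (part of which appears further down in this diff); the following is only a rough sketch, and the exact checks it uses are assumptions:

.. code-block:: python

    import unittest

    def require_torch_multigpu_sketch(test_case):
        # Hypothetical sketch: skip unless PyTorch is installed and sees 2+ GPUs.
        try:
            import torch
        except ImportError:
            return unittest.skip("test requires PyTorch")(test_case)
        if torch.cuda.device_count() < 2:
            return unittest.skip("test requires multiple GPUs")(test_case)
        return test_case
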

For example, here is a test that must be run only when there are 2 or more GPUs available and pytorch is installed:

.. code-block:: python

    @require_torch_multigpu
    def test_example_with_multigpu():

If a test requires ``tensorflow``, use the ``require_tf`` decorator. For example:

.. code-block:: python

    @require_tf
    def test_tf_thing_with_tensorflow():

These decorators can be stacked. For example, if a test is slow and requires at least one GPU under pytorch, here is how to set it up:

.. code-block:: python

    @require_torch_gpu
    @slow
    def test_example_slow_on_gpu():

Some decorators like ``@parameterized`` rewrite test names, so the ``@require_*`` skip decorators have to be listed last for them to work correctly. Here is an example of the correct usage:

.. code-block:: python

    @parameterized.expand(...)
    @require_multigpu
    @require_torch_multigpu
    def test_integration_foo():

There is no problem whatsoever with ``@pytest.mark.parametrize`` (but it only works with non-unittests) - can use it in any order.
This section will be expanded soon once our work in progress on those decorators is finished.
This order problem doesn't exist with ``@pytest.mark.parametrize``: you can put it first or last and it will still work. But it only works with non-unittest tests.
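
For example, with a plain pytest (non-``unittest``) test the two can be stacked in either order; the test and parameter names below are made up for illustration:

.. code-block:: python

    import pytest
    from transformers.testing_utils import require_torch_multigpu

    @pytest.mark.parametrize("batch_size", [1, 8])
    @require_torch_multigpu
    def test_fake_batching_on_multigpu(batch_size):
        assert batch_size > 0
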

Inside tests:

6 changes: 3 additions & 3 deletions examples/seq2seq/test_seq2seq_examples.py
@@ -19,7 +19,7 @@
from run_eval_search import run_search
from transformers import AutoConfig, AutoModelForSeq2SeqLM
from transformers.hf_api import HfApi
from transformers.testing_utils import CaptureStderr, CaptureStdout, TestCasePlus, require_torch_and_cuda, slow
from transformers.testing_utils import CaptureStderr, CaptureStdout, TestCasePlus, require_torch_gpu, slow
from utils import ROUGE_KEYS, label_smoothed_nll_loss, lmap, load_json


@@ -125,9 +125,9 @@ def setUpClass(cls):
return cls

@slow
@require_torch_and_cuda
@require_torch_gpu
def test_hub_configs(self):
"""I put require_torch_and_cuda cause I only want this to run with self-scheduled."""
"""I put require_torch_gpu cause I only want this to run with self-scheduled."""

model_list = HfApi().model_list()
org = "sshleifer"

6 changes: 3 additions & 3 deletions src/transformers/testing_utils.py
@@ -154,7 +154,7 @@ def require_tokenizers(test_case):
return test_case


def require_multigpu(test_case):
def require_torch_multigpu(test_case):
"""
Decorator marking a test that requires a multi-GPU setup (in PyTorch).
@@ -174,7 +174,7 @@ def require_multigpu(test_case):
return test_case


def require_non_multigpu(test_case):
def require_torch_non_multigpu(test_case):
"""
Decorator marking a test that requires 0 or 1 GPU setup (in PyTorch).
"""
@@ -208,7 +208,7 @@ def require_torch_tpu(test_case):
torch_device = None


def require_torch_and_cuda(test_case):
def require_torch_gpu(test_case):
"""Decorator marking a test that requires CUDA and PyTorch. """
if torch_device != "cuda":
return unittest.skip("test requires CUDA")(test_case)
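
Note: ``torch_device`` above is typically derived once at import time, roughly along the lines of the hedged sketch below (not part of this diff and not the library's exact code), which is why comparing it against ``"cuda"`` is enough to detect a usable GPU:

    # hedged sketch of the torch_device setup assumed by require_torch_gpu
    try:
        import torch
        torch_device = "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        torch_device = None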

4 changes: 2 additions & 2 deletions templates/adding_a_new_model/tests/test_modeling_xxx.py
@@ -17,7 +17,7 @@
import unittest

from transformers import is_torch_available
from transformers.testing_utils import require_torch, require_torch_and_cuda, slow, torch_device
from transformers.testing_utils import require_torch, require_torch_gpu, slow, torch_device

from .test_configuration_common import ConfigTester
from .test_modeling_common import ModelTesterMixin, ids_tensor
@@ -302,6 +302,6 @@ def test_XXX_backward_pass_reduces_loss(self):
"""Test loss/gradients same as reference implementation, for example."""
pass

@require_torch_and_cuda
@require_torch_gpu
def test_large_inputs_in_fp16_dont_cause_overflow(self):
pass
4 changes: 2 additions & 2 deletions tests/test_modeling_common.py
@@ -22,7 +22,7 @@
from typing import List, Tuple

from transformers import is_torch_available
from transformers.testing_utils import require_multigpu, require_torch, slow, torch_device
from transformers.testing_utils import require_torch, require_torch_multigpu, slow, torch_device


if is_torch_available():
@@ -980,7 +980,7 @@ def _check_match_tokens(self, generated_ids, bad_words_ids):
return True
return False

@require_multigpu
@require_torch_multigpu
def test_multigpu_data_parallel_forward(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()

4 changes: 2 additions & 2 deletions tests/test_modeling_layoutlm.py
@@ -18,7 +18,7 @@

from transformers import is_torch_available
from transformers.file_utils import cached_property
from transformers.testing_utils import require_torch, require_torch_and_cuda, slow, torch_device
from transformers.testing_utils import require_torch, require_torch_gpu, slow, torch_device

from .test_configuration_common import ConfigTester
from .test_modeling_common import ModelTesterMixin, ids_tensor
@@ -234,6 +234,6 @@ def test_LayoutLM_backward_pass_reduces_loss(self):
"""Test loss/gradients same as reference implementation, for example."""
pass

@require_torch_and_cuda
@require_torch_gpu
def test_large_inputs_in_fp16_dont_cause_overflow(self):
pass
4 changes: 2 additions & 2 deletions tests/test_modeling_reformer.py
@@ -17,10 +17,10 @@

from transformers import is_torch_available
from transformers.testing_utils import (
require_multigpu,
require_sentencepiece,
require_tokenizers,
require_torch,
require_torch_multigpu,
slow,
torch_device,
)
@@ -558,7 +558,7 @@ def test_reformer_model_fp16_generate(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_reformer_model_fp16_generate(*config_and_inputs)

@require_multigpu
@require_torch_multigpu
def test_multigpu_data_parallel_forward(self):
# Opt-out of this test.
pass

4 changes: 2 additions & 2 deletions tests/test_modeling_transfo_xl.py
@@ -17,7 +17,7 @@
import unittest

from transformers import is_torch_available
from transformers.testing_utils import require_multigpu, require_torch, slow, torch_device
from transformers.testing_utils import require_torch, require_torch_multigpu, slow, torch_device

from .test_configuration_common import ConfigTester
from .test_modeling_common import ModelTesterMixin, ids_tensor
@@ -204,7 +204,7 @@ def test_transfo_xl_lm_head(self):
output_result = self.model_tester.create_transfo_xl_lm_head(*config_and_inputs)
self.model_tester.check_transfo_xl_lm_head_output(output_result)

@require_multigpu
@require_torch_multigpu
def test_multigpu_data_parallel_forward(self):
# Opt-out of this test.
pass

10 changes: 5 additions & 5 deletions tests/test_skip_decorators.py
@@ -34,7 +34,7 @@
import pytest

from parameterized import parameterized
from transformers.testing_utils import require_torch, require_torch_and_cuda, slow, torch_device
from transformers.testing_utils import require_torch, require_torch_gpu, slow, torch_device


# skipping in unittest tests
@@ -63,11 +63,11 @@ def check_slow_torch_cuda():
@require_torch
class SkipTester(unittest.TestCase):
@slow
@require_torch_and_cuda
@require_torch_gpu
def test_2_skips_slow_first(self):
check_slow_torch_cuda()

@require_torch_and_cuda
@require_torch_gpu
@slow
def test_2_skips_slow_last(self):
check_slow_torch_cuda()
@@ -97,12 +97,12 @@ def test_param_slow_last(self, param=None):


@slow
@require_torch_and_cuda
@require_torch_gpu
def test_pytest_2_skips_slow_first():
check_slow_torch_cuda()


@require_torch_and_cuda
@require_torch_gpu
@slow
def test_pytest_2_skips_slow_last():
check_slow_torch_cuda()