Preparing release version 5.1.1
asottile committed Aug 20, 2019
1 parent daff906 commit b135f5a
Showing 26 changed files with 117 additions and 85 deletions.
9 changes: 9 additions & 0 deletions CHANGELOG.rst
@@ -18,6 +18,15 @@ with advance notice in the **Deprecations** section of releases.
.. towncrier release notes start
pytest 5.1.1 (2019-08-20)
=========================

Bug Fixes
---------

- `#5751 <https://github.com/pytest-dev/pytest/issues/5751>`_: Fixed ``TypeError`` when importing pytest on Python 3.5.0 and 3.5.1.


pytest 5.1.0 (2019-08-15)
=========================

1 change: 0 additions & 1 deletion changelog/5751.bugfix.rst

This file was deleted.

1 change: 1 addition & 0 deletions doc/en/announce/index.rst
@@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2


release-5.1.1
release-5.1.0
release-5.0.1
release-5.0.0
24 changes: 24 additions & 0 deletions doc/en/announce/release-5.1.1.rst
@@ -0,0 +1,24 @@
pytest-5.1.1
=======================================

pytest 5.1.1 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

* Anthony Sottile
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Hugo van Kemenade
* Ran Benita
* Ronny Pfannschmidt


Happy testing,
The pytest Development Team
6 changes: 3 additions & 3 deletions doc/en/assert.rst
@@ -47,7 +47,7 @@ you will see the return value of the function call:
E + where 3 = f()
test_assert1.py:6: AssertionError
============================ 1 failed in 0.05s =============================
============================ 1 failed in 0.02s =============================
``pytest`` has support for showing the values of the most common subexpressions
including calls, attributes, comparisons, and binary and unary
@@ -208,7 +208,7 @@ if you run this module:
E Use -v to get the full diff
test_assert2.py:6: AssertionError
============================ 1 failed in 0.05s =============================
============================ 1 failed in 0.02s =============================
Special comparisons are done for a number of cases:

@@ -279,7 +279,7 @@ the conftest file:
E vals: 1 != 2
test_foocompare.py:12: AssertionError
1 failed in 0.05s
1 failed in 0.02s
.. _assert-details:
.. _`assert introspection`:
2 changes: 1 addition & 1 deletion doc/en/builtin.rst
@@ -160,7 +160,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
in python < 3.6 this is a pathlib2.Path
no tests ran in 0.01s
no tests ran in 0.00s
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:

12 changes: 6 additions & 6 deletions doc/en/cache.rst
@@ -75,7 +75,7 @@ If you run this for the first time you will see two failures:
E Failed: bad luck
test_50.py:7: Failed
2 failed, 48 passed in 0.16s
2 failed, 48 passed in 0.08s
If you then run it with ``--lf``:

@@ -114,7 +114,7 @@ If you then run it with ``--lf``:
E Failed: bad luck
test_50.py:7: Failed
===================== 2 failed, 48 deselected in 0.07s =====================
===================== 2 failed, 48 deselected in 0.02s =====================
You have run only the two failing tests from the last run, while the 48 passing
tests have not been run ("deselected").
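For reference, a sketch of the two cache-plugin invocations being compared here:

.. code-block:: bash

    pytest --lf   # --last-failed: rerun only the tests that failed last time
    pytest --ff   # --failed-first: run all tests, but previous failures first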
@@ -158,7 +158,7 @@ of ``FF`` and dots):
E Failed: bad luck
test_50.py:7: Failed
======================= 2 failed, 48 passed in 0.15s =======================
======================= 2 failed, 48 passed in 0.07s =======================
.. _`config.cache`:

@@ -230,7 +230,7 @@ If you run this command for the first time, you can see the print statement:
test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
1 failed in 0.05s
1 failed in 0.02s
If you run it a second time, the value will be retrieved from
the cache and nothing will be printed:
@@ -249,7 +249,7 @@ the cache and nothing will be printed:
E assert 42 == 23
test_caching.py:20: AssertionError
1 failed in 0.05s
1 failed in 0.02s
See the :ref:`cache-api` for more details.

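A minimal sketch of the fixture pattern exercised in this example, using the ``config.cache`` get/set API (``expensive_computation`` is a hypothetical stand-in for the real work):

.. code-block:: python

    import pytest


    @pytest.fixture
    def mydata(request):
        # cache.get returns the stored value, or the default when the key is missing
        val = request.config.cache.get("example/value", None)
        if val is None:
            val = expensive_computation()  # hypothetical helper
            request.config.cache.set("example/value", val)
        return val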
@@ -300,7 +300,7 @@ filtering:
example/value contains:
42
========================== no tests ran in 0.01s ===========================
========================== no tests ran in 0.00s ===========================
Clearing Cache content
----------------------
2 changes: 1 addition & 1 deletion doc/en/capture.rst
@@ -91,7 +91,7 @@ of the failing function and hide the other one:
test_module.py:12: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
======================= 1 failed, 1 passed in 0.05s ========================
======================= 1 failed, 1 passed in 0.02s ========================
Accessing captured output from a test function
---------------------------------------------------
4 changes: 2 additions & 2 deletions doc/en/doctest.rst
@@ -36,7 +36,7 @@ then you can just invoke ``pytest`` directly:
test_example.txt . [100%]
============================ 1 passed in 0.02s =============================
============================ 1 passed in 0.01s =============================
By default, pytest will collect ``test*.txt`` files looking for doctest directives, but you
can pass additional globs using the ``--doctest-glob`` option (multi-allowed).
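For example, to also collect doctests from reStructuredText files (the glob is illustrative):

.. code-block:: bash

    pytest --doctest-glob="*.rst"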
@@ -66,7 +66,7 @@ and functions, including from test modules:
mymodule.py . [ 50%]
test_example.txt . [100%]
============================ 2 passed in 0.03s =============================
============================ 2 passed in 0.01s =============================
You can make these changes permanent in your project by
putting them into a pytest.ini file like this:
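A sketch of such a configuration, assuming the ``--doctest-modules`` option used above:

.. code-block:: ini

    # content of pytest.ini (sketch)
    [pytest]
    addopts = --doctest-modules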
18 changes: 9 additions & 9 deletions doc/en/example/markers.rst
@@ -69,7 +69,7 @@ Or the inverse, running all tests except the webtest ones:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
===================== 3 passed, 1 deselected in 0.02s ======================
===================== 3 passed, 1 deselected in 0.01s ======================
Selecting tests based on their node ID
--------------------------------------
@@ -120,7 +120,7 @@ Or select multiple nodes:
test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]
============================ 2 passed in 0.02s =============================
============================ 2 passed in 0.01s =============================
.. _node-id:

@@ -176,7 +176,7 @@ And you can also run all tests except the ones that match the keyword:
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]
===================== 3 passed, 1 deselected in 0.02s ======================
===================== 3 passed, 1 deselected in 0.01s ======================
Or to select "http" and "quick" tests:

@@ -192,7 +192,7 @@ Or to select "http" and "quick" tests:
test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]
===================== 2 passed, 2 deselected in 0.02s ======================
===================== 2 passed, 2 deselected in 0.01s ======================
.. note::

@@ -413,7 +413,7 @@ the test needs:
test_someenv.py s [100%]
============================ 1 skipped in 0.01s ============================
============================ 1 skipped in 0.00s ============================
and here is one that specifies exactly the environment needed:

@@ -499,7 +499,7 @@ The output is as follows:
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
.
1 passed in 0.01s
1 passed in 0.00s
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.

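A minimal sketch of the two spellings (``my_marker`` and ``hello_world`` are the names from the example above):

.. code-block:: python

    import pytest


    def hello_world(*args, **kwargs):
        return "Hello World"


    # Calling the mark decorator with a single callable would treat it as the
    # function being decorated, so with_args() stores it as a marker argument.
    @pytest.mark.my_marker.with_args(hello_world)
    def test_with_args():
        pass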
@@ -623,7 +623,7 @@ then you will see two tests skipped and two executed tests as expected:
========================= short test summary info ==========================
SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:13: cannot run on platform linux
======================= 2 passed, 2 skipped in 0.02s =======================
======================= 2 passed, 2 skipped in 0.01s =======================
Note that if you specify a platform via the marker-command line option like this:

@@ -711,7 +711,7 @@ We can now use the ``-m option`` to select one set:
test_module.py:8: in test_interface_complex
assert 0
E assert 0
===================== 2 failed, 2 deselected in 0.07s ======================
===================== 2 failed, 2 deselected in 0.02s ======================
or to select both "event" and "interface" tests:

@@ -739,4 +739,4 @@ or to select both "event" and "interface" tests:
test_module.py:12: in test_event_simple
assert 0
E assert 0
===================== 3 failed, 1 deselected in 0.07s ======================
===================== 3 failed, 1 deselected in 0.03s ======================
6 changes: 3 additions & 3 deletions doc/en/example/nonpython.rst
@@ -41,7 +41,7 @@ now execute the test specification:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
======================= 1 failed, 1 passed in 0.06s ========================
======================= 1 failed, 1 passed in 0.02s ========================
.. regendoc:wipe
@@ -77,7 +77,7 @@ consulted when reporting in ``verbose`` mode:
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
======================= 1 failed, 1 passed in 0.07s ========================
======================= 1 failed, 1 passed in 0.02s ========================
.. regendoc:wipe
@@ -97,4 +97,4 @@ interesting to just look at the collection tree:
<YamlItem hello>
<YamlItem ok>
========================== no tests ran in 0.05s ===========================
========================== no tests ran in 0.02s ===========================
27 changes: 13 additions & 14 deletions doc/en/example/parametrize.rst
@@ -73,7 +73,7 @@ let's run the full monty:
E assert 4 < 4
test_compute.py:4: AssertionError
1 failed, 4 passed in 0.06s
1 failed, 4 passed in 0.02s
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
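A sketch of the kind of test and hook that produce this output; the failing ``assert 4 < 4`` follows from parametrizing ``param1`` over ``range(5)``:

.. code-block:: python

    # content of test_compute.py (sketch)
    def test_compute(param1):
        assert param1 < 4


    # content of conftest.py (sketch)
    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            metafunc.parametrize("param1", range(5))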
@@ -172,7 +172,7 @@ objects, they are still using the default pytest representation:
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>
========================== no tests ran in 0.02s ===========================
========================== no tests ran in 0.01s ===========================
In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
together with the actual data, instead of listing them separately.
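A sketch of that style, with ``pytest.param`` carrying both the data and the ``id`` (the arguments are illustrative):

.. code-block:: python

    from datetime import datetime, timedelta

    import pytest


    @pytest.mark.parametrize(
        "a,b,expected",
        [
            pytest.param(
                datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"
            ),
            pytest.param(
                datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"
            ),
        ],
    )
    def test_timedistance_v3(a, b, expected):
        assert a - b == expected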
@@ -229,7 +229,7 @@ this is a fully self-contained example which you can run with:
test_scenarios.py .... [100%]
============================ 4 passed in 0.02s =============================
============================ 4 passed in 0.01s =============================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:

@@ -248,7 +248,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
========================== no tests ran in 0.02s ===========================
========================== no tests ran in 0.01s ===========================
Note that we told ``metafunc.parametrize()`` that your scenario values
should be considered class-scoped. With pytest-2.3 this leads to a
@@ -323,7 +323,7 @@ Let's first see how it looks like at collection time:
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
========================== no tests ran in 0.01s ===========================
========================== no tests ran in 0.00s ===========================
And then when we run the test:

@@ -343,7 +343,7 @@ And then when we run the test:
E Failed: deliberately failing for demo purposes
test_backends.py:8: Failed
1 failed, 1 passed in 0.05s
1 failed, 1 passed in 0.02s
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.

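A sketch of how a ``conftest.py`` can pair ``pytest_generate_tests`` with an indirect ``db`` fixture (the backend classes are illustrative):

.. code-block:: python

    # content of conftest.py (sketch)
    import pytest


    class DB1:
        "one database object"


    class DB2:
        "alternative database object"


    def pytest_generate_tests(metafunc):
        # generate one test call per backend at collection time
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)


    @pytest.fixture
    def db(request):
        # instantiate the requested backend during setup
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        raise ValueError("invalid internal test config")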
@@ -394,7 +394,7 @@ The result of this test will be successful:
<Module test_indirect_list.py>
<Function test_indirect[a-b]>
========================== no tests ran in 0.01s ===========================
========================== no tests ran in 0.00s ===========================
.. regendoc:wipe
@@ -454,7 +454,7 @@ argument sets to use for each test function. Let's run it:
E assert 1 == 2
test_parametrize.py:21: AssertionError
1 failed, 2 passed in 0.07s
1 failed, 2 passed in 0.03s
Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@@ -475,11 +475,10 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest
$ pytest -rs -q multipython.py
ssssssssssss...ssssssssssss [100%]
ssssssssssss......sss...... [100%]
========================= short test summary info ==========================
SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
SKIPPED [12] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.7' not found
3 passed, 24 skipped in 0.43s
SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.5' not found
12 passed, 15 skipped in 0.62s
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -548,7 +547,7 @@ If you run this with reporting for skips enabled:
========================= short test summary info ==========================
SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:13: could not import 'opt2': No module named 'opt2'
======================= 1 passed, 1 skipped in 0.02s =======================
======================= 1 passed, 1 skipped in 0.01s =======================
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
@@ -610,7 +609,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
test_pytest_param_example.py::test_eval[basic_2+4] PASSED [ 66%]
test_pytest_param_example.py::test_eval[basic_6*9] XFAIL [100%]
=============== 2 passed, 15 deselected, 1 xfailed in 0.23s ================
=============== 2 passed, 15 deselected, 1 xfailed in 0.08s ================
As the result:

4 changes: 2 additions & 2 deletions doc/en/example/pythoncollection.rst
@@ -221,7 +221,7 @@ You can always peek at the collection tree without running tests like this:
<Function test_method>
<Function test_anothermethod>
========================== no tests ran in 0.01s ===========================
========================== no tests ran in 0.00s ===========================
.. _customizing-test-collection:

@@ -297,7 +297,7 @@ file will be left out:
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items
========================== no tests ran in 0.04s ===========================
========================== no tests ran in 0.01s ===========================
It's also possible to ignore files based on Unix shell-style wildcards by adding
patterns to ``collect_ignore_glob``.
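For example, a ``conftest.py`` entry along these lines (the pattern is illustrative):

.. code-block:: python

    # content of conftest.py (sketch)
    collect_ignore_glob = ["*_py2.py"]  # ignore any module whose name ends in _py2.py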
2 changes: 1 addition & 1 deletion doc/en/example/reportingdemo.rst
@@ -650,4 +650,4 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:282: AssertionError
============================ 44 failed in 0.82s ============================
============================ 44 failed in 0.26s ============================