author    Bruno Oliveira <nicoddemus@gmail.com>  2020-07-08 17:51:01 -0400
committer Bruno Oliveira <nicoddemus@gmail.com>  2020-07-08 17:51:01 -0400
commit    7d033a89505484d03b1fc4a885ecea1777ca7457 (patch)
tree      34530bfde1091003fdc9c12af2875e8a259d5578 /doc/en/example
parent    64b19595a5820ba7fc0afbeba5d891361c00c9db (diff)
Prepare release version 6.0.0rc1
Diffstat (limited to 'doc/en/example')
-rw-r--r--  doc/en/example/markers.rst            38
-rw-r--r--  doc/en/example/nonpython.rst           9
-rw-r--r--  doc/en/example/parametrize.rst        33
-rw-r--r--  doc/en/example/pythoncollection.rst   14
-rw-r--r--  doc/en/example/reportingdemo.rst     123
-rw-r--r--  doc/en/example/simple.rst             24
6 files changed, 129 insertions, 112 deletions
diff --git a/doc/en/example/markers.rst b/doc/en/example/markers.rst
index 55dd8af23..454304679 100644
--- a/doc/en/example/markers.rst
+++ b/doc/en/example/markers.rst
@@ -45,7 +45,7 @@ You can then restrict a test run to only run tests marked with ``webtest``:
$ pytest -v -m webtest
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 3 deselected / 1 selected
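The runs in this file assume a small module along these lines (a hypothetical reconstruction from the test names and collection counts shown in the output; the file ``test_server.py`` is referenced in later hunks):

.. code-block:: python

    # content of test_server.py (hypothetical reconstruction)
    import pytest


    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app


    def test_something_quick():
        pass


    def test_another():
        pass


    class TestClass:
        def test_method(self):
            pass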
@@ -60,7 +60,7 @@ Or the inverse, running all tests except the webtest ones:
$ pytest -v -m "not webtest"
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 1 deselected / 3 selected
@@ -82,7 +82,7 @@ tests based on their module, class, method, or function name:
$ pytest -v test_server.py::TestClass::test_method
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 1 item
@@ -97,7 +97,7 @@ You can also select on the class:
$ pytest -v test_server.py::TestClass
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 1 item
@@ -112,7 +112,7 @@ Or select multiple nodes:
$ pytest -v test_server.py::TestClass test_server.py::test_send_http
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 2 items
@@ -156,7 +156,7 @@ The expression matching is now case-insensitive.
$ pytest -v -k http # running with the above defined example module
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 3 deselected / 1 selected
@@ -171,7 +171,7 @@ And you can also run all tests except the ones that match the keyword:
$ pytest -k "not send_http" -v
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 1 deselected / 3 selected
@@ -188,7 +188,7 @@ Or to select "http" and "quick" tests:
$ pytest -k "http or quick" -v
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 2 deselected / 2 selected
@@ -228,9 +228,9 @@ You can ask which markers exist for your test suite - the list includes our just
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
- @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/stable/skipping.html
+ @pytest.mark.skipif(condition, ..., *, reason=...): skip the given test function if any of the conditions evaluate to True. Example: skipif(sys.platform == 'win32') skips the test if we are on the win32 platform. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-skipif
- @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/skipping.html
+ @pytest.mark.xfail(condition, ..., *, reason=..., run=True, raises=None, strict=xfail_strict): mark the test function as an expected failure if any of the conditions evaluate to True. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-xfail
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/stable/parametrize.html for more info and examples.
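The updated help text reflects that conditions are now plain booleans evaluated when the module is imported, rather than strings passed to ``eval()``. A minimal sketch of the documented style:

.. code-block:: python

    import sys

    import pytest


    # boolean condition, evaluated at import time
    @pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
    def test_posix_only():
        pass


    # raises= restricts the expected failure to specific exceptions;
    # with strict=True an unexpected pass is reported as a failure.
    @pytest.mark.xfail(raises=ZeroDivisionError, strict=True)
    def test_known_bug():
        1 / 0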
@@ -398,7 +398,7 @@ the test needs:
$ pytest -E stage2
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
@@ -413,7 +413,7 @@ and here is one that specifies exactly the environment needed:
$ pytest -E stage1
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
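The ``-E`` option used in these runs is not built into pytest; it comes from a ``conftest.py`` hook. A sketch along the lines of the docs' example:

.. code-block:: python

    # content of conftest.py (a sketch of the custom -E option)
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "-E",
            action="store",
            metavar="NAME",
            help="only run tests matching the environment NAME.",
        )


    def pytest_configure(config):
        # register an additional marker
        config.addinivalue_line(
            "markers", "env(name): mark test to run only on named environment"
        )


    def pytest_runtest_setup(item):
        envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
        if envnames and item.config.getoption("-E") not in envnames:
            pytest.skip("test requires env in {!r}".format(envnames))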
@@ -433,9 +433,9 @@ The ``--markers`` option always gives you a list of available markers:
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
- @pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/stable/skipping.html
+ @pytest.mark.skipif(condition, ..., *, reason=...): skip the given test function if any of the conditions evaluate to True. Example: skipif(sys.platform == 'win32') skips the test if we are on the win32 platform. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-skipif
- @pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/skipping.html
+ @pytest.mark.xfail(condition, ..., *, reason=..., run=True, raises=None, strict=xfail_strict): mark the test function as an expected failure if any of the conditions evaluate to True. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/stable/reference.html#pytest-mark-xfail
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/stable/parametrize.html for more info and examples.
@@ -606,7 +606,7 @@ then you will see two tests skipped and two executed tests as expected:
$ pytest -rs # this option reports skip reasons
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
@@ -614,7 +614,7 @@ then you will see two tests skipped and two executed tests as expected:
test_plat.py s.s. [100%]
========================= short test summary info ==========================
- SKIPPED [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux
+ SKIPPED [2] conftest.py:12: cannot run on platform linux
======================= 2 passed, 2 skipped in 0.12s =======================
Note that if you specify a platform via the marker-command line option like this:
@@ -623,7 +623,7 @@ Note that if you specify a platform via the marker-command line option like this
$ pytest -m linux
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items / 3 deselected / 1 selected
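The ``cannot run on platform linux`` skips above come from a conftest hook that compares platform markers against ``sys.platform``; a sketch consistent with that skip message:

.. code-block:: python

    # content of conftest.py (a sketch of the platform-marker skip)
    import sys

    import pytest

    ALL = set("darwin linux win32".split())


    def pytest_runtest_setup(item):
        supported_platforms = ALL.intersection(
            mark.name for mark in item.iter_markers()
        )
        plat = sys.platform
        if supported_platforms and plat not in supported_platforms:
            pytest.skip("cannot run on platform {}".format(plat))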
@@ -687,7 +687,7 @@ We can now use the ``-m option`` to select one set:
$ pytest -m interface --tb=short
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items / 2 deselected / 2 selected
@@ -714,7 +714,7 @@ or to select both "event" and "interface" tests:
$ pytest -m "interface or event" --tb=short
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items / 1 deselected / 3 selected
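These selections assume tests marked in two groups; a sketch matching the deselection counts above (2 selected for ``interface``, 3 for ``interface or event``):

.. code-block:: python

    # content of test_module.py (a sketch matching the counts above)
    import pytest


    @pytest.mark.interface
    def test_interface_simple():
        assert 0


    @pytest.mark.interface
    def test_interface_complex():
        assert 0


    @pytest.mark.event
    def test_event_simple():
        assert 0


    def test_something_else():
        assert 0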
diff --git a/doc/en/example/nonpython.rst b/doc/en/example/nonpython.rst
index 083f6b439..d15b7ae8b 100644
--- a/doc/en/example/nonpython.rst
+++ b/doc/en/example/nonpython.rst
@@ -29,7 +29,7 @@ now execute the test specification:
nonpython $ pytest test_simple.yaml
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython
collected 2 items
@@ -66,7 +66,7 @@ consulted when reporting in ``verbose`` mode:
nonpython $ pytest -v
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython
collecting ... collected 2 items
@@ -92,11 +92,12 @@ interesting to just look at the collection tree:
nonpython $ pytest --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython
collected 2 items
- <Package $REGENDOC_TMPDIR/nonpython>
+
+ <Package nonpython>
<YamlFile test_simple.yaml>
<YamlItem hello>
<YamlItem ok>
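The ``YamlFile``/``YamlItem`` nodes in this tree are produced by a conftest that plugs into pytest's collection hooks; a condensed sketch (assumes PyYAML is installed):

.. code-block:: python

    # content of conftest.py (a condensed sketch of YAML collection)
    import pytest


    def pytest_collect_file(parent, path):
        # collect test*.yaml files next to regular Python test modules
        if path.ext == ".yaml" and path.basename.startswith("test"):
            return YamlFile.from_parent(parent, fspath=path)


    class YamlFile(pytest.File):
        def collect(self):
            import yaml  # local import so other test suites don't require it

            raw = yaml.safe_load(self.fspath.open())
            for name, spec in sorted(raw.items()):
                yield YamlItem.from_parent(self, name=name, spec=spec)


    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super().__init__(name, parent)
            self.spec = spec

        def runtest(self):
            # a "test" passes when every key in the spec equals its value
            for key, value in sorted(self.spec.items()):
                if key != value:
                    raise AssertionError(
                        "usecase failed: {!r} != {!r}".format(key, value)
                    )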
diff --git a/doc/en/example/parametrize.rst b/doc/en/example/parametrize.rst
index 0b61a19bc..8fa48bfe3 100644
--- a/doc/en/example/parametrize.rst
+++ b/doc/en/example/parametrize.rst
@@ -160,10 +160,11 @@ objects, they are still using the default pytest representation:
$ pytest test_time.py --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 8 items
+
<Module test_time.py>
<Function test_timedistance_v0[a0-b0-expected0]>
<Function test_timedistance_v0[a1-b1-expected1]>
@@ -224,7 +225,7 @@ this is a fully self-contained example which you can run with:
$ pytest test_scenarios.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
@@ -239,10 +240,11 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ pytest --collect-only test_scenarios.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
+
<Module test_scenarios.py>
<Class TestSampleWithScenarios>
<Function test_demo1[basic]>
@@ -317,10 +319,11 @@ Let's first see how it looks like at collection time:
$ pytest test_backends.py --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
+
<Module test_backends.py>
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
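The ``d1``/``d2`` ids come from indirect parametrization of a ``db`` fixture; a self-contained sketch (``DB1``/``DB2`` are illustrative stand-ins):

.. code-block:: python

    # content of test_backends.py (a self-contained sketch)
    import pytest


    class DB1:
        "one database object"


    class DB2:
        "alternative database object"


    @pytest.fixture
    def db(request):
        # instantiate the backend named by the parameter
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        raise ValueError("invalid internal test config")


    @pytest.mark.parametrize("db", ["d1", "d2"], indirect=True)
    def test_db_initialized(db):
        assert db is not None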
@@ -415,15 +418,14 @@ The result of this test will be successful:
$ pytest -v test_indirect_list.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
- collected 1 item
-
- test_indirect_list.py::test_indirect[a-b] PASSED
+ collecting ... collected 1 item
- ========================== 1 passed in 0.01s ===============================
+ test_indirect_list.py::test_indirect[a-b] PASSED [100%]
+ ============================ 1 passed in 0.12s =============================
.. regendoc:wipe
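In the ``[a-b]`` id above only ``x`` is routed through its fixture, while ``y`` is passed directly; a sketch consistent with that run:

.. code-block:: python

    # content of test_indirect_list.py (a sketch consistent with the run above)
    import pytest


    @pytest.fixture
    def x(request):
        return request.param * 3


    @pytest.fixture
    def y(request):
        return request.param * 2


    @pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
    def test_indirect(x, y):
        # x went through its fixture ("a" * 3); y arrived untouched
        assert x == "aaa"
        assert y == "b"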
@@ -506,10 +508,11 @@ Running it results in some skips if we don't have all the python interpreters in
.. code-block:: pytest
. $ pytest -rs -q multipython.py
- ssssssssssss......sss...... [100%]
+ ssssssssssssssssssssssss... [100%]
========================= short test summary info ==========================
- SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:29: 'python3.5' not found
- 12 passed, 15 skipped in 0.12s
+ SKIPPED [12] multipython.py:29: 'python3.5' not found
+ SKIPPED [12] multipython.py:29: 'python3.6' not found
+ 3 passed, 24 skipped in 0.12s
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -569,7 +572,7 @@ If you run this with reporting for skips enabled:
$ pytest -rs test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
@@ -577,7 +580,7 @@ If you run this with reporting for skips enabled:
test_module.py .s [100%]
========================= short test summary info ==========================
- SKIPPED [1] $REGENDOC_TMPDIR/conftest.py:12: could not import 'opt2': No module named 'opt2'
+ SKIPPED [1] conftest.py:12: could not import 'opt2': No module named 'opt2'
======================= 1 passed, 1 skipped in 0.12s =======================
You'll see that we don't have an ``opt2`` module and thus the second test run
@@ -631,7 +634,7 @@ Then run ``pytest`` with verbose mode and with only the ``basic`` marker:
$ pytest -v -m basic
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 14 items / 11 deselected / 3 selected
diff --git a/doc/en/example/pythoncollection.rst b/doc/en/example/pythoncollection.rst
index 30d106ada..85e5da263 100644
--- a/doc/en/example/pythoncollection.rst
+++ b/doc/en/example/pythoncollection.rst
@@ -147,10 +147,11 @@ The test collection would look like this:
$ pytest --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
- rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
+ rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 2 items
+
<Module check_myapp.py>
<Class CheckMyApp>
<Function simple_check>
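The ``CheckMyApp``/``simple_check`` names are collected because the naming conventions were widened in the configuration file shown in the header; a sketch:

.. code-block:: ini

    # content of pytest.ini (a sketch of the widened naming conventions)
    [pytest]
    python_files = check_*.py
    python_classes = Check
    python_functions = *_check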
@@ -208,10 +209,11 @@ You can always peek at the collection tree without running tests like this:
. $ pytest --collect-only pythoncollection.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
- rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
+ rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 3 items
+
<Module CWD/pythoncollection.py>
<Function test_function>
<Class TestClass>
@@ -289,9 +291,9 @@ file will be left out:
$ pytest --collect-only
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
- rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
+ rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 0 items
========================== no tests ran in 0.12s ===========================
diff --git a/doc/en/example/reportingdemo.rst b/doc/en/example/reportingdemo.rst
index 23c302eca..f1b973f3b 100644
--- a/doc/en/example/reportingdemo.rst
+++ b/doc/en/example/reportingdemo.rst
@@ -9,7 +9,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
assertion $ pytest failure_demo.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/assertion
collected 44 items
@@ -26,7 +26,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> assert param1 * 2 < param2
E assert (3 * 2) < 6
- failure_demo.py:20: AssertionError
+ failure_demo.py:19: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
@@ -43,7 +43,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E + and 43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()
- failure_demo.py:31: AssertionError
+ failure_demo.py:30: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
@@ -51,7 +51,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
def test_simple_multiline(self):
> otherfunc_multi(42, 6 * 9)
- failure_demo.py:34:
+ failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
@@ -60,7 +60,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> assert a == b
E assert 42 == 54
- failure_demo.py:15: AssertionError
+ failure_demo.py:14: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0xdeadbeef>
@@ -73,7 +73,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert not 42
E + where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()
- failure_demo.py:40: AssertionError
+ failure_demo.py:39: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -84,7 +84,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E - eggs
E + spam
- failure_demo.py:45: AssertionError
+ failure_demo.py:44: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -97,7 +97,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + foo 1 bar
E ? ^
- failure_demo.py:48: AssertionError
+ failure_demo.py:47: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -110,7 +110,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + spam
E bar
- failure_demo.py:51: AssertionError
+ failure_demo.py:50: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -127,7 +127,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + 1111111111a222222222
E ? ^
- failure_demo.py:56: AssertionError
+ failure_demo.py:55: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -147,7 +147,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E
E ...Full output truncated (7 lines hidden), use '-vv' to show
- failure_demo.py:61: AssertionError
+ failure_demo.py:60: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -158,7 +158,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E At index 2 diff: 2 != 3
E Use -v to get the full diff
- failure_demo.py:64: AssertionError
+ failure_demo.py:63: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -171,7 +171,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E At index 100 diff: 1 != 2
E Use -v to get the full diff
- failure_demo.py:69: AssertionError
+ failure_demo.py:68: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -189,7 +189,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:72: AssertionError
+ failure_demo.py:71: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -207,7 +207,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:75: AssertionError
+ failure_demo.py:74: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -218,7 +218,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E Right contains one more item: 3
E Use -v to get the full diff
- failure_demo.py:78: AssertionError
+ failure_demo.py:77: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -227,7 +227,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
- failure_demo.py:81: AssertionError
+ failure_demo.py:80: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -246,7 +246,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
- failure_demo.py:85: AssertionError
+ failure_demo.py:84: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -259,7 +259,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E single foo line
E ? +++
- failure_demo.py:89: AssertionError
+ failure_demo.py:88: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -272,7 +272,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
- failure_demo.py:93: AssertionError
+ failure_demo.py:92: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -285,7 +285,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- failure_demo.py:97: AssertionError
+ failure_demo.py:96: AssertionError
______________ TestSpecialisedExplanations.test_eq_dataclass _______________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -302,11 +302,17 @@ Here is a nice run of several failures and how ``pytest`` presents things:
right = Foo(1, "c")
> assert left == right
E AssertionError: assert TestSpecialis...oo(a=1, b='b') == TestSpecialis...oo(a=1, b='c')
+ E
E Omitting 1 identical items, use -vv to show
E Differing attributes:
- E b: 'b' != 'c'
+ E ['b']
+ E
+ E Drill down into differing attribute b:
+ E b: 'b' != 'c'...
+ E
+ E ...Full output truncated (3 lines hidden), use '-vv' to show
- failure_demo.py:109: AssertionError
+ failure_demo.py:108: AssertionError
________________ TestSpecialisedExplanations.test_eq_attrs _________________
self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>
@@ -323,11 +329,17 @@ Here is a nice run of several failures and how ``pytest`` presents things:
right = Foo(1, "c")
> assert left == right
E AssertionError: assert Foo(a=1, b='b') == Foo(a=1, b='c')
+ E
E Omitting 1 identical items, use -vv to show
E Differing attributes:
- E b: 'b' != 'c'
+ E ['b']
+ E
+ E Drill down into differing attribute b:
+ E b: 'b' != 'c'...
+ E
+ E ...Full output truncated (3 lines hidden), use '-vv' to show
- failure_demo.py:121: AssertionError
+ failure_demo.py:120: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
@@ -339,7 +351,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 1 == 2
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b
- failure_demo.py:129: AssertionError
+ failure_demo.py:128: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
@@ -351,7 +363,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()
- failure_demo.py:136: AssertionError
+ failure_demo.py:135: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
@@ -364,7 +376,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
i = Foo()
> assert i.b == 2
- failure_demo.py:147:
+ failure_demo.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>
@@ -373,7 +385,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> raise Exception("Failed to get attrib")
E Exception: Failed to get attrib
- failure_demo.py:142: Exception
+ failure_demo.py:141: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
@@ -390,7 +402,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + and 2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E + where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()
- failure_demo.py:157: AssertionError
+ failure_demo.py:156: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -400,7 +412,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> raises(TypeError, int, s)
E ValueError: invalid literal for int() with base 10: 'qwe'
- failure_demo.py:167: ValueError
+ failure_demo.py:166: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -409,7 +421,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> raises(OSError, int, "3")
E Failed: DID NOT RAISE <class 'OSError'>
- failure_demo.py:170: Failed
+ failure_demo.py:169: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -418,7 +430,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> raise ValueError("demo error")
E ValueError: demo error
- failure_demo.py:173: ValueError
+ failure_demo.py:172: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -427,7 +439,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> a, b = [1] # NOQA
E ValueError: not enough values to unpack (expected 2, got 1)
- failure_demo.py:176: ValueError
+ failure_demo.py:175: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -438,7 +450,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> a, b = items.pop()
E TypeError: cannot unpack non-iterable int object
- failure_demo.py:181: TypeError
+ failure_demo.py:180: TypeError
--------------------------- Captured stdout call ---------------------------
items is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
@@ -449,7 +461,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> if namenotexi: # NOQA
E NameError: name 'namenotexi' is not defined
- failure_demo.py:184: NameError
+ failure_demo.py:183: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
@@ -460,19 +472,18 @@ Here is a nice run of several failures and how ``pytest`` presents things:
name = "abc-123"
spec = importlib.util.spec_from_loader(name, loader=None)
module = importlib.util.module_from_spec(spec)
- code = _pytest._code.compile(src, name, "exec")
+ code = compile(src, name, "exec")
exec(code, module.__dict__)
sys.modules[name] = module
> module.foo()
- failure_demo.py:203:
+ failure_demo.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- def foo():
- > assert 1 == 0
- E AssertionError
+ > ???
+ E AssertionError
- <0-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:200>:2: AssertionError
+ abc-123:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -486,9 +497,9 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> somefunc(f(), g())
- failure_demo.py:214:
+ failure_demo.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
- failure_demo.py:11: in somefunc
+ failure_demo.py:10: in somefunc
otherfunc(x, y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
@@ -498,7 +509,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> assert a == b
E assert 44 == 43
- failure_demo.py:7: AssertionError
+ failure_demo.py:6: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -508,7 +519,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> a, b = items
E ValueError: not enough values to unpack (expected 2, got 0)
- failure_demo.py:218: ValueError
+ failure_demo.py:217: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -518,7 +529,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> a, b = items
E TypeError: cannot unpack non-iterable int object
- failure_demo.py:222: TypeError
+ failure_demo.py:221: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -531,7 +542,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
- failure_demo.py:227: AssertionError
+ failure_demo.py:226: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -550,7 +561,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
- failure_demo.py:236: AssertionError
+ failure_demo.py:235: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -561,7 +572,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E + where False = isinstance(43, float)
E + where 43 = globf(42)
- failure_demo.py:239: AssertionError
+ failure_demo.py:238: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -572,7 +583,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
- failure_demo.py:243: AssertionError
+ failure_demo.py:242: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -582,7 +593,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 11 < 5
E + where 11 = globf(10)
- failure_demo.py:246: AssertionError
+ failure_demo.py:245: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@@ -593,7 +604,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
> assert x == 0
E assert 1 == 0
- failure_demo.py:251: AssertionError
+ failure_demo.py:250: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@@ -608,7 +619,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
- failure_demo.py:262: AssertionError
+ failure_demo.py:261: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@@ -627,7 +638,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
- failure_demo.py:269: AssertionError
+ failure_demo.py:268: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@@ -649,7 +660,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
- failure_demo.py:282: AssertionError
+ failure_demo.py:281: AssertionError
========================= short test summary info ==========================
FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
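All of the rich introspection above comes from plain ``assert`` statements, which pytest rewrites at import time; a two-line test is enough to reproduce the string diff shown for ``test_eq_text``:

.. code-block:: python

    def test_eq_text():
        # assertion rewriting turns this into the -/+ diff shown above
        assert "spam" == "eggs"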
diff --git a/doc/en/example/simple.rst b/doc/en/example/simple.rst
index d1a1ecdfc..e9952dad4 100644
--- a/doc/en/example/simple.rst
+++ b/doc/en/example/simple.rst
@@ -175,7 +175,7 @@ directory with the above conftest.py:
$ pytest
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items
@@ -240,7 +240,7 @@ and when running it will see a skipped "slow" test:
$ pytest -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
@@ -257,7 +257,7 @@ Or run it including the ``slow`` marked test:
$ pytest --runslow
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
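The ``--runslow`` flag in these runs comes from a conftest that skips ``slow``-marked tests unless the option is given; a sketch:

.. code-block:: python

    # content of conftest.py (a sketch consistent with the runs above)
    import pytest


    def pytest_addoption(parser):
        parser.addoption(
            "--runslow", action="store_true", default=False, help="run slow tests"
        )


    def pytest_configure(config):
        config.addinivalue_line("markers", "slow: mark test as slow to run")


    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given on the command line: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)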
@@ -401,7 +401,7 @@ which will add the string to the test header accordingly:
$ pytest
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR
@@ -430,7 +430,7 @@ which will add info only when run with "--v":
$ pytest -v
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
info1: did you know that ...
did you?
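Header lines like ``info1: ...`` are contributed by the ``pytest_report_header`` hook; a sketch that only adds them in verbose mode, matching the output above:

.. code-block:: python

    # content of conftest.py (a sketch matching the verbose-only header above)
    def pytest_report_header(config):
        if config.getoption("verbose") > 0:
            return ["info1: did you know that ...", "did you?"]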
@@ -445,7 +445,7 @@ and nothing when run plainly:
$ pytest
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items
@@ -485,14 +485,14 @@ Now we can profile which test functions execute the slowest:
$ pytest --durations=3
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items
test_some_are_slow.py ... [100%]
- ========================= slowest 3 test durations =========================
+ =========================== slowest 3 durations ============================
0.30s call test_some_are_slow.py::test_funcslow2
0.20s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
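The durations in this report imply a module along these lines (a sketch using ``time.sleep`` to stand in for slow work):

.. code-block:: python

    # content of test_some_are_slow.py (a sketch matching the durations above)
    import time


    def test_funcfast():
        time.sleep(0.1)


    def test_funcslow1():
        time.sleep(0.2)


    def test_funcslow2():
        time.sleep(0.3)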
@@ -591,7 +591,7 @@ If we run this:
$ pytest -rx
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
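This run exercises the incremental-testing pattern, where a failing step makes the remaining steps in the same class xfail; a simplified sketch (the full docs example also handles parametrized steps):

.. code-block:: python

    # content of conftest.py (a simplified sketch of the incremental pattern)
    import pytest


    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords and call.excinfo is not None:
            # remember the first failing step on the parent (the class)
            item.parent._previousfailed = item


    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed ({})".format(previousfailed.name))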
@@ -675,7 +675,7 @@ We can run this:
$ pytest
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 7 items
@@ -794,7 +794,7 @@ and run them:
$ pytest test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
@@ -901,7 +901,7 @@ and run it:
$ pytest -s test_module.py
=========================== test session starts ============================
- platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
+ platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items