.. _usage:

Usage and Invocations
==========================================


.. _cmdline:

Calling pytest through ``python -m pytest``
-----------------------------------------------------



You can invoke testing through the Python interpreter from the command line:

.. code-block:: text

    python -m pytest [...]

This is almost equivalent to invoking the command line script ``pytest [...]``
directly, except that calling via ``python`` will also add the current directory to ``sys.path``.

Possible exit codes
--------------------------------------------------------------

Running ``pytest`` can result in six different exit codes:

:Exit code 0: All tests were collected and passed successfully
:Exit code 1: Tests were collected and run but some of the tests failed
:Exit code 2: Test execution was interrupted by the user
:Exit code 3: Internal error happened while executing tests
:Exit code 4: pytest command line usage error
:Exit code 5: No tests were collected

They are represented by the :class:`_pytest.config.ExitCode` enum. Since the exit codes are part of the public API, they can be imported and accessed directly:

.. code-block:: python

    from pytest import ExitCode
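
For example, a minimal sketch that branches on the value returned by
``pytest.main`` (the ``tests/`` path here is hypothetical):

.. code-block:: python

    import pytest

    # pytest.main() returns a member of pytest.ExitCode (an IntEnum,
    # so comparisons against plain integers also work)
    exit_code = pytest.main(["tests/"])

    if exit_code == pytest.ExitCode.NO_TESTS_COLLECTED:
        print("no tests were found under tests/")
    elif exit_code != pytest.ExitCode.OK:
        print(f"test run finished with {exit_code!r}")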

.. note::

    If you would like to customize the exit code in some scenarios, especially when
    no tests are collected, consider using the
    `pytest-custom_exit_code <https://github.com/yashtodi94/pytest-custom_exit_code>`__
    plugin.


Getting help on version, option names, environment variables
--------------------------------------------------------------

.. code-block:: bash

    pytest --version   # shows where pytest was imported from
    pytest --fixtures  # show available builtin function arguments
    pytest -h | --help # show help on command line and config file options


.. _maxfail:

Stopping after the first (or N) failures
---------------------------------------------------

To stop the testing process after the first (N) failures:

.. code-block:: bash

    pytest -x           # stop after first failure
    pytest --maxfail=2  # stop after two failures

.. _select-tests:

Specifying tests / selecting tests
---------------------------------------------------

Pytest supports several ways to run and select tests from the command-line.

**Run tests in a module**

.. code-block:: bash

    pytest test_mod.py

**Run tests in a directory**

.. code-block:: bash

    pytest testing/

**Run tests by keyword expressions**

.. code-block:: bash

    pytest -k "MyClass and not method"

This will run tests whose names match the given *string expression* (case-insensitive),
which can include Python operators that use filenames, class names and function names as variables.
The example above will run ``TestMyClass.test_something`` but not ``TestMyClass.test_method_simple``.
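
As an illustration, here is a hypothetical ``test_mod.py`` showing which of the
two tests the expression above selects:

.. code-block:: python

    # content of test_mod.py (hypothetical)
    class TestMyClass:
        def test_something(self):  # selected: matches "MyClass" and not "method"
            assert True

        def test_method_simple(self):  # deselected: its name contains "method"
            assert True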

.. _nodeids:

**Run tests by node ids**

Each collected test is assigned a unique ``nodeid`` which consists of the module filename followed
by specifiers like class names, function names and parameters from parametrization, separated by ``::`` characters.

To run a specific test within a module:

.. code-block:: bash

    pytest test_mod.py::test_func


Another example specifying a test method in the command line:

.. code-block:: bash

    pytest test_mod.py::TestClass::test_method
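
Node ids for parametrized tests include the parameter set in brackets. As a
hypothetical example, assuming ``test_func`` is parametrized with an id ``x1``,
a single variant can be selected like this (quoting guards against shell globbing):

.. code-block:: bash

    pytest "test_mod.py::test_func[x1]"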

**Run tests by marker expressions**

.. code-block:: bash

    pytest -m slow

This will run all tests which are decorated with the ``@pytest.mark.slow`` decorator.
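
As a sketch, a hypothetical module containing one marked and one unmarked test
(custom markers such as ``slow`` should be registered, see the reference below):

.. code-block:: python

    # content of test_markers.py (hypothetical)
    import pytest


    @pytest.mark.slow
    def test_long_running():  # selected by "pytest -m slow"
        assert True


    def test_quick():  # deselected
        assert True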

For more information see :ref:`marks <mark>`.

**Run tests from packages**

.. code-block:: bash

    pytest --pyargs pkg.testing

This will import ``pkg.testing`` and use its filesystem location to find and run tests from.


Modifying Python traceback printing
----------------------------------------------

Examples for modifying traceback printing:

.. code-block:: bash

    pytest --showlocals # show local variables in tracebacks
    pytest -l           # show local variables (shortcut)

    pytest --tb=auto    # (default) 'long' tracebacks for the first and last
                        # entry, but 'short' style for the other entries
    pytest --tb=long    # exhaustive, informative traceback formatting
    pytest --tb=short   # shorter traceback format
    pytest --tb=line    # only one line per failure
    pytest --tb=native  # Python standard library formatting
    pytest --tb=no      # no traceback at all

The ``--full-trace`` option causes very long traces to be printed on error (longer
than ``--tb=long``). It also ensures that a stack trace is printed on
**KeyboardInterrupt** (Ctrl+C).
This is very useful if the tests are taking too long and you interrupt them
with Ctrl+C to find out where the tests are *hanging*. By default no output
will be shown (because KeyboardInterrupt is caught by pytest). By using this
option you make sure a trace is shown.


.. _`pytest.detailed_failed_tests_usage`:

Detailed summary report
-----------------------

The ``-r`` flag can be used to display a "short test summary info" at the end of the test session,
making it easy in large test suites to get a clear picture of all failures, skips, xfails, etc.

It defaults to ``fE`` to list failures and errors.

Example:

.. code-block:: python

    # content of test_example.py
    import pytest


    @pytest.fixture
    def error_fixture():
        assert 0


    def test_ok():
        print("ok")


    def test_fail():
        assert 0


    def test_error(error_fixture):
        pass


    def test_skip():
        pytest.skip("skipping this test")


    def test_xfail():
        pytest.xfail("xfailing this test")


    @pytest.mark.xfail(reason="always xfail")
    def test_xpass():
        pass


.. code-block:: pytest

    $ pytest -ra
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 6 items

    test_example.py .FEsxX                                               [100%]

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_error _______________________

        @pytest.fixture
        def error_fixture():
    >       assert 0
    E       assert 0

    test_example.py:6: AssertionError
    ================================= FAILURES =================================
    ________________________________ test_fail _________________________________

        def test_fail():
    >       assert 0
    E       assert 0

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
    SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:22: skipping this test
    XFAIL test_example.py::test_xfail
      reason: xfailing this test
    XPASS test_example.py::test_xpass always xfail
    ERROR test_example.py::test_error - assert 0
    FAILED test_example.py::test_fail - assert 0
    == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===

The ``-r`` option accepts a number of characters after it, with ``a`` used
above meaning "all except passes".

Here is the full list of available characters that can be used:

 - ``f`` - failed
 - ``E`` - error
 - ``s`` - skipped
 - ``x`` - xfailed
 - ``X`` - xpassed
 - ``p`` - passed
 - ``P`` - passed with output

Special characters for (de)selection of groups:

 - ``a`` - all except ``pP``
 - ``A`` - all
 - ``N`` - none, this can be used to display nothing (since ``fE`` is the default)

More than one character can be used, so for example to only see failed and skipped tests, you can execute:

.. code-block:: pytest

    $ pytest -rfs
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 6 items

    test_example.py .FEsxX                                               [100%]

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_error _______________________

        @pytest.fixture
        def error_fixture():
    >       assert 0
    E       assert 0

    test_example.py:6: AssertionError
    ================================= FAILURES =================================
    ________________________________ test_fail _________________________________

        def test_fail():
    >       assert 0
    E       assert 0

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
    FAILED test_example.py::test_fail - assert 0
    SKIPPED [1] $REGENDOC_TMPDIR/test_example.py:22: skipping this test
    == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===

Using ``p`` lists the passing tests, whilst ``P`` adds an extra section "PASSES" with those tests that passed but had
captured output:

.. code-block:: pytest

    $ pytest -rpP
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-5.x.y, py-1.x.y, pluggy-0.x.y
    cachedir: $PYTHON_PREFIX/.pytest_cache
    rootdir: $REGENDOC_TMPDIR
    collected 6 items

    test_example.py .FEsxX                                               [100%]

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_error _______________________

        @pytest.fixture
        def error_fixture():
    >       assert 0
    E       assert 0

    test_example.py:6: AssertionError
    ================================= FAILURES =================================
    ________________________________ test_fail _________________________________

        def test_fail():
    >       assert 0
    E       assert 0

    test_example.py:14: AssertionError
    ================================== PASSES ==================================
    _________________________________ test_ok __________________________________
    --------------------------- Captured stdout call ---------------------------
    ok
    ========================= short test summary info ==========================
    PASSED test_example.py::test_ok
    == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===

.. _pdb-option:

Dropping to PDB_ (Python Debugger) on failures
-----------------------------------------------

.. _PDB: http://docs.python.org/library/pdb.html

Python comes with a builtin debugger called PDB_.  ``pytest``
allows one to drop into the PDB_ prompt via a command line option:

.. code-block:: bash

    pytest --pdb

This will invoke the Python debugger on every failure (or KeyboardInterrupt).
Often you might only want to do this for the first failing test to understand
a certain failure situation:

.. code-block:: bash

    pytest -x --pdb   # drop to PDB on first failure, then end test session
    pytest --pdb --maxfail=3  # drop to PDB for first three failures

Note that on any failure the exception information is stored on
``sys.last_value``, ``sys.last_type`` and ``sys.last_traceback``. In
interactive use, this allows one to drop into postmortem debugging with
any debug tool. One can also manually access the exception information,
for example::

    >>> import sys
    >>> sys.last_traceback.tb_lineno
    42
    >>> sys.last_value
    AssertionError('assert result == "ok"',)

.. _trace-option:

Dropping to PDB_ (Python Debugger) at the start of a test
----------------------------------------------------------


``pytest`` allows one to drop into the PDB_ prompt immediately at the start of each test via a command line option:

.. code-block:: bash

    pytest --trace

This will invoke the Python debugger at the start of every test.
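
Since this stops at the start of every collected test, it is usually combined
with test selection, for example using a node id:

.. code-block:: bash

    pytest --trace test_mod.py::test_func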

.. _breakpoints:

Setting breakpoints
-------------------

.. versionadded:: 2.4.0

To set a breakpoint, use the native Python ``import pdb;pdb.set_trace()`` call
in your code; pytest automatically disables its output capture for that test:

* Output capture in other tests is not affected.
* Any prior test output that has already been captured will be processed as
  such.
* Output capture gets resumed when ending the debugger session (via the
  ``continue`` command).
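
A minimal sketch of a test that stops in the debugger:

.. code-block:: python

    def test_debug_me():
        value = 21 * 2
        import pdb; pdb.set_trace()  # execution stops here, capture is disabled
        assert value == 42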


.. _`breakpoint-builtin`:

Using the builtin breakpoint function
-------------------------------------

Python 3.7 introduces a builtin ``breakpoint()`` function.
Pytest supports the use of ``breakpoint()`` with the following behaviours:

 - When ``breakpoint()`` is called and ``PYTHONBREAKPOINT`` is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default ``Pdb``.
 - When tests are complete, the system will default back to the system ``Pdb`` trace UI.
 - With ``--pdb`` passed to pytest, the custom internal Pdb trace UI is used with both ``breakpoint()`` and failed tests/unhandled exceptions.
 - ``--pdbcls`` can be used to specify a custom debugger class.
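
As a reminder of the underlying Python behaviour (PEP 553), setting the
environment variable to ``0`` disables ``breakpoint()`` calls entirely:

.. code-block:: bash

    PYTHONBREAKPOINT=0 pytest  # breakpoint() calls become no-ops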

.. _durations:

Profiling test execution duration
-------------------------------------


To get a list of the slowest 10 test durations:

.. code-block:: bash

    pytest --durations=10

By default, pytest will not show test durations that are too small (<0.01s) unless ``-vv`` is passed on the command-line.
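
For example, to include even the very small durations in the report:

.. code-block:: bash

    pytest --durations=10 -vv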


.. _faulthandler:

Fault Handler
-------------

.. versionadded:: 5.0

The `faulthandler <https://docs.python.org/3/library/faulthandler.html>`__ standard module
can be used to dump Python tracebacks on a segfault or after a timeout.

The module is automatically enabled for pytest runs, unless ``-p no:faulthandler`` is given
on the command-line.

Also the :confval:`faulthandler_timeout=X<faulthandler_timeout>` configuration option can be used
to dump the traceback of all threads if a test takes longer than ``X``
seconds to finish (not available on Windows).
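
For example, a sketch of a configuration that dumps the tracebacks of all
threads when a test runs longer than 60 seconds:

.. code-block:: ini

    [pytest]
    faulthandler_timeout = 60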

.. note::

    This functionality has been integrated from the external
    `pytest-faulthandler <https://github.com/pytest-dev/pytest-faulthandler>`__ plugin, with two
    small differences:

    * To disable it, use ``-p no:faulthandler`` instead of ``--no-faulthandler``: the former
      can be used with any plugin, so it saves one option.

    * The ``--faulthandler-timeout`` command-line option has become the
      :confval:`faulthandler_timeout` configuration option. It can still be configured from
      the command-line using ``-o faulthandler_timeout=X``.


Creating JUnitXML format files
----------------------------------------------------

To create result files which can be read by Jenkins_ or other continuous
integration servers, use this invocation:

.. code-block:: bash

    pytest --junitxml=path

to create an XML file at ``path``.



To set the name of the root test suite xml item, you can configure the ``junit_suite_name`` option in your config file:

.. code-block:: ini

    [pytest]
    junit_suite_name = my_suite

.. versionadded:: 4.0

The JUnit XML specification seems to indicate that the ``"time"`` attribute
should report total test execution times, including setup and teardown
(`1 <http://windyroad.com.au/dl/Open%20Source/JUnit.xsd>`_, `2
<https://www.ibm.com/support/knowledgecenter/en/SSQ2R2_14.1.0/com.ibm.rsar.analysis.codereview.cobol.doc/topics/cac_useresults_junit.html>`_).
This is the default pytest behavior. To report just call durations
instead, configure the ``junit_duration_report`` option like this:

.. code-block:: ini

    [pytest]
    junit_duration_report = call

.. _record_property example:

record_property
^^^^^^^^^^^^^^^

If you want to log additional information for a test, you can use the
``record_property`` fixture:

.. code-block:: python

    def test_function(record_property):
        record_property("example_key", 1)
        assert True

This will add an extra property ``example_key="1"`` to the generated
``testcase`` tag:

.. code-block:: xml

    <testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
      <properties>
        <property name="example_key" value="1" />
      </properties>
    </testcase>

Alternatively, you can integrate this functionality with custom markers:

.. code-block:: python

    # content of conftest.py


    def pytest_collection_modifyitems(session, config, items):
        for item in items:
            for marker in item.iter_markers(name="test_id"):
                test_id = marker.args[0]
                item.user_properties.append(("test_id", test_id))

And in your tests:

.. code-block:: python

    # content of test_function.py
    import pytest


    @pytest.mark.test_id(1501)
    def test_function():
        assert True

This will result in:

.. code-block:: xml

    <testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
      <properties>
        <property name="test_id" value="1501" />
      </properties>
    </testcase>

.. warning::

    Please note that using this feature will break schema verifications for the latest JUnitXML schema.
    This might be a problem when used with some CI servers.

record_xml_attribute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^



To add an additional XML attribute to a testcase element, you can use the
``record_xml_attribute`` fixture. This can also be used to override existing values:

.. code-block:: python

    def test_function(record_xml_attribute):
        record_xml_attribute("assertions", "REQ-1234")
        record_xml_attribute("classname", "custom_classname")
        print("hello world")
        assert True

Unlike ``record_property``, this will not add a new child element.
Instead, this will add an attribute ``assertions="REQ-1234"`` inside the generated
``testcase`` tag and override the default ``classname`` with ``classname="custom_classname"``:

.. code-block:: xml

    <testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
        <system-out>
            hello world
        </system-out>
    </testcase>

.. warning::

    ``record_xml_attribute`` is an experimental feature, and its interface might be replaced
    by something more powerful and general in future versions. The
    functionality per-se will be kept, however.

    Using this over ``record_xml_property`` can help when using CI tools to parse the XML report.
    However, some parsers are quite strict about the elements and attributes that are allowed.
    Many tools use an XSD schema (like the example below) to validate incoming XML.
    Make sure you are using attribute names that are allowed by your parser.

    Below is the schema used by Jenkins to validate the XML report:

    .. code-block:: xml

        <xs:element name="testcase">
            <xs:complexType>
                <xs:sequence>
                    <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
                    <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
                    <xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
                </xs:sequence>
                <xs:attribute name="name" type="xs:string" use="required"/>
                <xs:attribute name="assertions" type="xs:string" use="optional"/>
                <xs:attribute name="time" type="xs:string" use="optional"/>
                <xs:attribute name="classname" type="xs:string" use="optional"/>
                <xs:attribute name="status" type="xs:string" use="optional"/>
            </xs:complexType>
        </xs:element>

.. warning::

    Please note that using this feature will break schema verifications for the latest JUnitXML schema.
    This might be a problem when used with some CI servers.

.. _record_testsuite_property example:

record_testsuite_property
^^^^^^^^^^^^^^^^^^^^^^^^^

.. versionadded:: 4.5

If you want to add a properties node at the test-suite level, which may contain properties
that are relevant to all tests, you can use the ``record_testsuite_property`` session-scoped fixture:

.. code-block:: python

    import pytest


    @pytest.fixture(scope="session", autouse=True)
    def log_global_env_facts(record_testsuite_property):
        record_testsuite_property("ARCH", "PPC")
        record_testsuite_property("STORAGE_TYPE", "CEPH")


    class TestMe:
        def test_foo(self):
            assert True

The fixture is a callable which receives ``name`` and ``value`` of a ``<property>`` tag
added at the test-suite level of the generated XML:

.. code-block:: xml

    <testsuite errors="0" failures="0" name="pytest" skipped="0" tests="1" time="0.006">
      <properties>
        <property name="ARCH" value="PPC"/>
        <property name="STORAGE_TYPE" value="CEPH"/>
      </properties>
      <testcase classname="test_me.TestMe" file="test_me.py" line="16" name="test_foo" time="0.000243663787842"/>
    </testsuite>

``name`` must be a string; ``value`` will be converted to a string and properly XML-escaped.

The generated XML is compatible with the latest ``xunit`` standard, contrary to `record_property`_
and `record_xml_attribute`_.


Creating resultlog format files
----------------------------------------------------


To create plain-text machine-readable result files you can issue:

.. code-block:: bash

    pytest --resultlog=path

and look at the content at the ``path`` location.  Such files are used e.g.
by the `PyPy-test`_ web page to show test results over several revisions.

.. warning::

    This option is rarely used and is scheduled for removal in pytest 6.0.

    If you use this option, consider using the new `pytest-reportlog <https://github.com/pytest-dev/pytest-reportlog>`__ plugin instead.

    See `the deprecation docs <https://docs.pytest.org/en/stable/deprecations.html#result-log-result-log>`__
    for more information.


.. _`PyPy-test`: http://buildbot.pypy.org/summary


Sending test report to online pastebin service
-----------------------------------------------------

**Creating a URL for each test failure**:

.. code-block:: bash

    pytest --pastebin=failed

This will submit test run information to a remote Paste service and
provide a URL for each failure.  You may select tests as usual or add
for example ``-x`` if you only want to send one particular failure.

**Creating a URL for a whole test session log**:

.. code-block:: bash

    pytest --pastebin=all

Currently only pasting to the http://bpaste.net service is implemented.

.. versionchanged:: 5.2

    If creating the URL fails for any reason, a warning is generated instead of failing the
    entire test suite.

Early loading plugins
---------------------

You can early-load plugins (internal and external) explicitly in the command-line with the ``-p`` option::

    pytest -p mypluginmodule

The option receives a ``name`` parameter, which can be:

* A full module dotted name, for example ``myproject.plugins``. This dotted name must be importable.
* The entry-point name of a plugin. This is the name passed to ``setuptools`` when the plugin is
  registered. For example, to early-load the `pytest-cov <https://pypi.org/project/pytest-cov/>`__ plugin you can use::

    pytest -p pytest_cov


Disabling plugins
-----------------

To disable loading specific plugins at invocation time, use the ``-p`` option
together with the prefix ``no:``.

Example: to disable loading the plugin ``doctest``, which is responsible for
executing doctest tests from text files, invoke pytest like this:

.. code-block:: bash

    pytest -p no:doctest
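
The option can be given multiple times; for example, to also disable the builtin
``cacheprovider`` plugin in the same run:

.. code-block:: bash

    pytest -p no:doctest -p no:cacheprovider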

.. _`pytest.main-usage`:

Calling pytest from Python code
----------------------------------------------------



You can invoke ``pytest`` from Python code directly:

.. code-block:: python

    pytest.main()

This acts as if you were invoking "pytest" from the command line.
It will not raise ``SystemExit`` but return the exit code instead.
You can pass in options and arguments:

.. code-block:: python

    pytest.main(["-x", "mytestdir"])

You can specify additional plugins to ``pytest.main``:

.. code-block:: python

    # content of myinvoke.py
    import pytest


    class MyPlugin:
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")


    pytest.main(["-qq"], plugins=[MyPlugin()])

Running it will show that ``MyPlugin`` was added and its
hook was invoked:

.. code-block:: pytest

    $ python myinvoke.py
    .FEsxX.                                                              [100%]*** test run reporting finishing

    ================================== ERRORS ==================================
    _______________________ ERROR at setup of test_error _______________________

        @pytest.fixture
        def error_fixture():
    >       assert 0
    E       assert 0

    test_example.py:6: AssertionError
    ================================= FAILURES =================================
    ________________________________ test_fail _________________________________

        def test_fail():
    >       assert 0
    E       assert 0

    test_example.py:14: AssertionError
    ========================= short test summary info ==========================
    FAILED test_example.py::test_fail - assert 0
    ERROR test_example.py::test_error - assert 0

.. note::

    Calling ``pytest.main()`` will result in importing your tests and any modules
    that they import. Due to the caching mechanism of Python's import system,
    making subsequent calls to ``pytest.main()`` from the same process will not
    reflect changes to those files between the calls. For this reason, making
    multiple calls to ``pytest.main()`` from the same process (in order to re-run
    tests, for example) is not recommended.

.. _jenkins: http://jenkins-ci.org/