pytest package

Module contents

pytest: unit and functional testing with Python.

_fillfuncargs(function)

fill missing funcargs for a test function.

approx(expected, rel=None, abs=None, nan_ok=False)[source]

Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.

Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:

>>> 0.1 + 0.2 == 0.3
False

This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:

>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True

However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True

The same syntax also works for sequences of numbers:

>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True

Dictionary values:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True

numpy arrays:

>>> import numpy as np                                                          
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) 
True

And for a numpy array against a scalar:

>>> import numpy as np                                         
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) 
True

By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal.

Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)
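For instance, the NaN behavior just described, as a small doctest:

>>> import math
>>> math.nan == approx(math.nan)
False
>>> math.nan == approx(math.nan, nan_ok=True)
True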

Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True

If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True

If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:

  • math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. Only available in python>=3.5. More information…

  • numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose. More information…

  • unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of unittest.TestCase and it’s ugly because it doesn’t follow PEP8. More information…

  • a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
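To make the contrast concrete, here is a small doctest comparing the default behavior of math.isclose and approx near zero:

>>> import math
>>> math.isclose(1e-13, 0.0)  # abs_tol defaults to 0.0, so nothing is close to 0.0
False
>>> 1e-13 == approx(0.0)      # approx falls back on its absolute tolerance of 1e-12
True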

Warning

Changed in version 3.2.

In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:

assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior. More information…

class Class(*k, **kw)[source]

Bases: _pytest.python.PyCollector

Collector for test methods.

classmethod from_parent(parent, *, name, obj=None)[source]

The public constructor.

collect()[source]

returns a list of children (items and collectors) for this collection node.

_inject_setup_class_fixture()[source]

Injects a hidden autouse, class scoped fixture into the collected class object that invokes setup_class/teardown_class if either or both are available.

Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with other fixtures (#517).

_inject_setup_method_fixture()[source]

Injects a hidden autouse, function scoped fixture into the collected class object that invokes setup_method/teardown_method if either or both are available.

Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with other fixtures (#517).

class cmdline[source]

Bases: object

staticmethod main(args=None, plugins=None) → Union[int, _pytest.config.ExitCode]

return the exit code after performing an in-process test run.

Parameters
  • args – list of command line arguments.

  • plugins – list of plugin objects to be auto-registered during initialization.

class Collector(*k, **kw)[source]

Bases: _pytest.nodes.Node

Collector instances create children through collect() and thus iteratively build a tree.

exception CollectError[source]

Bases: Exception

an error during collection, contains a custom message.

collect()[source]

returns a list of children (items and collectors) for this collection node.

repr_failure(excinfo)[source]

represent a collection failure.

_prunetraceback(excinfo)[source]

deprecated_call(func=None, *args, **kwargs)[source]

context manager that can be used to ensure a block of code triggers a DeprecationWarning or PendingDeprecationWarning:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> with deprecated_call():
...    assert api_call_v2() == 200

deprecated_call can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warning types above.
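For example, the call form, reusing api_call_v2 from above (the wrapped function's return value is passed through):

>>> deprecated_call(api_call_v2)
200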

exit(msg: str, returncode: Optional[int] = None) → NoReturn[source]

Exit testing process.

Parameters
  • msg (str) – message to display upon exit.

  • returncode (int) – return code to be used when exiting pytest.

class ExitCode(value)[source]

Bases: enum.IntEnum

New in version 5.0.

Encodes the valid exit codes used by pytest.

Currently users and plugins may supply other exit codes as well.

OK = 0

tests passed

TESTS_FAILED = 1

tests failed

INTERRUPTED = 2

pytest was interrupted

INTERNAL_ERROR = 3

an internal error got in the way

USAGE_ERROR = 4

pytest was misused

NO_TESTS_COLLECTED = 5

pytest couldn’t find tests

fail(msg: str = '', pytrace: bool = True) → NoReturn[source]

Explicitly fail an executing test with the given message.

Parameters
  • msg (str) – the message to show the user as reason for the failure.

  • pytrace (bool) – if false the msg represents the full failure information and no python traceback will be reported.

class File(*k, **kw)[source]

Bases: _pytest.nodes.FSCollector

base class for collecting tests from a file.

fixture(callable_or_scope=None, *args, scope='function', params=None, autouse=False, ids=None, name=None)[source]

Decorator to mark a fixture factory function.

This decorator can be used, with or without parameters, to define a fixture function.

The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.

Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Fixtures can provide their values to test functions using return or yield statements. When using yield the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.
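For illustration, a minimal sketch of both styles (the fixture names and the test are hypothetical):

import io
import pytest

@pytest.fixture
def numbers():
    # provided to any test that declares a "numbers" argument
    return [1, 2, 3]

@pytest.fixture
def stream():
    handle = io.StringIO("data")  # setup
    yield handle                  # value injected into the test
    handle.close()                # teardown, runs even if the test fails

def test_uses_fixtures(numbers, stream):
    assert numbers[0] == 1
    assert stream.read() == "data"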

Parameters
  • scope

    the scope for which this fixture is shared, one of "function" (default), "class", "module", "package" or "session" ("package" is considered experimental at this time).

    This parameter may also be a callable which receives (fixture_name, config) as parameters, and must return a str with one of the values mentioned above.

    See dynamic scope in the docs for more information.

  • params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it. The current parameter is available in request.param (see the sketch after this list).

  • autouse – if True, the fixture func is activated for all tests that can see it. If False (the default) then an explicit reference is needed to activate the fixture.

  • ids – list of string ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.

  • name – the name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').
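A hypothetical sketch combining the params, ids and name arguments (all names here are illustrative):

import pytest

@pytest.fixture(params=[2, 3], ids=["two", "three"], name="divisor")
def _divisor(request):
    # request.param is the current entry from "params"; tests request
    # this fixture under the name given by "name", i.e. "divisor"
    return request.param

def test_division(divisor):
    assert 12 % divisor == 0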

freeze_includes()[source]

Returns a list of module names used by pytest that should be included by cx_freeze.

class Function(*k, **kw)[source]

Bases: _pytest.python.PyobjMixin, _pytest.nodes.Item

a Function Item is responsible for setting up and executing a Python test function.

_ALLOW_MARKERS = False
originalname

original function name, without any decorations (for example parametrization adds a "[...]" suffix to function names).

New in version 3.0.

classmethod from_parent(parent, **kw)[source]

The public constructor.

_initrequest()[source]
function

underlying python ‘function’ object

_getobj()[source]

Gets the underlying Python object. May be overridden by subclasses.

_pyfuncitem

(compatonly) for code expecting pytest-2.2 style request objects

funcargnames

alias attribute for fixturenames for pre-2.3 compatibility

runtest() → None[source]

execute the underlying test function.

setup() → None[source]
_prunetraceback(excinfo: _pytest._code.code.ExceptionInfo) → None[source]
repr_failure(excinfo, outerr=None)[source]

importorskip(modname: str, minversion: Optional[str] = None, reason: Optional[str] = None) → Any[source]

Imports and returns the requested module modname, or skip the current test if the module cannot be imported.

Parameters
  • modname (str) – the name of the module to import

  • minversion (str) – if given, the imported module’s __version__ attribute must be at least this minimal version, otherwise the test is still skipped.

  • reason (str) – if given, this reason is shown as the message when the module cannot be imported.

Returns

The imported module. This should be assigned to its canonical name.

Example:

docutils = pytest.importorskip("docutils")

class Instance(*k, **kw)[source]

Bases: _pytest.python.PyCollector

_ALLOW_MARKERS = False
_getobj()[source]

Gets the underlying Python object. May be overridden by subclasses.

collect()[source]

returns a list of children (items and collectors) for this collection node.

newinstance()[source]

class Item(*k, **kw)[source]

Bases: _pytest.nodes.Node

a basic test invocation item. Note that for a single function there might be multiple test invocation items.

nextitem = None
user_properties: List[Tuple[str, Any]]

user_properties is a list of (name, value) tuples that holds user-defined properties for this test.

runtest() → None[source]
add_report_section(when: str, key: str, content: str) → None[source]

Adds a new report section, similar to what’s done internally to add stdout and stderr captured output:

item.add_report_section("call", "stdout", "report section contents")

Parameters
  • when (str) – One of the possible capture states, "setup", "call", "teardown".

  • key (str) – Name of the section, can be customized at will. Pytest uses "stdout" and "stderr" internally.

  • content (str) – The full contents as a string.

reportinfo() → Tuple[Union[py._path.local.LocalPath, str], Optional[int], str][source]
location

main(args=None, plugins=None) → Union[int, _pytest.config.ExitCode][source]

return the exit code after performing an in-process test run (a usage sketch follows the parameter list).

Parameters
  • args – list of command line arguments.

  • plugins – list of plugin objects to be auto-registered during initialization.
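A minimal usage sketch (the "tests" directory is hypothetical):

import pytest

# run an in-process test session, equivalent to "pytest -q tests" on the command line
exit_code = pytest.main(["-q", "tests"])
if exit_code == pytest.ExitCode.NO_TESTS_COLLECTED:
    print("no tests found under tests/")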

class Module(*k, **kw)[source]

Bases: _pytest.nodes.File, _pytest.python.PyCollector

Collector for test classes and functions.

_getobj()[source]

Gets the underlying Python object. May be overridden by subclasses.

collect()[source]

returns a list of children (items and collectors) for this collection node.

_inject_setup_module_fixture()[source]

Injects a hidden autouse, module scoped fixture into the collected module object that invokes setUpModule/tearDownModule if either or both are available.

Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with other fixtures (#517).

_inject_setup_function_fixture()[source]

Injects a hidden autouse, function scoped fixture into the collected module object that invokes setup_function/teardown_function if either or both are available.

Using a fixture to invoke these methods ensures we play nicely and unsurprisingly with other fixtures (#517).

_importtestmodule() → module[source]

class Package(*k, **kw)[source]

Bases: _pytest.python.Module

setup()[source]
gethookproxy(fspath: py._path.local.LocalPath)[source]
isinitpath(path)[source]
for ... in collect()[source]

returns a list of children (items and collectors) for this collection node.

param(*values, **kw)[source]

Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.

@pytest.mark.parametrize("test_input,expected", [
    ("3+5", 8),
    pytest.param("6*9", 42, marks=pytest.mark.xfail),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Parameters
  • values – variable args of the values of the parameter set, in order.

  • marks – a single mark or a list of marks to be applied to this parameter set.

  • id (str) – the id to attribute to this parameter set.

exception PytestAssertRewriteWarning

Bases: pytest.PytestWarning

Warning emitted by the pytest assert rewrite module.

exception PytestCacheWarning

Bases: pytest.PytestWarning

Warning emitted by the cache plugin in various situations.

exception PytestCollectionWarning

Bases: pytest.PytestWarning

Warning emitted when pytest is not able to collect a file or symbol in a module.

exception PytestConfigWarning

Bases: pytest.PytestWarning

Warning emitted for configuration issues.

exception PytestDeprecationWarning

Bases: pytest.PytestWarning, DeprecationWarning

Warning class for features that will be removed in a future version.

exception PytestExperimentalApiWarning

Bases: pytest.PytestWarning, FutureWarning

Warning category used to denote experiments in pytest.

Use sparingly as the API might change or even be removed completely in a future version.

classmethod simple(apiname: str) → pytest.PytestExperimentalApiWarning[source]

exception PytestUnhandledCoroutineWarning

Bases: pytest.PytestWarning

Warning emitted for an unhandled coroutine.

A coroutine was encountered when collecting test functions, but was not handled by any async-aware plugin. Coroutine test functions are not natively supported.

exception PytestUnknownMarkWarning

Bases: pytest.PytestWarning

Warning emitted on use of unknown markers.

See https://docs.pytest.org/en/latest/mark.html for details.

exception PytestWarning

Bases: UserWarning

Base class for all warnings emitted by pytest.

raises(expected_exception: Union[Type[_E], Tuple[Type[_E], …]], *args: Any, **kwargs: Any) → Union[RaisesContext[_E], _pytest._code.code.ExceptionInfo[_E]][source]

Assert that a code block/function call raises expected_exception or raise a failure exception otherwise.

Parameters

match

if specified, a string containing a regular expression, or a regular expression object, that is tested against the string representation of the exception using re.search. To match a literal string that may contain special characters, the pattern can first be escaped with re.escape.

(This is only used when pytest.raises is used as a context manager, and passed through to the function otherwise. When using pytest.raises as a function, you can use: pytest.raises(Exc, func, match="passed on").match("my pattern").)
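For example, escaping a message that contains regex metacharacters (a small doctest):

>>> import re
>>> with raises(ValueError, match=re.escape("value[0] is invalid")):
...     raise ValueError("value[0] is invalid")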

Use pytest.raises as a context manager, which will capture the exception of the given type:

>>> with raises(ZeroDivisionError):
...    1/0

If the code block does not raise the expected exception (ZeroDivisionError in the example above), or raises no exception at all, the check will fail instead.

You can also use the keyword argument match to assert that the exception matches a text or regex:

>>> with raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")

The context manager produces an ExceptionInfo object which can be used to inspect the details of the captured exception:

>>> with raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"

Note

When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError

Using with pytest.mark.parametrize

When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise an exception and others do not.

See the parametrizing conditional raising section of the docs for an example.

Legacy form

It is possible to specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>

The form above is fully supported but discouraged for new code because the context manager form is regarded as more readable and less error-prone.

Note

Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. More detailed information can be found in the official Python documentation for the try statement.
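For example, one way to clear the reference explicitly once the captured exception is no longer needed:

>>> with raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> del exc_info  # break the reference cycle described above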

register_assert_rewrite(*names) → None[source]

Register one or more module names to be rewritten on import.

This function makes sure that the given module, or all modules inside the given package, get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.

Raises

TypeError – if the given module names are not strings.
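A hypothetical sketch for a plugin distributed as a package (the package and module names are illustrative):

# mypackage/__init__.py
import pytest

# must run before the submodule is imported for the rewriting to take effect
pytest.register_assert_rewrite("mypackage.helpers")

from mypackage import helpers  # noqa: E402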

class Session(*k, **kw)[source]

Bases: _pytest.nodes.FSCollector

exception Interrupted

Bases: KeyboardInterrupt

signals an interrupted test run.

exception Failed

Bases: Exception

signals a stop because the test run failed.

_setupstate: SetupState = None
_fixturemanager: FixtureManager = None
exitstatus: Union[int, ExitCode] = None
classmethod from_config(config)[source]
_node_location_to_relpath(node_path: py._path.local.LocalPath) → str[source]
pytest_collectstart()[source]
pytest_ignore_collect(path: py._path.local.LocalPath, config: _pytest.config.Config) → Optional[Tuple[Literal[True], Optional[str]]][source]
pytest_runtest_logreport(report)[source]
pytest_collectreport(report)
isinitpath(path)[source]
gethookproxy(fspath: py._path.local.LocalPath)[source]
pytest_deselected(items)[source]

Keep track of explicitly deselected items.

perform_collect(args=None, genitems=True)[source]
_perform_collect(args, genitems)[source]
for ... in collect()[source]

returns a list of children (items and collectors) for this collection node.

for ... in _collect(argpath, names)[source]
staticmethod _visit_filter(f)[source]
_tryconvertpyarg(x)[source]

Convert a dotted module name to path.

staticmethod _parse_fname_lineno(arg: str) → Tuple[str, Tuple[Optional[int], Optional[int]]][source]
_parsearg(arg)[source]

return (fspath, names) tuple after checking the file exists.

matchnodes(matching, names)[source]
_matchnodes(matching, names)[source]
for ... in genitems(node)[source]
set_trace(*, header=None)[source]

Placeholder for when there is no config (yet).

skip(msg: str = '', *, allow_module_level: bool = False) → NoReturn[source]

Skip an executing test with the given message.

This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag. This function can be called in doctests as well.

Parameters

allow_module_level (bool) – allows this function to be called at module level, skipping the rest of the module. Defaults to False.

Note

It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive (see doctest.SKIP) to skip a doctest statically.
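A hypothetical sketch of both imperative forms described above (the platform checks are illustrative):

import os
import sys

import pytest

if sys.platform == "win32":
    # skip the whole module during collection
    pytest.skip("POSIX-only tests", allow_module_level=True)

def test_symlinks(tmp_path):
    if not hasattr(os, "symlink"):
        pytest.skip("symlinks not supported on this platform")
    os.symlink(tmp_path / "a", tmp_path / "b")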

exception UsageError[source]

Bases: Exception

error in pytest usage or invocation

warns(expected_warning: Optional[Union[Type[Warning], Tuple[Type[Warning], …]]], *args: Any, match: Optional[Union[str, Pattern]] = None, **kwargs: Any) → Union[WarningsChecker, Any][source]

Assert that code raises a particular class of warning.

Specifically, the parameter expected_warning can be a warning class or sequence of warning classes, and the code inside the with block must issue a warning of that class or classes.

This helper produces a list of warnings.WarningMessage objects, one for each warning raised.

This function can be used as a context manager, or any of the other ways pytest.raises can be used:

>>> with warns(RuntimeWarning):
...    warnings.warn("my warning", RuntimeWarning)
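The recorded warnings.WarningMessage objects mentioned above can be inspected after the block:

>>> with warns(RuntimeWarning) as record:
...     warnings.warn("my warning", RuntimeWarning)
>>> len(record)
1
>>> str(record[0].message)
'my warning'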

In the context manager form you may use the keyword argument match to assert that the warning matches a text or regex:

>>> with warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
  ...
_pytest.outcomes.Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...

xfail(reason: str = '') → NoReturn[source]

Imperatively xfail an executing test or setup functions with the given reason.

This function should be called only during testing (setup, call or teardown).

Note

It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.
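A hypothetical sketch of the imperative form (the failing condition is illustrative):

import sys

import pytest

def test_unicode_filenames(tmp_path):
    if sys.platform == "win32":
        pytest.xfail("known to fail on Windows")
    (tmp_path / "é.txt").touch()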

yield_fixture(callable_or_scope=None, *args, scope='function', params=None, autouse=False, ids=None, name=None)[source]

Decorator to mark a yield-fixture factory function.

Deprecated since version 3.0: Use pytest.fixture() directly instead.