Decorator/marker to run a single test in a subprocess #15
I have found I need such functionality from time to time. I discovered this a while ago in the JPype project, where an external constraint means that a certain action can only be performed once per process (in this case, starting the JVM, which is a hard limitation of the JVM and entirely out of our control). In that case, you really want to be able to run a test in a subprocess and communicate the results back transparently to the main testing process. The implementation is at https://github.com/jpype-project/jpype/blob/master/test/jpypetest/subrun.py. IMO this needs packaging neatly (e.g. no …).
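As a hedged sketch of that pattern (the helper names below are illustrative, not JPype's actual `subrun` API), running a callable in a child process and transparently shipping the outcome back to the parent can look like this:

```python
# Sketch of the run-in-subprocess-and-report-back pattern. Hypothetical
# helper names; this is not JPype's actual subrun implementation.
import multiprocessing
import traceback


def _child(queue, func, args, kwargs):
    """Run func in the child process and push (ok, payload) to the queue."""
    try:
        queue.put((True, func(*args, **kwargs)))
    except Exception:
        queue.put((False, traceback.format_exc()))


def run_in_subprocess(func, *args, **kwargs):
    """Execute func(*args, **kwargs) in a fresh process; re-raise on failure."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(
        target=_child, args=(queue, func, args, kwargs))
    proc.start()
    # Read the result before join() so a large payload cannot deadlock the
    # child on a full queue.
    ok, payload = queue.get()
    proc.join()
    if not ok:
        raise RuntimeError(f"subprocess call failed:\n{payload}")
    return payload


if __name__ == "__main__":
    # Each call gets fresh interpreter state in the child.
    print(run_in_subprocess(pow, 2, 10))
```

The parent sees either the return value or a re-raised failure carrying the child's traceback, which is the "transparent" part of the idea.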
Just to follow up to say that it can be done fairly easily (though perhaps not entirely correctly) with a hook in `conftest.py`; tests can then simply be marked with the corresponding marker.
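A marked test might then look like the following (a hypothetical example, assuming a `run_in_subprocess` mark registered in `conftest.py` as in the next comment):

```python
# test_example.py -- hypothetical test module using the custom mark.
import pytest


@pytest.mark.run_in_subprocess
def test_requires_isolation():
    # e.g. an action that may only happen once per process, such as
    # starting a JVM or mutating library-global state.
    assert 2 + 2 == 4
```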
Necrobump. @beartype and I thank you profusely for your shockingly concise and actually working approach. In full, here's @beartype's fully-working amalgamation of your hard work:

```python
from pytest import Function
from typing import Optional


def pytest_configure(config) -> None:
    '''
    Hook programmatically configuring the top-level ``"pytest.ini"`` file.
    '''

    # Programmatically add our custom "run_in_subprocess" mark, enabling tests
    # to notify the pytest_pyfunc_call() hook that they require isolation to a
    # Python subprocess of the current Python process.
    config.addinivalue_line(
        'markers',
        f'{_MARK_NAME_SUBPROCESS}: mark test to run in an isolated subprocess',
    )


def pytest_pyfunc_call(pyfuncitem: Function) -> Optional[bool]:
    '''
    Hook intercepting the call to run the passed :mod:`pytest` test function.

    Specifically, this hook:

    * If this test has been decorated by our custom
      ``@pytest.mark.run_in_subprocess`` marker, runs this test in a Python
      subprocess of the current Python process isolated to this test.
    * Else, runs this test in the current Python process by deferring to the
      standard :mod:`pytest` logic for running this test.

    Parameters
    ----------
    pyfuncitem : Function
        :mod:`pytest`-specific object encapsulating the current test function
        being run.

    Returns
    -------
    Optional[bool]
        Either:

        * If this hook ran this test, :data:`True`.
        * If this hook did *not* run this test, :data:`None`.

    See Also
    --------
    https://github.com/ansible/pytest-mp/issues/15#issuecomment-1342682418
        GitHub comment by @pelson (Phil Elson) strongly inspiring this hook.
    '''

    # If this test has been decorated by our custom
    # @pytest.mark.run_in_subprocess marker...
    if _MARK_NAME_SUBPROCESS in pyfuncitem.keywords:
        # Defer hook-specific imports.
        from multiprocessing import Process
        from pytest import fail

        def _run_test_in_subprocess() -> object:
            '''
            Run the current :mod:`pytest` test function isolated to a Python
            subprocess of the current Python process.

            Returns
            -------
            object
                Arbitrary object returned by this test if any *or* :data:`None`.
            '''

            # Defer subprocess-specific imports.
            import sys

            # Monkey-patch the buffered standard error and output streams of
            # this subprocess with unbuffered equivalents, ensuring that pytest
            # will reliably capture *all* standard error and output emitted by
            # running this test.
            sys.stderr = _UnbufferedOutputStream(sys.stderr)
            sys.stdout = _UnbufferedOutputStream(sys.stdout)

            # Run this test and return the result of doing so.
            return pyfuncitem.obj()

        # Python subprocess tasked with running this test.
        test_subprocess = Process(target=_run_test_in_subprocess)

        # Begin running this test in this subprocess.
        test_subprocess.start()

        # Block this parent Python process until this test completes.
        test_subprocess.join()

        # If this subprocess reports non-zero exit status, this test failed.
        if test_subprocess.exitcode != 0:
            # Human-readable exception message to be raised.
            exception_message = (
                f'Test "{pyfuncitem.name}" failed in isolated subprocess with '
                f'exit code {test_subprocess.exitcode}.'
            )

            # Raise a pytest-compliant exception. Note that pytest.fail()
            # raises internally, so no explicit "raise" is needed.
            fail(exception_message, pytrace=False)
        # Else, this subprocess reports zero exit status. In this case, this
        # test succeeded.

        # Notify pytest that this hook successfully ran this test.
        return True

    # Notify pytest that this hook avoided attempting to run this test, in
    # which case pytest will continue to look for a suitable runner for this
    # test.
    return None


_MARK_NAME_SUBPROCESS = 'run_in_subprocess'
'''
**Subprocess mark** (i.e., name of our custom :mod:`pytest` mark, enabling
tests to notify the :func:`.pytest_pyfunc_call` hook that they require
isolation to a Python subprocess of the current Python process).
'''


class _UnbufferedOutputStream(object):
    '''
    **Unbuffered standard output stream** (i.e., proxy object encapsulating a
    buffered standard output stream by forcefully flushing that stream on all
    writes to that stream).

    See Also
    --------
    https://github.com/ansible/pytest-mp/issues/15#issuecomment-1342682418
        GitHub comment by @pelson (Phil Elson) strongly inspiring this class.
    '''

    def __init__(self, stream) -> None:
        self.stream = stream

    def write(self, data) -> None:
        self.stream.write(data)
        self.stream.flush()

    def writelines(self, datas) -> None:
        self.stream.writelines(datas)
        self.stream.flush()

    def __getattr__(self, attr: str) -> object:
        return getattr(self.stream, attr)
```

All glory be to @pelson. 💪 🐻
h5py provides Python bindings for the HDF5 library. When adding a new feature recently, I discovered that particular wrappers are non-trivial to test: some underlying HDF5 functions modify the library's global state, which influences subsequent tests.

One idea that came up quickly was to use process isolation, and I went ahead:
https://github.com/h5py/h5py/blob/abe7cde114526af5e3b291839066d8524941e0b1/h5py/tests/old/test_h5pl.py

But this is ugly, incomplete, and hacky. Here it was mentioned that, of the existing solutions, `pytest-mp` is probably the closest match for such applications. But `pytest-mp` currently implements an all-or-nothing approach: either every test is executed in a subprocess, or none. In `h5py`, only a few tests require isolation; we do not want to introduce additional complexity for all tests.

How hard would it be to implement something like my `@sandboxed` decorator with `pytest-mp`?
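For reference, a self-contained `@sandboxed`-style decorator can be built on `multiprocessing` alone. This is a hypothetical sketch, not h5py's actual implementation, and it assumes the `fork` start method (the Linux default); `spawn` platforms would need a picklable target.

```python
# Hypothetical @sandboxed decorator: runs the wrapped callable in a fresh
# process so library-global state cannot leak between tests. Assumes the
# "fork" multiprocessing start method; not h5py's actual implementation.
import functools
import multiprocessing
import traceback


def _trampoline(queue, func, args, kwargs):
    """Run func in the child; push None on success, a traceback on failure."""
    try:
        func(*args, **kwargs)
        queue.put(None)
    except Exception:
        queue.put(traceback.format_exc())


def sandboxed(func):
    """Run func in an isolated child process; re-raise failures in the parent."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(
            target=_trampoline, args=(queue, func, args, kwargs))
        proc.start()
        error = queue.get()
        proc.join()
        if error is not None:
            raise AssertionError(f"sandboxed test failed:\n{error}")
    return wrapper
```

A test decorated with `@sandboxed` then runs in its own process, and any exception it raises resurfaces in the parent as a normal test failure, which is essentially the per-test opt-in that `pytest-mp`'s all-or-nothing mode lacks.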