This repository has been archived by the owner on May 27, 2024. It is now read-only.

Decorator/marker to run a single test in a subprocess #15

Open
aparamon opened this issue Feb 5, 2019 · 3 comments

Comments


aparamon commented Feb 5, 2019

h5py provides Python bindings for the HDF5 library. While adding a new feature recently, I discovered that certain wrappers are non-trivial to test: some underlying HDF5 functions modify the library's global state, which then affects subsequent tests.

One idea that came up quickly was to use process isolation, and I went ahead:
https://github.com/h5py/h5py/blob/abe7cde114526af5e3b291839066d8524941e0b1/h5py/tests/old/test_h5pl.py
But this is ugly, incomplete, and hacky. It was mentioned here that, of the existing solutions, pytest-mp is probably the closest match for such applications. But pytest-mp currently implements an all-or-nothing approach: either every test is executed in a subprocess, or none is. In h5py, only a few tests require isolation; we do not want to introduce additional complexity for all tests.

How hard would it be to implement something like my @sandboxed decorator with pytest-mp?
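For context, a decorator along those lines can be sketched in plain `multiprocessing` without any plugin at all. This is a minimal illustration (the name `sandboxed` and the error-forwarding scheme are my own, not pytest-mp API); it assumes the `fork` start method and that any exception the test raises is picklable:

```python
import functools
from multiprocessing import Process, Queue


def sandboxed(func):
    """Run the decorated test function in a fresh child process.

    Minimal sketch: assumes the 'fork' start method, and that any
    exception the test raises can be pickled back to the parent.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        queue = Queue()

        def target():
            try:
                func(*args, **kwargs)
                queue.put(None)          # success
            except BaseException as exc:
                queue.put(exc)           # forward the failure to the parent

        proc = Process(target=target)
        proc.start()
        proc.join()
        if not queue.empty():
            error = queue.get()
            if error is not None:
                raise error              # re-raise in the parent process
        elif proc.exitcode != 0:
            raise RuntimeError(
                f'{func.__name__} died in subprocess '
                f'(exit code {proc.exitcode})')
    return wrapper
```

Any global state the test mutates dies with the child process, which is exactly the isolation property wanted here.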


pelson commented Dec 7, 2022

I have found I need such functionality from time to time. I discovered it a while ago in the JPype project, where an external constraint meant that you can only do a certain action once in a process (in this case, start the JVM, which is a hard limitation of the JVM and entirely out of our control). In that case, you really want to be able to run a test in a subprocess, and communicate the results back transparently to the main testing process.

https://github.com/jpype-project/jpype/blob/f1a25941698a87457af9e16671312f58d5bcfaaa/test/jpypetest/test_startup.py#L44

and the implementation is at

https://github.com/jpype-project/jpype/blob/master/test/jpypetest/subrun.py

IMO this needs packaging neatly (e.g. no unittest.TestCase subclassing), and would make an excellent addition to a pytest plugin. Unfortunately, it looks like this project is not actively maintained, so perhaps it could live as its own standalone project.


pelson commented Dec 8, 2022

Just to follow up and say that it can be done fairly easily (though perhaps not entirely correctly) with:

import sys
from multiprocessing import Process

import pytest


class Unbuffered:
    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()

    def writelines(self, datas):
        self.stream.writelines(datas)
        self.stream.flush()

    def __getattr__(self, attr):
        return getattr(self.stream, attr)


def pytest_pyfunc_call(pyfuncitem: pytest.Function):
    # This is the pytest_pyfunc_call hook that intercepts the call to the test function

    # Check if the run_in_subprocess marker is present
    if "run_in_subprocess" in pyfuncitem.keywords:

        def run():
            sys.stdout = Unbuffered(sys.stdout)
            return pyfuncitem.obj()

        p = Process(target=run)
        p.start()
        p.join()
        if p.exitcode != 0:
            # pytest.fail() raises internally, so no explicit raise is needed.
            pytest.fail('Test executed in a subprocess failed', pytrace=False)
        return True  # We ran the test, so don't return None

    # Return None to indicate that we didn't run this test. This allows
    # pytest to try finding a suitable runner.
    return None

(in the conftest.py), and then tests can be marked. For example:

import os

import pytest

main_pid = os.getpid()

@pytest.mark.run_in_subprocess
def test_bar():
    assert os.getpid() != main_pid


leycec commented Jun 30, 2023

Necrobump. @beartype and I thank you profusely for your shockingly concise and actually working pytest plugin snippet, @pelson. I needed this recently to safely isolate import hook-related unit tests to Python subprocesses. It worked! It worked, everybody! The Canadian heatwave may have given me heat stroke, but at least something worked... unless that's just the heat stroke talking again. 🥵

Actually, your pytest plugin snippet wasn't quite working. pytest requires that the custom @pytest.mark.run_in_subprocess mark be explicitly declared, and that both standard output and error be unbuffered. I also took the liberty of marginally improving the exception message raised by test failures to refer to the name of the failing test, and of adding reStructuredText (reST)-formatted docstrings for all that Sphinxy goodness.

In full, here's @beartype's fully-working amalgamation of your hard work:

from pytest import Function
from typing import Optional

def pytest_configure(config) -> None:
    '''
    Hook programmatically registering our custom mark, equivalent to
    declaring it in the top-level ``pytest.ini`` file.
    '''

    # Programmatically add our custom "run_in_subprocess" mark, enabling tests
    # to notify the pytest_pyfunc_call() hook that they require isolation to a
    # Python subprocess of the current Python process.
    config.addinivalue_line(
        'markers',
        f'{_MARK_NAME_SUBPROCESS}: mark test to run in an isolated subprocess',
    )


def pytest_pyfunc_call(pyfuncitem: Function) -> Optional[bool]:
    '''
    Hook intercepting the call to run the passed :mod:`pytest` test function.

    Specifically, this hook:

    * If this test has been decorated by our custom
      ``@pytest.mark.run_in_subprocess`` marker, runs this test in a Python
      subprocess of the current Python process isolated to this test.
    * Else, runs this test in the current Python process by deferring to the
      standard :mod:`pytest` logic for running this test.

    Parameters
    ----------
    pyfuncitem: Function
        :mod:`pytest`-specific object encapsulating the current test function
        being run.

    Returns
    ----------
    Optional[bool]
        Either:

        * If this hook ran this test, :data:`True`.
        * If this hook did *not* run this test, :data:`None`.

    See Also
    ----------
    https://github.com/ansible/pytest-mp/issues/15#issuecomment-1342682418
        GitHub comment by @pelson (Phil Elson) strongly inspiring this hook.
    '''

    # If this test has been decorated by our custom
    # @pytest.mark.run_in_subprocess marker...
    if _MARK_NAME_SUBPROCESS in pyfuncitem.keywords:
        # Defer hook-specific imports.
        from multiprocessing import Process
        from pytest import fail

        def _run_test_in_subprocess() -> object:
            '''
            Run the current :mod:`pytest` test function isolated to a Python
            subprocess of the current Python process.

            Returns
            ----------
            object
                Arbitrary object returned by this test if any *or* :data:`None`.
            '''

            # Defer subprocess-specific imports.
            import sys

            # Monkey-patch the buffered standard error and output streams of
            # this subprocess with unbuffered equivalents, ensuring that pytest
            # will reliably capture *all* standard error and output emitted by
            # running this test.
            sys.stderr = _UnbufferedOutputStream(sys.stderr)
            sys.stdout = _UnbufferedOutputStream(sys.stdout)

            # Run this test and return the result of doing so.
            return pyfuncitem.obj()

        # Python subprocess tasked with running this test.
        test_subprocess = Process(target=_run_test_in_subprocess)

        # Begin running this test in this subprocess.
        test_subprocess.start()

        # Block this parent Python process until this test completes.
        test_subprocess.join()

        # If this subprocess reports non-zero exit status, this test failed. In
        # this case...
        if test_subprocess.exitcode != 0:
            # Human-readable exception message to be raised.
            exception_message = (
                f'Test "{pyfuncitem.name}" failed in isolated subprocess with:')

            # Raise a pytest-compliant exception. Note that fail() raises
            # internally, so no explicit "raise" statement is needed.
            fail(exception_message, pytrace=False)
        # Else, this subprocess reports zero exit status. In this case, this
        # test succeeded.

        # Notify pytest that this hook successfully ran this test.
        return True

    # Notify pytest that this hook avoided attempting to run this test, in which
    # case pytest will continue to look for a suitable runner for this test.
    return None


_MARK_NAME_SUBPROCESS = 'run_in_subprocess'
'''
**Subprocess mark** (i.e., name of our custom :mod:`pytest` mark, enabling tests
to notify the :func:`.pytest_pyfunc_call` hook that they require isolation to a
Python subprocess of the current Python process).
'''


class _UnbufferedOutputStream(object):
    '''
    **Unbuffered standard output stream** (i.e., proxy object encapsulating a
    buffered standard output stream by forcefully flushing that stream on all
    writes to that stream).

    See Also
    ----------
    https://github.com/ansible/pytest-mp/issues/15#issuecomment-1342682418
        GitHub comment by @pelson (Phil Elson) strongly inspiring this class.
    '''

    def __init__(self, stream) -> None:
        self.stream = stream

    def write(self, data) -> None:
        self.stream.write(data)
        self.stream.flush()

    def writelines(self, datas) -> None:
        self.stream.writelines(datas)
        self.stream.flush()

    def __getattr__(self, attr: str) -> object:
        return getattr(self.stream, attr)

All glory be to @pelson. 💪 🐻
