Fix pypy3 tests #482

Open
BeyondEvil opened this issue Jan 17, 2022 · 13 comments

@BeyondEvil (Contributor)

At least two issues to address here:

  1. pypy3-mac fails. This might be out of our control and is likely related to Python 3.6 EOL: "Setup-python pypy-3.6 on macOS is broken" (actions/setup-python#311)
  2. pypy3 tests are super slow; this will need some investigation.
mattip commented Jan 17, 2022

xref #485. Speaking as a PyPy dev: we have dropped support for PyPy3.6 and are going to drop PyPy3.7 soon.

A known issue is that coverage, which runs as a C extension on CPython but in pure-Python mode on PyPy, breaks the PyPy JIT, so the coverage test run takes 3x-4x longer on PyPy than on CPython.
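For context: the pure-Python fallback installs a trace hook via sys.settrace, so every line event is handled by Python-level code instead of the JIT-compiled path. A minimal sketch of that kind of line tracer (not coverage's actual implementation, just the mechanism):

```python
import sys

# Lines seen during the traced run, as (filename, lineno) pairs.
executed = set()

def tracer(frame, event, arg):
    # Called for frame events; recording every "line" event is what a
    # pure-Python coverage tracer has to do.
    if event == "line":
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer  # keep tracing inside nested calls

def run_with_line_tracing(func, *args, **kwargs):
    # Illustrative helper, not part of coverage's API.
    sys.settrace(tracer)
    try:
        return func(*args, **kwargs)
    finally:
        sys.settrace(None)
```

With a Python-level trace function active, PyPy cannot use its JIT-compiled code for the traced frames, which is where most of the slowdown comes from.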

BeyondEvil (Contributor Author)

> xref #485. Speaking as a PyPy dev: we have dropped support for PyPy3.6 and are going to drop PyPy3.7 soon.
>
> A known issue is that coverage, which runs as a C extension on CPython but in pure-Python mode on PyPy, breaks the PyPy JIT, so the coverage test run takes 3x-4x longer on PyPy than on CPython.

Good to know, thanks!

As you saw in the PR, I'm trying to get it to work on 3.8.

Ah, interesting re: coverage! I have an open issue to fix our coverage setup as well - I'll keep this in mind. 👍

BeyondEvil (Contributor Author)

@mattip It seems adding fetch-depth: 0 to the checkout step solved it. 🤷‍♂️
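For reference, the step looks roughly like this (a sketch; the step name and action version here are illustrative, not necessarily what the workflow file actually uses):

```yaml
- name: Checkout
  uses: actions/checkout@v3
  with:
    # Fetch the full history and tags rather than a shallow clone; tools
    # that derive the package version from git typically need this.
    fetch-depth: 0
```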

Any idea why the pypy3 tests on macOS took almost 30 minutes, compared to roughly one minute on regular Python?

mattip commented Mar 31, 2022

No, that is indeed strange. We expect tests to run about 2x slower, since that is roughly the speed of our interpreter vs. CPython without the JIT, and tests typically do not run long enough for the JIT to kick in. How would you suggest I run the tests directly (without tox) so I can profile?

BeyondEvil (Contributor Author) commented Mar 31, 2022

> No, that is indeed strange. We expect tests to run about 2x slower, since that is roughly the speed of our interpreter vs. CPython without the JIT, and tests typically do not run long enough for the JIT to kick in. How would you suggest I run the tests directly (without tox) so I can profile?

Put yourself in a venv with pypy3.8 (that's the version used in GHA) and just run pytest. Make sure you install the dependencies listed in the tox config (tox.ini). @mattip
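Roughly, something like this (a sketch; the exact dependency list is in tox.ini, so treat the package names below as illustrative):

```shell
# Create a PyPy 3.8 virtualenv and run the suite directly, without tox.
pypy3.8 -m venv .venv-pypy
source .venv-pypy/bin/activate

# Install pytest-html itself plus the plugins seen in the test session output.
pip install -e .
pip install pytest pytest-mock pytest-xdist pytest-cov pytest-rerunfailures

# testpaths points at "testing" in tox.ini.
pytest testing/
```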

mattip commented Mar 31, 2022

On master, I get a failure when trying to read from /tmp/pytest-of-matti/pytest-4/test_durations0/report.html, which does not exist. I see the test run is not pulling in the pytest-html plugin, even though I ran `pip install .`. Do I need to set an environment variable?

```
$ ../pypy3.8-HEAD/bin/pypy -m pytest -x

================ test session starts ==================
platform linux -- Python 3.8.13[pypy-7.3.10-alpha], pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/matti/oss/pytest-html, configfile: tox.ini, testpaths: testing
plugins: hypothesis-6.36.1, mock-3.7.0, xdist-2.5.0, cov-3.0.0, forked-1.4.0, rerun-0.0.1
collected 90 items                                                                                                                                             

testing/test_pytest_html.py F

=========================== FAILURES =======================================
__________________________ TestHTML.test_durations __________________________________

self = <test_pytest_html.TestHTML object at 0x00007fda62017a60>, \
testdir = <Testdir local('/tmp/pytest-of-matti/pytest-4/test_durations0')>

    def test_durations(self, testdir):
        sleep = float(0.2)
        testdir.makepyfile(
            """
            import time
            def test_sleep():
                time.sleep({:f})
        """.format(
                sleep * 2
            )
        )
>       result, html = run(testdir)

/home/matti/oss/pytest-html/testing/test_pytest_html.py:111: 
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/home/matti/oss/pytest-html/testing/test_pytest_html.py:42: in run
    return result, read_html(path)
 _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

path = local('/tmp/pytest-of-matti/pytest-4/test_durations0/report.html')

    def read_html(path):
>       with open(str(path)) as f:
E       FileNotFoundError: [Errno 2] No such file or directory: \
    '/tmp/pytest-of-matti/pytest-4/test_durations0/report.html'

/home/matti/oss/pytest-html/testing/test_pytest_html.py:46: FileNotFoundError
---------------------- Captured stderr call ----------------------------------
ERROR: usage: __main__.py [options] [file_or_dir] [file_or_dir] [...]
__main__.py: error: unrecognized arguments: --html
  inifile: None
  rootdir: /tmp/pytest-of-matti/pytest-4/test_durations0

============================ warnings summary ===================
testing/test_pytest_html.py:189
  /home/matti/oss/pytest-html/testing/test_pytest_html.py:189: PytestUnknownMarkWarning: Unknown pytest.mark.flaky - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.flaky(reruns=2)  # test is flaky on windows

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=============================== short test summary info ==========================
FAILED testing/test_pytest_html.py::TestHTML::test_durations - FileNotFoundError: \
    [Errno 2] No such file or directory: '/tmp/pytest-of-matti/pytest-4/test_du...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
======================== 1 failed, 1 warning in 0.70s ====================
(numpy) matti@matti-ryzen:~/oss/pytest-html$
```


Edit: I did not properly install the pytest-html plugin in my virtualenv.

@mattip
Copy link

mattip commented Mar 31, 2022

I did not properly install pytest-html. Now I get a different error: the HTML report does not capture the log:

```
html = '<!DOCTYPE html>\n<html>\n  <head>\n    <meta charset="utf-8"/>\n \
    <title>report.html</title>\n    <link href="asset...colspan="4">\n   \
         <div class="empty log">No log output captured.</div></td></tr></tbody></table></body></html>'
test_outcome = 'rerun', test_outcome_number = 0, label = None
```

Edit: got it - I had pytest-rerun installed, not pytest-rerunfailures. They are different packages.

mattip commented Mar 31, 2022

On Ubuntu 20.04, when I run with PyPy 3.8 I get 20 seconds; with CPython 3.8, about 10 seconds. This is the 2x I would expect. Running CPython with coverage does not slow things down very much, maybe 11 seconds. On CI (ubuntu-20.04), the relationship is again 2x. One Windows CI run shows a worse ratio: about 3x rather than 2x.

I think this is all within the range of expectations.

@BeyondEvil
Copy link
Contributor Author

> On Ubuntu 20.04, when I run with PyPy 3.8 I get 20 seconds; with CPython 3.8, about 10 seconds. This is the 2x I would expect. Running CPython with coverage does not slow things down very much, maybe 11 seconds. On CI (ubuntu-20.04), the relationship is again 2x. One Windows CI run shows a worse ratio: about 3x rather than 2x.
>
> I think this is all within the range of expectations.

That doesn't explain the 4x (without coverage) in GHA, but maybe we just have to live with it: https://github.com/pytest-dev/pytest-html/runs/5768011639?check_suite_focus=true

mattip commented Mar 31, 2022

I missed the 4x on macOS. Now that we see the problem we can try to work on it. Very strange that macOS and Windows are that much slower. @cfbolz.

mattip commented May 13, 2022

Could I add something like this to the test setup, to upload --durations timings as an artifact and then compare them via https://carreau.github.io/pytest-json-report-viewer/?
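Concretely, I'm thinking of something along these lines (a sketch; the pytest-json-report plugin as the source of the JSON, the action versions, and the matrix variables are assumptions, not the current workflow):

```yaml
- name: Run tests with a JSON timing report
  run: pytest --durations=20 --json-report --json-report-file=report.json

- name: Upload timing report
  uses: actions/upload-artifact@v3
  with:
    # Illustrative matrix variables; adjust to the actual workflow matrix.
    name: pytest-report-${{ matrix.os }}-${{ matrix.python-version }}
    path: report.json
```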

BeyondEvil (Contributor Author)

> Could I add something like this to the test setup, to upload --durations timings as an artifact and then compare them via https://carreau.github.io/pytest-json-report-viewer/?

Absolutely!

cfbolz commented May 24, 2022

FWIW, I have been running the pytest-html test suite with coverage locally as a kind of real-world benchmark for coverage performance (see linked issue). It seems we can speed up coverage on PyPy for pytest-html (and, I suspect, many other projects) by a decent amount.
