Tests failing with extraneous output due to log level #7026

Open
chrahunt opened this issue Sep 16, 2019 · 1 comment
Labels

C: tests (Testing and related things) · state: awaiting PR (Feature discussed, PR is needed) · type: maintenance (Related to Development and Maintenance Processes)

Comments

@chrahunt (Member)

Environment

  • pip version: master
  • Python version: 2.7/3.7
  • OS: Windows

Description

In #7023 there were test failures in Azure Pipelines CI for Windows. These seemed unrelated to the changes being made at the time.

The specific failures appear to be caused by unexpected log messages emitted by pep425tags, as shown in the output below. The messages are unexpected because pip's default log level is WARNING, yet the captured records are DEBUG level. The failing test does not set the caplog level itself. Some other tests do call caplog.set_level(DEBUG), but the pytest docs indicate that the log level should be reset at the end of each test.
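
A minimal sketch of how a stale logger-level change could produce exactly this symptom, assuming the mechanism is a leftover setLevel() from another test (the module and message are taken from the output below; nothing here is pip code):

```python
import logging

# caplog's capturing handler sits on the root logger at level NOTSET (0),
# so whether a DEBUG record is captured depends only on the library
# logger's *effective* level, not on any handler threshold.
records = []

class CapturingHandler(logging.Handler):
    def emit(self, record):
        records.append(record)

logging.getLogger().addHandler(CapturingHandler(level=logging.NOTSET))

lib = logging.getLogger("pip._internal.pep425tags")

# Default effective level is WARNING (inherited from root): DEBUG is dropped.
lib.debug("Config variable '%s' is unset, Python ABI tag may be incorrect", "X")

# A leftover setLevel(DEBUG) from an earlier test changes that.
lib.setLevel(logging.DEBUG)
lib.debug("Config variable '%s' is unset, Python ABI tag may be incorrect", "X")

print(len(records))  # 1: only the record logged after the level leaked
```

If that is what is happening, the captured records would show up in any later test on the same worker, which would also explain why the failures look unrelated to the PR under test.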

Expected behavior

Tests passing consistently.

How to Reproduce

Reproduction seems somewhat arbitrary, since other PRs around the same time (e.g. #7025) did not fail. Creating a commit on top of 51d7a97 (the last commit of the PR where the issue was observed) and submitting a PR for it may reproduce it.

Output

Python 2.7 - Windows x86 (link)
__ TestCandidateEvaluator.test_sort_best_candidate__best_yanked_but_not_all ___
[gw1] win32 -- Python 2.7.16 d:\a\1\s\.tox\py\scripts\python.exe

self = <tests.unit.test_index.TestCandidateEvaluator instance at 0x05E88530>
caplog = <_pytest.logging.LogCaptureFixture object at 0x06829250>

    def test_sort_best_candidate__best_yanked_but_not_all(
        self, caplog,
    ):
        """
            Test the best candidates being yanked, but not all.
            """
        candidates = [
            make_mock_candidate('4.0', yanked_reason='bad metadata #4'),
            # Put the best candidate in the middle, to test sorting.
            make_mock_candidate('2.0'),
            make_mock_candidate('3.0', yanked_reason='bad metadata #3'),
            make_mock_candidate('1.0'),
        ]
        expected_best = candidates[1]
        evaluator = CandidateEvaluator.create('my-project')
        actual = evaluator.sort_best_candidate(candidates)
        assert actual is expected_best
        assert str(actual.version) == '2.0'
    
        # Check the log messages.
>       assert len(caplog.records) == 0
E       assert 3 == 0
E        +  where 3 = len([<logging.LogRecord object at 0x06829650>, <logging.LogRecord object at 0x06829E70>, <logging.LogRecord object at 0x061263D0>])
E        +    where [<logging.LogRecord object at 0x06829650>, <logging.LogRecord object at 0x06829E70>, <logging.LogRecord object at 0x061263D0>] = <_pytest.logging.LogCaptureFixture object at 0x06829250>.records

tests\unit\test_index.py:550: AssertionError
Python 3.7 - Windows x64 (link)
__ TestCandidateEvaluator.test_sort_best_candidate__best_yanked_but_not_all ___
[gw2] win32 -- Python 3.7.4 d:\a\1\s\.tox\py\scripts\python.exe

self = <tests.unit.test_index.TestCandidateEvaluator object at 0x000002A1B322EBC8>
caplog = <_pytest.logging.LogCaptureFixture object at 0x000002A1B2FCDC48>

    def test_sort_best_candidate__best_yanked_but_not_all(
        self, caplog,
    ):
        """
            Test the best candidates being yanked, but not all.
            """
        candidates = [
            make_mock_candidate('4.0', yanked_reason='bad metadata #4'),
            # Put the best candidate in the middle, to test sorting.
            make_mock_candidate('2.0'),
            make_mock_candidate('3.0', yanked_reason='bad metadata #3'),
            make_mock_candidate('1.0'),
        ]
        expected_best = candidates[1]
        evaluator = CandidateEvaluator.create('my-project')
        actual = evaluator.sort_best_candidate(candidates)
        assert actual is expected_best
        assert str(actual.version) == '2.0'
    
        # Check the log messages.
>       assert len(caplog.records) == 0
E       assert 2 == 0
E        +  where 2 = len([<LogRecord: pip._internal.pep425tags, 10, d:\a\1\s\.tox\py\lib\site-packages\pip\_internal\pep425tags.py, 97, "Config...y\lib\site-packages\pip\_internal\pep425tags.py, 97, "Config variable '%s' is unset, Python ABI tag may be incorrect">])
E        +    where [<LogRecord: pip._internal.pep425tags, 10, d:\a\1\s\.tox\py\lib\site-packages\pip\_internal\pep425tags.py, 97, "Config...y\lib\site-packages\pip\_internal\pep425tags.py, 97, "Config variable '%s' is unset, Python ABI tag may be incorrect">] = <_pytest.logging.LogCaptureFixture object at 0x000002A1B2FCDC48>.records

tests\unit\test_index.py:550: AssertionError

Additional Context

@chrahunt added the C: tests (Testing and related things), type: maintenance (Related to Development and Maintenance Processes), and state: awaiting PR (Feature discussed, PR is needed) labels on Sep 16, 2019
@pradyunsg (Member)


I looked at the sources (https://docs.pytest.org/en/latest/_modules/_pytest/logging.html#LogCaptureFixture) and I think that if we ever modify the logging module's state in-memory, that might cause issues?

The only "reset" that LogCaptureFixture does is when caplog.set_level is used to set the level.
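
For what it's worth, a minimal sketch of that difference, mirroring the save/restore bookkeeping visible in the linked sources (this is an illustration, not pip or pytest code):

```python
import logging

logger = logging.getLogger("pip._internal.pep425tags")

# Roughly what caplog.set_level does: remember the previous level and
# restore it at fixture teardown.
old_level = logger.level
logger.setLevel(logging.DEBUG)
# ... test body runs ...
logger.setLevel(old_level)  # teardown: the change is undone

print(logger.level == logging.NOTSET)  # True: no leak into later tests

# A bare setLevel() call in a test has no such bookkeeping, so the
# change persists for every test that runs afterwards in the process.
logger.setLevel(logging.DEBUG)
print(logger.level == logging.DEBUG)   # True: leaks into later tests
```

So any test (or code under test) that calls setLevel() directly, rather than going through caplog.set_level, would leave the level changed for the rest of the worker's tests.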
