
Output Property in results.json is Not Being Populated When a Test Fails #66

Closed · BethanyG opened this issue May 26, 2021 · 1 comment · Fixed by #86
Labels: bug (Something isn't working)

@BethanyG (Member) commented:

If a test fails during a test run and the user has used `print()` or other debugging methods, the corresponding `output` property is not created/populated in the `results.json` file. `output` is only populated when a test succeeds, which defeats some of the purpose of having it.
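For illustration only (I haven't traced the runner's actual code), this is the shape of control flow that would produce the symptom; `run_one` and `make_entry` are hypothetical names, not the runner's API:

```python
import io
import unittest
from contextlib import redirect_stdout

def run_one(test: unittest.TestCase) -> unittest.TestResult:
    """Hypothetical stand-in for the runner's per-test execution."""
    result = unittest.TestResult()
    test.run(result)
    return result

# Suspected failure mode (sketch, NOT the runner's code): the captured text
# is attached to the JSON entry only on the success branch.
def make_entry(test: unittest.TestCase) -> dict:
    captured = io.StringIO()
    with redirect_stdout(captured):
        result = run_one(test)
    entry = {"status": "pass" if result.wasSuccessful() else "fail"}
    if result.wasSuccessful():
        entry["output"] = captured.getvalue()  # only reached on success
    # on failure the entry gets status/message but never an "output" key
    return entry
```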

Steps to reproduce:

Using the `tests/example-partial-failure-with-subtests/` test:

1. Add `print("User output is captured!")` on line 8 of `example_partial_failure_with_subtests.py`.
2. Run `./bin/run.sh example-partial-failure-with-subtests`.
3. Note that `results.json` has an entry that looks like this for `ExampleSuccessTest.test_abc`:

   ```json
   {
     "name": "ExampleSuccessTest.test_abc",
     "status": "pass",
     "test_code": "input_data = ['frog', 'fish', 'coconut', 'pineapple', 'carrot', 'cucumber', 'grass', 'tree']\nresult_data = [(\"Hello, World!\", param) for param in input_data]\nnumber_of_variants = range(1, len(input_data) + 1)\n\nfor variant, param, result in zip(number_of_variants, input_data, result_data):\n    with self.subTest(f\"variation #{variant}\", param=param, result=result):\n        self.assertEqual(hello(param), result,",
     "task_id": 1,
     "output": "User output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!"
   }
   ```

4. Go back to `example_partial_failure_with_subtests.py` and add `print("Param is: ", param)` on line 6. This should cause the `test_hello` test and its variants to fail.
5. Run `./bin/run.sh example-partial-failure-with-subtests` again.
6. Note that `results.json` now has entries for the failed variants, as well as the successful test. Note the difference: the failing entries have no `output` key (the expected shape is sketched after the snippet below):
```json
{
  "name": "ExampleSuccessTest.test_abc",
  "status": "pass",
  "test_code": "input_data = ['frog', 'fish', 'coconut', 'pineapple', 'carrot', 'cucumber', 'grass', 'tree']\nresult_data = [(\"Hello, World!\", param) for param in input_data]\nnumber_of_variants = range(1, len(input_data) + 1)\n\nfor variant, param, result in zip(number_of_variants, input_data, result_data):\n    with self.subTest(f\"variation #{variant}\", param=param, result=result):\n        self.assertEqual(hello(param), result,",
  "task_id": 1,
  "output": "User output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!\nUser output is captured!"
},
{
  "name": "ExampleSuccessTest.test_hello",
  "status": "fail",
  "message": "One or more subtests for this test failed. Details can be found under each variant.",
  "test_code": "input_data = [1, 2, 5, 10]\nresult_data = [(\"Hello, World!\", param) for param in input_data]\nnumber_of_variants = range(1, len(input_data) + 1)\n\nfor variant, param, result in zip(number_of_variants, input_data, result_data):\n    with self.subTest(f\"variation #{variant}\", param=param, result=result):\n        self.assertEqual(hello(param), result,",
  "task_id": 1
},
{
  "name": "ExampleSuccessTest.test_hello [variation #1] (param=1, result=('Hello, World!', 1))",
  "status": "fail",
  "message": "AssertionError: 'Hello, World!' != ('Hello, World!', 1) : Expected: ('Hello, World!', 1) but got something else instead.",
  "test_code": "input_data = [1, 2, 5, 10]\nresult_data = [(\"Hello, World!\", param) for param in input_data]\nnumber_of_variants = range(1, len(input_data) + 1)\n\nfor variant, param, result in zip(number_of_variants, input_data, result_data):\n    with self.subTest(f\"variation #{variant}\", param=param, result=result):\n        self.assertEqual(hello(param), result,",
  "task_id": 1
}
```
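For contrast, if output were captured on failure as intended, I'd expect the failing variant entry to carry an `output` property too, roughly like this (`test_code` and `task_id` omitted for brevity; the exact `output` value is an assumption based on the `print()` added in step 4):

```json
{
  "name": "ExampleSuccessTest.test_hello [variation #1] (param=1, result=('Hello, World!', 1))",
  "status": "fail",
  "message": "AssertionError: 'Hello, World!' != ('Hello, World!', 1) : Expected: ('Hello, World!', 1) but got something else instead.",
  "output": "Param is:  1"
}
```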

The expected behavior is to have captured output for all tests, regardless of status. We should be capturing both stdout and stderr.
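A minimal sketch of outcome-independent capture, assuming plain `unittest` (the function name and return shape are mine, not the runner's):

```python
import io
import unittest
from contextlib import redirect_stdout, redirect_stderr

def run_with_capture(test: unittest.TestCase) -> tuple[unittest.TestResult, str]:
    """Run a single test case, capturing stdout and stderr unconditionally."""
    out, err = io.StringIO(), io.StringIO()
    result = unittest.TestResult()
    with redirect_stdout(out), redirect_stderr(err):
        test.run(result)
    # read the captured text BEFORE branching on pass/fail, so it is
    # available for the "output" property regardless of test status
    return result, out.getvalue() + err.getvalue()
```

The key point is that the capture context exits, and its contents are read, before any branching on test outcome.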

@BethanyG (Member, Author) commented:

Linking the PyTest docs on capturing stdout and stderr for reference: https://docs.pytest.org/en/6.2.x/capture.html.
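If the runner drives tests through pytest, one option would be reading the captured streams off the test report in a `conftest.py` hook. `capstdout`/`capstderr` are documented attributes of pytest's `TestReport` and are populated for passing and failing tests alike; how this would be wired into `results.json` is an assumption:

```python
# conftest.py (sketch, assuming the runner invokes pytest)
import pytest

collected_output: dict[str, str] = {}  # nodeid -> captured output (assumed store)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        # populated whether the test passed or failed
        collected_output[item.nodeid] = report.capstdout + report.capstderr
```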
