pbench-results-push: logging fixes and better reporting #3348
Conversation
The Jenkins issue looked like a transient failure, so I restarted the build.
It looks like we're not handling failure properly.
And, some of the tests need additional assertions.
Force-pushed from a83f4d4 to b023aa9
I got rid of the changes to move.py (and its tests): although that will need attention, I just want to concentrate on push for this PR. With that proviso, I think I've addressed all the comments, except parametrizing the tests. I'll revisit that at some point, but I don't want to do it now.
I think we're reaching the point where it'd be nice to get it in; so while I'm quibbling about the testing, I'm going to approve anyway.
Looks good
Looks good, I just have extensions to Dave's nits.
I think `move.py` needs to be updated to check for and handle errors from `copy_result_tb()`.
310d069
I think this would work, no? I'll put it back in draft mode and go do the tests if that looks OK.
There are a few rough edges here now which need smoothing.
There are a couple of places where this could be streamlined, removing some of your comments about "trouble" areas ... but I won't obsess about it.
```python
if status_code:
    if message is None:
        responses.add(
            responses.PUT,
            f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
            status=status_code,
        )
    elif isinstance(message, dict):
        responses.add(
            responses.PUT,
            f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
            status=status_code,
            json=message,
        )
    # I can only add a `json` attribute to the response. Trying `text` or
    # `reason` makes responses.add() barf:
    #   File "/var/tmp/nick/tox/py39/lib/python3.9/site-packages/responses/__init__.py", line 770, in add
    #     response = Response(method=method, url=url, body=body, **kwargs)
    #   File "/var/tmp/nick/tox/py39/lib/python3.9/site-packages/responses/__init__.py", line 563, in __init__
    #     super().__init__(method, url, **kwargs)
    #   TypeError: __init__() got an unexpected keyword argument 'reason'
else:
    responses.add(
        responses.PUT,
        f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
    )
```
It seems straightforward enough:

```python
if status_code:
    if isinstance(message, dict):
        responses.add(
            responses.PUT,
            f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
            status=status_code,
            json=message,
        )
    elif isinstance(message, (str, BufferedReader, Exception)):
        responses.add(
            responses.PUT,
            f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
            status=status_code,
            body=message,
        )
    else:
        responses.add(
            responses.PUT,
            f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
            status=status_code,
        )
else:
    responses.add(
        responses.PUT,
        f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
    )
```

Or, my biases lead me to try a single call with conditional data:

```python
parms = {}
if status_code:
    parms["status"] = status_code
    if isinstance(message, dict):
        parms["json"] = message
    elif isinstance(message, (str, BufferedReader, Exception)):
        parms["body"] = message
responses.add(
    responses.PUT,
    f"{TestResultsPush.URL}/upload/{os.path.basename(tarball)}",
    **parms,
)
```
Yes, that's basically what I did (the first version), but where did the `BufferedReader` and `Exception` cases come from? I don't understand why they are needed.
I'll do one more push using your single call example probably.
Those are the types, according to the documentation, that the `body` parameter will accept.
Force-pushed from 9ec1294 to 1f2aa6f
Looks good. There are just two (new) small things which we can ignore if we must.
I leave it up to you whether to add `Exception` back to the list of types which `add_http_mock_response()` can handle (but, if you add it, then we can use `add_http_mock_response()` in place of the existing `add_connectionerr_mock_response()` ...).
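For reference, the reason an `Exception` body can subsume the connection-error helper is that, per the `responses` documentation, passing an exception instance as `body` makes the mock raise it when the request fires. A stand-in sketch of that contract (a simplification, not the real library):

```python
def simulate_mock_body(body):
    """Stand-in for how a mock HTTP layer treats its `body` argument:
    exception instances are raised at request time, anything else is
    returned as the response payload (hypothetical simplification)."""
    if isinstance(body, Exception):
        raise body
    return body
```

With that behavior, registering `body=ConnectionError(...)` through the general-purpose helper reproduces what the dedicated connection-error helper does.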
```python
try:
    err_msg = "" if not message else message["message"]
except TypeError:
    err_msg = message
```
Since the value of `message` is determined by the `parametrize()` list which we control, having a `try` block here seems overdone.
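A sketch of what replacing the `try` block might look like, branching explicitly on the shapes the parametrize list actually supplies (None, a dict with a "message" key, or a plain string; the helper name is hypothetical):

```python
def extract_err_msg(message):
    # Hypothetical alternative to the try/except above: branch on the
    # known shapes instead of catching TypeError from the subscript.
    if message is None:
        return ""
    if isinstance(message, dict):
        return message["message"]
    return message
```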
I opened issue #3374 to deal with the remaining problems and objections. I think I have captured them all.
```python
if status_code >= HTTPStatus.BAD_REQUEST:
    err_msg = f"HTTP Error status: {status_code.value}, message: {err_msg}"
```
I'll grant that the numeric values of the `HTTPStatus` codes are all well-defined, but, for the most part, they are not ordered, so applying a relational operator (`>=`) to them seems inappropriate.
Choosing the least of evils here, I would use a bare `400`, as that is an architected number (although it would be better if we had a symbolic constant for it with an appropriate semantic, unlike `BAD_REQUEST`).
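For context, Python's `HTTPStatus` members are `IntEnum` values, so the relational comparison in question is numerically well-defined even if its semantics are debatable; a minimal demonstration:

```python
from http import HTTPStatus

# HTTPStatus members are IntEnum values, so relational operators fall
# back to the underlying integer codes: every 4xx/5xx status is
# numerically >= 400, and every 1xx-3xx status is below it.
assert HTTPStatus.BAD_REQUEST == 400
assert HTTPStatus.NOT_FOUND >= HTTPStatus.BAD_REQUEST        # 404
assert HTTPStatus.INTERNAL_SERVER_ERROR >= HTTPStatus.BAD_REQUEST  # 500
assert HTTPStatus.CREATED < HTTPStatus.BAD_REQUEST           # 201
assert HTTPStatus.OK < 400                                   # 200
```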
I opened issue #3374 to deal with the remaining problems and objections. I think I have captured them all.
The logger only uses the default StreamHandler now, which outputs to stderr.
copy_result_tb returns the response from the server. The callers are responsible for interpreting it. However, it can still raise an exception (e.g. on connection error).
- push.py: construct a reasonable message out of the response and check if the HTTP status is 201: we exit with 0 if so. Otherwise, if the status is OK (i.e. < 400), then we exit with 0 but print a message with a reasonably self-explanatory status on stderr. We expect that we will only ever get two OK responses: a 201, which indicates a successful PUT with a newly created dataset, and a 200, which indicates a duplicate dataset. In all non-OK cases, we output the message on stderr and exit with 1.
- move.py: check the response - if not OK (>= 400), raise an exception.
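The exit-code policy described here can be sketched as follows (the function name and message format are illustrative, not the actual push.py code):

```python
import sys
from http import HTTPStatus

def interpret_put_response(status_code, message):
    # Illustrative sketch of the policy above, not the real push.py code.
    # 201 (Created): success, exit 0 silently.
    # Other OK statuses (< 400, e.g. 200 for a duplicate dataset):
    #   exit 0 but report the status on stderr.
    # Anything else: report on stderr and exit 1.
    if status_code == HTTPStatus.CREATED:
        return 0
    if status_code < HTTPStatus.BAD_REQUEST:
        print(f"HTTP status: {status_code}, message: {message}", file=sys.stderr)
        return 0
    print(f"HTTP Error status: {status_code}, message: {message}", file=sys.stderr)
    return 1
```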
copy_result_tb() raises exceptions in a couple of cases. Catch those exceptions in ResultsPush and print out the message on stderr, but keep the backtrace invisible.
Monkeypatching pathlib components (.exists() and .open()) causes pytest to blow up. Limit the scope by using `monkeypatch.context()' so that it is only applied to the objects under test (and not e.g. to what is used in checking assertions).
This is just the first step, but I've had enough for now.
Eliminate duplicated exception handling. Tried to eliminate a couple of unreachable assertions, but then flake8 complained about an unused variable, so I left the assertions in and added comments.
- delete wayward comment in test_results_push.py
- delete the unused variable and follow the suggestion in a previous review to get rid of the unreachable assertions
- delete the comment in test_results_push.py about `text` and `reason`. I'm not going to implement anything here this time around, but will leave the parametrization cleanup and improvements for the next PR.
- fix the precedence problem with the ternary operator.
Fix the ternary operator precedence issue that Webb pointed out, although that required a bit more work to reproduce the actual error message. Replace all the `str.find() > -1` with the `in` version. Use `body` attribute to add string test cases (in addition to the `json` test cases).
Use the single call method that Webb suggested for the mock response. I left out the BufferedReader case because I don't understand it. I left in the Exception case although I don't understand it either, but I'm guessing it can help to unify some of the "abnormal" test cases with the "normal" one - but it is not used in this incarnation.
I'll bring it back after testing it (but probably not in this PR).
Force-pushed from 1f2aa6f to 9e4f80b
…b0.72
* Do not use the file logger at all in python commands. The logger only uses the default StreamHandler now, which outputs to stderr.
* copy_result_tb: return the response to the PUT request. copy_result_tb returns the response from the server. The callers are responsible for interpreting it. However, it can still raise an exception (e.g. on connection error).
  - push.py: construct a reasonable message out of the response and check if the HTTP status is 201: we exit with 0 if so. Otherwise, if the status is OK (i.e. < 400), then we exit with 0 but print a message with a reasonably self-explanatory status on stderr. We expect that we will only ever get two OK responses: a 201, which indicates a successful PUT with a newly created dataset, and a 200, which indicates a duplicate dataset. In all non-OK cases, we output the message on stderr and exit with 1.
  - move.py: check the response - if not OK (>= 400), raise an exception.
* Fix the monkeypatching in test_copy_result_tb.py. Monkeypatching pathlib components (.exists() and .open()) causes pytest to blow up. Limit the scope by using `monkeypatch.context()` so that it is only applied to the objects under test (and not e.g. to what is used in checking assertions).
* Use symbolic constant instead of explicit 201.
* Parametrize the "normal" test. This is just the first step - see issue distributed-system-analysis#3374 for some more work along these lines.
Fixes distributed-system-analysis#3374. This is a continuation of distributed-system-analysis#3348, implementing fixes to the pbench_results_push tests. The main one is to subsume some exception tests under the parametrized "normal" case and eliminate redundant tests.
…istributed-system-analysis#3378 to b0.72
Cherry-pick the two commits from the above PRs into b0.72. No changes were necessary and no conflicts arose.
* Do not use the file logger at all in python commands. The logger only uses the default StreamHandler now, which outputs to stderr.
* copy_result_tb: return the response to the PUT request. copy_result_tb returns the response from the server. The callers are responsible for interpreting it. However, it can still raise an exception (e.g. on connection error).
  - push.py: construct a reasonable message out of the response and check if the HTTP status is 201: we exit with 0 if so. Otherwise, if the status is OK (i.e. < 400), then we exit with 0 but print a message with a reasonably self-explanatory status on stderr. We expect that we will only ever get two OK responses: a 201, which indicates a successful PUT with a newly created dataset, and a 200, which indicates a duplicate dataset. In all non-OK cases, we output the message on stderr and exit with 1.
  - move.py: check the response - if not OK (>= 400), raise an exception.
* Fix the monkeypatching in test_copy_result_tb.py. Monkeypatching pathlib components (.exists() and .open()) causes pytest to blow up. Limit the scope by using `monkeypatch.context()` so that it is only applied to the objects under test (and not e.g. to what is used in checking assertions).
* Parametrize the "normal" test in test_push_results.py. Add the connection error test to the parametrized set and eliminate some (now) redundant tests.
* pbench-results-push: Backport of #3348 and #3378 to b0.72. Cherry-pick the two commits from the above PRs into b0.72. No changes were necessary and no conflicts arose.
* Do not use the file logger at all in python commands. The logger only uses the default StreamHandler now, which outputs to stderr.
* copy_result_tb: return the response to the PUT request. copy_result_tb returns the response from the server. The callers are responsible for interpreting it. However, it can still raise an exception (e.g. on connection error).
  - push.py: construct a reasonable message out of the response and check if the HTTP status is 201: we exit with 0 if so. Otherwise, if the status is OK (i.e. < 400), then we exit with 0 but print a message with a reasonably self-explanatory status on stderr. We expect that we will only ever get two OK responses: a 201, which indicates a successful PUT with a newly created dataset, and a 200, which indicates a duplicate dataset. In all non-OK cases, we output the message on stderr and exit with 1.
  - move.py: check the response - if not OK (>= 400), raise an exception.
* Fix the monkeypatching in test_copy_result_tb.py. Monkeypatching pathlib components (.exists() and .open()) causes pytest to blow up. Limit the scope by using `monkeypatch.context()` so that it is only applied to the objects under test (and not e.g. to what is used in checking assertions).
* Parametrize the "normal" test in test_push_results.py. Add the connection error test to the parametrized set and eliminate some (now) redundant tests.
* Add back inadvertently deleted test. test_multiple_meta_args() should not have been deleted.
* Changed `copy_result_tb` to return the server response without interpreting it (although it still raises exceptions on some errors, e.g. connection errors). The response status codes are interpreted in `push.py` and `move.py`. `push.py` is silent on success; it outputs an error message but still exits with 0 on any other OK status; otherwise, it outputs an error message and exits with 1. `move.py` translates all non-OK responses to exceptions.
* Avoided monkeypatching the `pathlib` components that were used by pytest in checking assertions and were causing internal failures in `pytest` itself.
* Parametrized some of the tests for `pbench-results-push` - there is more to be done here (see issue "Testing fixes for test_results_push.py" #3374).