RF: Adjust test to accommodate the no-log behavior of run
#131
Conversation
Sorry, I don't get it; could you elaborate on the change in behavior that this PR introduces such that outside code needs fixing?
Here's my guess from datalad/datalad#4996: If I understand it correctly, the reason that …
datalad_container/tests/test_run.py (Outdated)

    subds.containers_run('XXX', container_name='mycontainer')
    assert_in('image=righthere cmd=XXX img_dspath=. name=mycontainer', cmo.out)
    out = WitlessRunner(cwd=subds.path).run(
        'datalad containers-run -n mycontainer XXX',
To make this work with the WitlessRunner in the released version of datalad, you could convert this and the other commands to a list.
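The list conversion suggested above can be sketched with stdlib tooling alone: `shlex.split` turns the shell-style command string (taken from the diff above) into the argument list that released runner versions expect. This is a minimal illustration, not the actual test code.

```python
import shlex

# Command string as currently passed in the test (string form):
cmd = "datalad containers-run -n mycontainer XXX"

# Convert to the list form the released runner expects.
# shlex.split tokenizes POSIX-style, so quoted arguments
# would survive as single list elements.
cmd_list = shlex.split(cmd)
print(cmd_list)
```

This keeps the command readable in the test source while satisfying a runner that only accepts a list of arguments.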
Hmm, but WitlessRunner isn't available until 0.13, and the current minimum version for -container is 0.12. Given that we probably want to avoid introducing new uses of the old Runner, and that this is just about capturing output for the tests, one option would be to use subprocess.run.
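As a sketch of the `subprocess.run` option mentioned above: capturing a command's output needs only the stdlib and works regardless of the installed datalad version. The command here is a stand-in (a Python one-liner echoing the expected marker), not the actual `containers-run` invocation.

```python
import subprocess
import sys

# Stand-in command: in the real test this would be the
# `datalad containers-run ...` invocation.
res = subprocess.run(
    [sys.executable, "-c", "print('image=righthere cmd=XXX')"],
    capture_output=True,  # Python 3.7+
    text=True,
    check=True,
)

# The captured stdout can then be asserted on directly,
# with no dependency on swallow_outputs() or a runner class.
assert "image=righthere" in res.stdout
```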
I think it is OK to raise the minimum required version to 0.13, since 0.12 is no longer really "supported".
I don't feel too strongly either way (and in general am not against bumping the datalad minimum version fairly aggressively in extensions), but my personal preference would be to not bump the minimum dependency for the sake of test code alone.
Related: maybe I should look into establishing testing of extensions against their claimed minimally supported versions... I wonder if -container is even still compatible with 0.12, since we never test that.
It seems that 0.12.1 might not be good enough: https://travis-ci.org/github/datalad/datalad-container/jobs/734406987 . Will try 0.13.0 now.
it seems that 0.12.1 might not be good enough
Thanks for looking into it. Those failures you saw for 0.12.1 happened when you bumped to 0.13 as well. I haven't looked more closely to try to figure out what's going on, though.
Locally I tried with a `pip -e`-installed 0.12.0, and -container's test suite passes for me. I also tried with a `pip install datalad==0.12.1`-installed 0.12.1, and the test suite passes. So, at least based on the test suite, I think we're still compatible with 0.12.x.
Anyway, I'm okay with bumping the minimum version to 0.13. I'll tweak this PR to do that, and to address the outstanding issues.
Yeah, my min-version testing causes some other fallout which I haven't had a chance to look into yet. Sorry for misleading.
This PR covers the …
The last commit started using WitlessRunner, and that isn't available until 0.13.0.
The WitlessRunner doesn't accept a list until df4f782bd8 (NF: WitlessRunner support platform shell command execution, 2020-10-09), which isn't yet in a release. (The current release is v0.13.4.)
datalad/datalad#5002 switches `datalad run` over to using WitlessRunner rather than the old Runner. swallow_outputs() works for Runner because it explicitly sets the stdout argument of Popen() to sys.stdout, which allows the swallow_outputs() override to be effective. This is not the case for WitlessRunner.

A few commits back already adjusted datalad_container/tests/test_run.py for the above change in behavior. Use the same approach in the remaining spots that swallow and check the output of a containers_run() call.

datalad/datalad#4996 (review)
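The behavior difference described above can be reproduced with stdlib pieces alone: swapping out the Python-level `sys.stdout` object (roughly what `swallow_outputs()` does) never touches file descriptor 1, which a child process inherits directly. This is a minimal illustration of the mechanism, not datalad code.

```python
import contextlib
import io
import subprocess
import sys

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    # The child writes to the real file descriptor 1, not to the
    # Python-level sys.stdout object, so the redirection sees nothing
    # unless the runner explicitly wires the child's stdout through
    # (as the old Runner did by passing stdout=sys.stdout to Popen).
    subprocess.run([sys.executable, "-c", "print('from child')"])

# Nothing was captured at the Python level.
assert buf.getvalue() == ""
```

This is why tests that relied on `swallow_outputs()` around a subprocess-spawning call stop seeing output once the command execution path no longer forwards the Python-level stdout.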
Codecov Report
@@ Coverage Diff @@
## master #131 +/- ##
==========================================
- Coverage 84.69% 84.17% -0.52%
==========================================
Files 17 17
Lines 797 790 -7
==========================================
- Hits 675 665 -10
- Misses 122 125 +3
Continue to review full report at Codecov.
794dac2 (TST: Adjust test for new error destination, 2020-11-19) tweaked a test in customremotes.tests.test_datalad to look at the exception string rather than stderr for an add_urls() failure. However, the assertion actually silently fails because it happens after the exception in the assert_raises() context. The string is not in the exception string because at typical log levels _call_annex() is called with StdOutCapture; so it's still on stderr, but the test thinks stderr is empty because swallow_outputs() doesn't work with the WitlessRunner [1].

It appears that the original reason for making the Runner (and now the WitlessRunner) not capture stderr was to let wget progress reporting through. This may still be effective in some cases (it doesn't seem to be on my end), but we can instead use `--json --progress ...`, going through our more standard progress handling. Do that by switching to _call_annex_records(..., progress=True).

Update the test to move the now-passing assertion outside of the assert_raises() context, and also drop the use of swallow_outputs() (which has no effect) and of swallow_logs() (which isn't used).

[1]: datalad/datalad-container#131 (comment)
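The silently-failing assertion described above is an instance of a general pitfall: statements placed after the raising call inside an exception-asserting context never execute at all. A minimal stdlib illustration, using `contextlib.suppress` as a stand-in for `assert_raises`:

```python
import contextlib

ran_after = []
with contextlib.suppress(ValueError):
    raise ValueError("add_urls failure")  # the block exits here
    ran_after.append(True)  # unreachable: the raise already left the block

# An assertion placed on the unreachable line would "pass" by never
# running, hiding a real failure; move such checks outside the context.
assert ran_after == []
```

Moving the assertion outside the `with` block, as the commit above does, is the standard fix.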
Introduced by datalad/datalad#4996