
RF: Adjust test to accommodate for no-log behavior of run #131

Merged
5 commits merged on Oct 15, 2020

Conversation

mih (Member) commented Oct 9, 2020

Introduced by datalad/datalad#4996

mih mentioned this pull request on Oct 9, 2020
yarikoptic (Member):

Sorry, I don't get it, could you elaborate on the change in behavior that PR introduces such that outside code needs fixing?

kyleam (Contributor) commented Oct 9, 2020

could you elaborate on the change in behavior that PR introduces such that outside code needs fixing?

Here's my guess from datalad/datalad#4996: If I understand it correctly, the reason that swallow_outputs works for Runner and not WitlessRunner is that Runner explicitly sets the stdout argument of Popen to sys.stdout, which allows the swallow_outputs override to be effective.
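
Here's a minimal sketch of that mechanism (plain Python rather than datalad code; a temporary file stands in for what swallow_outputs() does, so treat it as an illustration, not the actual implementation):

import subprocess
import sys
import tempfile

# Crude stand-in for swallow_outputs(): point sys.stdout at a temp file.
capture = tempfile.TemporaryFile(mode='w+')
orig_stdout, sys.stdout = sys.stdout, capture

# Old Runner style: stdout=sys.stdout resolves to the swapped-in file
# object, so the child's output ends up in `capture`.
subprocess.Popen(['echo', 'captured'], stdout=sys.stdout).wait()

# Without that explicit wiring (and, per the description above, with
# WitlessRunner), the Python-level swap of sys.stdout is never consulted,
# so this output is not captured.
subprocess.Popen(['echo', 'not captured']).wait()

sys.stdout = orig_stdout
capture.seek(0)
print(capture.read())   # prints only "captured"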

subds.containers_run('XXX', container_name='mycontainer')
assert_in('image=righthere cmd=XXX img_dspath=. name=mycontainer', cmo.out)
out = WitlessRunner(cwd=subds.path).run(
'datalad containers-run -n mycontainer XXX',
kyleam (Contributor) commented on the lines above, Oct 9, 2020:

To make this work with the WitlessRunner in the released version of datalad, you could convert this and the other commands to a list.
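
For illustration, here is that list conversion applied to the hunk above; the StdOutCapture protocol and the out['stdout'] access are my assumptions about the WitlessRunner API, not something spelled out in this thread:

from datalad.cmd import WitlessRunner, StdOutCapture

# Pass the command as an argument list, which the released WitlessRunner
# can execute without shell-string support.
out = WitlessRunner(cwd=subds.path).run(
    ['datalad', 'containers-run', '-n', 'mycontainer', 'XXX'],
    protocol=StdOutCapture,
)
assert_in('image=righthere cmd=XXX img_dspath=. name=mycontainer',
          out['stdout'])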

Contributor:

Hmm, but WitlessRunner isn't available until 0.13 and the current minimum version for -container is 0.12. Given that we probably want to avoid introducing new uses of the old Runner and that this is just about capturing output for the tests, one option would be to use subprocess.run.
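
A sketch of that subprocess.run alternative, reusing the command and expected output from the hunk above (the capture_output/text/check arguments and the res.stdout check are illustrative choices on my part):

import subprocess

# Capture the command's output with the standard library, avoiding a
# dependency on WitlessRunner (0.13+) or on the old Runner.
res = subprocess.run(
    ['datalad', 'containers-run', '-n', 'mycontainer', 'XXX'],
    cwd=subds.path, capture_output=True, text=True, check=True,
)
assert_in('image=righthere cmd=XXX img_dspath=. name=mycontainer', res.stdout)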

Member:

I think it is ok to raise the minimal required version to 0.13 since 0.12 is no longer "supported" really

Contributor:

I don't feel too strongly either way (and in general am not against bumping the datalad minimum version fairly aggressively in extensions), but my personal preference would be to not bump the minimum dependency for the sake of test code alone.

Member:

Related: maybe I should look into establishing testing of extensions against their claimed minimally supported versions... I wonder if -container is still compatible with 0.12, since we never test that.

Member:

It seems that 0.12.1 might not be good enough: https://travis-ci.org/github/datalad/datalad-container/jobs/734406987. Will try 0.13.0 now.

Contributor:

it seems that 0.12.1 might not be good enough

Thanks for looking into it. Those failures you saw for 0.12.1 happened when you bumped to 0.13 as well. I haven't looked more closely to try to figure out what's going on, though.

Locally I tried with a pip -e-installed 0.12.0, and -container's test suite passes for me. I also tried with 0.12.1 installed via pip install datalad==0.12.1, and the test suite passes there as well. So, at least based on the test suite, I think we're still compatible with 0.12.x.

Anyway, I'm okay with bumping the minimum version to 0.13. I'll tweak this PR to do that, and to address the outstanding issues.

Member:

Yeah, my min-version testing causes some other fallout which I did not have a chance to look into yet. Sorry for misleading.

kyleam (Contributor) commented Oct 9, 2020

This PR covers the test_custom_call_fmt failure, but TestAdapterBusyBox.test_containers_run showed a similar failure in the -container run at datalad/datalad#4996.

The last commit started using WitlessRunner, and that isn't available
until 0.13.0.
The WitlessRunner doesn't accept a shell string until df4f782bd8 (NF:
WitlessRunner support platform shell command execution, 2020-10-09),
which isn't yet in a release.  (The current release is v0.13.4.)
datalad/datalad#5002 switches `datalad run` over to using
WitlessRunner rather than the old Runner.  swallow_outputs() works for
Runner because it explicitly sets the stdout argument of Popen() to
sys.stdout, which allows the swallow_outputs() override to be
effective.  This is not the case for WitlessRunner.

A few commits back already adjusted
datalad_container/tests/test_run.py for the above change in behavior.
Use the same approach in the remaining spots that swallow and check
the output of a containers_run() call.

datalad/datalad#4996 (review)
codecov bot commented Oct 13, 2020

Codecov Report

Merging #131 into master will decrease coverage by 0.51%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #131      +/-   ##
==========================================
- Coverage   84.69%   84.17%   -0.52%     
==========================================
  Files          17       17              
  Lines         797      790       -7     
==========================================
- Hits          675      665      -10     
- Misses        122      125       +3     
Impacted Files                                     Coverage Δ
datalad_container/adapters/tests/test_docker.py   93.22% <100.00%> (-0.23%) ⬇️
datalad_container/tests/test_run.py               66.00% <100.00%> (-1.31%) ⬇️
datalad_container/tests/test_schemes.py           100.00% <100.00%> (ø)
datalad_container/containers_run.py               80.00% <0.00%> (-6.00%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 7327a7b...6b744cd

yarikoptic merged commit f8c7bd3 into datalad:master on Oct 15, 2020
kyleam added a commit to kyleam/datalad that referenced this pull request Jan 15, 2021
794dac2 (TST: Adjust test for new error destination, 2020-11-19)
tweaked a test in customremotes.tests.test_datalad to look at the
exception string rather than stderr for an add_urls() failure.
However, the assertion actually silently fails because it happens
after the exception in the assert_raises() context.  The string is not
in the exception string because at typical log levels _call_annex() is
called with StdOutCapture; so it's still on stderr, but the test
thinks stderr is empty because swallow_outputs() doesn't work with the
WitlessRunner [1].

It appears that the original reason for making the Runner (and now the
WitlessRunner) not capture stderr was to let wget progress reporting
through.  This may still be effective in some cases (it doesn't seem
to be on my end), but we can instead use `--json --progress ...`,
going through our more standard progress handling.

Do that by switching to _call_annex_records(..., progress=True).
Update the test to move the now-passing assertion outside of the
assert_raises() context and also drop the use of swallow_outputs
(which has no effect) and of swallow_logs (which isn't used).

[1]: datalad/datalad-container#131 (comment)