
TEST: Parallelize pytest #2469

Merged
satra merged 7 commits into nipy:master from effigies:ci/parallel_pytest on Mar 7, 2018
Conversation

effigies
Member

Was curious how much parallelizing could squeeze down pytest runtime, acknowledging that it's already not super long (~10 minutes out of a 40 minute build and test cycle).

This led down a doctest-shaped rabbit hole, because some workers were running the module-level doctests that changed directories and others were running the function-level doctests, and the context wasn't being shared so there were a couple stochastic crashes each build. So I figured out how to set a fixture that would chdir into the testing/data directory on doctests and only doctests. This seems cleaner than the old way of doing it anyway.
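For illustration, a doctest-only chdir fixture along these lines can live in conftest.py. This is a minimal sketch with hypothetical names and an assumed data path; the PR's actual in_testing fixture may be implemented differently:

```python
# conftest.py -- minimal sketch of a doctest-only chdir fixture
# (hypothetical names; the actual in_testing fixture in this PR may differ)
import os

import pytest
from _pytest.doctest import DoctestItem

# assumed location of the doctest data files
DATA_DIR = os.path.join(os.path.dirname(__file__), 'testing', 'data')


@pytest.fixture(autouse=True)
def _doctest_chdir(request, monkeypatch):
    """chdir into the data directory, but only for doctest items."""
    if isinstance(request.node, DoctestItem):
        # monkeypatch.chdir restores the original cwd when the test finishes,
        # so regular tests and other xdist workers are unaffected
        monkeypatch.chdir(DATA_DIR)
```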

Changes proposed in this pull request

  • Add pytest-xdist to testing dependencies
  • Add -n auto to pytest config
  • Add in_testing fixture to chdir for doctests
  • Remove old header chdir boilerplate

@effigies effigies changed the title CI: Parallelize pytest TEST: Parallelize pytest Feb 28, 2018
@oesteban
Contributor

In my experience, those two cores will perform as just one...

@effigies
Member Author

Well, looks like we're dropping from 6:30 to about 4:15, so about a 33% speedup, which isn't too bad for 2 cores.

@djarecka
Collaborator

I think nipype.test() won't work.

Also, one of the reasons I didn't move the imports etc. out of the files and into an auto-fixture was that I thought it would make it harder for users to follow these examples. Not sure how important that is.

@effigies
Member Author

I think nipype.test() won't work.

Yeah, I get one failure, but I'm not sure if that's related. I've never tried testing that way. I'll check on master.

Also, one of the reasons I didn't move the imports etc. out of the files and into an auto-fixture was that I thought it would make it harder for users to follow these examples. Not sure how important that is.

Doesn't seem super useful to me, since those docstrings are never rendered in the published docs. But I'm okay being overruled here.

@satra
Member

satra commented Feb 28, 2018

The docstrings are meant to guide individuals; they are not meant to execute a real computation. So I'm +1 for the fixture, as long as it doesn't get in the way of the docstrings (which, at least in this PR, it doesn't look like it does).

@djarecka
Collaborator

@effigies - I also don't usually test like that, but @satra always wanted to include this option so users can easily test nipype. I don't remember the details right now (I'd have to go back and check my previous PR), but it is related to how pytest searches for the root directory, etc. I never fully understood all the details.

Regarding docstrings, I just wanted to explain my point: I thought they should be designed so that users can search the file, copy-paste an example, and try to execute it.

@effigies
Member Author

effigies commented Feb 28, 2018

Okay, the failure introduced by this in nipype.test() is in the following test:

def test_callback_exception(tmpdir):
    tmpdir.chdir()
    so = Status()
    wf = pe.Workflow(name='test', base_dir=tmpdir.strpath)
    f_node = pe.Node(
        niu.Function(function=bad_func, input_names=[], output_names=[]),
        name='f_node')
    wf.add_nodes([f_node])
    wf.config['execution'] = {'crashdump_dir': wf.base_dir}
    try:
        wf.run(plugin="Linear", plugin_args={'status_callback': so.callback})
    except:
        pass
    assert len(so.statuses) == 2
    for (n, s) in so.statuses:
        assert n.name == 'f_node'
    assert so.statuses[0][1] == 'start'
    assert so.statuses[1][1] == 'exception'

The failing line is:

assert len(so.statuses) == 2

I don't immediately see what should be affecting it.

@djarecka
Collaborator

@effigies sorry, I think this time it's not related to the directory structure; you didn't change anything there. But it doesn't seem to like the -n auto flag. Do you need it?

@djarecka
Collaborator

@satra ok

@djarecka
Collaborator

(Almost) all tests pass on my OSX when I remove -n auto from pytest.ini.
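For reference, -n auto is the pytest-xdist option that starts one worker per available CPU; it is typically enabled through the addopts line of pytest.ini, roughly like this (an illustrative sketch, not necessarily the exact nipype configuration):

```ini
# pytest.ini -- illustrative sketch, not necessarily the exact nipype config
[pytest]
addopts = -n auto
```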

@effigies
Member Author

Oh, I see. We were overwriting some of what was in pytest.ini. For consistency, I dropped most of that so the same tests run inside nipype.test() as outside it, but added arguments for disabling both doctests and parallelism (see the sketch below).
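As a rough illustration of how such arguments can counteract pytest.ini defaults when pytest is invoked programmatically (the function and parameter names here are hypothetical, not nipype's actual API):

```python
# Hypothetical sketch of overriding pytest.ini defaults when running the test
# suite programmatically; names are illustrative, not nipype's actual API.
import pytest


def run_tests(doctests=True, parallel=True):
    args = []
    if not doctests:
        # -p no:doctest turns off pytest's built-in doctest plugin
        args += ['-p', 'no:doctest']
    if not parallel:
        # command-line options are applied after addopts, so -n 0 overrides
        # an -n auto default and runs everything in the main process
        args += ['-n', '0']
    return pytest.main(args)
```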

The Python 3.4 and 3.5 issue showing up on Travis is that dictionaries don't have guaranteed ordering before 3.6, so the xdist workers can end up collecting tests in different orders. See pytest-dev/pytest#1075. Someone there suggests adding pytest-env, but I'd rather not add yet another dependency if I don't have to.

@effigies effigies added this to the 1.0.2 milestone Feb 28, 2018
Dictionaries in Python < 3.6 are not ordered by insertion
@effigies
Member Author

effigies commented Mar 1, 2018

Went ahead and used pytest-env, because I couldn't come up with anything better without spending more effort than avoiding the dependency was worth. This is ready for review.
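For context, pytest-env sets environment variables from pytest.ini before tests run; pinning PYTHONHASHSEED is the workaround discussed in pytest-dev/pytest#1075, so every xdist worker hashes, and therefore orders, collected tests the same way. A rough sketch of what that can look like (the exact value and layout are assumptions, not necessarily what this PR uses):

```ini
# pytest.ini -- illustrative sketch of the pytest-env workaround
[pytest]
addopts = -n auto
env =
    PYTHONHASHSEED=0
```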

@satra satra merged commit bc6df5b into nipy:master Mar 7, 2018
@effigies effigies deleted the ci/parallel_pytest branch March 7, 2018 15:04