TEST: dataset-level model spec retrieval #693
Conversation
Force-pushed from 44b8882 to 237a31c.
Rebased to run tests.
@tyarkoni I'm going to revert all style changes except …
Sounds good, sorry about that.
Actually, I think the failures are related to the change in the model. Never mind.
Ah, sorry, I forgot to run the other tests before pushing. These failures actually suggest a deeper problem that will probably require more than just amending the tests. It looks like pybids doesn't currently broadcast higher-level metadata properly when there's more than one row per unit passed from the previous level. E.g., if you go straight from run to dataset level, and you're passing 10 runs from 2 subjects, then at the dataset level you have 10 rows, but …
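A minimal pandas sketch of the broadcasting scenario described above (illustration only, not pybids internals; column names are hypothetical):

```python
import pandas as pd

# 10 run-level rows passed straight up to the dataset level (5 runs x 2 subjects).
rows = pd.DataFrame({
    "subject": ["01"] * 5 + ["02"] * 5,
    "run": list(range(1, 6)) * 2,
    "estimate": range(10),
})

# Subject-level metadata, defined once per subject.
subject_meta = pd.DataFrame({"subject": ["01", "02"], "age": [25, 31]})

# Broadcasting: each subject's metadata must be repeated over all of that
# subject's rows, so the dataset-level frame still has 10 rows.
dataset_rows = rows.merge(subject_meta, on="subject", how="left")
print(dataset_rows)
```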
@effigies tests are breaking because of #682; let's ignore that here and I'll deal with it separately (I think the issue is that the JSON changes @oesteban introduced to cause #683 and trigger the new test are breaking other things). If you want to review this, the key changes are in this section. My earlier assessment was wrong; the issue here isn't broadcasting-related, it's that bad things happen when metadata is missing for some rows but not others (e.g., the rows propagated to the …)
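A small illustration (plain pandas, hypothetical column names) of the partially missing metadata problem described here: naive handling either drops rows silently or leaks missing values into the design matrix.

```python
import pandas as pd

rows = pd.DataFrame({
    "subject": ["01", "01", "02", "02"],
    "run": [1, 2, 1, 2],
    # Metadata present for subject 01's rows only.
    "acquisition": ["fast", "fast", None, None],
})

# Dropping incomplete rows silently loses subject 02 entirely...
print(rows.dropna(subset=["acquisition"]))

# ...while dummy-coding the partially missing column produces all-zero
# indicator rows for subject 02, which is just as surprising downstream.
print(pd.get_dummies(rows, columns=["acquisition"]))
```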
Seems to be rooted in the new …. I wonder if it would suffice to first merge #683 and use the …
@effigies we should probably merge this before mucking around further with the design matrix construction stuff... the test failures look like a CI glitch, and the tests are otherwise passing.
Very minor test that verifies you can retrieve a model specification object for a BIDS-StatsModels analysis at levels past run (in this case, dataset). (I think black went a little crazy on the formatting, which was unintended, but the only substantive change is the addition of `test_get_dataset_level_model_spec`.)
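For reference, a hypothetical sketch of what such a test might look like; the accessor names (`Analysis`, `setup`, `get_model_spec`) and fixtures are assumptions patterned on the test name, not a verbatim copy of the test added in this PR.

```python
from bids import BIDSLayout
from bids.analysis import Analysis  # assumed import path


def test_get_dataset_level_model_spec(bids_dir, model_json):
    # bids_dir / model_json are assumed fixtures: a BIDS dataset and a
    # BIDS-StatsModels JSON with run- and dataset-level steps.
    layout = BIDSLayout(bids_dir)
    analysis = Analysis(layout, model_json)
    analysis.setup()

    # The behavior under test: a model spec can be retrieved at a level
    # past run (here, dataset), not just at the run level.
    dataset_step = analysis["dataset"]
    spec = dataset_step.get_model_spec()  # assumed accessor name
    assert spec is not None
```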