Handling null findings in coordinate-based meta-analyses #294
Comments
I support the first option. I suspect _validate_input would need to be modified (perhaps by adding an extra parameter). I worry about the second option, since people may choose a large radius/fwhm/mask (as you noted) and those values may have some influence on the final statistic. I don't think I have enough context for option 3, but if a statistic image could theoretically be all zero, I think it's fair for there to be a representation of "no peaks".
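As a sketch of how that first option might look, here is a hypothetical validator with an added allow_empty parameter. The function name, signature, and data layout are all assumptions for illustration, not NiMARE's actual _validate_input:

```python
import numpy as np


def validate_coordinates(coords_per_study, allow_empty=False):
    """Check per-study coordinate arrays; optionally permit empty ones.

    coords_per_study: dict mapping study ID -> (n, 3) array-like of peaks.
    Returns the IDs of studies with no coordinates (hypothetical behavior).
    """
    empty_ids = []
    for study_id, coords in coords_per_study.items():
        arr = np.asarray(coords, dtype=float)
        if arr.size == 0:
            if not allow_empty:
                raise ValueError(f"Study {study_id!r} has no coordinates.")
            # Record the null-finding study instead of rejecting it.
            empty_ids.append(study_id)
        elif arr.ndim != 2 or arr.shape[1] != 3:
            raise ValueError(f"Study {study_id!r}: expected an (n, 3) array.")
    return empty_ids
```

Returning the empty IDs (rather than silently accepting them) would let callers decide whether to warn, drop, or keep those studies.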
Regarding (1): long term, I think the cleanest way to handle this internally would be to refactor the code so that we're passing around a list of ….

WRT (2), what's the point of failure for CBMA methods? At least for MKDA, it doesn't seem like there should be a problem handling empty maps: computing the observed statistic (just the sum/mean of studies at each voxel) isn't a problem, and nulls also aren't a problem for computing p-values via permutation. For the histogram-based analytical approximation, I also don't think it's an issue, as an empty study just contributes a single bin (for value = 0) with P = 1. It seems plausible that adding empty maps has such a negligible effect that they may as well be ignored (which would be the best-case scenario), but I don't see why that would require qualitatively different handling. And it's not obvious to me why ALE should have any more trouble. But I'm probably missing something.
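The histogram point can be sanity-checked with a toy sketch (made-up probabilities, not NiMARE code): an empty study's voxel-value histogram is a single bin at zero with P = 1, and convolving that delta into the analytic null distribution leaves it unchanged.

```python
import numpy as np

# Per-voxel value histogram for a study with findings: P(0) = 0.98, P(1) = 0.02.
study_hist = np.array([0.98, 0.02])
# An empty study can only contribute 0: a single bin at value 0 with P = 1.
empty_hist = np.array([1.0])

# Analytic null of the summed statistic = convolution of per-study histograms.
two_studies = np.convolve(study_hist, study_hist)
with_empty = np.convolve(two_studies, empty_hist)

# Convolving with the delta at zero is an identity: the null is unchanged.
assert np.allclose(with_empty, two_studies)
```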
I agree that refactoring how data are stored in the ….
I think that the issue is that, as you said, the effect is so negligible (or perhaps entirely nonexistent) that including null findings just doesn't impact the results. If you have a meta-analysis of 100 studies, with 80 of them being null findings, and the result is the same as if you had just analyzed the 20 significant studies, then that might be a major source of confusion for users. I can probably determine whether that's the case at some point soon, though.
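A toy check of that scenario, under the simplifying assumption that the observed statistic is a per-voxel sum of binary modeled-activation maps (made-up data, not NiMARE code):

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 5000

# Binary modeled-activation maps: 20 studies with peaks, 80 null-finding studies.
active = rng.random((20, n_voxels)) < 0.02
nulls = np.zeros((80, n_voxels), dtype=bool)

# Summed statistic image, with and without the null-finding studies.
stat_20 = active.sum(axis=0)
stat_100 = np.vstack([active, nulls]).sum(axis=0)

# The all-zero maps contribute nothing: the statistic images are identical.
assert np.array_equal(stat_20, stat_100)
```

If the statistic were instead a per-study mean, the 80 all-zero maps would scale every voxel down by a constant factor, which changes the statistic's magnitude but not the voxel-wise ranking.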
That's fine, I don't see that as a problem—we can just document the behavior. E.g., we could issue a gentle warning any time empty coordinate sets are detected, without doing anything differently. Something like: "X studies with empty coordinate lists were detected. No remedial steps need to be taken unless you think the absence of coordinates is an error. Note, however, that this coordinate-based meta-analysis method is (by design) unaffected by the addition of empty maps, and the inclusion of these studies will not change the results. (We nevertheless recommend keeping such studies in the dataset for informational purposes.)"
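That warning could be a simple helper along these lines (a sketch only; the function name and data layout are hypothetical):

```python
import warnings


def warn_on_empty_studies(coords_per_study):
    """Emit an informational warning for studies with no coordinates.

    coords_per_study: dict mapping study ID -> list of peak coordinates.
    Returns the IDs of the empty studies so callers can inspect them.
    """
    empty = [sid for sid, coords in coords_per_study.items() if len(coords) == 0]
    if empty:
        warnings.warn(
            f"{len(empty)} studies with empty coordinate lists were detected: "
            f"{empty}. No remedial steps need to be taken unless the absence "
            "of coordinates is an error. This coordinate-based meta-analysis "
            "method is (by design) unaffected by empty maps, so these studies "
            "will not change the results; we nevertheless recommend keeping "
            "them in the dataset for informational purposes.",
            UserWarning,
        )
    return empty
```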
I just came across a repo that implements something like what we're talking about here, but in R: https://github.com/NeuroStat/GenerateNull. Just wanted to post it here for posterity in case we ever circle back to this idea.
I've been talking with @jessicabartley about how to handle experiments with null findings in coordinate-based meta-analyses.
There are two concerns. In particular, experiments with IDs in Dataset.ids but without corresponding representation in Dataset.coordinates are currently assumed to have had null findings. Given that we plan to support mixed image- and coordinate-based datasets as NeuroStore is built, this seems like a bad assumption to make.
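The assumption being questioned here amounts to a set difference (study IDs and the plain-Python data layout below are made up for illustration; the attribute names mirror the Dataset.ids / Dataset.coordinates mentioned above):

```python
# All study identifiers in the dataset, and the subset with >= 1 reported peak.
ids = ["study1", "study2", "study3", "study4"]
coordinate_ids = {"study1", "study3"}

# Under the current assumption, every study missing from the coordinates table
# is treated as a null finding - even though, in a mixed dataset, it could
# instead be an image-only study with no coordinate representation at all.
assumed_null = [sid for sid in ids if sid not in coordinate_ids]
# assumed_null == ["study2", "study4"]
```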