
Implement fail-safe publication bias method #730

Open
tsalo opened this issue Jul 22, 2022 · 1 comment
Labels
cbma (Issues/PRs pertaining to coordinate-based meta-analysis), diagnostics (Issues/PRs related to the diagnostics module), enhancement (New feature or request)

Comments


tsalo commented Jul 22, 2022

Summary

The fail-safe method, described in Acar et al. (2018), appears to work by incrementally adding null experiments to an ALE meta-analysis and determining the point at which significant results disappear. The null experiments seem to draw coordinates randomly from across the brain mask (much like the Monte Carlo FWE correction procedure). I haven't read the paper yet, but @ldhill is looking to try it out for a manuscript, and it seems like something we could implement as a Transformer in NiMARE.
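As a rough sketch of that null-study generation (my own guess, not taken from the paper or the R code; `sample_null_foci`, `mask_img`, and `n_foci` are hypothetical names), drawing an experiment's foci uniformly from within a brain mask might look like:

```python
import numpy as np
import nibabel as nib


def sample_null_foci(mask_img, n_foci, seed=None):
    """Sample `n_foci` peak coordinates (in mm) uniformly from a brain mask."""
    rng = np.random.default_rng(seed)
    mask_data = mask_img.get_fdata()
    in_mask = np.argwhere(mask_data > 0)  # ijk indices of all in-mask voxels
    # pick voxels at random (with replacement) to serve as simulated peaks
    chosen = in_mask[rng.integers(0, len(in_mask), size=n_foci)]
    # convert voxel indices to world (mm) coordinates via the image affine
    return nib.affines.apply_affine(mask_img.affine, chosen)


# e.g., 10 random foci from an MNI brain mask:
# foci = sample_null_foci(nib.load("MNI152_T1_2mm_brain_mask.nii.gz"), n_foci=10, seed=0)
```

Repeating this once per simulated null experiment (presumably with per-study focus counts and sample sizes matched to the real studies) would give the coordinates to append to the Dataset.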

Additional details

There appear to be R implementations in NeuroStat/FailSafeN and NeuroStat/GenerateNull (as referenced in #294 (comment)).

This is related to #96.

tsalo added the enhancement, cbma, and diagnostics labels on Jul 22, 2022
tsalo changed the title from "Implement fail-safe" to "Implement fail-safe publication bias method" on Jul 22, 2022

tsalo commented Jul 23, 2022

Here is my attempt at a summary of the method (most of the work was done by @ldhill):

  1. Run the meta-analysis, with multiple comparisons correction.
  2. Identify significant clusters in the meta-analysis.
  3. Select a minimum number of null studies (m) and a maximum number (M).
    • The minimum number is typically 2x the number of studies in the meta-analysis.
    • The maximum number is typically 10x the number of studies in the meta-analysis.
  4. Create a simulated dataset with the minimum number of null studies (m) added and run a meta-analysis on the simulated dataset.
    • If every cluster from the original meta-analysis is no longer significant, stop the procedure and note that "results may not be robust to bias due to missing (null) studies in the meta-analysis."
    • If any cluster from the original meta-analysis is still significant, continue the procedure.
  5. Create a simulated dataset with the maximum number of null studies (M) and run a meta-analysis on the simulated dataset.
    • If every cluster is no longer significant, continue to the next step.
    • If every cluster is still significant, either note that in the manuscript or repeat the procedure with a new m set at the old M and a new M set somewhere higher.
  6. Iteratively select new null-study counts between m and M to home in on the point where significance disappears (a rough sketch of this search is below).
    • Start with (M + m) / 2.
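
A rough sketch of steps 3–6 (just my interpretation; `add_null_studies`, `run_corrected_meta`, and `clusters_survive` are hypothetical helpers that would add simulated null studies to the Dataset, rerun the corrected meta-analysis, and decide whether the original clusters remain significant):

```python
def fail_safe_n(dataset, add_null_studies, run_corrected_meta, clusters_survive):
    """Search for the number of added null studies at which results disappear."""
    n_studies = len(dataset.ids)  # assumes a NiMARE Dataset-like object with `.ids`
    m, M = 2 * n_studies, 10 * n_studies  # typical minimum and maximum

    # Step 4: results vanish even with the minimum number of null studies
    if not clusters_survive(run_corrected_meta(add_null_studies(dataset, m))):
        return m, "results may not be robust to missing (null) studies"

    # Step 5: results survive even the maximum number of null studies
    if clusters_survive(run_corrected_meta(add_null_studies(dataset, M))):
        return M, "results survive up to the maximum number of null studies"

    # Step 6: bisect between m and M, starting at (M + m) / 2
    while M - m > 1:
        mid = (m + M) // 2
        if clusters_survive(run_corrected_meta(add_null_studies(dataset, mid))):
            m = mid  # still significant; the fail-safe number is higher
        else:
            M = mid  # no longer significant; the fail-safe number is lower
    return m, "largest number of added null studies at which results survive"
```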

Things we still need to figure out:

  1. I'm not sure how to perform this with multiple clusters.
  2. How is cluster significance determined?
    • Is there a threshold for the proportion of voxels in the original cluster that need to be significant?
    • Just use the center of mass of the clusters?
    • Dice similarity index of any overlapping clusters? (A minimal Dice computation is sketched below.)
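
If we went the Dice route, the comparison itself is simple; a minimal sketch (my own, with made-up argument names), where `orig_mask` and `new_mask` are boolean voxel arrays for a cluster in the original and simulated meta-analyses:

```python
import numpy as np


def dice(orig_mask, new_mask):
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|)."""
    orig_mask = np.asarray(orig_mask, dtype=bool)
    new_mask = np.asarray(new_mask, dtype=bool)
    denom = orig_mask.sum() + new_mask.sum()
    if denom == 0:
        return 1.0  # both masks empty; treat as identical
    return 2.0 * np.logical_and(orig_mask, new_mask).sum() / denom
```

The open question would then be what Dice threshold counts as "the cluster is still there."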
