
cannot run stats on a GLM that includes 2 tasks #1303

Closed
Remi-Gau opened this issue Aug 6, 2024 · 5 comments
Labels
bug 🐛 Something isn't working
Comments

@Remi-Gau
Contributor

Remi-Gau commented Aug 6, 2024

At some point the pipeline will iterate over tasks and try to run a separate GLM for each task.

@monique2208

I was just looking for exactly this possibility. It would be a really great feature, as people occasionally spread conditions across different runs and encode this in the task label. I was wondering whether it would be possible to specify a list of tasks as well. I am just not sure what that would mean for the input columns: would the event files of the different tasks need to share column names and values, or would you add a task prefix?

@Remi-Gau
Contributor Author

Remi-Gau commented Aug 16, 2024

Quick reply: this is actually a bug I introduced recently.

bidspm used to be able to combine several tasks into a single GLM

Here is an example of a model I had used: https://github.com/Remi-Gau/olfactory_fmri_blind/blob/master/src/analysis_func/models/model-default_smdl.json

You can see that several tasks are specified in the model:

  "Input": {
    "task": [
      "olfid",
      "olfloc"
    ],
    "space": [
      "MNI152NLin2009cAsym"
    ]
  },

You can then specify the events to include in the design matrix section as usual. But if you want to contrast things across tasks, at the moment the only way to do so is to add a prefix to each event name that specifies the task it came from:

      "Model": {
        "X": [
          "trial_type.olfid_eucalyptus_left",
          "trial_type.olfid_eucalyptus_right",
          "trial_type.olfid_almond_left",
          "trial_type.olfid_almond_right",
          "trial_type.olfloc_eucalyptus_left",
          "trial_type.olfloc_eucalyptus_right",
          "trial_type.olfloc_almond_left",
          "trial_type.olfloc_almond_right",
          "trial_type.resp_03",
          "trial_type.resp_12",
          "trans_?",
          "rot_?",
          1
        ],
        "HRF": {
          "Variables": [
            "trial_type.olfid_eucalyptus_left",
            "trial_type.olfid_eucalyptus_right",
            "trial_type.olfid_almond_left",
            "trial_type.olfid_almond_right",
            "trial_type.olfloc_eucalyptus_left",
            "trial_type.olfloc_eucalyptus_right",
            "trial_type.olfloc_almond_left",
            "trial_type.olfloc_almond_right",
            "trial_type.resp_03",
            "trial_type.resp_12"
          ],
          "Model": "spm"
        },
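To make the workaround concrete, here is a minimal sketch (not bidspm code; the function name, file contents, and column names are illustrative assumptions) of how one could prepend the task label to the `trial_type` column of a BIDS `events.tsv` file, so that events from different tasks stay distinguishable in a combined design matrix:

```python
# Hypothetical helper: prefix each trial_type value with its task label.
# Column names follow the BIDS events.tsv convention; this is NOT bidspm API.
import csv
import io


def prefix_trial_types(tsv_text: str, task: str) -> str:
    """Return events.tsv content with '<task>_' prepended to each trial_type."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    rows = []
    for row in reader:
        # e.g. 'eucalyptus_left' becomes 'olfid_eucalyptus_left'
        row["trial_type"] = f"{task}_{row['trial_type']}"
        rows.append(row)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames, delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()


if __name__ == "__main__":
    events = "onset\tduration\ttrial_type\n1.0\t2.0\teucalyptus_left\n"
    print(prefix_trial_types(events, "olfid"))
```

With the prefixed names in place, the design-matrix entries such as `trial_type.olfid_eucalyptus_left` in the model above can refer unambiguously to the task each event came from.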

I could try to adapt the demo for this multitask dataset: https://openneuro.org/datasets/ds000114/versions/1.0.2

But for many of its tasks the events were just called "Task" (who needs more informative event descriptors?...), so comparing across tasks will be difficult (averaging across tasks should be easy, though).

@monique2208

Great, I had misunderstood, but then this is exactly what I need, thank you! Averaging across tasks is all I need for now.

@Remi-Gau
Contributor Author

OK, I will try to fix this soon and push a new release as well.

@Remi-Gau Remi-Gau added this to the v4.0.0 milestone Aug 16, 2024
@Remi-Gau Remi-Gau added the bug 🐛 Something isn't working label Aug 16, 2024
@Remi-Gau
Contributor Author

closed by #1310
