
abide, manual rating ... what is the ground truth? #12

Open

romainVala opened this issue Sep 25, 2019 · 7 comments

@romainVala

Hello,

Thank you for providing this nice tool, and sorry if this is not the right place to ask.

I am trying to replicate the training on the ABIDE dataset, and I wonder how to use the manual ratings.

First, I do not know which file to choose:
y_abide.csv or labels_abide_allraters.csv (in the archive subdir).

I tried with the first one, and I found that the raters disagree on half of the lines... (it is quite a lot!)
With the second one I get 764 consistent ratings out of 1100.
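
For reference, here is roughly how I count the consistent ratings; the rater column names are a guess, since I have not confirmed the actual header of the file:

```python
import pandas as pd

df = pd.read_csv("labels_abide_allraters.csv")
rater_cols = ["rater_1", "rater_2", "rater_3"]  # hypothetical names; check the real header

# A scan is "consistent" when every rater gave it the same rating.
consistent = df[rater_cols].nunique(axis=1).eq(1)
print(f"{int(consistent.sum())} consistent ratings out of {len(df)}")
```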

So which one should I use, and what should I do in case of disagreement? Which label should I set?

Since mriqc performs a binary classification, what should be done with the "doubtful" label? Is it treated as noise?

So I do not see how to deduce a ground-truth label (0/1) for all ABIDE T1w volumes.

Many thanks for your help,
and sorry if I missed the explanation in one of the articles.

Romain

PS: what about abide_MS.csv and abide_DB.csv? They seem to contain ratings from a single rater, for only a subset of the data.

@cmpretzsch

Hi all,

I would also very much like to know this :-) Also, can someone tell me what -1, 0, and 1 mean? I am assuming that 0 means 'maybe', but what about -1 (is it reject or keep?) and 1 (again, is it reject or keep?).

Thank you very much for any help with this,
Kind wishes,

Charlotte

@romainVala
Author

Hello,
From what I understand from reading the code, they validate the classification as follows: -1 = artifacted, 0 or 1 = good (from the rater's point of view, 0 is doubtful and 1 is good).

Concerning the multiple raters, I understood that they randomly choose one of the 3 raters... it is not easy to deal with a variable ground truth...
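
A minimal sketch of that scheme, assuming hypothetical rater column names and the -1/0/1 convention described above:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

df = pd.read_csv("labels_abide_allraters.csv")
rater_cols = ["rater_1", "rater_2", "rater_3"]  # hypothetical names; check the real header

# Randomly pick one rater's rating for each scan.
choice = rng.integers(0, len(rater_cols), size=len(df))
picked = df[rater_cols].to_numpy()[np.arange(len(df)), choice]

# Binarize per the convention above: -1 (artifacted) -> 0; 0 (doubtful) or 1 (good) -> 1.
y = (picked >= 0).astype(int)
```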

@cmpretzsch

Thank you Romain, very helpful! Now we just need to work out which is the correct csv: y_abide.csv or labels_abide_allraters.csv. I'm assuming it is y_abide.csv, given that the other one is in the archived folder?!

@romainVala
Author

Yes, I made the same assumption, but I am not sure at all...
The strange thing is that in the archived one, there is no empty value (so all the raters rated all the volumes).
I did not check the exact difference between the two files. It would be nice if @effigies or @oesteban could confirm.

Many thanks
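
If it helps, a quick way to check for missing ratings in both files (assuming they load with pandas; the archive path is a guess):

```python
import pandas as pd

y = pd.read_csv("y_abide.csv")
allraters = pd.read_csv("archive/labels_abide_allraters.csv")  # path is a guess

# Count missing ratings per column in each file.
print(y.isna().sum())
print(allraters.isna().sum())
```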

@cmpretzsch
Copy link

My thoughts exactly! I have also emailed Dr Esteban directly - will let you know when he gets back to me. Best wishes, C

@stale

stale bot commented Jan 24, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot closed this as completed Jan 31, 2020
@oesteban oesteban reopened this Mar 24, 2020
@oesteban oesteban transferred this issue from nipreps/mriqc Mar 14, 2022
@cmpretzsch

cmpretzsch commented Mar 14, 2022 via email
