abide, manual rating ... what is the ground truth? #12
Comments
Hi all, I would also very much like to know this :-) Also, can someone tell me what -1, 0, and 1 mean? I am assuming that 0 means 'maybe', but what about -1 (is it reject or keep?) and 1 (again, is it reject or keep?). Thank you very much for any help with this, Charlotte
Hello. Concerning the multiple raters, I understood that they randomly choose one of the 3 raters ... a variable ground truth like this is not easy to deal with ...
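For illustration, a minimal sketch of that random-rater strategy, assuming labels_abide_allraters.csv has one column per rater (the column names rater_1 ... rater_3 are hypothetical, not confirmed by the repository):

```python
import numpy as np
import pandas as pd

# Hypothetical column names; the actual CSV layout may differ.
rater_cols = ["rater_1", "rater_2", "rater_3"]

df = pd.read_csv("labels_abide_allraters.csv")

rng = np.random.default_rng(seed=0)
# For each scan, pick one of the three raters at random
# and take that rater's label as the training target.
chosen = rng.integers(0, len(rater_cols), size=len(df))
df["label"] = df[rater_cols].to_numpy()[np.arange(len(df)), chosen]
```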
Thank you Romain, very helpful! Now we just need to work out which one is the correct csv, y_abide.csv or labels_abide_allraters.csv. I'm assuming it is y_abide.csv, given that the other one is in the archived folder?!
My thoughts exactly! I have also emailed Dr Esteban directly - will let you know when he gets back to me. Best wishes, C
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hello,
Thank you for providing this nice tool, and sorry if this is not the right place to ask.
I am trying to replicate the learning on the ABIDE dataset, and I wonder how to use the manual ratings.
First, I do not know which file to choose:
y_abide.csv or labels_abide_allraters.csv (in the archive subdir).
I tried the first one, and found that the raters disagree on half of the lines ... (it is quite a lot!)
With the second one I get 764 consistent ratings out of 1100.
So which one should I use, and what should I do in case of disagreement? Which label should I set?
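For reference, here is a sketch of how one might count the scans on which all raters agree, under the same hypothetical per-rater column layout as above (rows where a rater did not rate would need extra care, since NaN values are ignored here):

```python
import pandas as pd

df = pd.read_csv("labels_abide_allraters.csv")
rater_cols = ["rater_1", "rater_2", "rater_3"]  # hypothetical names

# A scan counts as "consistent" when every rater gave the same label.
consistent = df[rater_cols].nunique(axis=1) == 1
print(f"{consistent.sum()} consistent ratings out of {len(df)}")
```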
Since MRIQC is performing a binary classification, what should be done with the "doubtful" label? Is it treated as noise?
So I do not see how to deduce a ground-truth label (0/1) for all ABIDE T1w images.
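To make the question concrete, one possible binarization is sketched below, assuming -1 = exclude, 0 = doubtful, 1 = accept (that mapping is my guess and is not confirmed by the dataset documentation; the column name "label" is also hypothetical):

```python
import pandas as pd

# Guessed mapping: -1 = exclude, 0 = doubtful, 1 = accept.
# Doubtful scans are dropped here; mapping them to the exclude
# class instead would be the other obvious choice.
MAPPING = {1: 1, -1: 0}  # accept -> 1, exclude -> 0

labels = pd.read_csv("y_abide.csv")["label"]  # hypothetical column name
# 0 ("doubtful") is absent from MAPPING, so it maps to NaN and is dropped.
y = labels.map(MAPPING).dropna().astype(int)
```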
Many thanks for your help,
and sorry if I missed the explanation in an article.
Romain
PS: what about abide_MS.csv and abide_DB.csv? They seem to contain ratings by a single rater, for only a subset.