
NaN's produced. #8

Open
maitra opened this issue Apr 28, 2020 · 11 comments
Labels
bug Something isn't working

Comments


maitra commented Apr 28, 2020

Bstand2-int-death-negative.nii.gz
mask.nii.gz

The files above, which are Z-maps, produce NaNs for 307 voxels. Here is what I do:


Z <- readNIfTI(fname = "Bstand2-int-death-negative")
MASK <- readNIfTI(fname = "mask.nii.gz")
pTFCE <- ptfce(Z, MASK)

I get:

* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
There were 50 or more warnings (use warnings() to see the first 50)

And:

warnings()
Warning messages:
1: In sqrt(d * (8 * s + d)) : NaNs produced
2: In sqrt(d * (8 * s + d)) : NaNs produced
3: In sqrt(d * (8 * s + d)) : NaNs produced
4: In sqrt(d * (8 * s + d)) : NaNs produced
5: In sqrt(d * (8 * s + d)) : NaNs produced
6: In sqrt(d * (8 * s + d)) : NaNs produced
7: In sqrt(d * (8 * s + d)) : NaNs produced
....


What is wrong? How do I get out of these NaNs?
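For readers hitting the same warning: the NaNs come from taking the square root of a negative number. A minimal sketch of how a tiny underflow-driven negative value triggers it (the values of `d` and `s` here are made up for illustration; in pTFCE they are internal terms):

```r
# Illustrative values only: if an upstream probability underflows to
# zero, an intermediate term like d can end up slightly negative, and
# the argument of sqrt() drops below zero.
d <- -1e-12   # tiny negative value caused by underflow (hypothetical)
s <- 2.5      # smoothness-related term (hypothetical)
sqrt(d * (8 * s + d))   # NaN, with a "NaNs produced" warning
```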

@spisakt spisakt added the bug Something isn't working label Apr 29, 2020
spisakt added a commit that referenced this issue Apr 29, 2020

spisakt commented Apr 29, 2020

Hi,
Thanks a lot for the bug report!
There was an underflow for your volume (related to the recent performance patch), which has a somewhat low smoothness.

I have fixed it, please download version 0.2.2 and see if it solves the issue.

Cheers,
Tamas


maitra commented Apr 29, 2020

Thank you. For this example, it of course solves the issue. We will try with other datasets also and let you know if we come across issues. Thank you again for fixing this so soon.


maitra commented Apr 30, 2020

Sorry, I spoke too soon. The problem still exists. Please try:

pTFCE <- ptfce(-Z, MASK)

The underflow has not gone away. How do I get out of these NaNs?


spisakt commented Apr 30, 2020

Ohh, I see. I increased the new scaling factor.
Does it work for you with v0.2.2.1?

Please let me know if you have further issues. I might have to search for a more elaborated fix...
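A note on the underflow itself: one robust alternative to scaling factors is to keep extreme tail probabilities on the log scale, which R's distribution functions support directly. A small sketch of the general technique (not the actual pTFCE implementation):

```r
# For extreme Z-scores, the upper-tail probability underflows to 0 in
# double precision, but its logarithm remains finite and usable.
z <- 40
p_direct <- pnorm(z, lower.tail = FALSE)                # underflows to 0
p_log    <- pnorm(z, lower.tail = FALSE, log.p = TRUE)  # finite (about -805)
c(p_direct, p_log)
```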


maitra commented Apr 30, 2020

> Ohh, I see. I increased the new scaling factor.
> Does it work for you with v0.2.2.1?
>
> Please let me know if you have further issues. I might have to search for a more elaborated fix...

So, of course, it works for this particular dataset, but I get around 18.5% of voxels activated inside the mask when I use two-sided thresholding, which is hard to believe. Can you please see whether there is a more correct fix? Btw, here is the wrapper function I used.

ptfce.image <- function(infile, maskfile, one.sided = FALSE, alpha = 0.05) {
    if (require(oro.nifti)) {
        Z <- readNIfTI(fname = infile)
        MASK <- readNIfTI(fname = maskfile)
        if (require(pTFCE)) {
            if (one.sided) {
                pTFCE <- ptfce(img = Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE$number_of_resels, alpha)
                pTFCE$Z[pTFCE$Z < Zcut] <- 0
            } else {
                pTFCE <- ptfce(img = Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE$number_of_resels, alpha / 2)
                pTFCE$Z[pTFCE$Z < Zcut] <- 0
                pTFCE1 <- ptfce(img = -Z, mask = MASK)
                Zcut <- fwe.p2z(pTFCE1$number_of_resels, alpha / 2)
                pTFCE1$Z[pTFCE1$Z < Zcut] <- 0
                pTFCE$Z[pTFCE1$Z != 0] <- -pTFCE1$Z[pTFCE1$Z != 0]
            }
            cat("Proportion of activated voxels:", mean(pTFCE$Z[MASK == 1] != 0), "\n")
            pTFCE$Z
        } else {
            cat("Could not load package pTFCE: is it correctly installed?\n")
        }
    } else {
        cat("Could not load package oro.nifti: is it correctly installed?\n")
    }
}

For the file as above:

Z <- ptfce.image(infile = "Bstand2-int-death-negative", mask = "mask")
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
Proportion of activated voxels: 0.1813664 

For a different file (attached), I get 24.6%, which is simply not possible.


Z <- ptfce.image(infile = "Bstand2-int-death-positive", mask = "mask")
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
Proportion of activated voxels: 0.245529 

I guess that I will wait for your elaborate fix.

Bstand2-pTFCE-int-death-positive.nii.gz


spisakt commented Apr 30, 2020

Are you sure these are Z-score images?
If yes, then the high number of detected voxels does not look so unreasonable, given the distribution of your unenhanced Z-scores:

[Figure: histogram of the unenhanced Z-scores (blue) overlaid on the standard normal density (red)]

Red is the standard normal, blue is your data. Your distribution is clearly very different from the null, so you can expect a lot of "unlikely" clusters of activation to emerge.


maitra commented Apr 30, 2020

The images are of voxel-wise standardized values under the null. The normal density in your figure is likely based on the assumption of independence across voxels (I am guessing). I am not completely sure that these are comparable. But it is also possible that these have heavier tails under the null and are better modeled by a t-distribution with a small number of degrees of freedom. I suspect that your assumptions of normality under the null are very rigid.


spisakt commented Apr 30, 2020

Yep, the plot assumes independence. But note that positive dependence (which we expect) will make the tails lighter (that is what we take advantage of when doing Gaussian Random Field (GRF) theory-based FWER correction, BTW).
So the figure nicely illustrates that either (i) GRF is not a good null model for your data, or (ii) we can reject the null in a lot of voxels in your data, as pTFCE does.

It's not clear how you standardised the values, but if they are t-values, definitely "Gaussianize" them with the proper degrees of freedom in order to be valid for pTFCE or any other GRF-based method.

Yes, the assumptions are kind of rigid for all GRF-based methods (i.e. for the majority of parametric thresholding methods in neuroimaging).
This can lead to problems and we should be very careful; see e.g. https://www.pnas.org/content/113/28/7900 and the commentaries.
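For the "Gaussianization" mentioned above, a common recipe is to map the t-statistic's tail probability through the normal quantile function. A sketch (the function name `t_to_z` is my own; computing the tail on the log scale keeps extreme |t| values from rounding to ±Inf):

```r
# Convert t-statistics to Z-scores with matching tail probabilities.
# log.p = TRUE avoids underflow of the tail probability for large |t|.
t_to_z <- function(t, df) {
  sign(t) * qnorm(pt(abs(t), df = df, lower.tail = FALSE, log.p = TRUE),
                  lower.tail = FALSE, log.p = TRUE)
}
t_to_z(3, df = 20)   # roughly 2.7
```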


spisakt commented Apr 30, 2020

The strange distribution might also explain why the underflow (and thus, the NaN-issue) did not happen before.


maitra commented Jun 18, 2022

This bug is not quite resolved. Here is an example.
example.rda.gz
Please unzip the file before using.


load('example.rda')
library(pTFCE)
library(oro.nifti)
mask.nifti  <- as.nifti(mask) # since pTFCE only takes nifti
tstats.nifti  <- as.nifti(tv) # since pTFCE only takes nifti
xx <- ptfce(img = tstats.nifti, mask = mask.nifti)

oro.nifti 0.11.2
* Estimating smoothness based on the data...
  |======================================================================| 100%
* Performing pTFCE...
  |======================================================================| 100%
There were 50 or more warnings (use warnings() to see the first 50)

warnings()
Warning messages:
1: In sqrt(d * (8 * s + d)) : NaNs produced
2: In sqrt(d * (8 * s + d)) : NaNs produced

We get 5000+ NaNs. Thanks!


spisakt commented Jul 1, 2022

Thanks, I'll have a look as soon as possible.
