
(Nonlinear) Transformation specification #30

Closed
bogovicj opened this issue Feb 23, 2021 · 7 comments

Comments

@bogovicj
Contributor

Start of discussion. @tischi @LeeKamentsky

In my view, there are so many ways to store / transform images that trying to cover all cases is premature right now. E.g., are we storing BSpline coefficients? Are we storing ThinPlateSpline knots and weights? Something else?

The one exception is that displacement fields are a pretty standard representation across tools. And my hope is that storing a displacement field will be "easy" once other aspects of the standard are worked out.

Specifically:

  1. Where are grid pixels stored in physical space?
  2. Which axis indexes the "vector" dimension?
  3. Maybe more broadly, "how do I interpret this data?"

There's something we started working on here, but it's not especially general (and not meant to be).
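To make those questions concrete, here is a minimal sketch in plain Python; all conventions and values are invented for illustration. The same raw array lands on different physical points depending on the assumed grid offset and spacing (question 1), and reading components depends on which axis is the "vector" axis (question 2).

```python
# Hypothetical illustration: the same raw array is ambiguous without
# metadata fixing (1) the grid-to-physical mapping and (2) the vector axis.

# A 2x2 grid of 2D displacement vectors, stored "vector axis last":
# field[j][i] == (dy, dx) at grid index (j, i).
field = [
    [(0.0, 0.0), (0.0, 1.0)],
    [(1.0, 0.0), (1.0, 1.0)],
]

def grid_to_physical(index, offset, spacing):
    """Map a grid index to a physical coordinate (question 1)."""
    return tuple(o + s * i for o, s, i in zip(offset, spacing, index))

# Pixel-corner convention: grid index (1, 1) sits at physical (2.0, 2.0)...
corner = grid_to_physical((1, 1), offset=(0.0, 0.0), spacing=(2.0, 2.0))
# ...pixel-center convention shifts every sample by half a grid cell.
center = grid_to_physical((1, 1), offset=(1.0, 1.0), spacing=(2.0, 2.0))

print(corner, center)  # (2.0, 2.0) (3.0, 3.0)
```

Without metadata pinning down the offset, spacing, and vector-axis layout, a reader of the array cannot tell which interpretation was intended.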

@LeeKamentsky

I'll write up my use case here as a start. We have large brains (human / other primate) which we section into 2 mm slabs. These are scanned using light sheet microscopy in a series of overlapping stacks. The tissue may be cleared of stain, restained, and reimaged; in this case the tissue is slightly deformed. The stacks are stitched together using a rigid transform, and subsequent rounds of staining are warped to the first using a cubic bspline grid.

The tissue slabs are then aligned to each other, and as part of this process a warping is done to flatten the two slabs of tissue and to join the top half of one to the bottom half of the other. After all slabs have been co-aligned, the whole volume is warped to a standard atlas.

Acquisition can take months and the intermediate volumes are extremely large and need to be viewed on their own, so ideally you'd store the data once in its stack form and then have four sets of transforms (1 rigid, 3 bspline) per stack: one that would do the stack stitching, one that would combine stitching and within-slab alignment, one that would combine stitching plus within- and between-slab alignment, and a final one that would combine all of these plus a transform to the standard atlas.

IMHO and in my experience, thin plate splines are not worth the computational cost - we will often use one as a jumping-off point but then create a bspline grid from it for the actual computation. I'd argue that a displacement field with interpolation is effectively a bspline grid of degree 2 and could be subsumed as a parameter to a more general algorithm (although writing the more general code is more difficult). I would be willing to leave cubic bsplines for later if the initial spec made it clear how they would be added.

Just to get the discussion going, I'd propose a "transform" group at the same level as the "labels" group. The group would contain Zarr arrays for the individual transforms, each with arbitrary but informative names (perhaps fixed names for the downsampling transforms). The .zattrs JSON at the same level as the array would have keys (to start with) of

  • algorithm - the algorithm (e.g. "affine" / "bspline") which would give the interpretation of the array data
  • space - the name of the space (the transform would project from that space into that of the data set). A typical space name might be the name of the atlas being used.
  • dataset - the name of the dataset being transformed (typically 0, 1... N for the downsampled datasets)
  • invert - true or false - if the transform projects from the dataset into the space instead of vice versa
  • metadata - similar to the current proposal in the "multiscales" JSON - a dictionary specifying the exact details of the algorithm to use. Perhaps an "axes" key has as its value an array that gives the axis names, with "vector" or "index" giving the layout of the grid array.

For a downsampling transform, the zarray would hold only a single value (the downsampling factor); for an affine, a matrix.
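A hypothetical .zattrs following this proposal might look like the sketch below. The key names are the ones proposed above; every concrete value (space name, axis labels, dataset id) is invented purely for illustration.

```python
import json

# Hypothetical .zattrs for one transform array, using the keys proposed
# above; all concrete values here are made up for illustration.
zattrs = {
    "algorithm": "bspline",   # interpretation of the array data
    "space": "some-atlas",    # named space the transform projects from
    "dataset": "0",           # dataset (resolution level) being transformed
    "invert": False,          # True if it projects dataset -> space instead
    "metadata": {
        "axes": ["z", "y", "x", "vector"],  # layout of the grid array
    },
}

print(json.dumps(zattrs, indent=2))
```

The point of the sketch is only that each transform array carries enough sidecar metadata to interpret its raw values, not to fix any particular vocabulary.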

@bogovicj
Contributor Author

@LeeKamentsky

Thanks for explaining! May I ask, what algorithm specifically are you using to generate the bspline warp field? I've been using elastix for that a lot recently; I bring that up to discuss this point in particular:

I'd argue that a displacement field with interpolation is effectively a bspline grid of degree 2 and could be subsumed as a parameter to a more general algorithm.

I agree with this in the sense that what you and I and everyone wants in the end is values (vectors) on some grid that we interpolate (however you want) to get a displacement vector for all points in some space.

However, one can't do this straightforwardly with elastix, for the following reason. Its output is a file containing the bspline coefficients on some grid. The displacement vector for a point exactly on the grid is not given by the vector of bspline coefficients at that grid point. So, in this case, it's not that one interpolates vectors however one wants; one must use a specific algorithm to recover those values.

If you're advocating for storing displacement vectors to be interpolated somehow, then I'm all for it. I'm just slightly more hesitant in this second case.
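The coefficients-vs-values distinction can be made concrete with a 1D uniform cubic B-spline: its value exactly at a knot is the weighted blend (c[i-1] + 4·c[i] + c[i+1]) / 6 of neighboring coefficients, not c[i] itself. A minimal sketch (coefficient values invented):

```python
# Sketch: with cubic bspline coefficients (as in an elastix output file),
# the value at a grid point is NOT the stored coefficient at that point;
# with a displacement field, it is. The uniform cubic B-spline basis
# contributes weights (1, 4, 1)/6 at a knot.

def cubic_bspline_value_at_knot(coeffs, i):
    """Value of a 1D uniform cubic B-spline exactly at interior knot i."""
    return (coeffs[i - 1] + 4 * coeffs[i] + coeffs[i + 1]) / 6

coeffs = [0.0, 6.0, 0.0]   # hypothetical coefficients on a 1D grid
value = cubic_bspline_value_at_knot(coeffs, 1)
print(value)   # 4.0 -- not the stored coefficient 6.0
```

So a reader of the array must know whether the stored numbers are samples to interpolate freely or coefficients tied to a specific reconstruction algorithm.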

@LeeKamentsky

LeeKamentsky commented Feb 23, 2021

Oh heck, I just read through the code and we're using scipy.interpolate.RegularGridInterpolator, so that is pretty much equivalent to what you are doing. In a previous life, I used scipy.interpolate.RectBivariateSpline which is only 2D but does cubic splines by default - I conflated my earlier work with what we're doing now. So I guess I am switching my position to put cubic splines on a back burner.

For Elastix, you're right about the algorithm and the grid would have a different meaning for the appropriate algorithm, even if it had the same geometry (I think).
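Since the comment above names scipy.interpolate.RegularGridInterpolator, here is a small sketch of using it on a displacement field (grid values and shapes invented for illustration). At grid points it returns the stored vector directly, which is exactly the property the elastix coefficient grid lacks.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical 3x3 grid of 2D displacement vectors, vector axis last.
ys = xs = np.array([0.0, 1.0, 2.0])
disp = np.zeros((3, 3, 2))
disp[1, 1] = [0.5, -0.5]   # displace only the center grid point

interp = RegularGridInterpolator((ys, xs), disp)  # linear by default

# Exactly on a grid point, the stored vector (0.5, -0.5) comes back as-is...
on_grid = interp([[1.0, 1.0]])
# ...and between grid points, neighboring vectors are blended linearly.
between = interp([[1.0, 1.5]])
print(on_grid, between)
```

Halfway between the center and its zero neighbor, the linear blend gives (0.25, -0.25), i.e. displacement values are interpolated "however one wants" with no extra reconstruction step.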

@tischi

tischi commented Feb 24, 2021

Our (cc @K-Meech) use cases currently are to use BigWarp or elastix to produce the thin-plate or bspline transforms, and then we would like to use ImgLib2 to transform the raw voxel data on the fly. So we would need the transformations stored inside the ome.zarr such that ImgLib2 can consume the information.

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/next-call-on-next-gen-bioimaging-data-tools-feb-23/48386/9

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/next-call-on-next-gen-bioimaging-data-tools-early-september-2021/55333/14

@constantinpape
Contributor

Closing in favor of #94
