
2d onto 3d with euclidean, similarity transforms #58

Merged
merged 32 commits into jni:main
Feb 15, 2024

Conversation

thanushipeiris
Contributor

@thanushipeiris thanushipeiris commented Mar 5, 2022

This PR addresses #41

How it works

It identifies the layer with the smallest number of dimensions and pads its dimensions to match the other layer. The (padded) points are then provided to the skimage v0.19.2 transforms.

I added support for Image, Labels, Points, Shapes, and Vectors layers, but have not yet written tests for them. It also requires skimage >= 0.19.2 because nD transforms were added in that version.
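The padding step described above might look something like this minimal numpy sketch. This is illustrative only, not affinder's actual code: the helper name `pad_points` is invented, and the choice to prepend zero coordinates (rather than append) is an assumption.

```python
import numpy as np

def pad_points(points, target_ndim):
    """Pad an (N, d) point array with leading zero columns up to target_ndim.

    Illustrative helper (not from affinder): lower-dimensional points get
    zeros prepended so they can be fed to an nD skimage transform estimate.
    """
    points = np.asarray(points, dtype=float)
    extra = target_ndim - points.shape[1]
    if extra <= 0:
        return points
    return np.pad(points, ((0, 0), (extra, 0)))  # zero-pad leading coords

pts_2d = np.array([[1.0, 2.0], [3.0, 4.0]])
pts_3d = pad_points(pts_2d, 3)
# pts_3d == [[0., 1., 2.], [0., 3., 4.]]
```

Once both point sets have matching dimensionality, they can be handed to the skimage >= 0.19.2 nD transform estimation as described.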

The second part of the issue

a 2D shape with measurements that change over time, e.g. the segmentation of a neuron aligned with a calcium imaging time series

can (I believe) be easily addressed by users themselves creating a 3D image that is just a copy of the 2D image on every z slice. It's a rare enough use case that I don't think it needs its own button. I'll write up some documentation for this.
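As a rough sketch of that workaround (purely illustrative, not part of this PR), replicating a 2D image across z slices with numpy:

```python
import numpy as np

# Tile a 2D image onto every slice of a 3D stack. `image_2d` and
# `n_slices` are placeholder names for this sketch.
image_2d = np.arange(12).reshape(3, 4)
n_slices = 5

# broadcast_to creates a read-only view; .copy() makes a real 3D array
stack_3d = np.broadcast_to(image_2d, (n_slices, *image_2d.shape)).copy()
# stack_3d.shape == (5, 3, 4), and every slice equals image_2d
```

The resulting stack can then be registered like any other 3D layer.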

Basic demo

2D Image onto 3D Image with Euclidean

ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-36-13.mp4

2D Image onto 3D Image with Similarity

ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-37-52.mp4

2D Image onto 3D Image with Affine (DOESN'T WORK)

ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-39-03.mp4

Why the affine transform doesn't work in 3D

skimage.transform.AffineTransform returns a NaN result when you try to map a 2D image to a 3D image. I think the reason is that affinder supplies nD+1 matching points to this function, but if the points are not precisely aligned between images, there is no combination of shear/translation/scaling that maps the points onto each other exactly. I tried reducing the number of points supplied and found that for dimensionality 3 and 4, you have to provide no more than 3 (imperfect) pairs of points for the affine to work.

The next thing I'd try would be to limit the number of initial points to 3 so that affine/euclidean/similarity transforms would all work in 3D, but this raises other issues (e.g. users could no longer add more points with affines in 3D). For the moment I'm erring towards just writing documentation warning users away from affine in 3D. I probably also need to look into the maths more, to see whether it's a mathematical limit of affine transforms or I'm missing something in the code.

Things I still need to do

  • Check/write tests for 2D onto 3D for non-images
  • Documentation for 2D copies on 3D then register just one 2D copy [outdated]
  • Maybe get the affine transform working for 2D onto 3D copies [outdated]

@codecov-commenter

codecov-commenter commented Mar 5, 2022

Codecov Report

Attention: 13 lines in your changes are missing coverage. Please review.

Comparison is base (e04ddde) 93.84% compared to head (0d4e310) 91.93%.

Files Patch % Lines
src/affinder/_test_data.py 75.00% 11 Missing ⚠️
src/affinder/affinder.py 92.59% 2 Missing ⚠️


Additional details and impacted files
@@            Coverage Diff             @@
##             main      #58      +/-   ##
==========================================
- Coverage   93.84%   91.93%   -1.91%     
==========================================
  Files           7        8       +1     
  Lines         276      372      +96     
==========================================
+ Hits          259      342      +83     
- Misses         17       30      +13     

☔ View full report in Codecov by Sentry.

@deprecated-napari-hub-preview-bot

Preview page for your plugin is ready here:
https://preview.napari-hub.org/jni/affinder/58

@thanushipeiris thanushipeiris changed the title 2d onto 3d with euclidean, similarity transforms [WIP] 2d onto 3d with euclidean, similarity transforms Mar 5, 2022
@jni
Owner

jni commented Mar 6, 2022

Hey @thanushipeiris! Awesome!

It's a rare enough use case that I don't think it will need its own button.

I am very amused by this statement. 😂 I actually think this is a very common use case, and again, I wrote the issue specifically for that use case. And, in a counterpoint to what you found out with the affine transform, trying to do a 2D alignment with 3D volumes means you'll always get some small amount of transform "leakage" in the 3rd dimension, which you don't want.

(The issue with affine is that it's an underdetermined mathematical problem when the points all lie on a plane, which they must do by definition in this problem. Once we get the first version of this in, we can try to make the "affine" option disappear if aligning a 2D plane to a 3D image.)
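jni's point can be checked numerically: the least-squares system for a full 3D affine loses rank when all the points lie on a plane, so the fit is underdetermined no matter how many points you add. A minimal numpy sketch (not affinder code; the plane z=0 is chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((10, 3))
pts[:, 0] = 0.0  # all points lie on a plane: first coordinate is constant

# Design matrix for one output coordinate of a 3D affine fit: [z, y, x, 1].
# One column is identically zero, so the system can never reach full rank.
A = np.column_stack([pts, np.ones(len(pts))])
rank = np.linalg.matrix_rank(A)
# rank == 3 < 4: the affine parameters are not uniquely determined
```

This is why hiding the "affine" option when registering a 2D plane to a 3D image makes sense: the degeneracy is structural, not a bug in the point selection.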

How about treating the reference layer as the reference dimensionality? So, if I have a 2D shapes layer as reference and a 3D image as moving, I do the alignment with the last two dimensions of the 3D image?

@thanushipeiris
Contributor Author

I am very amused by this statement. 😂 I actually think this is a very common use case, and again, I wrote the issue specifically for that use case.

Fair point boss. I'm not too sure how to approach this in situations where the transformed moving image is at an angle to the reference image (or should I not worry about that possibility?). I have drawn you this diagram (hopefully it makes sense). Should I code something to satisfy option 0, 1, or 2, or none of the above?
[diagram: options 0, 1, and 2 for aligning the moving image]

@jni
Owner

jni commented Mar 13, 2022

@thanushipeiris If I understand your question correctly, I don't think you even need to worry about this at all: napari will handle it for you. See:

https://github.com/napari/napari/blob/main/examples/mixed-dimensions-labels.py

Or in my original blog post, under the "parameter sweep" section.

i.e. don't tile the array; just let napari handle it. Note that napari can't handle the angled cases while slicing (you can search the issues for "non-orthogonal slicing"), but that's fine: let's not worry about the display part, only the parameter estimation part.


@jni
Owner

jni commented Mar 23, 2022

@thanushipeiris if you merge in the main branch, CI will start working again (#60) and napari-hub-bot will shut up 😂 (#59). Let me know if you need help with that.

@thanushipeiris
Contributor Author

@jni The tests are failing on Ubuntu right before they start using tox, with the error "Error: The log was not found. It may have been deleted based on retention settings." Any tips on how to fix it?

@jni
Owner

jni commented Apr 6, 2022

Oh hello @thanushipeiris!

It doesn't look like that's the actual failure, in 3.9 I see, among many others:

  >       np.testing.assert_allclose(actual_affine, expected_affine)
  E       AssertionError: 
  E       Not equal to tolerance rtol=1e-07, atol=0
  E       
  E       Mismatched elements: 2 / 16 (12.5%)
  E       Max absolute difference: 1.42108547e-14
  E       Max relative difference: 1.92064512
  E        x: array([[ 1.000000e+00,  0.000000e+00,  0.000000e+00,  3.000000e+01],
  E              [ 0.000000e+00,  1.000000e+00, -2.660880e-17,  1.421085e-14],
  E              [ 0.000000e+00, -7.902889e-18,  1.000000e+00,  1.421085e-14],
  E              [ 0.000000e+00,  0.000000e+00,  0.000000e+00,  1.000000e+00]])
  E        y: array([[ 1.000000e+00,  0.000000e+00,  0.000000e+00,  3.000000e+01],
  E              [ 0.000000e+00,  1.000000e+00,  2.890235e-17,  0.000000e+00],
  E              [ 0.000000e+00, -7.902889e-18,  1.000000e+00,  1.421085e-14],
  E              [ 0.000000e+00,  0.000000e+00,  0.000000e+00,  1.000000e+00]])
  
  src/affinder/_tests/test_affinder.py:160: AssertionError

So I think you need to adjust the tolerance to account for floating point approximation errors.

We should also remove 3.7 and add 3.10 since napari no longer supports 3.7, but I'll do that in a separate PR.

@jni
Owner

jni commented Apr 6, 2022

Given the differences I see there, I think it makes sense to set a big rtol, maybe 10, and atol to say 1e-10.
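That suggestion would look something like the following sketch, with values abbreviated from the failure above. The point is that `atol` absorbs the ~1e-14 floating-point noise in the near-zero entries, while a generous `rtol` avoids spurious failures where the expected value itself is (close to) zero:

```python
import numpy as np

# Abbreviated from the failing comparison above: tiny numerical noise
# (~1e-14) where the expected entry is exactly zero.
actual = np.array([[1.0, 0.0, 3.0e+01],
                   [0.0, 1.0, 1.421085e-14]])
expected = np.array([[1.0, 0.0, 3.0e+01],
                     [0.0, 1.0, 0.0]])

# With the defaults (rtol=1e-07, atol=0) this comparison fails, because
# any nonzero noise against an expected zero exceeds a purely relative
# tolerance. Adding an absolute tolerance makes it pass:
np.testing.assert_allclose(actual, expected, rtol=10, atol=1e-10)
```

`assert_allclose` accepts entries where `|actual - expected| <= atol + rtol * |expected|`, so the absolute term is what handles expected zeros.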

@thanushipeiris
Contributor Author

I think this can be reviewed now @jni

What it has

  • Similarity and euclidean transform support for multidimensional layers
  • Tests for 2D-2D, 2D-3D, 3D-2D reference-moving layer cases for all layer types except tracks
  • Option in the GUI to retain the original moving layer
  • Error message when users try to use affine transform for layers that aren't 2D.

What it doesn't have

I will need to make some follow up PRs to address

  • 2D "tiling" onto 3D (your island heatmap use case): I can't think of how to add this onto the code I've already written, which basically just pads or trims dimensions of the moving layer when they don't match the reference layer. I think I'll need to add a box where users can specify the slice of the '3D' reference layer that they want to "tile" the moving '2D' layer onto (whatever the dimensions of their layers). Then I'll pass that slice to the rest of the functions, and the moving image should be transformed appropriately. The big issue I see with the current code is that when the dimensions don't match, I force dimensions to be added at the back axis or removed from the front axis. I'll need to think about how to let users specify exactly how they want to pad/trim dimensions, in a dimensionally agnostic way that accommodates whatever dimensionality their data has. Maybe just a textbox where they write in fancy indexing. This use case follows on from that, I think.
  • Documentation: the interface looks really different now and the docs are out of date.

Very questionable things I did in the code

  • Could not manually assign new dimensions to Points or Vectors layers: so I created a clone of the layer with the correct number of dimensions and deleted the old one. This will be bad for users if they were referencing that layer in their code and the attribute they used to identify it wasn't copied over.
  • Potentially not dimensionally agnostic: I might treat a layer in a way that assumes a particular dimensionality (e.g. I just caught this in add_zeros_at_start_of_last_axis(), where I was using 2D indexing like arr[:,1:]). I've checked over this and I think it is okay now, but it's something to keep an eye out for.
  • Some questionable while loops that might impact performance
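On the dimensionality pitfall in the second bullet, a tiny numpy check of the difference between the two indexing styles: `arr[:, 1:]` only drops the first column of a 2D array, while `arr[..., 1:]` works for any number of leading axes.

```python
import numpy as np

a2 = np.zeros((4, 3))
a3 = np.zeros((5, 4, 3))

# 2D-only indexing: slices the second axis, wrong axis once ndim > 2
assert a2[:, 1:].shape == (4, 2)
assert a3[:, 1:].shape == (5, 3, 3)  # sliced the wrong axis!

# Ellipsis indexing: always slices the last axis, any dimensionality
assert a2[..., 1:].shape == (4, 2)
assert a3[..., 1:].shape == (5, 4, 2)
```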

@jni jni mentioned this pull request Jun 8, 2023
@thanushipeiris
Contributor Author

thanushipeiris commented Nov 26, 2023

Hi @jni, I've added the "2D tiling onto 3D" functionality (e.g. island heatmap, calcium channel use cases) now - this is done for 3D moving layers being transformed onto 2D reference layers. In general

  • if the dimensions of the reference layer > dimensions of moving layer: moving layer dimensions are padded
  • if the dimensions of the reference layer < dimensions of the moving layer: the affine mapping calculation is done in the dimensions of the reference layer, but the moving layer's affine matrix is padded with the identity matrix
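A hedged sketch of the second bullet (the helper name `embed_affine` is illustrative; affinder's actual code may differ): a homogeneous affine estimated in the reference's lower dimensionality is embedded in a larger identity matrix, so it acts on the trailing spatial axes of the moving layer and leaves the extra leading axis untouched.

```python
import numpy as np

def embed_affine(small, target_ndim):
    """Embed a (d+1, d+1) homogeneous affine into (D+1, D+1), identity elsewhere.

    Illustrative helper: the linear part and translation go into the
    trailing axes; leading axes are left as the identity.
    """
    d = small.shape[0] - 1
    big = np.eye(target_ndim + 1)
    big[-(d + 1):-1, -(d + 1):-1] = small[:d, :d]  # linear part
    big[-(d + 1):-1, -1] = small[:d, -1]           # translation column
    return big

# A 2D rotation-plus-translation, embedded for use on a 3D moving layer:
affine_2d = np.array([[0.0, -1.0, 5.0],
                      [1.0,  0.0, 2.0],
                      [0.0,  0.0, 1.0]])
affine_3d = embed_affine(affine_2d, 3)
# affine_3d acts as the identity on the leading (z) axis
```

Applying `affine_3d` to homogeneous (z, y, x, 1) coordinates then transforms only the in-plane coordinates.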

For future PRs

  1. Automatically remove option for affine in GUI if the selected layers are not both 2D
  2. Currently, when the reference layer has fewer dimensions than the moving layer, the viewer does not automatically reset to the updated affine after the initial transform points have been added (while you're cycling through adding one point at a time to the reference and moving layers); the user needs to scroll the viewer before it resets properly 🐞
  3. Update documentation
  4. Add 3D onto 3D tests for Euclidean and similarity transforms (already being used in Extend affinder to 3D #72)

@thanushipeiris thanushipeiris changed the title [WIP] 2d onto 3d with euclidean, similarity transforms 2d onto 3d with euclidean, similarity transforms Nov 26, 2023
@jni
Owner

jni commented Nov 27, 2023

Thanks @thanushipeiris! I'll play with this and review shortly! 🙏

@thanushipeiris
Contributor Author

@jni this is ready for review

  • I have made the affine calculation happen in the lowest dimensionality of the reference and moving images, without changing any of the data of the reference or moving layers.
  • Tests now include all affine transforms available.
  • I removed the 2D_3D test because, for some reason, the affine transform was being applied to the 2D reference image instead of the 3D moving image; yet when I actually run a test script, it is the 3D moving image that gets moved and the affine matrix of the 2D reference layer remains unchanged. I'm unsure what is happening there. I think people are more likely to use a 3D reference with a 2D moving layer than the other way around, so this is okay for now.

@jni
Owner

jni commented Feb 15, 2024

Hahahahahaaa it passed! It bloody passed! 😂 (I couldn't run tests locally cos my environment is messed up 😅) I'm merging! Only two years later! 😂 🥳 🐌 🚀

@thanushipeiris have a look at the later commits to see what I would have asked but it felt too nitpicky and slow to back-and-forth. Short version:

  • use matrices rather than the napari object
  • use generated test data rather than adding new zarrs
  • various minor style issues e.g. [f for f in generator] -> generator

Anyway, let me know if you want to catch up about more stuff to do. 😃

@jni jni merged commit 68eb9e9 into jni:main Feb 15, 2024
8 checks passed
jni added a commit that referenced this pull request Feb 23, 2024
In #58, we forgot to also propagate the affine transformation to the moving points layer. This PR fixes that.