2d onto 3d with euclidean, similarity transforms #58
Conversation
Codecov Report

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

@@            Coverage Diff             @@
##             main      #58      +/-   ##
==========================================
- Coverage   93.84%   91.93%   -1.91%
==========================================
  Files           7        8       +1
  Lines         276      372      +96
==========================================
+ Hits          259      342      +83
- Misses         17       30      +13

☔ View full report in Codecov by Sentry.
Hey @thanushipeiris! Awesome!
I am very amused by this statement. 😂 I actually think this is a very common use case, and again, I wrote the issue specifically for that use case. And, as a counterpoint to what you found out with the affine transform, trying to do a 2D alignment with 3D volumes means you'll always get some small amount of transform "leakage" in the 3rd dimension, which you don't want.

(The issue with affine is that it's an underdetermined mathematical problem when the points all lie on a plane, which they must do by definition in this problem. Once we get the first version of this in, we can try to make the "affine" option disappear when aligning a 2D plane to a 3D image.)

How about treating the reference layer as the reference dimensionality? So, if I have a 2D shapes layer as reference and a 3D image as moving, I do the alignment with the last two dimensions of the 3D image?
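A minimal sketch of that idea, assuming (z, y, x) axis ordering and homogeneous coordinates (the function name is illustrative, not part of affinder): estimate the transform in 2D, then pad it into a 3D matrix that leaves the leading axis untouched.

```python
import numpy as np

def embed_2d_affine_in_3d(mat_2d):
    """Pad a (3, 3) homogeneous 2D transform so it acts on the last two
    axes (y, x) of a 3D layer and leaves the leading (z) axis alone."""
    mat_3d = np.eye(4)
    mat_3d[1:3, 1:3] = mat_2d[:2, :2]  # linear block: rotation/scale/shear
    mat_3d[1:3, 3] = mat_2d[:2, 2]     # translation in y and x
    return mat_3d
```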
@thanushipeiris If I understand your question correctly, I don't think you even need to worry about this at all: napari will handle it for you. See https://github.com/napari/napari/blob/main/examples/mixed-dimensions-labels.py, or my original blog post under the "parameter sweep" section. i.e. don't tile the array, just let napari handle it.

Note that napari can't handle the angled cases while slicing (you can search the issues for "non orthogonal slicing"), but that's fine, let's not worry about the display part: only the parameter estimation part.
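The linked example boils down to roughly this sketch (array shapes are invented): a 2D layer added to the same viewer as a 3D layer is broadcast across the leading axis, so there is no need to tile it.

```python
import numpy as np
import napari

# a 3D volume and a 2D overlay: napari broadcasts the 2D layer across the
# leading (z) axis when both are shown in the same viewer, so no tiling needed
volume = np.random.random((10, 256, 256))
overlay = np.zeros((256, 256), dtype=int)
overlay[100:150, 100:150] = 1

viewer = napari.Viewer()
viewer.add_image(volume)
viewer.add_labels(overlay)
napari.run()
```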
@thanushipeiris if you merge in the main branch, CI will start working again (#60) and napari-hub-bot will shut up 😂 (#59). Let me know if you need help with that.
@jni The tests are failing on Ubuntu right before they start using tox with the error
Oh hello @thanushipeiris! It doesn't look like that's the actual failure; in 3.9 I see, among many others:

So I think you need to adjust the tolerance to account for floating point approximation errors. We should also remove 3.7 and add 3.10 since napari no longer supports 3.7, but I'll do that in a separate PR.
Given the differences I see there, I think it makes sense to set a big rtol, maybe 10, and atol to say 1e-10.
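For instance, the comparison could look roughly like this (the matrices below are invented placeholders, not the actual test values); entries whose expected value is exactly zero make a relative tolerance useless on its own, which is what the small atol absorbs:

```python
import numpy as np

# made-up expected/estimated affine matrices, differing only by
# floating-point noise in the entries that should be exactly zero
expected = np.array([[1.0, 0.0, 5.0],
                     [0.0, 1.0, -3.0],
                     [0.0, 0.0, 1.0]])
estimated = expected + 1e-12 * np.array([[1.0, -0.4, 20.0],
                                         [0.3, -1.0, 10.0],
                                         [0.0, 0.0, 0.0]])

# rtol covers the large entries, atol absorbs the noise around zero
np.testing.assert_allclose(estimated, expected, rtol=10, atol=1e-10)
```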
I think this can be reviewed now @jni

What it has

What it doesn't have
I will need to make some follow up PRs to address:

Very questionable things I did in the code
Hi @jni, I've added the "2D tiling onto 3D" functionality (e.g. island heatmap, calcium channel use cases) now. This is done for 3D moving layers being transformed onto 2D reference layers.

In general:

For future PRs:
Thanks @thanushipeiris! I'll play with this and review shortly! 🙏
@jni this is ready for review
Also, clean up some of the code, including unnecessary list comprehensions
Hahahahahaaa it passed! It bloody passed! 😂 (I couldn't run tests locally cos my environment is messed up 😅) I'm merging! Only two years later! 😂 🥳 🐌 🚀 @thanushipeiris have a look at the later commits to see what I would have asked for, but it felt too nitpicky and slow to go back and forth. Short version:
Anyway, let me know if you want to catch up about more stuff to do. 😃
In #58, we forgot to also propagate the affine transformation to the moving points layer. This PR fixes that.
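A minimal sketch of that fix, with placeholder layer names (napari layers expose a settable affine attribute, so the estimated transform just needs to be copied onto the points layer as well as the image layer):

```python
import napari

def propagate_affine(moving_image: napari.layers.Image,
                     moving_points: napari.layers.Points) -> None:
    """Copy the estimated affine from the moving image layer onto the
    moving points layer so both stay registered together."""
    moving_points.affine = moving_image.affine
```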
This PR addresses #41
How it works
It identifies the layer with the smallest number of dimensions and pads its dimensions to match the other layer. The (padded) points are then provided to the skimage v0.19.2 transforms. I added support for Images, Labels, Points, Shapes and Vectors but have not yet written tests for them. It also requires skimage >= 0.19.2 because nD transforms were added in this version.

The second part of the issue can (I believe) be easily addressed by the user themselves creating a 3D image that's just a copy of the 2D image on every z slice. It's a rare enough use case that I don't think it will need its own button. I'll write up some documentation for this.
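A rough sketch of that padding step (the point values and the src/dst direction are invented for illustration; it assumes skimage >= 0.19 and zero-padding of the leading coordinate):

```python
import numpy as np
from skimage import transform

# matching points clicked on a 2D reference layer and a 3D moving layer (z, y, x)
pts_2d = np.array([[10.0, 15.0], [40.0, 80.0], [90.0, 30.0], [60.0, 60.0]])
pts_3d = np.array([[0.0, 11.0, 16.0], [0.0, 41.0, 79.0],
                   [0.0, 89.0, 31.0], [0.0, 59.0, 61.0]])

# pad the lower-dimensional points with leading zero coordinates
n_missing = pts_3d.shape[1] - pts_2d.shape[1]
pts_2d_padded = np.pad(pts_2d, ((0, 0), (n_missing, 0)))

# estimate an nD transform from the padded correspondences
tform = transform.EuclideanTransform(dimensionality=pts_3d.shape[1])
success = tform.estimate(pts_2d_padded, pts_3d)
print(success)        # whether the estimation succeeded
print(tform.params)   # (4, 4) homogeneous transformation matrix
```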
Basic demo
2D Image onto 3D Image with Euclidean
ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-36-13.mp4
2D Image onto 3D Image with Similarity
ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-37-52.mp4
2D Image onto 3D Image with Affine (DOESN'T WORK)
ernie.mcri.edu.au.-.Remote.Desktop.Connection.2022-03-05.18-39-03.mp4
Why doesn't the affine transform work in 3D?
skimage.transform.AffineTransform provides a NaN transform result when you try to map a 2D image onto a 3D image. I think the reason is that affinder supplies nD+1 matching points to this function, but if the points are not precisely lined up between images, there won't be a combination of shear/translation/zoom that maps the points onto each other exactly. I have tried reducing the number of points supplied and found that for dimensionality 3 and 4, you have to provide no more than 3 (imperfect) pairs of points for the affine to work.
The next thing I'd try would be to limit the number of initial points selected to 3 so that affine/euclidean/similarity transforms would all work in 3D, but this raises other issues (e.g. if users want to add more points, they could no longer do that with affines in 3D). I'm erring towards just writing documentation warning users away from using affine in 3D for the moment. I probably also need to look into the maths more to see whether it's a mathematical limit of affine transforms or I'm missing something in the code.
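For reference, a small script along these lines probes the degenerate case (invented points, with jitter standing in for imperfect clicks); estimate() returns False and fills params with NaN when the fit is degenerate, which is what the coplanar points trigger for the full affine, while the Euclidean estimate still goes through:

```python
import numpy as np
from skimage import transform

rng = np.random.default_rng(0)

# points clicked on a 2D plane, embedded in 3D with a zero leading coordinate;
# the jitter stands in for imperfect clicking between the two images
src = np.column_stack([np.zeros(6), rng.uniform(0, 100, size=(6, 2))])
dst = src + rng.normal(scale=0.5, size=src.shape)

affine = transform.AffineTransform(dimensionality=3)
euclidean = transform.EuclideanTransform(dimensionality=3)

# estimate() returns False and the params contain NaN when the fit is degenerate
print("affine:   ", affine.estimate(src, dst), np.isnan(affine.params).any())
print("euclidean:", euclidean.estimate(src, dst), np.isnan(euclidean.params).any())
```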
Things I still need to do
- Documentation for 2D copies on 3D then register just one 2D copy [outdated]
- Maybe get the affine transform working for 2D onto 3D copies [outdated]