Save spatial scales without float imprecisions #76

Closed
jluethi opened this issue Sep 15, 2022 · 2 comments
Labels
Backlog: Backlog issues we may eventually fix, but aren't a priority

Comments


jluethi commented Sep 15, 2022

Currently, the .zattrs files contain spatial scales as in:

"datasets": [
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                0.16249999999999998,
                                0.16249999999999998
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "0"
                },

The correct x & y scale would be 0.1625, though. Our processing can handle this minimal float rounding, but it would be better to save the actually correct numbers in the metadata. As it stands, this is a (minimal) distortion of the metadata and could have weird side effects (e.g. what if the value is rounded slightly differently for two OME-Zarr files that are opened in the same viewer? Will all downstream processing always be stable to pixel sizes that are slightly off?).
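For illustration, here is a minimal Python sketch of how such a value can end up in the file. The specific arithmetic (`1.0 - 0.8375`) is a hypothetical stand-in, not the actual computation in fractal, but the mechanism is the same: the intermediate value has no exact binary representation, the result lands one ULP away from the double closest to 0.1625, and `json.dumps` then writes the full round-trip repr into .zattrs:

```python
import json

# Hypothetical stand-in arithmetic (not fractal's actual computation):
# 0.8375 is not exactly representable in binary floating point, so the
# subtraction yields a double one ULP below the double closest to 0.1625.
pixel_size = 1.0 - 0.8375
print(pixel_size)  # 0.16249999999999998

# json.dumps writes the shortest round-trip repr of the double verbatim:
print(json.dumps({"scale": [1.0, pixel_size, pixel_size]}))
# {"scale": [1.0, 0.16249999999999998, 0.16249999999999998]}
```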


tcompa commented Sep 19, 2022

One way to force fractal to write "0.1625" would be to keep track of pixel sizes as strings (first during metadata parsing, and then downstream), and to include those strings in the .zattrs file by hand.

TBH I would not recommend this approach, since it introduces a lot of friction in the corresponding functions for little benefit (see below). I'm open to other ideas on how to ensure that the written number is 0.1625.
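For concreteness, a rough sketch of the string-based approach (hypothetical code, nothing that exists in fractal today). Since `json.dumps` cannot emit a bare string as a JSON number, it already takes a placeholder-substitution hack, which is part of the friction mentioned above:

```python
import json

# Hypothetical sketch: carry the pixel size as the original string from
# metadata parsing, and splice it into the serialized JSON so the file
# contains the literal characters "0.1625" as a number.
pixel_size_str = "0.1625"  # kept as a string throughout

zattrs = {"scale": [1.0, "__PIXEL_YX__", "__PIXEL_YX__"]}
text = json.dumps(zattrs, indent=4).replace('"__PIXEL_YX__"', pixel_size_str)
print(text)  # the file now contains the literal number 0.1625
```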

More generally, I'm not sure this goal is really important:
The difference between 0.1625 and 0.16249999999999998 is 2e-17 micrometers, and it comes from the finite precision of floating-point numbers. All downstream processing will still use floats, so it may still produce this kind of built-in error (such errors simply go unnoticed because we do not write every single variable to a text file). If a 2e-17 pixel-size error breaks a task (as we noticed when building ROI indices with ceil instead of round), chances are that other, less predictable rounding errors would also break it. In that case I think we should fix the task, rather than provide a more polished input.
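To make the ceil-vs-round point concrete, here is a small sketch (the ROI size is hypothetical, but the mechanism matches what we saw with ROI indices):

```python
import math

stored = 0.16249999999999998  # pixel size as written in .zattrs, 1 ULP low
roi_edge_um = 325.0           # hypothetical ROI edge: exactly 2000 pixels

n_pixels = roi_edge_um / stored
print(n_pixels)             # 2000.0000000000002
print(math.ceil(n_pixels))  # 2001 -> off-by-one ROI index
print(round(n_pixels))      # 2000 -> robust to the 1-ULP metadata error
```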

jluethi added the Backlog label on Sep 19, 2022

jluethi commented Sep 19, 2022

I agree that this should not affect any of our downstream processing, and if it does, we should fix that. I'm not sure we can guarantee the same for e.g. visualization tools, but I'd mostly hope they wouldn't be affected either.

In the end, while the error is tiny, we should still do our best not to introduce any changes to the metadata. If we're limited in the precision with which we can read the metadata for processing, so be it (=> let's make sure our processing is stable at that level). But it would be great if the metadata were actually saved as it was originally parsed.
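One way to preserve the parsed value exactly, sketched here with Python's decimal module (hypothetical, not current fractal behavior), would be to defer the string-to-float conversion until the last possible moment:

```python
from decimal import Decimal

# Hypothetical sketch: parse the metadata value as a Decimal so it is
# carried through exactly as written in the source file.
parsed = Decimal("0.1625")
print(parsed * 2)     # 0.3250 -- decimal arithmetic stays exact here
print(float(parsed))  # 0.1625 -- convert to float only for processing
```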

I agree, though, that this isn't a high-priority issue; I've assigned it to the backlog for the time being.
