Write scale less than 5D #114
Merged

Commits (10):
be4c85f  writer and scaler support less than 5D data (will-moore)
6a3736f  Tidy and fix scaling (will-moore)
82aac73  Fix scaler.local_mean() for 2D-4D. Add 3D shape to tests (will-moore)
5485fef  write_image() MUST provide axes for v0.3 (will-moore)
982e360  Add axes in failing tests (will-moore)
045a14e  data.py includes color for single-channel images (will-moore)
438b0ab  Don't need axes for 5D data (will-moore)
a509da9  data.create_zarr() doesn't enforce 5D data for v0.3 (will-moore)
b038431  Update test_ome_zarr.py to expect different shapes from data.create_z… (will-moore)
d65f5b3  ome_zarr create coins uses window 0-255 rendering (will-moore)
@@ -19,22 +19,57 @@ def write_multiscale(
    group: zarr.Group,
    chunks: Union[Tuple[Any, ...], int] = None,
    fmt: Format = CurrentFormat(),
    axes: Union[str, List[str]] = None,
) -> None:
    """
    Write a pyramid with multiscale metadata to disk.

    Parameters
    ----------
    TODO:
    pyramid: List of np.ndarray
        the image data to save. Largest level first
    group: zarr.Group
        the group within the zarr store to store the data in
    chunks: int or tuple of ints,
        size of the saved chunks to store the image
    fmt: Format
        The format of the ome_zarr data which should be used.
        Defaults to the most current.
    axes: str or list of str
        the names of the axes. e.g. "tczyx". Not needed for v0.1 or v0.2
        or for v0.3 if 2D or 5D. Otherwise this must be provided
    """

    dims = len(pyramid[0].shape)
    if fmt.version not in ("0.1", "0.2"):
        if axes is None:
            if dims == 2:
                axes = ["y", "x"]
            elif dims == 5:
                axes = ["t", "c", "z", "y", "x"]
            else:
                raise ValueError(
                    "axes must be provided. Can't be guessed for 3D or 4D data"
                )
        if len(axes) != dims:
            raise ValueError("axes length must match number of dimensions")

        if isinstance(axes, str):
            axes = list(axes)

        for dim in axes:
            if dim not in ("t", "c", "z", "y", "x"):
                raise ValueError("axes must each be one of 'x', 'y', 'z', 'c' or 't'")

    paths = []
    for path, dataset in enumerate(pyramid):
        # TODO: chunks here could be different per layer
        group.create_dataset(str(path), data=dataset, chunks=chunks)
        paths.append({"path": str(path)})

    multiscales = [{"version": fmt.version, "datasets": paths}]
    if axes is not None:
        multiscales[0]["axes"] = axes
    group.attrs["multiscales"] = multiscales

Review comment on the `if fmt.version not in ("0.1", "0.2"):` line: While the axes guessing below definitely applies to 0.3, if the assumptions get further relaxed, there will be an outstanding TODO of restricting this to the relevant versions of the specification.
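The axes guessing and validation above can be exercised in isolation. Below is a minimal sketch, assuming plain Python only; the function name `guess_axes` is invented here for illustration and is not part of the ome-zarr-py API, but the rules mirror the checks in `write_multiscale`:

```python
from typing import List, Optional, Union

VALID_AXES = ("t", "c", "z", "y", "x")

def guess_axes(dims: int, axes: Optional[Union[str, List[str]]]) -> List[str]:
    """Sketch of the axes guessing/validation rules from write_multiscale."""
    if axes is None:
        # Only 2D and 5D shapes have an unambiguous default
        if dims == 2:
            axes = ["y", "x"]
        elif dims == 5:
            axes = ["t", "c", "z", "y", "x"]
        else:
            raise ValueError(
                "axes must be provided. Can't be guessed for 3D or 4D data"
            )
    if len(axes) != dims:
        raise ValueError("axes length must match number of dimensions")
    if isinstance(axes, str):
        axes = list(axes)
    for dim in axes:
        if dim not in VALID_AXES:
            raise ValueError("axes must each be one of 'x', 'y', 'z', 'c' or 't'")
    return axes

print(guess_axes(2, None))    # ['y', 'x']
print(guess_axes(3, "zyx"))   # ['z', 'y', 'x']
```

Note that a 3D or 4D call with `axes=None` raises, which is exactly why the tests in this PR had to start passing axes explicitly.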
@@ -45,6 +80,7 @@ def write_image(
    byte_order: Union[str, List[str]] = "tczyx",
    scaler: Scaler = Scaler(),
    fmt: Format = CurrentFormat(),
    axes: Union[str, List[str]] = None,
    **metadata: JSONDict,
) -> None:
    """Writes an image to the zarr store according to ome-zarr specification
@@ -67,24 +103,29 @@ def write_image(
    fmt: Format
        The format of the ome_zarr data which should be used.
        Defaults to the most current.
    axes: str or list of str
        the names of the axes. e.g. "tczyx". Not needed for v0.1 or v0.2
        or for v0.3 if 2D or 5D. Otherwise this must be provided
    """

    if image.ndim > 5:
        raise ValueError("Only images of 5D or less are supported")

-    shape_5d: Tuple[Any, ...] = (*(1,) * (5 - image.ndim), *image.shape)
-    image = image.reshape(shape_5d)
+    if fmt.version in ("0.1", "0.2"):
+        # v0.1 and v0.2 are strictly 5D
+        shape_5d: Tuple[Any, ...] = (*(1,) * (5 - image.ndim), *image.shape)
+        image = image.reshape(shape_5d)

    if chunks is not None:
-        chunks = _retuple(chunks, shape_5d)
+        chunks = _retuple(chunks, image.shape)

    if scaler is not None:
        image = scaler.nearest(image)
    else:
        LOGGER.debug("disabling pyramid")
        image = [image]

-    write_multiscale(image, group, chunks=chunks, fmt=fmt)
+    write_multiscale(image, group, chunks=chunks, fmt=fmt, axes=axes)
    group.attrs.update(metadata)
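The now version-gated reshape above is easy to sanity-check on its own. A small sketch, assuming only numpy (the helper name `to_5d` is mine, not the library's), of what `write_image` does to sub-5D data for the strictly-5D v0.1/v0.2 formats:

```python
import numpy as np

def to_5d(image: np.ndarray) -> np.ndarray:
    # v0.1 and v0.2 are strictly 5D: prepend size-1 dimensions as needed
    shape_5d = (*(1,) * (5 - image.ndim), *image.shape)
    return image.reshape(shape_5d)

yx = np.zeros((64, 128))        # 2D input
print(to_5d(yx).shape)          # (1, 1, 1, 64, 128)

zyx = np.zeros((10, 64, 128))   # 3D input
print(to_5d(zyx).shape)         # (1, 1, 10, 64, 128)
```

For v0.3 the image keeps its original dimensionality, which is why the `_retuple` call now uses `image.shape` rather than the padded `shape_5d`.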
@@ -98,4 +139,6 @@ def _retuple(
    else:
        _chunks = chunks

-    return (*shape[: (5 - len(_chunks))], *_chunks)
+    dims_to_add = len(shape) - len(_chunks)
+
+    return (*shape[:dims_to_add], *_chunks)
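The fix generalises `_retuple` from hard-coded 5D to any dimensionality: a short chunks tuple is padded with the leading dimensions of the array's own shape. A sketch of that behaviour; the `int` branch is an assumption on my part, since that part of the function is not visible in the diff:

```python
from typing import Any, Tuple, Union

def retuple(
    chunks: Union[Tuple[Any, ...], int], shape: Tuple[Any, ...]
) -> Tuple[Any, ...]:
    # Assumed handling of a bare int (not shown in the hunk above)
    if isinstance(chunks, int):
        _chunks = (chunks,)
    else:
        _chunks = chunks
    # Pad with leading dims from `shape` instead of assuming 5D
    dims_to_add = len(shape) - len(_chunks)
    return (*shape[:dims_to_add], *_chunks)

print(retuple((32, 32), (1, 2, 256, 256)))   # (1, 2, 32, 32)
print(retuple(64, (10, 512, 512)))           # (10, 512, 64)
```

With the old `5 - len(_chunks)` slice, a 4D `shape` would have been padded with one dimension too many; computing `dims_to_add` from `len(shape)` keeps the result the same length as the input shape.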
Review comment: Possibly not an action for this PR, but a general thought: as the specification gets refined and new concepts get introduced (thinking concretely of the ongoing `transformation` proposal), there may be a trade-off between adding every new key as an extra parameter vs. passing some form of dictionary of extra metadata which is validated depending on the specification.
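To make that trade-off concrete, here is a hypothetical sketch of the dictionary approach; `SPEC_KEYS` and `validate_metadata` are invented names for illustration and are not part of ome-zarr-py:

```python
# Hypothetical: one metadata dict validated per spec version, instead of a
# new keyword argument for every concept a spec revision introduces.
SPEC_KEYS = {
    "0.1": set(),
    "0.2": set(),
    "0.3": {"axes"},
}

def validate_metadata(version: str, metadata: dict) -> dict:
    """Reject keys that the target spec version does not define."""
    unknown = set(metadata) - SPEC_KEYS[version]
    if unknown:
        raise ValueError(f"keys {sorted(unknown)} are not valid for spec {version}")
    return metadata

print(validate_metadata("0.3", {"axes": "zyx"}))   # {'axes': 'zyx'}
```

Under such a scheme a caller would pass something like `metadata={"axes": "zyx"}` and the writer's signature would stay fixed across spec revisions, at the cost of losing per-key documentation and type hints.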