Support uploading neuroglancer precomputed and N5 #7578

Merged
merged 15 commits on Feb 5, 2024
1 change: 1 addition & 0 deletions CHANGELOG.unreleased.md
@@ -11,6 +11,7 @@ For upgrade instructions, please check the [migration guide](MIGRATIONS.released
[Commits](https://github.com/scalableminds/webknossos/compare/24.02.0...HEAD)

### Added
- Added support for uploading N5 and Neuroglancer Precomputed datasets. [#7578](https://github.com/scalableminds/webknossos/pull/7578)

### Changed
- Datasets stored in WKW format are no longer loaded with memory mapping, reducing memory demands. [#7528](https://github.com/scalableminds/webknossos/pull/7528)
2 changes: 1 addition & 1 deletion README.md
@@ -27,7 +27,7 @@ WEBKNOSSOS is an open-source tool for annotating and exploring large 3D image da
* Sharing and collaboration features
* Proof-Reading tools for working with large (over)-segmentations
* [Standalone datastore component](https://github.com/scalableminds/webknossos/tree/master/webknossos-datastore) for flexible deployments
* Supported dataset formats: [WKW](https://github.com/scalableminds/webknossos-wrap), [Neuroglancer Precomputed, and BossDB](https://github.com/scalableminds/webknossos-connect), [Zarr](https://zarr.dev), [N5](https://github.com/saalfeldlab/n5)
* Supported dataset formats: [WKW](https://github.com/scalableminds/webknossos-wrap), [Neuroglancer Precomputed](https://github.com/google/neuroglancer/tree/master/src/datasource/precomputed), [Zarr](https://zarr.dev), [N5](https://github.com/saalfeldlab/n5)
* Supported image formats: Grayscale, Segmentation Maps, RGB, Multi-Channel
* [Support for 3D mesh rendering and ad-hoc mesh generation](https://docs.webknossos.org/webknossos/mesh_visualization.html)
* Export and streaming of any dataset and annotation as [Zarr](https://zarr.dev) to third-party tools
12 changes: 6 additions & 6 deletions app/models/dataset/explore/ExploreRemoteLayerService.scala
@@ -107,7 +107,7 @@ class ExploreRemoteLayerService @Inject()(credentialService: CredentialService,
credentialIdentifier: Option[String],
credentialSecret: Option[String],
reportMutable: ListBuffer[String],
requestingUser: User)(implicit ec: ExecutionContext): Fox[List[(DataLayer, Vec3Double)]] =
requestingUser: User)(implicit ec: ExecutionContext): Fox[List[(DataLayerWithMagLocators, Vec3Double)]] =
for {
uri <- tryo(new URI(exploreLayerService.removeHeaderFileNamesFromUriSuffix(layerUri))) ?~> s"Received invalid URI: $layerUri"
_ <- bool2Fox(uri.getScheme != null) ?~> s"Received invalid URI: $layerUri"
@@ -142,11 +142,11 @@ class ExploreRemoteLayerService @Inject()(credentialService: CredentialService,
bool2Fox(wkConf.Datastore.localFolderWhitelist.exists(whitelistEntry => uri.getPath.startsWith(whitelistEntry))) ?~> s"Absolute path ${uri.getPath} in local file system is not in path whitelist. Consider adding it to datastore.pathWhitelist"
} else Fox.successful(())

private def exploreRemoteLayersForRemotePath(
remotePath: VaultPath,
credentialId: Option[String],
reportMutable: ListBuffer[String],
explorers: List[RemoteLayerExplorer])(implicit ec: ExecutionContext): Fox[List[(DataLayer, Vec3Double)]] =
private def exploreRemoteLayersForRemotePath(remotePath: VaultPath,
credentialId: Option[String],
reportMutable: ListBuffer[String],
explorers: List[RemoteLayerExplorer])(
implicit ec: ExecutionContext): Fox[List[(DataLayerWithMagLocators, Vec3Double)]] =
explorers match {
case Nil => Fox.empty
case currentExplorer :: remainingExplorers =>
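The hunk above is truncated, but the pattern it implements is a straightforward fallback: each registered explorer is tried against the remote path in turn, and the next one is consulted if the current format is not recognized. Below is a minimal, self-contained Scala sketch of that pattern. It deliberately uses plain stand-in types (`Option`, `String`, a small trait) instead of the actual `Fox`, `VaultPath`, `RemoteLayerExplorer`, and `DataLayerWithMagLocators` types, so it illustrates the control flow rather than the real implementation.

```scala
import scala.annotation.tailrec

// Stand-in for RemoteLayerExplorer: each explorer understands one format
// (N5, Zarr, Neuroglancer Precomputed, ...) and reports whether it can
// interpret the given path.
trait LayerExplorer {
  def name: String
  def explore(path: String): Option[List[String]] // Some(layers) on success
}

object ExplorerFallback {
  // Try explorers in order; the first success wins, and every failure is
  // collected into a report (mirroring the reportMutable buffer above).
  @tailrec
  def exploreWithFallback(path: String,
                          explorers: List[LayerExplorer],
                          report: List[String] = Nil): (Option[List[String]], List[String]) =
    explorers match {
      case Nil => (None, report)
      case current :: remaining =>
        current.explore(path) match {
          case Some(layers) => (Some(layers), report :+ s"${current.name}: success")
          case None =>
            exploreWithFallback(path, remaining, report :+ s"${current.name}: no match")
        }
    }
}
```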
2 changes: 1 addition & 1 deletion docs/animations.md
@@ -16,6 +16,6 @@ Creating an animation is easy:
6. Click the `Start animation` button to launch the animation creation.


Either periodically check the [background jobs page](./jobs.md) or wait for a an email confirmation to download the animation video file. Creating an animation may take a while, depending on the selected bounding box size and the number of included 3D meshes.
Either periodically check the [background jobs page](./jobs.md) or wait for an email confirmation to download the animation video file. Creating an animation may take a while, depending on the selected bounding box size and the number of included 3D meshes.

WEBKNOSSOS Team plans and above have access to high definition (HD) resolution videos and more options.
4 changes: 2 additions & 2 deletions docs/automated_analysis.md
@@ -14,7 +14,7 @@ We would love to integrate analysis solutions for more modalities and use cases.
## Neuron Segmentation
As a first trial, WEBKNOSSOS includes neuron segmentation. This analysis is designed to work with serial block-face electron microscopy (SBEM) data of neural tissue (brain/cortex) and will segment all neurons within the dataset.

You can launch the AI analysis modal using the `AI Analysis` button in the tool bar at the top. Use the `Start AI neuron segmentation` button in the modal to start the analysis.
You can launch the AI analysis modal using the `AI Analysis` button in the toolbar at the top. Use the `Start AI neuron segmentation` button in the modal to start the analysis.

![Neuron segmentations can be launched from the tool bar.](images/process_dataset.jpg)

@@ -28,4 +28,4 @@ The finished analysis will be available as a new dataset from your dashboard. Yo
## Custom Analysis
At the moment, WEBKNOSSOS can not be used to train custom classifiers. This might be something that we add in the future if there is enough interest in this.

If you are interested in specialized, automated analysis, image segmentation, object detection etc. than feel free to [contact us](mailto:[email protected]). The WEBKNOSSOS development teams offers [commercial analysis services](https://webknossos.org/services/automated-segmentation) for that.
If you are interested in specialized, automated analysis, image segmentation, object detection etc. then feel free to [contact us](mailto:[email protected]). The WEBKNOSSOS development teams offers [commercial analysis services](https://webknossos.org/services/automated-segmentation) for that.
4 changes: 2 additions & 2 deletions docs/connectome_viewer.md
@@ -24,7 +24,7 @@ Several segments/cells can be loaded at the same time to highlight their matchin
In addition to loading the synapse locations and visualizing them as nodes, WEBKNOSSOS will also load the agglomerate skeleton representation of the selected segment(s) for context.

## Configuration
For WEBKNOSSOS to detect and load your Connectome file, you need to place it into a `connectome` sub-directory for a respective segmentation layer, e.g.:
For WEBKNOSSOS to detect and load your Connectome file, you need to place it into a `connectome` subdirectory for a respective segmentation layer, e.g.:

```
my_dataset # Dataset root
```
@@ -36,4 +36,4 @@


## Connectome File Format
The connectome file format is under active development and experiences frequent changes. [Please reach out to us for the latest file format spec and configuration help](mailto://[email protected]).
2 changes: 1 addition & 1 deletion docs/data_formats.md
@@ -68,7 +68,7 @@ The underlying data type limits the maximum number of IDs:


### Dataset Metadata
For each datasets, we stored metadata in a `datasource-properties.json` file.
For each dataset, we stored metadata in a `datasource-properties.json` file.
See below for the [full specification](#dataset-metadata-specification).
This is an example:

4 changes: 3 additions & 1 deletion docs/datasets.md
@@ -41,8 +41,10 @@ In particular, the following file formats are supported for uploading (and conve
- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4)
- [Multi Layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers
- [Single-file images](#single-file-images) (OME-Tiff, TIFF, PNG, czi, raw, etc)
- [Neuroglancer Precomputed datasets](./neuroglancer_precomputed.md)
- [N5 datasets](./n5.md)

Once the data is uploaded (and potentially converted), you can further configure a dataset's [Settings](#configuring-datasets) and double-check layer properties, finetune access rights & permissions, or set default values for rendering.
Once the data is uploaded (and potentially converted), you can further configure a dataset's [Settings](#configuring-datasets) and double-check layer properties, fine tune access rights & permissions, or set default values for rendering.

### Streaming from remote servers and the cloud
WEBKNOSSOS supports loading and remotely streaming [Zarr](https://zarr.dev), [Neuroglancer precomputed format](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed) and [N5](https://github.com/saalfeldlab/n5) datasets from a remote source, e.g. Cloud storage (S3) or HTTP server.
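As a rough illustration, when adding a remote dataset you point WEBKNOSSOS at the root of such a dataset with a URI; the examples below are made up (bucket and server names are placeholders, not real datasets):

```
https://example-server.org/data/my_dataset.zarr      # Zarr served over HTTPS
s3://example-bucket/my_dataset/precomputed           # Neuroglancer Precomputed on S3
s3://example-bucket/my_dataset.n5/raw                # N5 group on S3
```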
2 changes: 1 addition & 1 deletion docs/getting_started.md
@@ -56,7 +56,7 @@ You can also change the size of the viewports to see more details in your data a
To create your first annotation, click the `Create Annotation`` button while in “View” mode.
WEBKNOSSOS will launch the main annotation screen allowing you to navigate your dataset, place markers to reconstruct skeletons, or annotate segments as volume annotations.

You can perform various actions depending on the current tool - selectable in the tool bar at the top of the screen.
You can perform various actions depending on the current tool - selectable in the toolbar at the top of the screen.
Note that the most important controls are always shown in the status bar at the bottom of your screen.
The first tool is the _Move_ tool which allows navigating the dataset by moving the mouse while holding the left mouse button.
With the _Skeleton_ tool, a left mouse click can be used to place markers in the data, called nodes.
2 changes: 1 addition & 1 deletion docs/jobs.md
@@ -30,4 +30,4 @@ Depending on the job workflow you may:

![Overview of the Jobs page](./images/jobs.jpeg)

We constantly monitor job executions. In rare cases, jobs can fail and we aim to re-run them as quickly as possible. In case you run into any trouble please [contact us](mailto:[email protected]).
We constantly monitor job executions. In rare cases, jobs can fail, and we aim to re-run them as quickly as possible. In case you run into any trouble please [contact us](mailto:[email protected]).
2 changes: 1 addition & 1 deletion docs/mesh_visualization.md
@@ -37,6 +37,6 @@ Instead of having to slowly compute individual mesh every time you open a datase
You can start mesh generation from the `Segments` tab in the right-hand side panel. Click on the little plus button to initiate the mesh generation. We recommend computing the meshes in the medium quality (default) to strike a good balance between visual fidelity, compute time, and GPU resource usage.

!!! info
Pre-computated meshes are exclusive to webknossos.org. Contact [sales](mailto:[email protected]) for access to the integrated WEBKNOSSOS worker for meshing or the [Voxelytics software](https://voxelytics.com) for standalone meshing from the command line.
Pre-computed meshes are exclusive to webknossos.org. Contact [sales](mailto:[email protected]) for access to the integrated WEBKNOSSOS worker for meshing or the [Voxelytics software](https://voxelytics.com) for standalone meshing from the command line.

[Check the `Processing Jobs` page](./jobs.md) from the `Admin` menu at the top of the screen to track progress or cancel the operation. The finished, pre-computed mesh will be available on page reload.
4 changes: 1 addition & 3 deletions docs/n5.md
@@ -2,9 +2,7 @@

WEBKNOSSOS can read [N5 datasets](https://github.com/saalfeldlab/n5).

!!!info
N5 datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Provide a URI pointing directly to an N5 group. For several layers, import the first N5 group and then use the UI to add more URIs/groups. Uploading the through the web uploader is not supported.

N5 datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud).

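For orientation, an N5 dataset handed to the uploader or streamed remotely follows the standard N5 directory hierarchy. The sketch below is a hypothetical layout (the names and the s0/s1 multiscale convention are illustrative, not required verbatim), using the same tree notation as elsewhere in these docs:

```
my_dataset.n5                # N5 root group
  attributes.json            # e.g. {"n5": "2.5.0"}
  raw                        # one layer (an N5 group)
    s0                       # full-resolution dataset
      attributes.json        # dimensions, dataType, blockSize, compression
      0/0/0 ...              # chunk files in nested directories
    s1                       # downsampled dataset (common multiscale convention)
```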
## Examples

5 changes: 2 additions & 3 deletions docs/neuroglancer_precomputed.md
@@ -2,8 +2,7 @@

WEBKNOSSOS can read [Neuroglancer precomputed datasets](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed).

!!!info
Neuroglancer datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Uploading the through the web uploader is not supported.
Neuroglancer Precomputed datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud).

## Examples

@@ -42,4 +41,4 @@ For details see the [Neuroglancer spec](https://github.com/google/neuroglancer/t
To get the best streaming performance for Neuroglancer Precomputed datasets consider the following settings.

- Use chunk sizes of 32 - 128 voxels^3
- Enable sharding
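Both of these settings live in the per-scale entries of the dataset's `info` file. The following hand-written example is illustrative only (the values are placeholders; see the Neuroglancer spec linked above for the authoritative field list), but it shows where the chunk size and the sharding configuration go:

```json
{
  "@type": "neuroglancer_multiscale_volume",
  "type": "image",
  "data_type": "uint8",
  "num_channels": 1,
  "scales": [
    {
      "key": "16_16_40",
      "size": [2048, 2048, 1024],
      "resolution": [16, 16, 40],
      "encoding": "raw",
      "chunk_sizes": [[64, 64, 64]],
      "sharding": {
        "@type": "neuroglancer_uint64_sharded_v1",
        "preshift_bits": 9,
        "hash": "identity",
        "minishard_bits": 6,
        "shard_bits": 15,
        "minishard_index_encoding": "gzip",
        "data_encoding": "gzip"
      }
    }
  ]
}
```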
10 changes: 5 additions & 5 deletions docs/tasks.md
@@ -20,7 +20,7 @@ It is possible to download all annotations that belong to either a _Project_ or

First, a _Task Type_ needs to be created:

1. Open the `Task Types` screen of the admininstration section and click on `Add Task Type`.
1. Open the `Task Types` screen of the administration section and click on `Add Task Type`.
2. Fill out the form to create the Task Type:
- Note that the `Description` field supports Markdown formatting.
- If you don't have a sophisticated team structure, select the [default Team](./users.md#organizations).
@@ -38,7 +38,7 @@ Next, you need to set up a _Project_:

Now, you are ready to create _Tasks_:

1. Open the `Tasks` screen of the admininstration section and click on `Add Task`.
1. Open the `Tasks` screen of the administration section and click on `Add Task`.
2. Fill out the form create the Task.
- Enter the starting positions in the lower part of the form.
- Alternatively, you can upload an NML file that contains nodes that will be used as starting positions.
@@ -80,17 +80,17 @@ When users request a new task from their dashboard ("Tasks" tab), a set of crite
## Manual Task Assignment

In contrast to the automated task distribution system, an admin user can also manually assign a task instance to users.
Note, manual assignments bypass the assignment criterias enforced by the automated system and allow for fine-grained and direct assignments to individual user.
Note, manual assignments bypass the assignment criteria enforced by the automated system and allow for fine-grained and direct assignments to individual user.

Manual assignments can done by:
Manual assignments can be done by:

1. Navigate to the task list
2. Search for your task by setting the appropriate filters
3. Click on "Manual Assign To User"
4. Select a user for the assignment from the dropdown
5. Confirm the assignment with "ok"

Existing, active and finished task instances can also be transfered to other users, e.g. for proofreading, continued annotation or to change ownership:
Existing, active and finished task instances can also be transferred to other users, e.g. for proofreading, continued annotation or to change ownership:

1. Navigate to the task list
2. Search for your task by setting the appropriate filters
4 changes: 2 additions & 2 deletions docs/terminology.md
@@ -42,8 +42,8 @@ See also the [task and projects guide](./tasks.md).
## Segments
At its lowest-level a **segment** is the collection of several annotated voxels. At a larger level, segments can grow to be the size of whole cell bodies or partial cells, e.g. a single axon.

Typically many segments make up a segmentation. Segments can be painted manually using the WEBKNOSSOS volume annotation tools or created through third-party programs typically resulting in larger segmentations of a dataset.
Typically, many segments make up a segmentation. Segments can be painted manually using the WEBKNOSSOS volume annotation tools or created through third-party programs typically resulting in larger segmentations of a dataset.

## Agglomerates
An agglomerate is the combination of several (smaller) segments to reconstruct a larger biological structure. Typically an agglomerate combines the fragments of an over-segmentation created by some automated method, e.g. a machine learning system.
Sometimes this is also referred to as a super-voxel graph.
2 changes: 1 addition & 1 deletion docs/today_i_learned.md
@@ -1,6 +1,6 @@
# Today I learned

We reguarly publish tips and tricks videos for beginners and pros on YouTube to share new features, highlight efficient workflows, and show you hidden gems.
We regularly publish tips and tricks videos for beginners and pros on YouTube to share new features, highlight efficient workflows, and show you hidden gems.

Subscribe to our YouTube channel [@webknossos](https://www.youtube.com/@webknossos) or [@webknossos](https://twitter.com/webknossos) on Twitter to stay up-to-date.
