From 45537cb032e6b67650f020b9969f6d6c4d93a544 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Tue, 14 Nov 2023 16:14:08 +0100
Subject: [PATCH 01/22] updated docs

---
 docs/animations.md | 19 +++++++++++
 docs/faq.md | 2 +-
 docs/jobs.md | 3 ++
 docs/pen_tablets.md | 65 +++++++++++++++++++++++++++++++++++++
 docs/skeleton_annotation.md | 2 +-
 docs/tracing_ui.md | 4 +--
 docs/volume_annotation.md | 1 +
 7 files changed, 92 insertions(+), 4 deletions(-)
 create mode 100644 docs/animations.md
 create mode 100644 docs/pen_tablets.md

diff --git a/docs/animations.md b/docs/animations.md
new file mode 100644
index 00000000000..2d460277a00
--- /dev/null
+++ b/docs/animations.md
@@ -0,0 +1,19 @@
+# Animations

A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye-catching animations of your datasets as video clips. You can use these short movies as part of a presentation or website, on social media, or to promote a publication.

// animation video

## Creating an Animation

Creating an animation is easy:

1. Open any dataset or annotation that you want to use for your animation.
2. Optionally, load any [pre-computed 3D meshes](./mesh_visualization.md#pre-computed-mesh-generation) for any segments that you wish to highlight.
3. For larger datasets, use the bounding box tool to create a bounding box around your area of interest. Smaller datasets can be used in their entirety.
4. From the `Menu` dropdown in the navbar at the top of the screen, select "Create Animation".
5. Configure the animation options as desired, e.g., camera movement or resolution.
6. Click the `Start animation` button to launch the animation creation.


Either periodically check the [background jobs page](./jobs.md) or wait for an email confirmation to download the animation video file. Creating an animation may take a while, depending on the selected bounding box size and the number of included 3D meshes.
diff --git a/docs/faq.md b/docs/faq.md
index 08966e80712..a0f4d84a3fd 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -20,7 +20,7 @@ We are always happy to help you through email or a quick call. In addition, we o
## How can I run machine learning analysis on my datasets with WEBKNOSSOS?
Machine learning integration with WEBKNOSSOS is a very interesting topic for us and something that we want to focus more on.
-At the moment, there is a trial integration of a neural network model for nuclei segmentation in EM brain data.
+At the moment, there is a trial integration of a neural network model for neuron segmentation in EM brain data.
We are looking to expand the model portfolio and integrated analysis. [Read more about automated analysis.](./automated_analysis.md)
We have years of experience with automated machine learning analysis and [offer commercial automated analysis services](https://webknossos.org/services/automated-segmentation).
diff --git a/docs/jobs.md b/docs/jobs.md
index 5d2b57a171a..a83656c867d 100644
--- a/docs/jobs.md
+++ b/docs/jobs.md
@@ -2,6 +2,8 @@
webknossos.org includes several compute-intensive and automated workflows that are processed in the background. Depending on the operation and dataset size, these workflows may take some time (from minutes to hours) to finish.
+WEBKNOSSOS will notify you via email upon completion or failure of any job.
+
Example workflows:

- [converting datasets on upload](./datasets.md#uploading-through-the-web-browser)
- [applying a merger mode annotation](./volume_annotation.md#proof_reading_and_merging_segments)
- automatic inference of a segmentation layer's large segment ID
- [dataset & annotation export as Tiff files](./export.md#data-export-through-the-ui)
+- [creating engaging animations of datasets](./animations.md)
- downsampling volume annotations

These workflows are executed in background worker tasks as so-called *processing jobs*.
diff --git a/docs/pen_tablets.md b/docs/pen_tablets.md
new file mode 100644
index 00000000000..68b923d075c
--- /dev/null
+++ b/docs/pen_tablets.md
@@ -0,0 +1,65 @@
+# Annotating with Pen Tablets / Wacom Pens, and iPads

Beyond the mouse and keyboard, WEBKNOSSOS is great for annotating datasets with alternative input devices such as pens, styluses, or the Apple Pencil. These input devices can significantly boost your annotation speed.

## Using Wacom/Pen tablets
Using a pen tablet can significantly boost your annotation productivity, especially if you set it up correctly with WEBKNOSSOS.

### Set up your tablet
To streamline your workflow, program your tablet and pen buttons to match the WEBKNOSSOS shortcuts. By doing so, you can focus on your pen without needing a mouse or keyboard. Here’s an example configuration using a Wacom tablet:

Tablet buttons:
- Left: Brush (ctrl + K, B)
- Middle left: Eraser (ctrl + K, E)
- Middle right: Quick-select (ctrl + K, Q)
- Right: Create new segment (C)

Pen buttons:
- Lower button: Move (ALT)

You can find the full list of keyboard shortcuts in the [documentation](./keyboard_shortcuts.md).


// Alt Programming buttons to match the WEBKNOSSOS shortcuts

### Annotating with Pens
Now, let’s dive into the annotation process! In this example, we begin by quick-selecting a cell.


// Alt Navigating the dataset and segmenting a cell with the quick-select tool

If the annotation isn’t precise enough, we can easily switch to the eraser tool (middle left button) and erase a corner. Selecting the brush tool is as simple as pressing the left button, allowing us to add small surfaces to the annotation.


// Alt Improving the annotation precision with the eraser and brush tools
When ready, pressing the right button creates a new segment, and we can repeat the process for other cells.


// Alt Creating new segments and annotating them with the quick-select tool
For increased flexibility, you can additionally use your laptop’s keyboard shortcuts (e.g. “I” and “O” for zooming in and out).

## iPad and Apple Pencil
Accessing your WEBKNOSSOS data from any internet-connected device with a browser, including iPads and Android tablets, allows you to conveniently showcase or explore large datasets anywhere. Whether you want to share your findings with scientists post-conference or analyze data during your train commute, all you need is a browser. No installation of any additional software is required. The user-friendly interface supports intuitive finger gestures and complementary buttons for smooth navigation.

In a brief workflow example, we demonstrate the ease of data visualization on an iPad. Using simple finger gestures, we navigate along the x and y axes and perform zoom operations with intuitive two-finger gestures.

// Alt Moving through your data and zooming in and out.
+ +Additional functions, such as z-axis movement, toggling the right sidebar, and activating the four viewports, are easily accessible with the touch of a button. Selecting segments and loading their meshes is as simple as tapping the corresponding locations on the screen. +// Alt Toggling the right sidebar, turning on the 4 viewports, loading a mesh. + +Finally, we maximize the 3D viewport and effortlessly explore the mesh geometry by swiping with three fingers. +// Alt Exploring the 3D mesh. + + +### Intuitive Annotation with your iPad +Take advantage of the iPad and Apple Pencil for seamless and precise annotation. Enhance your manual annotations with direct drawing on the screen, offering increased accuracy and efficiency compared to traditional mouse-based annotation. + +In this example, we demonstrate the annotation workflow using the iPad and Apple Pencil. Starting with the quick-select tool, we segment a cell, refining its edges with the pixel-perfect precision of the lasso tool. +// Alt Quick-selecting a cell and refining the edges with the lasso tool. + +Next, we create a new segment and annotate it from scratch using the lasso tool. +// Alt Annotating a new segment from scratch with the lasso and then the brush tool. + +Finally, we create an additional segment and use the brush tool to annotate. +// Alt Annotating a segment with the brush. diff --git a/docs/skeleton_annotation.md b/docs/skeleton_annotation.md index a199ae21025..d6bab135439 100644 --- a/docs/skeleton_annotation.md +++ b/docs/skeleton_annotation.md @@ -67,7 +67,7 @@ The WEBKNOSSOS toolbar at the top of the screen contains several tools designed When the `Skeleton` tool is active, the following modifiers become available: - `Create new Tree`: Creates a new tree. -- `Toggle single node tree mode`: Modifies the behavior of the skeleton annotation tool to create a new tree at each click instead of adding nodes to the active tree. Useful for marking single position objects/seeds, e.g., for marking nuclei. Also called "Soma-clicking mode". +- `Toggle single node tree mode`: Modifies the behavior of the skeleton annotation tool to create a new tree at each click instead of adding nodes to the active tree. Useful for marking single position objects/seeds, e.g., for marking nuclei or vesicles. Also called "Soma-clicking mode". - `Toggle merger mode`: Modifies the behavior of the skeleton annotation tool to launch the `Merger Mode`. In merger mode skeletons, can be used to "collect" and merge volume segments from an over-segmentation. [Read more about `Merger Mode`](./volume_annotation.md#proof_reading_and_merging_segments). ![Skeleton Tool modifiers](./images/skeleton_tool_modifiers.jpeg) diff --git a/docs/tracing_ui.md b/docs/tracing_ui.md index 3a6ae851e0e..4e9986b355f 100644 --- a/docs/tracing_ui.md +++ b/docs/tracing_ui.md @@ -45,7 +45,7 @@ The toolbar further features all available navigation and annotation tools for q - `Erase (Trace/Brush)`: Removes voxels from a volume annotation by drawing over the voxels you would like to erase. - `Fill Tool`: Flood-fills the clicked region with a volume annotation until it hits the next segment boundary (or the outer edge of your viewport). Used to fill holes in a volume annotation or to relabel a segment with a different id. - `Segment Picker`: Select the volume annotation ID of a segment to make it the active cell id to continue labeling with that ID/color. -- `Bounding Box`: Creates and resizes any bounding box. See also the BoundingBox panel below. 
+- `Bounding Box`: Creates and resizes any bounding box. See also the [Bounding Box (BB) panel](./tracing_ui.md#right-hand-side-panel) below.

Please see the detailed documentation on [skeleton](./skeleton_annotation.md#tools) and [volume annotation](./volume_annotation.md#tools) tools for an explanation of all context-sensitive modifiers that are available to some tools.
@@ -144,7 +144,7 @@ The right-hand side panel includes a number of tabs with specific information, a
- `Skeleton`: Lists all available skeleton annotations and offers further interactions with them. [Read more about skeleton annotations.](./skeleton_annotation.md)
- `Comments`: Lists all comments assigned to individual nodes of a skeleton. [Read more about comments and skeleton annotations.](./skeleton_annotation.md#nodes_and_trees)
- `Segments`: List all segments created during a volume annotation. It also provides access to mesh generation for individual segments or the whole dataset, mesh visualization, mesh downloads, and more. [Read more about 3D meshes.](./mesh_visualization.md)
-- `BBoxes`: List all bounding boxes present in the dataset. Create new bounding boxes or adjust existing ones. This provides an alternativ interface for the `BoundingBox` tool.
+- `BBoxes`: List all bounding boxes present in the dataset. Create new bounding boxes or adjust existing ones. This provides an alternative interface for the `Bounding Box` tool.
- `AbsTree`: Renders an abstract 2D tree representation of a skeleton annotation when enabled. Might be quite resource-intensive when working with large skeletons.

## Status Bar
diff --git a/docs/volume_annotation.md b/docs/volume_annotation.md
index 5c2fccabbdc..ff91f5a6221 100644
--- a/docs/volume_annotation.md
+++ b/docs/volume_annotation.md
@@ -16,6 +16,7 @@ Select one of the drawing tools from the toolbar or toggle through with the keyb
- `Fill Tool`: Flood-fills the clicked region with a volume annotation until it hits the next segment boundary (or the outer edge of your viewport). All adjacent voxels with the same voxel id as the clicked voxel will be changed to the active segment ID. Useful to either fill a hole in a segment or to relabel a segment with a different ID/color.
- `Segment Picker`: Click on any segment to select its label ID as the active segment ID and continue any volume annotation operation with that ID.
- `Quick Select`: Draw a rectangle over a segment to annotate it automatically. The tool can operate in two different modes. If the "AI" button in the toolbar is activated, a machine-learning model is used to infer the selection. If the AI button is disabled, the tool operates on the intensity data of the visible color layer and automatically fills out the segment starting from the center of the rectangle. Next to the tool, there is a settings button which allows you to enable a preview mode and to tweak some other parameters. If the preview is enabled, the parameters can be fine-tuned while the preview updates instantly.
+- `Proof Reading`: Fix merge and split errors in automated segmentation. See [page on proofreading](./proof_reading.md#proofreading-tool) for more.

When using the trace or brush tool, a label can be added with _Left Mouse Drag_. Erasing is possible with the dedicated erase tools or with _CTRL + Shift + Left Mouse Drag_.
From b21c4eada5d85487b3f2c377673fa9198972cede Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Wed, 15 Nov 2023 10:51:47 +0100
Subject: [PATCH 02/22] docs update #2

---
 docs/automated_analysis.md | 14 +++++++-------
 docs/index.md | 3 ++-
 docs/mesh_visualization.md | 14 ++++++++------
 docs/pen_tablets.md | 9 ++++-----
 docs/tooling.md | 31 -------------------------------
 docs/tracing_ui.md | 36 ++++++++++++++++++++++--------------
 docs/volume_annotation.md | 15 +++++++++++++--
 7 files changed, 56 insertions(+), 66 deletions(-)
 delete mode 100644 docs/tooling.md

diff --git a/docs/automated_analysis.md b/docs/automated_analysis.md
index 3edb955ae45..60e448b2967 100644
--- a/docs/automated_analysis.md
+++ b/docs/automated_analysis.md
@@ -1,6 +1,6 @@
# Automated Analysis

-While WEBKNOSSOS is great for manual annotation, some datasets are either too big to do by hand or you need results quicker. WEBKNOSSOS contains early access to automated analysis using machine learning classifiers for dataset segmentations. The WEBKNOSSOS developer team has many years of experience with training ML models for large-scale data analysis outside of WEBKNOSSOS. We aim to bring some of this know-how directly into WEBKNOSSOS itself.
+While WEBKNOSSOS is great for manual annotation, some datasets are either too big to do by hand or you need results quicker. WEBKNOSSOS contains early access to automated analysis using machine learning classifiers for dataset segmentations. The WEBKNOSSOS developer team has many years of experience with training AI models for large-scale data analysis outside of WEBKNOSSOS. We aim to bring some of this know-how directly into WEBKNOSSOS itself.

The automated analysis features are designed to provide a general solution to a wide range of (EM) datasets. Since datasets differ in staining protocols, imaging modalities, imaging resolution & fidelity, your results may vary. [Please contact us](mailto:hello@webknossos.org) for customized, fine-tuned solutions for your dataset.

We would love to integrate analysis solutions for more modalities and use cases.

!!!info
    Automated analysis is only available on [webknossos.org](https://webknossos.org) at the moment.
-    If you want to set up on-premise automated analysis at your institute/workplace, then [please contact us](mailto:hello@webknossos.org).
+    If you want to set up on-premise automated analysis at your institute/workplace, then [please contact sales](mailto:sales@webknossos.org).

## Neuron Segmentation
As a first trial, WEBKNOSSOS includes neuron segmentation. This analysis is designed to work with serial block-face electron microscopy (SBEM) data of neural tissue (brain/cortex) and will segment all neurons within the dataset.

-You can launch the AI analysis modal using the button in the action bar at the top. Use the `Start AI neuron segmentation` button in the modal to start the analysis.
+You can launch the AI analysis modal using the `AI Analysis` button in the toolbar at the top. Use the `Start AI neuron segmentation` button in the modal to start the analysis.

-![Neuron segmentations can be launched from the action bar.](images/process_dataset.jpg)
+![Neuron segmentations can be launched from the toolbar.](images/process_dataset.jpg)

Computation time for this analysis depends directly on the size of your dataset. Expect a few hours for medium-sized volumetric EM datasets.

-The finished analysis will be available as a new dataset from your dashboard.
You can monitor the status and progress of the [analysis job from the `Processing Jobs` section of the `Administration` menu at the top of the screen](./jobs.md).
+The finished analysis will be available as a new dataset from your dashboard. You can monitor the status and progress of the analysis job from the [`Processing Jobs` page](./jobs.md) or wait for the email notification.

![Starting a new neuron segmentation.](images/neuron_segmentation_start.jpeg)
![Monitor the segmentation progress from the Jobs page.](images/nuclei_segmentation_job.jpeg)

## Custom Analysis
-At the moment, WEBKNOSSOS can not be used to train a custom classifier itself. This might be something that we add in the future if there is enough interest in this.
+At the moment, WEBKNOSSOS cannot be used to train custom classifiers. This might be something that we add in the future if there is enough interest in this.

-If you are interested in specialized, automated analysis, image segmentation, object detection etc. than feel free to [contact us](mailto:hello@webknossos.org). The WEBKNOSSOS development teams offers [commercial analysis services for](https://webknossos.org/services/automated-segmentation) that.
\ No newline at end of file
+If you are interested in specialized, automated analysis, image segmentation, object detection etc., then feel free to [contact us](mailto:hello@webknossos.org). The WEBKNOSSOS development team offers [commercial analysis services](https://webknossos.org/services/automated-segmentation) for that.
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index b609e18ec66..bf572966f6d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -15,7 +15,7 @@ Sign up for a free account on [https://webknossos.org/](https://webknossos.org/)
## Features

-- Exploration of large 3D image datasets as found in electron-microscopy, synchrotron, CT, MRI, Micro/Nano-CT
+- Exploration of large 2D, 3D, 4D image datasets as found in electron-microscopy, synchrotron, CT, MRI, Micro/Nano-CT, and light microscopy
- Fully browser-based user experience. No installation required
- Efficient 3D data streaming for quick loading speeds
- Creation/editing of [skeleton (line-segments)](./skeleton_annotation.md) and [3D volumetric annotations](./volume_annotation.md)
@@ -27,6 +27,7 @@ Sign up for a free account on [https://webknossos.org/](https://webknossos.org/)
- [Standalone datastore component](https://github.com/scalableminds/webknossos/tree/master/webknossos-datastore) for flexible deployments
- [Supported dataset formats: Zarr, WKW (Optimized), KNOSSOS cubes, Neuroglancer Precomputed, N5, and image stacks](./data_formats.md) (some formats will be converted on upload)
- [Supported image formats](./data_formats.md): Grayscale, Segmentation Maps, RGB, Multi-Channel
+- Supports Time Series datasets
- [3D Mesh Visualization](./mesh_visualization.md)
- [Integrated Synapse and Connectome Viewer](./connectome_viewer.md)
- [Documented Python library for API access and integration in custom analysis workflows](https://docs.webknossos.org/webknossos-py/index.html)
diff --git a/docs/mesh_visualization.md b/docs/mesh_visualization.md
index 5264f1b9231..6762280e4e7 100644
--- a/docs/mesh_visualization.md
+++ b/docs/mesh_visualization.md
@@ -1,8 +1,8 @@
# Mesh Visualization
WEBKNOSSOS offers two different methods to render and visualize volumetric segmentations as 3D meshes.
-1. Load a pre-computed 3D mesh.
Meshes can either be (pre-)computed from within WEBKNOSSOS for the whole dataset or outside of WEBKNOSSOS with a `mesh file`. These meshes will be instantly available the next time you open this dataset (quicker mesh loading time).
-2. Compute an ad-hoc mesh of any segmentation layer or volume annotation. These meshes will live computed any time you request them (slower mesh loading time).
+1. Load a pre-computed 3D mesh. These meshes have been (pre-)computed by WEBKNOSSOS for all segments in the dataset and will load almost instantly (very quick loading time).
+2. Compute an ad-hoc mesh of any single segment in your volume annotation. These meshes will be computed live any time you request them (slower mesh loading time).

Meshes will always be rendered in the 3D viewport in the lower right.

## Loading Meshes
Regardless of the method, meshes can be loaded by right-clicking on any segment and bringing up the context-sensitive action menu. Select `Load Mesh (pre-computed)` or `Compute Mesh (ad-hoc)` to load the respective 3D mesh for that segment.

-Alternatively, the `Segments` tab in the right-hand side panel, allows you to load the mesh for any segment listed there. Select the corresponding option from the overflow menu next to each list entry.
+Alternatively, the `Segments` tab in the right-hand side panel allows you to load the mesh for any segment or whole group of segments listed there. Select the corresponding option from the overflow menu next to each list entry.

![Mesh can be loaded from the context-sensitive right-click menu](images/mesh_options.jpeg)
![The Segments Tab lists all loaded meshes.](images/segments_tab2.jpeg)

CTRL + Click on any mesh will unload that mesh. Additionally, hiding, removing, reloading a mesh or jumping to its hovered position can be done with the context menu in the 3D viewport via right-click on a hovered mesh.

+You can also include meshes in [WEBKNOSSOS animations](./animations.md).
+
## Pre-Computed Mesh Generation
-Instead of having to slowly compute individual mesh every time you open a dataset, it might make more sense to pre-compute all meshes within a dataset. Pre-computed meshes have the advantage of loading very quickly - even for larger meshes.
+Instead of having to slowly compute individual meshes every time you open a dataset, it might make more sense to pre-compute and save all meshes within a dataset. Pre-computed meshes have the advantage of loading very quickly - even for larger meshes.

-You can start the mesh generation from the `Segments` tab in the right-hand side panel. Click on the little plus button to initiate the mesh generation. We recommend computing the meshes in the medium quality (default) to strike a good balance between visual fidelity, compute time, and GPU resource usage.
+You can start mesh generation from the `Segments` tab in the right-hand side panel. Click on the little plus button to initiate the mesh generation. We recommend computing the meshes in the medium quality (default) to strike a good balance between visual fidelity, compute time, and GPU resource usage.

!!! info
-    Pre-computated meshes are exclusive to webknossos.org. Contact [sales](mailto:sales@webknossos.org) for access to the WEBKNOSSOS worker or [Voxelytics](https://voxelytics.com)) for the meshing job.
+    Pre-computed meshes are exclusive to webknossos.org.
Contact [sales](mailto:sales@webknossos.org) for access to the integrated WEBKNOSSOS worker for meshing or the [Voxelytics software](https://voxelytics.com) for standalone meshing from the command line.

[Check the `Processing Jobs` page](./jobs.md) from the `Admin` menu at the top of the screen to track progress or cancel the operation. The finished, pre-computed mesh will be available on page reload.
diff --git a/docs/pen_tablets.md b/docs/pen_tablets.md
index 68b923d075c..cf6bb436490 100644
--- a/docs/pen_tablets.md
+++ b/docs/pen_tablets.md
@@ -1,12 +1,12 @@
# Annotating with Pen Tablets / Wacom Pens, and iPads

-Beyond the mouse and keyboard WEBKNOSSOS is great for annotating datasets with alternative input devices such as pens, styluses, or the Apple pencil. These input devices can significantly boost your annotation speed.
+Beyond the mouse and keyboard, WEBKNOSSOS is great for annotating datasets with alternative input devices such as pens, styluses, or the Apple Pencil. These input devices can significantly boost your annotation speed and improve the detail of your annotations.

## Using Wacom/Pen tablets
-Using pen tablet can signifincatly boost your annotation productivity, especially if you set it up correctl with WEBKNOSSOS.
+Using a pen tablet can significantly boost your annotation productivity, especially if you set it up correctly with WEBKNOSSOS.

-### Set up your tablet
+To streamline your workflow, program your tablet and pen buttons to match the WEBKNOSSOS shortcuts. By doing so, you can focus on your pen without needing a mouse or keyboard. Here is an example configuration using a Wacom tablet and the Wacom driver software:

Tablet buttons:
- Left: Brush (ctrl + K, B)
- Middle left: Eraser (ctrl + K, E)
- Middle right: Quick-select (ctrl + K, Q)
- Right: Create new segment (C)

Pen buttons:
- Lower button: Move (ALT)

You can find the full list of keyboard shortcuts in the [documentation](./keyboard_shortcuts.md).
-
// Alt Programming buttons to match the WEBKNOSSOS shortcuts

### Annotating with Pens
diff --git a/docs/tooling.md b/docs/tooling.md
deleted file mode 100644
index d5b2a68526e..00000000000
--- a/docs/tooling.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Tooling
-We provide several free, open-source libraries and tools alongside WEBKNOSSOS to aid with data analysis.
-
-## WEBKNOSSOS Python Library
-- [webknossos-libs](https://github.com/scalableminds/webknossos-libs)
-- [Read The Docs](https://docs.webknossos.org/webknossos-py/index.html)
-- Our official Python library for working with WEBKNOSSOS datasets, skeleton and volume annotations and for downloading/uploading data from your WEBKNOSSOS instance through the REST API.
-- Read & write WEBKNOSSOS datasets and *.wkw files (raw image data and volume segmentations)
-- Read & write *.nml files (skeleton annotations)
-- Download, modify and upload datasets to WEBKNOSSOS
-
-## WEBKNOSSOS Cuber
-- [webknossos-libs/wkcuber](https://github.com/scalableminds/webknossos-libs/wkcuber)
-- [Read The Docs](https://docs.webknossos.org/wkcuber/index.html)
-- CLI tool for converting (volume) image data into [webKnossos-wrap datasets]() (*.wkw) and vice-versa
-Supports TIFF stacks, jpeg, dm3, Knossos Cubes, tiled images stacks (e.g.
Catmaid) and many more -- [Read more about the support data formats](./data_formats.md) - - -## webKnossos Wrap Data Format (wkw) -- [webknossos-wrap](https://github.com/scalableminds/webknossos-wrap) -- Library for low-level read and write operations to wkw datasets -- Use the [WEBKNOSSOS Python API](https://github.com/scalableminds/webknossos-libs) above for easy-to-use, high-level access to wkw datasets -- Available for Python, MATLAB, C/C++, and others - - -## MATLAB NML Functions -- [https://github.com/mhlabCodingTeam/SegEM/tree/master/auxiliaryMethods](https://github.com/mhlabCodingTeam/SegEM/tree/master/auxiliaryMethods) -- MATLAB utilities and functions for working with NML skeletons provided as part of the SegEM publication -- Developed by [Connectomics Department at Max-Planck-Institute for Brain Research](https://brain.mpg.de/helmstaedter) diff --git a/docs/tracing_ui.md b/docs/tracing_ui.md index 4e9986b355f..a76c0ee3899 100644 --- a/docs/tracing_ui.md +++ b/docs/tracing_ui.md @@ -20,15 +20,21 @@ The toolbar contains frequently used commands, such as saving and sharing, your The most common buttons are: -- `Settings`: Toggles the visibility of the left-hand side panel with all data and segmentation layers and their respective settings. - `Undo` / `Redo`: Undoes the last operation or redoes it if no new changes have been made in the meantime. Undo can only revert changes made in this session (since the moment the annotation view was opened). To revert to older versions use the "Restore Older Version" functionality described later in this list. - `Save`: Saves your annotation work. WEBKNOSSOS automatically saves every 30 seconds. -- `Archive`: Closes the annotation and archives it, removing it from a user's dashboard. Archived annotations can be found on a user's dashboard under "Annotations" and by clicking on "Show Archived Annotations". Use this to declutter your dashboard. (Not available for tasks) -- `Download`: Starts a download of the current annotation including any skeleton and volume data. Skeleton annotations are downloaded as [NML](./data_formats.md#nml) files. Volume annotation downloads contain the raw segmentation data as [WKW](./data_formats.md#wkw) files. -- `Share`: Create a shareable link to your dataset containing the current position, rotation, zoom level etc. Use this to collaboratively work with colleagues. Read more about this feature in the [Sharing guide](./sharing.md). -- `Duplicate`: Create a duplicate of this annotation. The duplicate will be created in your account, even if the original annotation belongs to somebody else. -- `Add Script`: Using the [WEBKNOSSOS frontend API](https://webknossos.org/assets/docs/frontend-api/index.html) users can script and automate WEBKNOSSOS interactions. Enter and execute your user scripts (Javascript) from here. Admins can curate a collection of frequently used scripts for your organization and make them available for quick selection to all users. -- `Restore Older Version`: Opens a window that shows all previous versions of an annotation. WEBKNOSSOS keeps a complete version history of all your changes to an annotation (separate for skeleton/volume). From this window, any older version can be selected, previewed, and restored. +- `Menu`: + - `Archive`: Closes the annotation and archives it, removing it from a user's dashboard. Archived annotations can be found on a user's dashboard under "Annotations" and by clicking on "Show Archived Annotations". Use this to declutter your dashboard. 
(Not available for tasks)
  - `Download`: Starts a download of the current annotation including any skeleton and volume data. Skeleton annotations are downloaded as [NML](./data_formats.md#nml) files. Volume annotation downloads contain the raw segmentation data as [WKW](./data_formats.md#wkw) files.
  - `Share`: Create a customizable, shareable link to your dataset containing the current position, rotation, zoom level, etc. with fine-grained access controls. Use this to collaboratively work with colleagues. Read more about [data sharing](./sharing.md).
  - `Duplicate`: Create a duplicate of this annotation. The duplicate will be created in your account, even if the original annotation belongs to somebody else.
  - `Screenshot`: Takes a screenshot of current datasets/annotation from each of the three viewports and downloads them as PNG files.
  - `Create Animation`: Creates an eye-catching animation of the dataset as a video clip. [Read more about animations](./animations.md).
  - `Merge Annotations`: Combines the skeletons and segments from one or more individual annotations into a new annotation.
  - `Add Script`: Using the [WEBKNOSSOS frontend API](https://webknossos.org/assets/docs/frontend-api/index.html) users can script and automate WEBKNOSSOS interactions. Enter and execute your user scripts (JavaScript) from here. Admins can curate a collection of frequently used scripts for your organization and make them available for quick selection to all users.
  - `Restore Older Version`: Opens a window that shows all previous versions of an annotation. WEBKNOSSOS keeps a complete version history of all your changes to an annotation (separate for skeleton/volume). From this window, any older version can be selected, previewed, and restored.
  - `Layout`: The WK annotation user interface can be resized, reordered, and customized to suit your workflows. Use the mouse to drag, move and resize any viewport. You can save these layout arrangements or restore the default viewport state.
+- `Quick Share`: Create a shareable link to your dataset containing the current position, rotation, zoom level etc. Use this to collaboratively work with colleagues. Read more about [data sharing](./sharing.md).
+- `AI Analysis`: Starts an AI segmentation of the dataset. Choose between several automated analysis workflows. Read more about [AI analysis](./automated_analysis.md).

A user can directly jump to any position within their datasets by entering them in the position input field. The same is true for the camera rotation in flight/oblique modes.

- `Fill Tool`: Flood-fills the clicked region with a volume annotation until it hits the next segment boundary (or the outer edge of your viewport). Used to fill holes in a volume annotation or to relabel a segment with a different id.
- `Segment Picker`: Select the volume annotation ID of a segment to make it the active cell id to continue labeling with that ID/color.
- `Bounding Box`: Creates and resizes any bounding box. See also the [Bounding Box (BB) panel](./tracing_ui.md#right-hand-side-panel) below.
+- `Measurement Tool`: Measure distances between structures or the surface areas of segments by placing waypoints with the mouse.

Please see the detailed documentation on [skeleton](./skeleton_annotation.md#tools) and [volume annotation](./volume_annotation.md#tools) tools for an explanation of all context-sensitive modifiers that are available to some tools.
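As a side note for scripted workflows: the `Download` menu entry above also has a programmatic counterpart in the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/index.html). The following is only a minimal sketch — the annotation URL and API token are placeholders, and the exact call signatures may differ between library versions:

```python
# Sketch: fetching an annotation (NML skeleton + volume data) with the
# webknossos Python library, as an alternative to the Download menu entry.
# "YOUR_API_TOKEN" and the annotation URL are placeholders.
import webknossos as wk

with wk.webknossos_context(token="YOUR_API_TOKEN"):
    annotation = wk.Annotation.download(
        "https://webknossos.org/annotations/<annotation-id>"  # placeholder URL
    )
    annotation.save("my_annotation.zip")  # .zip bundles the NML plus volume layers
```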
@@ -63,12 +70,12 @@ Each dataset consists of one or more data and annotation layers. A dataset typic
#### Histogram & General Layer Properties
- `Histogram`: The Histogram displays sampled color values of the dataset on a logarithmic scale. The slider below the Histogram can be used to adjust the dynamic range of the displayed values. In order to increase the contrast of data, reduce the dynamic range. To decrease the contrast, widen the range. In order to increase the brightness, move the range to the left. To decrease the brightness, move the range to the right.

-  Above the the histogram, there are icon buttons to further adjust the histogram or otherwise interact with the layer:
+  Above the histogram, there is a three-dots context menu with more options to further adjust the histogram or otherwise interact with the layer:

-  - `pencil`: Manipulate the min/max value of the histogram range. Clips values above/below these limits.
-  - `vertical line`: Automatically adjust the histogram for best contrast and brightness. Contrast estimation is based on the data currently available in your viewport. This is especially useful for light microscopy datasets saved as `float` values.
-  - `circle arrow`: Reload the data from server. Useful if the raw data has been changed on disk and you want to refresh your current session.
-  - `scanner`: Navigates the WEBKNOSSOS camera to a position within the dataset where there is data available for the respective layer. This is especially useful for working with smaller layers - likely segmentations - that might not cover the whole dataset and are hard to find manually.
+  - `Edit histogram range`: Manipulate the min/max value of the histogram range. Clips values above/below these limits.
+  - `Clip histogram`: Automatically adjust the histogram for best contrast and brightness. Contrast estimation is based on the data currently available in your viewport. This is especially useful for light microscopy datasets saved as `float` values.
+  - `Reload from server`: Reload the layer data from server. Useful if the raw data has been changed on disk and you want to refresh your current session.
+  - `Jump to data`: Navigates the WEBKNOSSOS camera to the center position within the dataset where there is data available for the respective layer. This is especially useful for working with smaller layers - likely segmentations - that might not cover the whole dataset and are hard to find manually.

- `Opacity`: Increase / Decrease the opacity of a layer. 0% opacity makes a layer invisible. 100% opacity makes it totally opaque. Useful for overlaying several layers above one another.
- `Gamma Correction`: Increase / Decrease the luminance, brightness and contrast of a layer through a non-linear gamma correction. Low values darken the image, high values increase the perceived brightness. (Color layers only.)
@@ -121,6 +128,7 @@ Note, not all control/viewport settings are available in every annotation mode.
- `Keyboard Rotation`: Increases / Decreases the movement speed when using the arrow keys on the keyboard to rotate within the datasets. A low value rotates the camera slower for more precise movements. A high value rotates the camera quicker for greater agility.
- `Crosshair Size`: Controls the size of the crosshair in flight mode.
- `Sphere Radius`: In flight mode, the data is projected on the inside of a sphere with the camera located at the center of the sphere. This option influences the radius of said sphere flattening / rounding the projected viewport.
A high value will cause less curvature, showing the data with more detail and less distortion. A low value will show more data along the edges of the viewport.
+- `Logo in Screenshots`: Enable or disable the WEBKNOSSOS watermark when [taking screenshots](./tracing_ui.md#the-toolbar).

#### Data Rendering
@@ -140,11 +148,11 @@
## Right-Hand Side Panel
The right-hand side panel includes a number of tabs with specific information, additional interactions, listings about your current skeleton and/or volume annotation. When working with any of the WEBKNOSSOS annotation tools (see above), any interactions with the dataset will lead to entries in the listing provided here.

-- `Info`: Contains mostly metainformation about the dataset and annotation. Can be used to name an annotation and provide an additional description, e.g., when sharing with collaborators. Includes a button to start [automated analysis](./automated_analysis.md) on your dataset (beta feature).
+- `Info`: Contains mostly metainformation about the dataset and annotation. Can be used to name an annotation and provide an additional description, e.g., when sharing with collaborators.
- `Skeleton`: Lists all available skeleton annotations and offers further interactions with them. [Read more about skeleton annotations.](./skeleton_annotation.md)
- `Comments`: Lists all comments assigned to individual nodes of a skeleton. [Read more about comments and skeleton annotations.](./skeleton_annotation.md#nodes_and_trees)
- `Segments`: List all segments created during a volume annotation. It also provides access to mesh generation for individual segments or the whole dataset, mesh visualization, mesh downloads, and more. [Read more about 3D meshes.](./mesh_visualization.md)
-- `BBoxes`: List all bounding boxes present in the dataset. Create new bounding boxes or adjust existing ones. This provides an alternativ interface for the `Bounding Box` tool.
+- `BBoxes`: List all bounding boxes present in the dataset. Create new bounding boxes or adjust existing ones. This provides an alternative interface for the `Bounding Box` tool.
- `AbsTree`: Renders an abstract 2D tree representation of a skeleton annotation when enabled. Might be quite resource-intensive when working with large skeletons.

## Status Bar
diff --git a/docs/volume_annotation.md b/docs/volume_annotation.md
index ff91f5a6221..317e87c3e80 100644
--- a/docs/volume_annotation.md
+++ b/docs/volume_annotation.md
@@ -16,7 +16,7 @@ Select one of the drawing tools from the toolbar or toggle through with the keyb
- `Fill Tool`: Flood-fills the clicked region with a volume annotation until it hits the next segment boundary (or the outer edge of your viewport). All adjacent voxels with the same voxel id as the clicked voxel will be changed to the active segment ID. Useful to either fill a hole in a segment or to relabel a segment with a different ID/color.
- `Segment Picker`: Click on any segment to select its label ID as the active segment ID and continue any volume annotation operation with that ID.
- `Quick Select`: Draw a rectangle over a segment to annotate it automatically. The tool can operate in two different modes. If the "AI" button in the toolbar is activated, a machine-learning model is used to infer the selection. If the AI button is disabled, the tool operates on the intensity data of the visible color layer and automatically fills out the segment starting from the center of the rectangle.
Next to the tool, there is a settings button which allows you to enable a preview mode and to tweak some other parameters. If the preview is enabled, the parameters can be fine-tuned while the preview updates instantly.
-- `Proof Reading`: Fix merge and split errors in automated segmentation. See [page on proofreading](./proof_reading.md#proofreading-tool) for more.
+- `Proof Reading`: Fix merge and split errors in automated segmentation. Read more about [proofreading](./proof_reading.md#proofreading-tool).

When using the trace or brush tool, a label can be added with _Left Mouse Drag_. Erasing is possible with the dedicated erase tools or with _CTRL + Shift + Left Mouse Drag_.
@@ -92,7 +92,7 @@ Note that it is recommended to proofread the interpolated slices afterward since
### Volume Extrusion

Similar to the above interpolation feature, you can also extrude the currently active segment.
-This means, that you can label a segment on one slice (e.g., z=10), move a few slices forward (e.g., z=12) and copy the segment to the relevant slices (e.g., z=11, z=12).
+This means that you can label a segment on one slice (e.g., z=10), move a few slices forward (e.g., z=12) and copy the segment to the relevant slices (e.g., z=11, z=12). In contrast to interpolation mode, WEBKNOSSOS will not adapt the shape/boundary of the extruded segments to fit between the source and target segment. Instead, the extruded volume will retain the shape of the source segment and extend that along the z-axis.

The extrusion can be triggered by using the extrude button in the toolbar (also available as a dropdown next to the interpolation/extrusion button).

### Volume Flood Fills
@@ -105,6 +105,17 @@ WEBKNOSSOS supports volumetric flood fills (3D) to relabel a segment with a new
Note that due to performance reasons, 3D flood-fills only work in a small, local bounding box. For larger areas we recommend working with the [proofreading tool](./proof_reading.md) instead.

+### Segment Statistics
+WEBKNOSSOS provides handy statistics about your labelled segments, such as the volume and bounding box of a segment.
+
+There are several ways to access this information:
+1. Right-click any segment to bring up the context menu. The segment statistics are listed at the end of the context menu.
+2. In the `Segments` tab in the right-hand panel, right-click on any group of segments (or the "Root" group) to bring up a context menu. Select `Show Segment Statistics` to access a summary table with statistics for a whole group of labelled segments. These can be exported as CSV files for further analysis outside of WEBKNOSSOS.
+
+In cases where you only wish to measure a simple distance or surface area, use the [`Measurement Tool`](./tracing_ui.md#the-toolbar) instead.
+
+// TODO image
+
### Mappings / On-Demand Agglomeration

With WEBKNOSSOS it is possible to apply a precomputed agglomeration file to re-map/combine over-segmented volume annotations on-demand. Instead of having to materialize one or more agglomeration results as separate segmentation layers, ID mappings allow researchers to apply and compare different agglomeration strategies of their data for experimentation.
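A note on the CSV export mentioned in the segment statistics section above: once downloaded, the table is straightforward to post-process. A minimal sketch with pandas follows — the column names used here are assumptions and should be checked against the actual export:

```python
# Sketch: inspecting an exported segment-statistics CSV outside of WEBKNOSSOS.
# The column names below are assumptions -- print df.columns for the real ones.
import pandas as pd

df = pd.read_csv("segment_statistics.csv")
print(df.columns.tolist())  # verify the actual column names first
largest = df.sort_values("volume", ascending=False).head(10)  # assumed column
print(largest)
```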
From 89b738135c237e9d684ced937f03e09a01871db0 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Wed, 15 Nov 2023 14:42:33 +0100
Subject: [PATCH 03/22] applied some gpt4 feedback

---
 docs/dashboard.md | 14 +++++------
 docs/getting_started.md | 31 +++++++++++++------------
 docs/index.md | 14 +++++------
 docs/skeleton_annotation.md | 46 ++++++++++++++++++-------------------
 docs/volume_annotation.md | 17 +++++++-------
 5 files changed, 62 insertions(+), 60 deletions(-)

diff --git a/docs/dashboard.md b/docs/dashboard.md
index e61f73c36af..1a5d101814e 100644
--- a/docs/dashboard.md
+++ b/docs/dashboard.md
@@ -1,18 +1,18 @@
# Dashboard

-The Dashboard is your entry point to WEBKNOSSOS.
-You can manage your datasets, create annotations, resume existing annotations and retrieve tasks distributed to you.
+Welcome to WEBKNOSSOS!
+The Dashboard lets you manage your datasets, create new annotations, continue working on existing annotations, and get tasks assigned to you.

## Datasets

-This screen shows all the available and accessible datasets for a user.
-You can _view_ a dataset (read-only) or start new annotations from this screen.
+On this screen, you can see all the datasets that you can access.
+You can either _view_ a dataset without making any changes, or start a new annotation on it.

Search for your dataset by using the search bar or sorting any of the table columns.
Learn more about managing datasets in the [Datasets guide](./datasets.md).

-The presentation differs depending on your user role.
-Regular users can only start or continue annotations and work on tasks.
-[Admins and Team Managers](./users.md#access-rights-roles) also have access to additional administration actions, access-rights management, and advanced dataset properties for each dataset.
+What you can do on this screen depends on your user role.
+If you are a regular user, you can only create or resume annotations and work on tasks.
+If you are an [Admin or a Team Manager](./users.md#access-rights-roles), you can also perform administrative actions, manage access rights, and change dataset settings.

Read more about the organization of datasets [here](./datasets.md#dataset-organization).
diff --git a/docs/getting_started.md b/docs/getting_started.md
index f4b71756688..7fd3648989f 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -2,21 +2,21 @@
Welcome to the WEBKNOSSOS documentation. WEBKNOSSOS is a platform for [exploring large-scale 3D image datasets](./tracing_ui.md), [creating skeleton annotations](./skeleton_annotation.md) and [3D volume segmentations](./volume_annotation.md).
-Since it is a web-based tool, [collaboration](./sharing.md), [crowdsourcing](./tasks.md) and [publication](https://webknossos.org) is very easy.
+Since it is a web app, you can easily [collaborate](./sharing.md), [crowdsource](./tasks.md) and [publish](https://webknossos.org) your work.

-Feel free to [contact us](mailto:hello@webknossos.org) or [create a Pull Request](https://github.com/scalableminds/webknossos/pulls) if you have any suggestions for improving the documentation.
+[Contact us](mailto:hello@webknossos.org) or create a [pull request](https://github.com/scalableminds/webknossos/pulls) to suggest improvements to the documentation.

![youtube-video](https://www.youtube.com/embed/iw2C7XB6wP4)

## Create a webknossos.org Account

-Signing up for a free account on [webknossos.org](https://webknossos.org) is the easiest and fastest way to get started with WEBKNOSSOS.
-Either upload one of your own datasets and explore one of the many community datasets.
+To get started with WEBKNOSSOS, sign up for a free account on [webknossos.org](https://webknossos.org).
+Upload your own datasets or explore one of the many community datasets.

-The free tier comes with 10GB of storage for private datasets.
+You get 10GB of storage for private datasets with the free tier.
For more data storage, check out the [pricing page for paid plans](https://webknossos.org/pricing) that covers storage costs and provides support services such as data format conversions.

-If you are looking for on-premise hosting at your institute or custom solutions, [please reach out to us](mailto:hello@webknossos.org).
+Please [reach out to us](mailto:sales@webknossos.org) for local, on-premise hosting at your institute or custom solutions.

## Explore Published Datasets

@@ -27,8 +27,9 @@ Click on the dataset name to open the dataset.

![The list of available datasets](./images/getting_started-datasets.jpeg)

-Any WEBKNOSSOS dataset can be opened for read-only viewing ("View" mode) or in editor-mode to create a new skeleton and/or volume annotation.
-The main WEBKNOSSOS user interface consists of three orthogonal viewports slicing the data along the major axis and a 3D viewport. Read more about the UI in the section [about the UI](./tracing_ui.md).
+You can open any WEBKNOSSOS dataset for read-only viewing (“View” mode) or in editor-mode to create a new skeleton and/or volume annotation.
+Three orthogonal viewports slicing the data along the major axis and a 3D viewport make up the main WEBKNOSSOS user interface.
+Read more [about the user interface](./tracing_ui.md).

![The WEBKNOSSOS user interface consisting of three orthogonal viewports slicing the data along the major axis and a 3D viewport.](./images/main_ui.jpeg)

@@ -52,10 +53,10 @@ You can also change the size of the viewports to see more details in your data a

## Your First Annotation

-Click the `Create Annotation` button while in "View" mode to create your first annotation.
+To create your first annotation, click the `Create Annotation` button while in “View” mode.

WEBKNOSSOS will launch the main annotation screen allowing you to navigate your dataset, place markers to reconstruct skeletons, or annotate segments as volume annotations.

-Depending on the current tool - selectable in the top bar - various actions can be performed.
+You can perform various actions depending on the current tool - selectable in the toolbar at the top of the screen.
Note that the most important controls are always shown in the status bar at the bottom of your screen.
The first tool is the _Move_ tool which allows navigating the dataset by moving the mouse while holding the left mouse button.
With the _Skeleton_ tool, a left mouse click can be used to place markers in the data, called nodes.
Additionally, the left mouse button can also be used to navigate around and select nodes.
The _Brush_ and _Trace_ tools allow you to "paint" voxels to create volumetric annotations.
For a full rundown on the other annotation tools, such as _Eraser_, _Segment Picker_, _Fill_, please refer to documentation on [skeleton](./skeleton_annotation.md) and [volume](./volume_annotation.md) annotations.
-A right mouse click can be used to open a context-sensitive menu with various actions, such as merging two trees or flood-filling a segment. -Basic movement along the 3rd axis is done with the mouse wheel or by pressing the spacebar keyboard shortcut. +To open a context-sensitive menu with various actions, such as merging two trees or flood-filling a segment, use a right mouse click. +Use the mouse wheel or press the spacebar keyboard shortcut to move along the 3rd axis. -Learn more about the skeleton, volume, and hybrid annotations as well as the interface in the [Annotation UI guide](./tracing_ui.md). +Read the guides about the [annotation UI](./tracing_ui.md), [skeleton annotation](./skeleton_annotation.md), or [volume annotation](./volume_annotation.md) for more details. ![Editing skeleton and volume annotations in the Annotation UI](./images/tracing_ui.jpeg) @@ -94,7 +95,7 @@ Feel free to explore more features of WEBKNOSSOS in this documentation. - [Task and Project Management](./tasks.md) - [FAQ](./faq.md) -If you need help with WEBKNOSSOS, feel free to contact us at [hello@webknossos.org](mailto:hello@webknossos.org) or [write a post in the forum](https://forum.image.sc/tag/webknossos). +Please contact us at [hello@webknossos.org](mailto:hello@webknossos.org) or[write a post in the WEBKNOSSOS support forum](https://forum.image.sc/tag/webknossos) if you need help with WEBKNOSSOS. scalable minds also offers [commercial support, managed hosting, and feature development services](https://webknossos.org/pricing). [Read the installation tutorial](./installation.md) if you wish to install WEBKNOSSOS on your server. diff --git a/docs/index.md b/docs/index.md index bf572966f6d..2e2a4417982 100644 --- a/docs/index.md +++ b/docs/index.md @@ -4,14 +4,14 @@ WEBKNOSSOS is an [open-source tool](https://github.com/scalableminds/webknossos) The web-based tool is powered by a specialized data-delivery backend that stores [large datasets](./datasets.md) efficiently on disk and serves many concurrent users. WEBKNOSSOS has a GPU-accelerated viewer that includes tools for creating and sharing skeleton and volume annotations. Powerful [user](./users.md) and [task](./tasks.md) management features automatically distribute tasks to human annotators. -There are a lot of productivity improvements to make the human part as efficient as possible. +There are many features to enhance the productivity and efficiency of human annotators. WEBKNOSSOS is also a platform for [showcasing datasets](https://webknossos.org) alongside a paper publication. ![youtube-video](https://www.youtube.com/embed/36t4Rwx7Shg) ## Getting Started -Sign up for a free account on [https://webknossos.org/](https://webknossos.org/) and either upload one of your own datasets, or work with a large selection of published community datasets. +Create a free account on [https://webknossos.org/](https://webknossos.org/) and upload your own datasets or explore the published datasets from the community. ## Features @@ -21,7 +21,7 @@ Sign up for a free account on [https://webknossos.org/](https://webknossos.org/) - Creation/editing of [skeleton (line-segments)](./skeleton_annotation.md) and [3D volumetric annotations](./volume_annotation.md) - [Innovative flight mode for fast skeleton annotation](https://www.nature.com/articles/nmeth.4331) - User and task management for high-throughput collaboration in the lab or crowdsourcing -- [Easy Sharing](./sharing.md). 
Every dataset and annotation can be securely shared as a web link with others
+- [Easy Sharing](./sharing.md). Share your datasets and annotations securely with others using web links
- [Fine-grained access permission](./users.md) and user roles for secure collaboration
- [AI Quick Select tool](./volume_annotation.md#ai-quick-select) to speed up segmentation
- [Standalone datastore component](https://github.com/scalableminds/webknossos/tree/master/webknossos-datastore) for flexible deployments
- [Supported dataset formats: Zarr, WKW (Optimized), KNOSSOS cubes, Neuroglancer Precomputed, N5, and image stacks](./data_formats.md) (some formats will be converted on upload)
- [Supported image formats](./data_formats.md): Grayscale, Segmentation Maps, RGB, Multi-Channel
- Supports Time Series datasets
- [3D Mesh Visualization](./mesh_visualization.md)
- [Integrated Synapse and Connectome Viewer](./connectome_viewer.md)
- [Python library with documentation for API access and integration in custom analysis workflows](https://docs.webknossos.org/webknossos-py/index.html)
- [Frontend API for user scripting](https://webknossos.org/assets/docs/frontend-api/index.html), REST API for backend access
- Developed as an open-source project with [automated testing](https://circleci.com/gh/scalableminds/webknossos)
- [Deployable with Docker for production and development](https://hub.docker.com/r/scalableminds/webknossos/)

## Screenshots

diff --git a/docs/skeleton_annotation.md b/docs/skeleton_annotation.md
index d6bab135439..a8eec06b268 100644
--- a/docs/skeleton_annotation.md
+++ b/docs/skeleton_annotation.md
@@ -1,14 +1,14 @@
## Skeleton Annotations

Skeleton annotations are typically used to reconstruct structures that span across multiple data slices as graphs of connected nodes.
For example, you can analyze nerve cells by placing nodes along their path/circuitry through the dataset (see image below).

Commonly, skeleton annotations contain reconstructions of one or more structures, sometimes with thousands of nodes.
Each connected group of nodes forms a tree, i.e., an undirected graph.

WEBKNOSSOS skeleton annotations can be downloaded, modified, and imported using a human-readable XML-based file format called [NML](./data_formats.md#nml).

This article shows you how to view, edit, or create skeleton annotations in WEBKNOSSOS.
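Because NML is plain XML, skeletons can also be created and edited programmatically. Here is a minimal sketch using the WEBKNOSSOS Python library — the dataset name, voxel size, and node positions are placeholders, and keyword names may differ slightly between library versions:

```python
# Sketch: building a tiny two-node skeleton and saving it as an NML file
# with the webknossos Python library. All names and coordinates below are
# placeholders for illustration only.
import webknossos as wk

skeleton = wk.Skeleton(
    voxel_size=(11.24, 11.24, 25.0),  # placeholder nanometer scale
    dataset_name="my_dataset",        # placeholder
)
tree = skeleton.add_tree("dendrite_0")
node_a = tree.add_node(position=(570, 404, 502))
node_b = tree.add_node(position=(590, 398, 506))
tree.add_edge(node_a, node_b)
skeleton.save("dendrite_0.nml")
```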
+This article shows you how to view, edit, or create skeleton annotations in WEBKNOSSOS.
 
 ![An example of a complex WEBKNOSSOS skeleton annotation](images/tracing_ui_skeletontracing.jpeg)
 
@@ -21,11 +21,11 @@ WEBKNOSSOS supports several modes for displaying your dataset & interacting with
 
 #### Orthogonal Mode
 
-Orthogonal mode displays a dataset with the camera oriented orthogonally to each of the three main axis x, y, z.
+Orthogonal mode shows the dataset from three orthogonal views along the x, y, and z axes.
 Additionally, a fourth viewport shows the data and skeleton from a 3D perspective.
-All camera movements happen along the respective main axis.
-This view is especially useful for viewing your data in the highest possible quality alongside its main imaging axis, typically XY.
-Every single slice of the raw data can be viewed.
+You can move the camera along any of the main axes.
+This view lets you see your data in the highest quality along its main imaging axis, usually XY.
+You can view your dataset slice by slice.
 
 Most skeleton annotation operations and keyboard shortcuts are tailored for the Orthogonal Mode.
 
@@ -33,19 +33,19 @@ Most skeleton annotation operations and keyboard shortcuts are tailored for the
 
 #### Oblique Mode
 
-Oblique mode presents an arbitrarily-resliced view through the data.
-In contrast to the Orthogonal mode, any arbitrary slice through the dataset at any rotational angle of the camera is possible.
+Oblique mode lets you slice the data at any angle.
+Unlike Orthogonal mode, you can rotate the camera and slice the data in any direction.
 
 ![Viewport in Oblique Mode showing an arbitrarily-resliced view through the data.](./images/tracing_ui_obliquemode.jpeg)
 
 #### Flight Mode
 
-Flight mode also allows a resliced view through the data.
-In contrast to Oblique mode, the data is projected on the inside of a sphere with the camera located at the center of the sphere.
+Flight mode gives you another way to slice the data.
+Unlike Oblique mode, Flight mode projects the data on a sphere around the camera.
 
 ![Annotate processes, e.g. neurites, efficiently in Flight mode](./images/tracing_ui_flightmode.jpeg)
 
-Spherical projection is especially useful when rotating the camera, as pixels close to the center of the screen move in a predictable manner.
+Spherical projection makes it easier to rotate the camera, because the pixels near the center of the screen move in a predictable manner.
 Interactions and movements in Flight mode feel similar to First-Person-View (FPV) games.
 
 ![Spherical projection of the Flight mode](./images/tracing_ui_flightmode_schema.jpeg)
 
 ![Changing the radius of the spherical projection](./images/tracing_ui_flightmode_radius.gif)
 
-Flight mode is best used for annotating structures very quickly.
-Trained tracers can follow "tube"-like structures, e.g. dendrites/axons in a neuron, as though they are "flying" through them.
-Nodes are placed automatically along the flight path, creating skeletons very efficiently.
+You can annotate structures faster in Flight mode.
+Seasoned annotators can follow tube-like structures, such as dendrites or axons, as if they are flying through them, much like in a racing game or flight simulator.
+Flight mode places nodes along your path automatically, which creates skeletons more efficiently. 
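+
+Skeletons can also be created and edited outside of the browser. The snippet below is a minimal sketch using the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py); the dataset name, voxel size, tree name, and node positions are illustrative placeholders rather than values from this guide:
+
+```python
+import webknossos as wk
+
+# Build a tiny two-node skeleton and write it out as NML.
+# Dataset name, voxel size, and coordinates are assumptions for the example.
+skeleton = wk.Skeleton(dataset_name="my_dataset", voxel_size=(11.24, 11.24, 28))
+tree = skeleton.add_tree("dendrite_01")
+node_a = tree.add_node(position=(512, 512, 120))
+node_b = tree.add_node(position=(530, 515, 121))
+tree.add_edge(node_a, node_b)
+skeleton.save("dendrite_01.nml")  # the NML file can be uploaded back into WEBKNOSSOS
+```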
### Tools
 
-The WEBKNOSSOS toolbar at the top of the screen contains several tools designed to work with skeletons:
+You can use these tools in the WEBKNOSSOS toolbar to work with skeletons:
 
 - `Move`: Navigate around the dataset.
 - `Skeleton`: Create skeleton annotations and place nodes with a left mouse click. Read more below.
 
 When the `Skeleton` tool is active, the following modifiers become available:
 
 - `Create new Tree`: Creates a new tree.
-- `Toggle single node tree mode`: Modifies the behavior of the skeleton annotation tool to create a new tree at each click instead of adding nodes to the active tree. Useful for marking single position objects/seeds, e.g., for marking nuclei or vesicles. Also called "Soma-clicking mode".
-- `Toggle merger mode`: Modifies the behavior of the skeleton annotation tool to launch the `Merger Mode`. In merger mode skeletons, can be used to "collect" and merge volume segments from an over-segmentation. [Read more about `Merger Mode`](./volume_annotation.md#proof_reading_and_merging_segments).
+- `Toggle single node tree mode`: This modifier makes the skeleton annotation tool create a new tree for each node instead of adding nodes to the current tree. You can use this mode to mark single objects or seeds, such as nuclei. This is also known as "Soma-clicking mode".
+- `Toggle merger mode`: This modifier activates the `Merger Mode` for the skeleton annotation tool. In merger mode, you can use skeletons to "collect" and merge volume segments from an over-segmentation. [Read more about `Merger Mode`](./volume_annotation.md#proof_reading_and_merging_segments).
 
 ![Skeleton Tool modifiers](./images/skeleton_tool_modifiers.jpeg)
 
 ### Nodes and Trees
 
-Skeleton annotations consist of connected nodes forming a graph.
-Nodes are connected through edges and are organized in trees.
+A skeleton annotation is a graph of connected nodes.
+Edges connect the nodes and form trees.
 
-Nodes can be placed by left-clicking in orthogonal mode (the skeleton tool should be selected) or automatically when moving in flight or oblique mode.
+You can place nodes by left-clicking in Orthogonal Mode (with the Skeleton tool selected) or automatically while moving in Flight or Oblique Mode.
 
 All (global) operations are executed on the currently active node, e.g., adding a comment or node deletion.
 The active node is always highlighted with a circle around it.
 Most keyboard shortcuts take the active node into context.
diff --git a/docs/volume_annotation.md b/docs/volume_annotation.md
index 317e87c3e80..be4b294bbec 100644
--- a/docs/volume_annotation.md
+++ b/docs/volume_annotation.md
@@ -1,21 +1,22 @@
 ## Volume Annotations & Proof-Reading
 
 In addition to [skeleton annotations](./skeleton_annotation.md), WEBKNOSSOS also supports volume/segmentation annotations.
-With this type of annotation, you can label groups of voxels with efficient drawing tools.
+This annotation type lets you label voxel groups using efficient drawing tools.
 
 ![youtube-video](https://www.youtube.com/embed/iw2C7XB6wP4?start=120)
 
 ### Tools
 
-Select one of the drawing tools from the toolbar or toggle through with the keyboard shortcut _W_.
+Choose a drawing tool from the toolbar or press _W_ to switch between them.
 
 - `Move`: Navigate around the dataset.
-- `Trace`: Draw outlines around the voxel you would like to label.
-- `Brush`: Draw over the voxels you would like to label. Adjust the brush size with _SHIFT + Mousewheel_. 
-- `Erase (Trace/Brush)`: Draw over the voxels you would like to erase. Adjust the brush size with _SHIFT + Mousewheel_.
-- `Fill Tool`: Flood-fills the clicked region with a volume annotation until it hits the next segment boundary (or the outer edge of your viewport). All adjacent voxels with the same voxel id as the clicked voxel will be changed to the active segment ID. Useful to either fill a hole in a segment or to relabel a segment with a different ID/color.
-- `Segment Picker`: Click on any segment to select its label ID as the active segment ID and continue any volume annotation operation with that ID.
-- `Quick Select`: Draw a rectangle over a segment to annotate it automatically. The tool can operate in two different modes. If the "AI" button in the toolbar is activated, a machine-learning model is used to infer the selection. If the AI button is disabled, the tool operates on the intensity data of the visible color layer and automatically fills out the segment starting from the center of the rectangle. Next to the tool, there is a settings button which allows to enable a preview mode and to tweak some other parameters. If the preview is enabled, the parameters can be fine-tuned while the preview updates instantly.
+- `Trace`: Draw an outline around the voxel you want to label.
+- `Brush`: Paint over the voxels you would like to label. Use _SHIFT + Mousewheel_ to change the brush size.
+- `Erase (Trace/Brush)`: Erase voxels by drawing over them. Use _SHIFT + Mousewheel_ to change the brush size.
+- `Fill Tool`: Fill the clicked region with a volume annotation up to the next segment boundary (or the edge of your viewport). All neighboring voxels with the same voxel id as the clicked voxel will be labelled with the active segment ID. This is useful for filling a hole in a segment or relabeling a segment with a different ID/color.
+- `Segment Picker`: Click a segment to use its label ID as the active segment ID and keep annotating with that ID.
+- `Quick Select`: Annotate a segment automatically by drawing a rectangular selection over it. The tool operates in two different modes.
+When the "AI" button in the toolbar is activated, a machine-learning model is used to infer the selection. When the AI button is disabled, the tool operates on the intensity data of the visible color layer and automatically fills out the segment starting from the center of the rectangle. Next to the tool, there is a settings button which allows you to enable a preview mode and to tweak some other parameters. When the preview is enabled, you can fine-tune the parameters and see the preview update instantly.
- `Proof Reading`: Fix merge and split errors in automated segmentation. Read more about [proofreading](./proof_reading.md#proofreading-tool).
 
When using the trace or brush tool, a label can be added with _Left Mouse Drag_. 
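+
+Volume annotations can also be retrieved for offline processing. As a rough sketch with the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) (the annotation URL is a placeholder you would replace with one of your own):
+
+```python
+import webknossos as wk
+
+# Download an annotation from webknossos.org and store it locally as a ZIP file
+# (the ZIP bundles the volume data together with an NML metadata file).
+annotation = wk.Annotation.download("https://webknossos.org/annotations/xyz123")  # placeholder URL
+annotation.save("my_volume_annotation.zip")
+```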
From da2bed91fe10c6c02a3b1817f1aca86ec2b555e8 Mon Sep 17 00:00:00 2001 From: Tom Herold Date: Wed, 15 Nov 2023 18:19:51 +0100 Subject: [PATCH 04/22] moar docs --- docs/animations.md | 2 +- docs/data_formats.md | 128 +------------------------------ docs/datasets.md | 28 +++---- docs/image_stacks.md | 108 ++++++++++++++++++++++++++ docs/n5.md | 17 ++++ docs/neuroglancer_precomputed.md | 14 ++++ docs/pen_tablets.md | 2 + docs/today_i_learned.md | 14 ++++ docs/tutorial_automation.md | 2 +- docs/wkw.md | 68 ++++++++++++++++ docs/zarr.md | 80 +++++++++++++++++++ 11 files changed, 321 insertions(+), 142 deletions(-) create mode 100644 docs/image_stacks.md create mode 100644 docs/n5.md create mode 100644 docs/neuroglancer_precomputed.md create mode 100644 docs/today_i_learned.md create mode 100644 docs/wkw.md create mode 100644 docs/zarr.md diff --git a/docs/animations.md b/docs/animations.md index 2d460277a00..542ceb29897 100644 --- a/docs/animations.md +++ b/docs/animations.md @@ -1,6 +1,6 @@ # Animations -A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye catching animation of your datasets as a video clip. You can use these short movies as part of a presentation, website, for social media or to promote a publication. +A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye-catching animation of your datasets as a video clip. You can use these short movies as part of a presentation, website, for social media or to promote a publication. // animation video diff --git a/docs/data_formats.md b/docs/data_formats.md index 013da8d4dec..cdfbf93585a 100644 --- a/docs/data_formats.md +++ b/docs/data_formats.md @@ -32,48 +32,8 @@ In particular, the following file formats are supported: Instead, they can be directly streamed from an HTTP server or the cloud. See the page on [datasets](./datasets.md) for uploading and configuring these formats. -#### Single-Layer Image File Sequence -When uploading multiple image files, these files are sorted numerically, and each one is interpreted as one section within a 3D dataset. -Alternatively, the same files can also be uploaded bundled in a single folder (or zip archive). -As an example, the following file structure would create a dataset with one layer which has a z-depth of 3: -``` -dataset_name/ -├── image_1.tif -├── image_2.tif -├── image_3.tif -└── ... -``` - -#### Multi-Layer Image File Sequence -The image file sequences explained above can be composed to build multiple [layers](#Layers). -For example, the following file structure (note the additional hierarchy level) would create a dataset with two layers (named `color` and `segmentation`): - -``` -dataset_name/ -├── color -│ ├── image_1.tif -│ ├── image_2.tif -│ └── ... -├── segmentation -│ └── ... -``` - -#### Single-file images -The following file formats can be dragged individually into WEBKNOSSOS to convert them to a 3D dataset: - -- tif -- czi -- nifti -- raw -- dm3 -- dm4 -- png - -#### KNOSSOS file hierarchy -Datasets saved as KNOSSOS cubes can also be converted on [WEBKNOSSOS](https://webknossos.org). -Please ensure that you import the correct folder (so that all layers of the dataset are contained). ## Concepts @@ -132,38 +92,9 @@ To bring the above concepts together, WEBKNOSSOS uses [webknossos-wrap (WKW)](ht For sparse skeleton-like structures, WEBKNOSSOS uses [NML](#NML). 
### WKW Datasets -[webknossos-wrap (WKW)](https://github.com/scalableminds/webknossos-wrap) is a format optimized for large datasets of 3D voxel imagery and supports compression, efficient cutouts, multi-channel, and several base datatypes. -It works well for large datasets and is built with modern file systems in mind. -Compared to KNOSSOS datasets, it is more efficient because it orders the data within the container for optimal read performance (Morton order). -WKW is versatile in the image formats it can hold: Grayscale, Multi-Channel, Segmentation, RGB, as well as a range of data types (e.g., `uint8`, `uint16`, `float32`). -Additionally, WKW supports compression for disk space efficiency. -Each layer of a WKW dataset may contain one of the following: -* Grayscale data (8 Bit, 16 Bit, Float), also referred to as `color` data -* RGB data (24 Bit) -* Segmentation data (8 Bit, 16 Bit, 32 Bit) - -#### WKW Folder Structure -A WKW dataset is represented with the following file system structure: - -``` -great_dataset # One folder per dataset -├─ color # Dataset layer (e.g., color, segmentation) -│  ├─ 1 # Magnification step (1, 2, 4, 8, 16 etc.) -│  │  ├─ header.wkw # Header wkw file -│  │  ├─ z0 -│  │  │  ├─ y0 -│  │  │  │  ├─ x0.wkw # Actual data wkw file -│  │  │ │ └─ x1.wkw # Actual data wkw file -│  │  │  └─ y1/... -│  │  └─ z1/... -│  └─ 2/... -├─ segmentation/... -└─ datasource-properties.json # Dataset metadata (will be created upon import, if non-existent) -``` - -#### WKW Metadata by Example +#### Dataset Metadata Metadata is stored in the `datasource-properties.json`. See below for the [full specification](#dataset-metadata-specification). This is an example: @@ -247,63 +178,6 @@ WEBKNOSSOS requires several metadata properties for each dataset to properly dis + `dataLayers.largestSegmentId`: The highest ID that is currently used in the respective segmentation layer. This is required for volume annotations where new objects with incrementing IDs are created. Only applies to segmentation layers. + `dataLayers.dataFormat`: Should be `wkw`. -#### Download "Volume Annotation" File Format - -Volume annotations can be downloaded and imported using ZIP files that contain [WKW](./data_formats.md#wkw-datasets) datasets. -The ZIP archive contains one NML file that holds meta information including the dataset name and the user's position. -Additionally, there is another embedded ZIP file that contains the volume annotations in WKW file format. - -!!!info - In contrast to on-disk WKW datasets, the WKW files in downloaded volume annotations only contain a single 32^3 bucket in each file. - Therefore, also the addressing of the WKW files (e.g. `z48/y5444/x5748.wkw`) is in steps of 32 instead of 1024. - -``` -volumetracing.zip # A ZIP file containing the volume annotation -├─ data.zip # Container for WKW dataset -│ └─ 1 # Magnification step folder -│ ├─ z48 -│ │ ├─ y5444 -│ │ │ └─ x5748.wkw # Actual WKW bucket file (32^3 voxel) -│ │ └─ y5445/... -│ ├─ z49/... -│ └─ header.wkw # Information about the WKW files -└─ volumetracing.nml # Annotation metadata NML file -``` - -After unzipping the archives, the WKW files can be read or modified with the WKW libraries that are available for [Python and MATLAB](./tooling.md). - - -### Converting with WEBKNOSSOS Cuber - -#### Image Stacks -If you have image stacks, e.g., tiff stacks, you can easily convert them with [WEBKNOSSOS cuber](https://github.com/scalableminds/webknossos-libs/tree/master/wkcuber). 
-The tool expects all image files in a single folder with numbered file names.
-After installing, you can create simple WKW datasets with the following command:
-
-```
-python -m wkcuber \
-  --layer_name color \
-  --scale 11.24,11.24,25 \
-  --name great_dataset \
-  data/source/color data/target
-```
-
-This snippet converts an image stack that is located at `data/source/color` into a WKW dataset which will be located at `data/target`.
-It will create the `color` layer.
-You need to supply the `scale` parameter, i.e., the size of one voxel in nanometers.
-
-Read the full documentation at [WEBKNOSSOS cuber](https://github.com/scalableminds/webknossos-libs/tree/master/wkcuber).
-[Please contact us](mailto:hello@webknossos.org) or [write a post](https://forum.image.sc/tag/webknossos), if you have any issues with converting your dataset.
-
-#### KNOSSOS Cubes
-
-Datasets saved as KNOSSOS cubes can be easily converted with the [WEBKNOSSOS cuber](https://github.com/scalableminds/webknossos-libs/tree/master/wkcuber) tool.
-
-#### Importing Datasets
-
-After the manual conversion, proceed with the remaining import step.
-See the [Datasets guide](./datasets.md#Importing) for further instructions.
-
 #### NML Files
 When working with skeleton annotation data, WEBKNOSSOS uses the NML format.
 It can be [downloaded](./export.md#data-export-and-interoperability) from and uploaded to WEBKNOSSOS, and used for processing in your scripts.
diff --git a/docs/datasets.md b/docs/datasets.md
index 6bf67d11dc3..c7be6d2eed1 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -2,7 +2,7 @@
 
 Working with 3D (and 2D) image datasets is at the heart of WEBKNOSSOS.
 
-- [Import datasets](#importing-datasets) by uploading them directly via the web UI or by using the file system (self-hosted instances only).
+- [Import datasets](#importing-datasets) by uploading them directly via the web UI, streaming them from a remote server/the cloud, or by using the file system.
 - [Configure the dataset](#configuring-datasets) defaults and permissions to your specification.
 - [Share your datasets](./sharing.md#dataset-sharing) with the public or with selected users.
 
@@ -23,26 +23,28 @@ The easiest way to get started with working on your datasets is through the WEBK
 
 4. Click the *Upload* button
 
-WEBKNOSSOS uses the [WKW-format](./data_formats.md#wkw-datasets) internally to display your data.
+WEBKNOSSOS uses the [WKW-format](./wkw.md) internally to display your data.
 If your data is already in WKW you can simply drag your folder (or zip archive of that folder) into the upload view.
 
 If your data is not in WKW, you can either:
 
-- upload the data in a supported file format and WEBKNOSSOS will automatically convert it to WKW ([webknossos.org](https://webknossos.org) only). Depending on the size of the dataset, the conversion will take some time. You can check the progress at the "Jobs" page or the "Datasets" tab in the dashboard (both will update automatically).
+- upload the data in a supported file format and WEBKNOSSOS will automatically convert it ([webknossos.org](https://webknossos.org) only).
+Depending on the size of the dataset, the conversion will take some time.
+You can check the progress at the [`Jobs`](./jobs.md) page or the "Datasets" tab in the dashboard.
+WEBKNOSSOS will also send you an email notification.
- [Convert](#converting-datasets) your data manually to WKW (a scripted variant is sketched below). 
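+
+For scripted workflows, the upload can also be done with the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py). The following is a minimal sketch, assuming an already converted WKW dataset in `data/target` and a placeholder auth token:
+
+```python
+import webknossos as wk
+
+# The token is a placeholder; generate a personal token on webknossos.org.
+with wk.webknossos_context(token="<your-auth-token>"):
+    dataset = wk.Dataset.open("data/target")  # open the local WKW dataset
+    dataset.upload()                          # push it to webknossos.org
+```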
In particular, the following file formats are supported for uploading (and conversion):
 
-- [WKW dataset](#WKW-Datasets)
-- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (tif, jpg, png, dm3, dm4)
-  - as an extension, multiple folders with image sequences are interpreted as [separate layers](#Multi-Layer-Image-File-Sequence)
-- Single-file images (tif, czi, nifti, raw)
-- KNOSSOS file hierarchy
-- [Read more about the supported file formats and details](./data_formats.md#conversion-with-webknossosorg)
+- [WKW dataset](./wkw.md)
+- [OME-Zarr datasets](./zarr.md)
+- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4)
+- [Multi Layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers
+- [Single-file images](#single-file-images) (OME-Tiff, TIFF, PNG, czi, raw, etc)
 
 Once the data is uploaded (and potentially converted), you can further configure a dataset's [Settings](#configuring-datasets) and double-check layer properties, finetune access rights & permissions, or set default values for rendering.
 
-### Working with Zarr, Neuroglancer Precomputed and N5 datasets
+### Streaming from remote servers and the cloud
 
 WEBKNOSSOS supports loading and remotely streaming [Zarr](https://zarr.dev), [Neuroglancer precomputed format](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed) and [N5](https://github.com/saalfeldlab/n5) datasets from a remote source, e.g. Cloud storage (S3) or HTTP server.
 
 WEBKNOSSOS supports loading Zarr datasets according to the [OME NGFF v0.4 spec](https://ngff.openmicroscopy.org/latest/).
 
@@ -74,16 +76,16 @@ Hint: If you happen to have any Zarr dataset locally that you would like to view
 Then WEBKNOSSOS can easily stream the data.
 
 ### Uploading through the Python API
-For those wishing to automate dataset upload or to do it programmatically, check out the WEBKNOSSOS [Python library](https://github.com/scalableminds/webknossos-libs). It allows you to create, manage and upload datasets as well.
+For those wishing to automate dataset upload or to do it programmatically, check out the WEBKNOSSOS [Python library](https://docs.webknossos.org/webknossos-py). You can create, manage and upload datasets with the Python lib.
 
 ### Uploading through the File System
 
-- (Self-Hosted Instances Only)--
 
On self-hosted instances, large datasets can be efficiently imported by placing them directly on the file system (WKW-format or Zarr only):
 
* Place the dataset at `/binaryData/<organization_name>/<dataset_name>/`. For example `/opt/webknossos/binaryData/Springfield_University/great_dataset`.
* Go to the [dataset view on the dashboard](./dashboard.md)
-* Use the refresh button on the dashboard or wait for WEBKNOSSOS to detect the dataset (up to 10min)
+* Use the `Scan disk for new dataset` option from the dropdown menu next to the `Refresh` button on the dashboard or wait for WEBKNOSSOS to detect the dataset (up to 10min)
 
Typically, WEBKNOSSOS can infer all the required metadata for a dataset and import it automatically on refresh. 
In some cases, you will need to manually import a dataset and provide more information: diff --git a/docs/image_stacks.md b/docs/image_stacks.md new file mode 100644 index 00000000000..325a4328c72 --- /dev/null +++ b/docs/image_stacks.md @@ -0,0 +1,108 @@ +# Image Stacks + +WEBKNOSSOS works with a wide range of modern bio-imaging formats and image stacks: + +- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4) +- [Multi Layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers +- [Single-file images](#single-file-images) (OME-Tiff, TIFF, PNG, czi, raw, etc) + +Image stacks need to be converted to [WKW](./wkw.md) for WEBKNOSSOS. This happens automatically when using the web upload on https://webknossos.org or can be done manually (see below). + +## Single-Layer Image File Sequence +When uploading multiple image files, these files are sorted numerically, and each one is interpreted as one section within a 3D dataset. +Alternatively, the same files can also be uploaded bundled in a single folder (or zip archive). + +As an example, the following file structure would create a dataset with one layer which has a z-depth of 3: + +``` +dataset_name/ +├── image_1.tif +├── image_2.tif +├── image_3.tif +└── ... +``` + +## Multi-Layer Image File Sequence +The image file sequences explained above can be composed to build multiple [layers](#Layers). +For example, the following file structure (note the additional hierarchy level) would create a dataset with two layers (named `color` and `segmentation`): + +``` +dataset_name/ +├── color +│ ├── image_1.tif +│ ├── image_2.tif +│ └── ... +├── segmentation +│ └── ... +``` + +## Single-file images +WEBKNOSSOS understands most modern bio-imaging file formats and uses the [BioFormats library] upon import/conversion. It works particularly well with: + +- OME-Tiff +- Tiff +- PNG +- JPEG +- czi +- nifti +- raw +- DM3 +- DM4 + + +## Manual Conversion + +You can manually convert image stacks through: +- [WEBKNOSSOS CLI](https://docs.webknossos.org/cli) +- [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) + +### CLI +You can easily convert image stacks manually with the WEBKNOSSOS CLI. +The CLI tool expects all image files in a single folder with numbered file names. +After installing, you can convert image stacks to WKW datasets with the following command: + +``` +pip install webknossos + +webknossos convert \ + --voxel-size 11.24,11.24,25 \ + --name my_dataset \ + data/source data/target +``` + +This snippet converts an image stack that is located in directory called `data/source` into a WKW dataset which will be located at `data/target`. +It will create a so called `color` layer containing your raw greyscale/color image. +The supplied `--voxel-size` is specified nanometers. + +Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli). + +### Python + +You can use the free [WEBKNOSSSO Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to WKW or integrate the convesion as part of existing workflow. 
+ +``` +from webknossos import Dataset +from webknossos.dataset import COLOR_CATEGORY + +def main() -> None: + """Convert a folder of image files to a WEBKNOSSOS dataset.""" + dataset = Dataset.from_images( + input_path=INPUT_DIR, + output_path=OUTPUT_DIR, + voxel_size=(11, 11, 11), + layer_category=COLOR_CATEGORY, + compress=True, + ) + + print(f"Saved {dataset.name} at {dataset.path}.") + + # dataset.upload() + + +if __name__ == "__main__": + main() +``` + +Read the full example in the WEBKNOSSOS [Python library documentation].(https://docs.webknossos.org/webknossos-py/examples/create_dataset_from_images.html) + +[Please contact us](mailto:hello@webknossos.org) or [write a post in our support forum](https://forum.image.sc/tag/webknossos), if you have any issues with converting your dataset. \ No newline at end of file diff --git a/docs/n5.md b/docs/n5.md new file mode 100644 index 00000000000..b65e8c90169 --- /dev/null +++ b/docs/n5.md @@ -0,0 +1,17 @@ +# N5 + +WEBKNOSSOS can read [N5 datasets](https://github.com/saalfeldlab/n5). + +!!!info + N5 datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Uploading the through the web uploader is not supported. + + + +## Example + + +You can try the N5 support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud): + +- Interphase HeLa cell EM data hosted on AWS S3 + - `s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5/em/fibsem-uint16` + - Source: [Open Organelle Project](https://openorganelle.janelia.org/datasets/jrc_hela-3) \ No newline at end of file diff --git a/docs/neuroglancer_precomputed.md b/docs/neuroglancer_precomputed.md new file mode 100644 index 00000000000..da9552fe266 --- /dev/null +++ b/docs/neuroglancer_precomputed.md @@ -0,0 +1,14 @@ +# Neuroglancer Precomputed + +WEBKNOSSOS can read [Neuroglancer precomputed dataset](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed). + +!!!info + Neuroglancer datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Uploading the through the web uploader is not supported. + +## Example + +You can try the Neuroglancer Precomputed support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud): + +- Mouse Cortex EM data hosted on Google Cloud Services + - `gs://iarpa_microns/minnie/minnie65/em` + - Source: MICrONs Consortium et al. Functional connectomics spanning multiple areas of mouse visual cortex. bioRxiv 2021.07.28.454025; doi: https://doi.org/10.1101/2021.07.28.454025 diff --git a/docs/pen_tablets.md b/docs/pen_tablets.md index cf6bb436490..900fa02fc4e 100644 --- a/docs/pen_tablets.md +++ b/docs/pen_tablets.md @@ -9,12 +9,14 @@ Using pen tablet can signifincatly boost your annotation productivity, especiall To streamline your workflow, program your tablet and pen buttons to match the WEBKNOSSOS shortcuts. By doing so, you can focus on your pen without the need of a mouse or keyboard. 
Here is an example configuration using a Wacom tablet and the Wacom driver software:
 
 Tablet buttons:
+
 - Left: Brush (ctrl + K, B)
 - Middle left: Eraser (ctrl + K, E)
 - Middle right: Quick-select (ctrl + K, Q)
 - Right: Create new segment (C)
 
 Pen buttons:
+
 - Lower button: Move (ALT)
 
 You can find the full list for keyboard shortcuts in the [documentation](./keyboard_shortcuts.md).
diff --git a/docs/today_i_learned.md b/docs/today_i_learned.md
new file mode 100644
index 00000000000..7d16bb71dd7
--- /dev/null
+++ b/docs/today_i_learned.md
@@ -0,0 +1,14 @@
+# Today I learned
+
+We regularly publish tips and tricks videos for beginners and pros on YouTube to share new features, highlight efficient workflows, and show you hidden gems.
+
+Subscribe to our YouTube channel [@webknossos](https://www.youtube.com/@webknossos) to stay up-to-date.
+
+![youtube-video](https://www.youtube.com/playlist?list=PLpizOgyiA4kE6pZRW1u0l49Pmppp-S7V0)
+
+ +
![youtube-video](https://www.youtube.com/watch?v=ONmx1E05_A0&list=PLpizOgyiA4kEGKFRQFOgjucZCKtI2GUZY)
diff --git a/docs/tutorial_automation.md b/docs/tutorial_automation.md
index 585fed49ff3..27c6468c303 100644
--- a/docs/tutorial_automation.md
+++ b/docs/tutorial_automation.md
@@ -39,7 +39,7 @@ The Python libraries offer both "normal" download of datasets and streaming acce
 
 ## Interoperability with Other Software Tools
 WEBKNOSSOS integrates seamlessly with other analysis software tools, enabling you to work with datasets from tools like Neuroglancer and Fiji.
-Let’s see an example of [importing a Neuroglancer dataset](./datasets.md#working-with-zarr-neuroglancer-precomputed-and-n5-datasets) into WEBKNOSSOS.
+Let’s see an example of [importing a Neuroglancer dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) into WEBKNOSSOS.
 
 First, find a released dataset in OME-Zarr, N5 or Neuroglancer-Precomputed format that you would like to import and that is hosted in the cloud (S3, Google Cloud) or on any HTTPS server.
 Copy the URL pointing to the data.
diff --git a/docs/wkw.md b/docs/wkw.md
new file mode 100644
index 00000000000..bfe05a6428b
--- /dev/null
+++ b/docs/wkw.md
@@ -0,0 +1,68 @@
+# WKW
+
+[webknossos-wrap (WKW)](https://github.com/scalableminds/webknossos-wrap) is a format optimized for large datasets of 3D voxel imagery and supports compression, efficient cutouts, multi-channel, and several base datatypes.
+It works well for large datasets, is built with modern file systems in mind, and drives the majority of WEBKNOSSOS datasets.
+
+WKW is versatile in the image formats it can hold: Grayscale, Multi-Channel, Segmentation, RGB, as well as a range of data types (e.g., `uint8`, `uint16`, `float32`).
+Additionally, WKW supports compression for disk space efficiency.
+
+Each layer of a WKW dataset may contain one of the following:
+
+* Grayscale data (8 Bit, 16 Bit, Float), also referred to as `color` data
+* RGB data (24 Bit)
+* Segmentation data (8 Bit, 16 Bit, 32 Bit)
+
+## Example
+
+
+## WKW Folder Structure
+WEBKNOSSOS expects the following file structure for WKW datasets:
+
+```
+my_dataset # One root folder per dataset
+├─ color # One sub-folder per layer (e.g., color, segmentation)
+│  ├─ 1 # Magnification step (1, 2, 4, 8, 16 etc.)
+│  │  ├─ header.wkw # Header wkw file
+│  │  ├─ z0
+│  │  │  ├─ y0
+│  │  │  │  ├─ x0.wkw # Actual data wkw file
+│  │  │ │ └─ x1.wkw # Actual data wkw file
+│  │  │  └─ y1/...
+│  │  └─ z1/...
+│  └─ 2/...
+├─ segmentation/...
+└─ datasource-properties.json # Dataset metadata (will be created upon import, if non-existent)
+```
+
+# KNOSSOS Datasets
+You can convert KNOSSOS-cube datasets with the [WEBKNOSSOS CLI tool](https://webknossos.org) to WKW and import that.
+
+```
+webknossos convert-knossos --layer-name color --voxel-size 11.24,11.24,25 data/source/mag1 data/target
+
+```
+
+#### Download "Volume Annotation" File Format
+
+Volume annotations can be downloaded and imported using ZIP files that contain [WKW](./data_formats.md#wkw-datasets) datasets.
+The ZIP archive contains one NML file that holds meta information including the dataset name and the user's position.
+Additionally, there is another embedded ZIP file that contains the volume annotations in WKW file format.
+
+!!!info
+    In contrast to on-disk WKW datasets, the WKW files in downloaded volume annotations only contain a single 32^3 bucket in each file.
+    Therefore, also the addressing of the WKW files (e.g. 
`z48/y5444/x5748.wkw`) is in steps of 32 instead of 1024.
+
+```
+volumetracing.zip # A ZIP file containing the volume annotation
+├─ data.zip # Container for WKW dataset
+│ └─ 1 # Magnification step folder
+│ ├─ z48
+│ │ ├─ y5444
+│ │ │ └─ x5748.wkw # Actual WKW bucket file (32^3 voxel)
+│ │ └─ y5445/...
+│ ├─ z49/...
+│ └─ header.wkw # Information about the WKW files
+└─ volumetracing.nml # Annotation metadata NML file
+```
+
+After unzipping the archives, the WKW files can be read or modified with the WKW libraries that are available for [Python and MATLAB](./tooling.md).
\ No newline at end of file
diff --git a/docs/zarr.md b/docs/zarr.md
new file mode 100644
index 00000000000..68344b43178
--- /dev/null
+++ b/docs/zarr.md
@@ -0,0 +1,80 @@
+# Zarr & NGFF
+
+WEBKNOSSOS works great with [OME Zarr datasets](https://ngff.openmicroscopy.org/latest/index.html), sometimes called next-generation file format (NGFF).
+
+The Zarr format is a good alternative to [WKW](./wkw.md) and will likely replace it long term.
+
+Zarr datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud).
+
+## Example
+
+## Zarr Folder Structure
+WEBKNOSSOS expects the following file structure for Zarr datasets:
+
+```
+. # Root folder, potentially in S3,
+│ # with a flat list of images by image ID.
+│
+└── 456.zarr # Another image (id=456) converted to Zarr.
+ │
+ ├── .zgroup # Each image is a Zarr group, or a folder, of other groups and arrays.
+ ├── .zattrs # Group level attributes are stored in the .zattrs file and include
+ │ # "multiscales" and "omero" (see below). In addition, the group level attributes
+ │ # may also contain "_ARRAY_DIMENSIONS" for compatibility with xarray if this group directly contains multi-scale arrays.
+ │
+ ├── 0 # Each multiscale level is stored as a separate Zarr array,
+ │ ... # which is a folder containing chunk files which compose the array.
+ ├── n # The name of the array is arbitrary with the ordering defined by
+ │ │ # by the "multiscales" metadata, but is often a sequence starting at 0.
+ │ │
+ │ ├── .zarray # All image arrays must be up to 5-dimensional
+ │ │ # with the axis of type time before type channel, before spatial axes.
+ │ │
+ │ └─ t # Chunks are stored with the nested directory layout.
+ │ └─ c # All but the last chunk element are stored as directories.
+ │ └─ z # The terminal chunk is a file. Together the directory and file names
+ │ └─ y # provide the "chunk coordinate" (t, c, z, y, x), where the maximum coordinate
+ │ └─ x # will be dimension_size / chunk_size.
+ │
+ └── labels
+ │
+ ├── .zgroup # The labels group is a container which holds a list of labels to make the objects easily discoverable
+ │
+ ├── .zattrs # All labels will be listed in .zattrs e.g. { "labels": [ "original/0" ] }
+ │ # Each dimension of the label (t, c, z, y, x) should be either the same as the
+ │ # corresponding dimension of the image, or 1 if that dimension of the label
+ │ # is irrelevant.
+ │
+ └── original # Intermediate folders are permitted but not necessary and currently contain no extra metadata.
+ │
+ └── 0 # Multiscale, labeled image. The name is unimportant but is registered in the "labels" group above.
+ ├── .zgroup # Zarr Group which is both a multiscaled image as well as a labeled image.
+ ├── .zattrs # Metadata of the related image as well as display information under the "image-label" key. 
+ │ + ├── 0 # Each multiscale level is stored as a separate Zarr array, as above, but only integer values + │ ... # are supported. + └── n +``` + +See [OME-Zarr 0.4 spec](https://ngff.openmicroscopy.org/latest/index.html#image-layout) for details. + +## Conversion to Zarr + +You can easily convert image stacks manually with the WEBKNOSSOS CLI. +The CLI tool expects all image files in a single folder with numbered file names. +After installing, you can convert image stacks to Zarr datasets with the following command: + +``` +pip install webknossos + +webknossos convert-zarr \ + --voxel-size 11.24,11.24,25 \ + --name my_dataset \ + data/source data/target +``` + +This snippet converts an image stack that is located in directory called `data/source` into a Zarr dataset which will be located at `data/target`. +It will create a so called `color` layer containing your raw greyscale/color image. +The supplied `--voxel-size` is specified nanometers. + +Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli). \ No newline at end of file From 719f4464fa9f54e225f3b53057996a2fce9dede3 Mon Sep 17 00:00:00 2001 From: Tom Herold Date: Thu, 16 Nov 2023 11:37:07 +0100 Subject: [PATCH 05/22] more docs on the data source formats --- docs/data_formats.md | 52 ++++++++------------------------ docs/datasets.md | 24 +-------------- docs/image_stacks.md | 33 +++++++++++--------- docs/n5.md | 30 ++++++++++++++++-- docs/neuroglancer_precomputed.md | 33 +++++++++++++++++--- docs/wkw.md | 30 +++++++++++++----- docs/zarr.md | 17 ++++++++--- 7 files changed, 122 insertions(+), 97 deletions(-) diff --git a/docs/data_formats.md b/docs/data_formats.md index cdfbf93585a..89ab040a04f 100644 --- a/docs/data_formats.md +++ b/docs/data_formats.md @@ -2,41 +2,21 @@ WEBKNOSSOS uses several file formats for reading large-scale volumetric image data and storing skeleton and volume annotations. The section will provide technical backgrounds on these file formats, list examples, and explain concepts and details. -The webKnosso-wrap (WKW) container format is used for all internal voxel data representations - both for the raw (microscopy) image datasets and segmentations. Skeleton annotations are saved as NML files. - -Any dataset uploaded to webknossos.org will automatically be converted to WKW on upload - given its source file format is supported by WEBKNOSSOS. Alternatively, you can manually convert your datasets using the [WEBKNOSSOS Cuber CLI tools](https://docs.webknossos.org/wkcuber/index.html) or use a custom script based on the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/index.html). - WEBKNOSSOS natively supports loading and streaming data in the following formats: -- webKnossos-wrap (WKW) -- Zarr ([OME NGFF v0.4+ spec](https://ngff.openmicroscopy.org/latest/)) -- Neuroglancer `precomputed` -- N5 - -See the page on [datasets](./datasets.md) for uploading and configuring datasets. -See the page on [software tooling](./tooling.md) for working with these file formats in Python and MatLab. - -### Conversion with webknossos.org -When uploading data to [WEBKNOSSOS](https://webknossos.org), various data formats are automatically detected and converted. 
-
-In particular, the following file formats are supported:
-
-- [WKW dataset](#WKW-Datasets)
-- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (tif, jpg, png, dm3, dm4)
-  - as an extension, multiple folders with image sequences are interpreted as [separate layers](#Multi-Layer-Image-File-Sequence)
-- Single-file images (tif, czi, nifti, raw)
-- KNOSSOS file hierarchy
-
-!!!info
-    Note, for datasets in the Zarr, N5 and Neuroglancer Precomputed formats uploading and automatic conversion are not supported.
-    Instead, they can be directly streamed from an HTTP server or the cloud.
-    See the page on [datasets](./datasets.md) for uploading and configuring these formats.
+- [WEBKNOSSOS-wrap (WKW)](./wkw.md)
+- [OME-Zarr / NGFF](./zarr.md)
+- [Neuroglancer precomputed](./neuroglancer_precomputed.md)
+- [N5](./n5.md)
+- [Image Stacks (through Conversion)](./image_stacks.md)
 
+The WEBKNOSSOS-wrap (WKW) container format is used for all internal voxel data representations - both for the raw (microscopy) image datasets and segmentations. Skeleton annotations are saved as NML files.
 
+Any dataset uploaded to webknossos.org will automatically be converted to WKW on upload - given its source file format is supported by WEBKNOSSOS. Alternatively, you can manually convert your datasets using the [WEBKNOSSOS CLI tool](https://docs.webknossos.org/cli) or use a custom script based on the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/index.html).
 
+Read more about uploading and configuring datasets on the [datasets page](./datasets.md).
 
-## Concepts
+## High-Level Concepts
 
 ### Datasets, Cubes, and Buckets
 
@@ -86,16 +66,9 @@ The underlying data type limits the maximum number of IDs:
 | `uint32` | 4,294,967,295 |
 | `uint64` | 18,446,744,073,709,551,615 |
 
-## Data Formats
-
-To bring the above concepts together, WEBKNOSSOS uses [webknossos-wrap (WKW)](https://github.com/scalableminds/webknossos-wrap) as a container format for volumetric voxel data.
-For sparse skeleton-like structures, WEBKNOSSOS uses [NML](#NML).
-
-### WKW Datasets
-
-#### Dataset Metadata
-Metadata is stored in the `datasource-properties.json`.
+### Dataset Metadata
+For each dataset, we store metadata in a `datasource-properties.json` file.
 See below for the [full specification](#dataset-metadata-specification).
 This is an example:
 
@@ -146,6 +119,7 @@ This is an example:
   "scale" : [ 11.24, 11.24, 28 ]
 }
 ```
+
 Note that the `resolutions` property within the elements of `wkwResolutions` can be an array of length 3. The three components within such a resolution denote the scaling factor for x, y, and z.
 The term "magnifications" is used synonymously for resolutions throughout the UI.
 
@@ -154,7 +128,7 @@ At the moment, WebKnossos guarantees correct rendering of data with non-uniform
 Most users do not create these metadata files manually.
 WEBKNOSSOS can infer most of these properties automatically, except for `scale` and `largestSegmentId`.
 During the data import process, WEBKNOSSOS will ask for the necessary properties.
-When using the [WEBKNOSSOS Cuber](https://github.com/scalableminds/webknossos-libs/tree/master/wkcuber), a metadata file is automatically generated. Alternatively, you can create and edit WEBKNOSSOS datasets using the [WEBKNOSSOS Python library](https://github.com/scalableminds/webknossos-libs/).
+When using the [WEBKNOSSOS CLI](https://docs.webknossos.org/cli), a metadata file is automatically generated. 
Alternatively, you can create and edit WEBKNOSSOS datasets using the [WEBKNOSSOS Python library](https://github.com/scalableminds/webknossos-libs/). [See below for the full specification](#dataset-metadata-specification). diff --git a/docs/datasets.md b/docs/datasets.md index c7be6d2eed1..5962915f65c 100644 --- a/docs/datasets.md +++ b/docs/datasets.md @@ -126,8 +126,6 @@ For manual conversion, we provide the following software tools and libraries: - The [WEBKNOSSOS Cuber](https://docs.webknossos.org/wkcuber/index.html) is a CLI tool that can convert many formats to WKW. - For other file formats, the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/index.html) can be an option for custom scripting. -See the page on [software tooling](./tooling.md) for more. - ## Configuring Datasets You can configure the metadata, permission, and other properties of a dataset at any time. @@ -230,27 +228,7 @@ scalable minds also offers a dataset alignment tool called *Voxelytics Align*. ![youtube-video](https://www.youtube.com/embed/yYauIHZcI_4) -## Sample Datasets +## Example Datasets For convenience and testing, we provide a list of sample datasets for WEBKNOSSOS: -- **Sample_e2006_wkw** - Raw SBEM data and segmentation (sample cutout, 120MB). - [https://static.webknossos.org/data/e2006_wkw.zip](https://static.webknossos.org/data/e2006_wkw.zip) - Connectomic reconstruction of the inner plexiform layer in the mouse retina. - M Helmstaedter, KL Briggman, S Turaga, V Jain, HS Seung, W Denk. - Nature. 08 August 2013. [https://doi.org/10.1038/nature12346](https://doi.org/10.1038/nature12346) - -- **Sample_FD0144_wkw** - Raw SBEM data and segmentation (sample cutout, 316 MB). - [https://static.webknossos.org/data/FD0144_wkw.zip](https://static.webknossos.org/data/FD0144_wkw.zip) - FluoEM, virtual labeling of axons in three-dimensional electron microscopy data for long-range connectomics. - F Drawitsch, A Karimi, KM Boergens, M Helmstaedter. - eLife. 14 August 2018. [https://doi.org/10.7554/eLife.38976](https://doi.org/10.7554/eLife.38976) - -* **Sample_MPRAGE_250um** - MRI data (250 MB). - [https://static.webknossos.org/data/MPRAGE_250um.zip](https://static.webknossos.org/data/MPRAGE_250um.zip) - T1-weighted in vivo human whole brain MRI dataset with an ultra-fine isotropic resolution of 250 μm. - F Lüsebrink, A Sciarra, H Mattern, R Yakupov, O Speck. - Scientific Data. 14 March 2017. [https://doi.org/10.1038/sdata.2017.32](https://doi.org/10.1038/sdata.2017.32) diff --git a/docs/image_stacks.md b/docs/image_stacks.md index 325a4328c72..1e57f8b9b10 100644 --- a/docs/image_stacks.md +++ b/docs/image_stacks.md @@ -2,14 +2,14 @@ WEBKNOSSOS works with a wide range of modern bio-imaging formats and image stacks: -- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4) +- [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4 etc) - [Multi Layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers -- [Single-file images](#single-file-images) (OME-Tiff, TIFF, PNG, czi, raw, etc) +- [Single-file images](#single-file-images) (OME-TIFF, TIFF, PNG, czi, raw, etc) -Image stacks need to be converted to [WKW](./wkw.md) for WEBKNOSSOS. This happens automatically when using the web upload on https://webknossos.org or can be done manually (see below). +Image stacks need to be converted to [WKW](./wkw.md) for WEBKNOSSOS. 
This happens automatically when using the web upload on [webknossos.org](https://webknossos.org) or can be done manually (see below).
 
 ## Single-Layer Image File Sequence
-When uploading multiple image files, these files are sorted numerically, and each one is interpreted as one section within a 3D dataset.
+When uploading multiple image files, these files are sorted numerically, and each one is interpreted as a single section/slice within a 3D dataset.
 Alternatively, the same files can also be uploaded bundled in a single folder (or zip archive).
 
 As an example, the following file structure would create a dataset with one layer which has a z-depth of 3:
@@ -32,12 +32,12 @@ dataset_name/
 │ ├── image_1.tif
 │ ├── image_2.tif
 │ └── ...
-├── segmentation
-│ └── ...
+└── segmentation
+    └── ...
 ```
 
 ## Single-file images
-WEBKNOSSOS understands most modern bio-imaging file formats and uses the [BioFormats library] upon import/conversion. It works particularly well with:
+WEBKNOSSOS understands most modern bio-imaging file formats and uses the [BioFormats library](https://www.openmicroscopy.org/bio-formats/) upon import/conversion. It works particularly well with:
 
 - OME-Tiff
 - Tiff
@@ -53,13 +53,14 @@ WEBKNOSSOS understands most modern bio-imaging file formats and uses the [BioFor
 
 ## Manual Conversion
 
 You can manually convert image stacks through:
+
 - [WEBKNOSSOS CLI](https://docs.webknossos.org/cli)
 - [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py)
 
-### CLI
+### Conversion with CLI
 You can easily convert image stacks manually with the WEBKNOSSOS CLI.
 The CLI tool expects all image files in a single folder with numbered file names.
 After installing, you can convert image stacks to WKW datasets with the following command:
 
-```
+```shell
 pip install webknossos
 
 webknossos convert \
@@ -72,15 +73,15 @@ webknossos convert \
 
 This snippet converts an image stack that is located in directory called `data/source` into a WKW dataset which will be located at `data/target`.
 It will create a so called `color` layer containing your raw greyscale/color image.
-The supplied `--voxel-size` is specified nanometers.
+The supplied `--voxel-size` is specified in nanometers.
 
 Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli).
 
-### Python
+### Conversion with Python
 
-You can use the free [WEBKNOSSSO Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to WKW or integrate the convesion as part of existing workflow.
+You can use the free [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to WKW or integrate the conversion as part of an existing workflow.
 
-```
+```python
 from webknossos import Dataset
 from webknossos.dataset import COLOR_CATEGORY
 
@@ -103,6 +104,8 @@ if __name__ == "__main__":
     main()
 ```
 
-Read the full example in the WEBKNOSSOS [Python library documentation].(https://docs.webknossos.org/webknossos-py/examples/create_dataset_from_images.html)
+Read the full example in the WEBKNOSSOS [Python library documentation](https://docs.webknossos.org/webknossos-py/examples/create_dataset_from_images.html).
+
+---
 
-[Please contact us](mailto:hello@webknossos.org) or [write a post in our support forum](https://forum.image.sc/tag/webknossos), if you have any issues with converting your dataset.
\ No newline at end of file
+[Please contact us](mailto:hello@webknossos.org) or [write a post in our support forum](https://forum.image.sc/tag/webknossos), if you have any issues with converting your datasets. 
\ No newline at end of file
diff --git a/docs/n5.md b/docs/n5.md
index b65e8c90169..5b3f0fec0f4 100644
--- a/docs/n5.md
+++ b/docs/n5.md
@@ -7,11 +7,35 @@ WEBKNOSSOS can read [N5 datasets](https://github.com/saalfeldlab/n5).
 
 
 
-## Example
-
+## Examples
 
 You can try the N5 support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud):
 
 - Interphase HeLa cell EM data hosted on AWS S3
   - `s3://janelia-cosem-datasets/jrc_hela-3/jrc_hela-3.n5/em/fibsem-uint16`
-  - Source: [Open Organelle Project](https://openorganelle.janelia.org/datasets/jrc_hela-3)
\ No newline at end of file
+  - Source: [Open Organelle Project](https://openorganelle.janelia.org/datasets/jrc_hela-3)
+
+## N5 folder structure
+
+WEBKNOSSOS expects the following file structure for N5 datasets:
+
+```
+my_dataset.n5 # One root folder per dataset
+├─ attributes.json # Dataset metadata
+└─ my_EM # One N5 group per data layer
+   ├─ attributes.json
+   ├─ s0 # Chunks in a directory hierarchy that enumerates their positive integer position in the chunk grid. (e.g. 0/4/1/7 for chunk grid position p=(0, 4, 1, 7)).
+   │  ├─ 0
+   │  │  ├─ 
+   │  ├─ ...
+   │  └─ n
+   ...
+   └─ sn
+```
+
+For details see the [N5 spec](https://github.com/saalfeldlab/n5).
+
+## Performance Considerations
+- TODO
+- Sharding
+- Chunk sizing
\ No newline at end of file
diff --git a/docs/neuroglancer_precomputed.md b/docs/neuroglancer_precomputed.md
index da9552fe266..bc5dc84b7cd 100644
--- a/docs/neuroglancer_precomputed.md
+++ b/docs/neuroglancer_precomputed.md
@@ -1,14 +1,39 @@
 # Neuroglancer Precomputed
 
-WEBKNOSSOS can read [Neuroglancer precomputed dataset](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed).
+WEBKNOSSOS can read [Neuroglancer precomputed datasets](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed).
 
 !!!info
     Neuroglancer datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Uploading them through the web uploader is not supported.
 
-## Example
+## Examples
 
 You can try the Neuroglancer Precomputed support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud):
 
-- Mouse Cortex EM data hosted on Google Cloud Services
+
+- Mouse Cortex EM data hosted on Google Cloud Storage
   - `gs://iarpa_microns/minnie/minnie65/em`
   - Source: MICrONs Consortium et al. Functional connectomics spanning multiple areas of mouse visual cortex. 
bioRxiv 2021.07.28.454025; doi: [https://doi.org/10.1101/2021.07.28.454025](https://doi.org/10.1101/2021.07.28.454025)
+- FlyEM Hemibrain hosted on Google Cloud Storage
+  - `gs://neuroglancer-janelia-flyem-hemibrain/emdata/clahe_yz/jpeg`
+  - `gs://neuroglancer-janelia-flyem-hemibrain/v1.0/segmentation`
+  - Source: [https://www.janelia.org/project-team/flyem/hemibrain](https://www.janelia.org/project-team/flyem/hemibrain)
+- Interphase HeLa cell EM data hosted on AWS S3
+  - `s3://janelia-cosem-datasets/jrc_hela-3/neuroglancer/em/fibsem-uint8.precomputed`
+  - Source: [Open Organelle Project](https://openorganelle.janelia.org/datasets/jrc_hela-3)
+
+
+## Neuroglancer Precomputed folder structure
+
+WEBKNOSSOS expects the following file structure for Neuroglancer Precomputed datasets:
+
+```
+my_dataset.precomputed # One root folder per dataset
+├─ info # Dataset [metadata in JSON format](https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed/volume.md#info-json-file-specification)
+├─ scale_1 # One subdirectory with the same name as each scale/magnification "key" value specified in the info file. Each subdirectory contains a chunked representation of the data for a single resolution.
+│  ├─ 
+│  └─ ...
+├─ ...
+└─ scale_n
+```
+
+For details see the [Neuroglancer spec](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed).
diff --git a/docs/wkw.md b/docs/wkw.md
index bfe05a6428b..783d6cb19f9 100644
--- a/docs/wkw.md
+++ b/docs/wkw.md
@@ -12,7 +12,21 @@ Each layer of a WKW dataset may contain one of the following:
 * RGB data (24 Bit)
 * Segmentation data (8 Bit, 16 Bit, 32 Bit)
 
-## Example
+## Examples
+
+You can try the WKW support with the following datasets. Upload them to WEBKNOSSOS using the [web uploader](./datasets.md#uploading-through-the-web-browser):
+
+- Mouse Retina SBEM and segmentation (sample cutout, 120MB)
+  - [https://static.webknossos.org/data/e2006_wkw.zip](https://static.webknossos.org/data/e2006_wkw.zip)
+  - Source: Connectomic reconstruction of the inner plexiform layer in the mouse retina. M Helmstaedter, KL Briggman, S Turaga, V Jain, HS Seung, W Denk. Nature. 08 August 2013. [https://doi.org/10.1038/nature12346](https://doi.org/10.1038/nature12346)
+
+- Mouse Cortex SBEM and segmentation (sample cutout, 316 MB)
+  - [https://static.webknossos.org/data/FD0144_wkw.zip](https://static.webknossos.org/data/FD0144_wkw.zip)
+  - Source: FluoEM, virtual labeling of axons in three-dimensional electron microscopy data for long-range connectomics. F Drawitsch, A Karimi, KM Boergens, M Helmstaedter. eLife. 14 August 2018. [https://doi.org/10.7554/eLife.38976](https://doi.org/10.7554/eLife.38976)
+
+- Whole Brain MRI (250 MB)
+  - [https://static.webknossos.org/data/MPRAGE_250um.zip](https://static.webknossos.org/data/MPRAGE_250um.zip)
+  - Source: T1-weighted in vivo human whole brain MRI dataset with an ultra-fine isotropic resolution of 250 μm. F Lüsebrink, A Sciarra, H Mattern, R Yakupov, O Speck. Scientific Data. 14 March 2017. [https://doi.org/10.1038/sdata.2017.32](https://doi.org/10.1038/sdata.2017.32)
 
 ## WKW Folder Structure
 WEBKNOSSOS expects the following file structure for WKW datasets:
 
 ```
 my_dataset # One root folder per dataset
+├─ datasource-properties.json # Dataset metadata (will be created upon import, if non-existent)
 ├─ color # One sub-folder per layer (e.g., color, segmentation)
 │  ├─ 1 # Magnification step (1, 2, 4, 8, 16 etc.) 
│  │  ├─ header.wkw # Header wkw file
 │  │  ├─ z0
 │  │  │  ├─ y0
-│  │  │  │  ├─ x0.wkw # Actual data wkw file
+│  │  │  │  ├─ x0.wkw # Actual data wkw file (chunks)
 │  │  │ │ └─ x1.wkw # Actual data wkw file
 │  │  │  └─ y1/...
 │  │  └─ z1/...
 │  └─ 2/...
-├─ segmentation/...
-└─ datasource-properties.json # Dataset metadata (will be created upon import, if non-existent)
+└─ segmentation/...
+
 ```
 
-# KNOSSOS Datasets
+## KNOSSOS Datasets
 
 You can convert KNOSSOS-cube datasets with the [WEBKNOSSOS CLI tool](https://docs.webknossos.org/cli) to WKW and import that.
 
 ```
 webknossos convert-knossos --layer-name color --voxel-size 11.24,11.24,25 data/source/mag1 data/target
-
 ```
 
-#### Download "Volume Annotation" File Format
+## Download "Volume Annotation" File Format
 
 Volume annotations can be downloaded and imported using ZIP files that contain [WKW](./data_formats.md#wkw-datasets) datasets.
 The ZIP archive contains one NML file that holds meta information including the dataset name and the user's position.
@@ -65,4 +79,4 @@ volumetracing.zip # A ZIP file containing the volume annotation
 └─ volumetracing.nml # Annotation metadata NML file
 ```
 
-After unzipping the archives, the WKW files can be read or modified with the WKW libraries that are available for [Python and MATLAB](./tooling.md).
\ No newline at end of file
+After unzipping the archives, the WKW files can be read or modified with the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/examples/load_annotation_from_file.html).
\ No newline at end of file
diff --git a/docs/zarr.md b/docs/zarr.md
index 68344b43178..8834ecc9e2c 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -2,11 +2,18 @@
 
 WEBKNOSSOS works great with [OME Zarr datasets](https://ngff.openmicroscopy.org/latest/index.html), sometimes called next-generation file format (NGFF).
 
-The Zarr format is a good alternative to [WKW](./wkw.md) and will likely replace it long term.
+We strongly believe in this community-driven, cloud-native data format for n-dimensional datasets. Zarr is a first-class citizen in WEBKNOSSOS and will likely replace [WKW](./wkw.md) long term.
 
 Zarr datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud).
 
-## Example
+## Examples
+
+You can try the OME-Zarr support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud):
+
+
+- Mouse Cortex Layer 4 EM Cutout over HTTPS
+  - `https://static.webknossos.org/data/l4_sample/`
+  - Source: Dense connectomic reconstruction in layer 4 of the somatosensory cortex. Motta et al. Science 2019. [10.1126/science.aay3134](https://doi.org/10.1126/science.aay3134)
 
 ## Zarr Folder Structure
 WEBKNOSSOS expects the following file structure for Zarr datasets:
@@ -60,11 +67,11 @@ See [OME-Zarr 0.4 spec](https://ngff.openmicroscopy.org/latest/index.html#image-
 
 ## Conversion to Zarr
 
-You can easily convert image stacks manually with the WEBKNOSSOS CLI.
+You can easily convert image stacks manually with the [WEBKNOSSOS CLI](https://docs.webknossos.org/cli).
 The CLI tool expects all image files in a single folder with numbered file names.
After installing, you can convert image stacks to Zarr datasets with the following command:
 
-```
+```shell
 pip install webknossos
 
 webknossos convert-zarr \
   --voxel-size 11.24,11.24,25 \
   --name my_dataset \
   data/source data/target
 ```
 
 This snippet converts an image stack that is located in a directory called `data/source` into a Zarr dataset which will be located at `data/target`.
 It will create a so-called `color` layer containing your raw greyscale/color image.
-The supplied `--voxel-size` is specified nanometers.
+The supplied `--voxel-size` is specified in nanometers.
 
 Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli).
\ No newline at end of file
From e68b34ff680f7c54f759d60033da409c84aeeb45 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Fri, 17 Nov 2023 11:08:47 +0100
Subject: [PATCH 06/22] more docs

---
 docs/datasets.md                 |  2 +-
 docs/faq.md                      |  6 ++---
 docs/installation.md             |  2 +-
 docs/n5.md                       |  6 ++---
 docs/neuroglancer_precomputed.md |  6 +++++
 docs/pen_tablets.md              | 35 +++++++++-------------
 docs/proof_reading.md            |  2 +-
 docs/skeleton_annotation.md      |  2 +-
 docs/volume_annotation.md        |  4 ++-
 docs/zarr.md                     | 46 ++++++++++++++++++++++++++++++--
 10 files changed, 75 insertions(+), 36 deletions(-)
diff --git a/docs/datasets.md b/docs/datasets.md
index 5962915f65c..9351249078f 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -123,7 +123,7 @@ Any dataset uploaded through the web interface at [webknossos.org](https://webkn
 
 For manual conversion, we provide the following software tools and libraries:
 
-- The [WEBKNOSSOS Cuber](https://docs.webknossos.org/wkcuber/index.html) is a CLI tool that can convert many formats to WKW.
+- The [WEBKNOSSOS CLI](https://docs.webknossos.org/cli) is a CLI tool that can convert many formats to WKW.
 - For other file formats, the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py/index.html) can be an option for custom scripting.
 
 ## Configuring Datasets
diff --git a/docs/faq.md b/docs/faq.md
index a0f4d84a3fd..efe1fcfaf7c 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -27,13 +27,13 @@ We have years of experience with automated machine learning analysis and [offer
 
 We are also always interested in new collaborations. Get in touch if you want to work together on a project resulting in new classifiers.
 
-WEBKNOSSOS does not allow you to run custom machine learning models on your data yet. As a work-around you can download your annotations from WEBKNOSSOS - either manually or scripted [through our Python libarary](./tooling.md) - and do your ML analysis offline and use WEBKNOSSOS to inspect the results.
+WEBKNOSSOS does not allow you to run custom machine learning models on your data yet. As a workaround you can download your annotations from WEBKNOSSOS - either manually or scripted [through our Python library](https://docs.webknossos.org/webknossos-py) - and do your ML analysis offline and use WEBKNOSSOS to inspect the results.
 
 ## How can I use my dataset with WEBKNOSSOS?
 
 WEBKNOSSOS supports [WKW (optimized), KNOSSOS cubes](./datasets.md), and image stacks (converted on upload). You can also connect to Neuroglancer Precomputed, N5, and Zarr datasets hosted in the cloud (Google Cloud Storage, AWS S3).
 
-Smaller datasets (up to multiple GB) can be uploaded directly through the web interface.
For larger datasets, we recommend converting them to the standard WKW format using the [WEBKNOSSOS Cuber](https://docs.webknossos.org/wkcuber/index.html) CLI tool and uploading it via the [WEBKNOSSOS Python package](https://docs.webknossos.org/webknossos-py/examples/upload_image_data.html).
+Smaller datasets (up to multiple GB) can be uploaded directly through the web interface. For larger datasets, we recommend converting them to the standard WKW format using the [WEBKNOSSOS CLI](https://docs.webknossos.org/cli) tool and uploading them via the [WEBKNOSSOS Python package](https://docs.webknossos.org/webknossos-py/examples/upload_image_data.html).
 
 ## Can I host the WEBKNOSSOS data in my own compute cluster (on-premise installation)?
@@ -53,7 +53,7 @@ For example, the WEBKNOSSOS main component could be hosted on commercial cloud i
 
 ## Can I further analyze my annotations outside of WEBKNOSSOS with Python/MATLAB?
 
 Yes, you can. WEBKNOSSOS allows the download and export of skeleton annotations as NML files and segmentations/volume data as binary/wkw files.
-See the [Tooling](./tooling.md) section for a recommendation of Python/MATLAB libraries to work with the WEBKNOSSOS standard formats.
+Use our free [Python library](https://docs.webknossos.org/webknossos-py) to work with the WEBKNOSSOS standard formats.
 
 ## Newly registered users don't show up
diff --git a/docs/installation.md b/docs/installation.md
index 0725ccf710d..ec010bb8d40 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -73,7 +73,7 @@ For small datasets (max. 1GB), you can use the upload functionality provided in
 For larger datasets, we recommend the file system upload.
 Read more about the import functionality in the [Datasets guide](./datasets.md).
 
-If you do not have a compatible dataset available, you can convert your own data using [the WEBKNOSSOS cuber tool](./tooling.md#webknossos-cuber) or use one of the [sample datasets](./datasets.md#sample-datasets) for testing purposes.
+If you do not have a compatible dataset available, you can convert your own data using [the WEBKNOSSOS CLI](https://docs.webknossos.org/cli/) or use one of the [sample datasets](./datasets.md#sample-datasets) for testing purposes.
 
 By default, datasets are visible to all users in your organization.
 However, WEBKNOSSOS includes fine-grained permissions to assign datasets to groups of users.
diff --git a/docs/n5.md b/docs/n5.md
index 5b3f0fec0f4..2c10c805e8b 100644
--- a/docs/n5.md
+++ b/docs/n5.md
@@ -36,6 +36,6 @@ For details see the [N5 spec](https://github.com/saalfeldlab/n5).
 
 ## Performance Considerations
-- TODO
-- Sharding
-- Chunk sizing
\ No newline at end of file
+To get the best streaming performance for N5 datasets, consider the following settings.
+
+- Use chunk sizes of 32 - 128 voxels^3
diff --git a/docs/neuroglancer_precomputed.md b/docs/neuroglancer_precomputed.md
index bc5dc84b7cd..86558c865ea 100644
--- a/docs/neuroglancer_precomputed.md
+++ b/docs/neuroglancer_precomputed.md
@@ -37,3 +37,9 @@ my_dataset.precomputed # One root folder per dataset
 ```
 
 For details see the [Neuroglancer spec](https://github.com/google/neuroglancer/tree/master/src/neuroglancer/datasource/precomputed).
+
+## Performance Considerations
+To get the best streaming performance for Neuroglancer Precomputed datasets, consider the following settings.
+
+- Use chunk sizes of 32 - 128 voxels^3
+- Enable sharding
\ No newline at end of file
diff --git a/docs/pen_tablets.md b/docs/pen_tablets.md
index 900fa02fc4e..ba311994e6a 100644
--- a/docs/pen_tablets.md
+++ b/docs/pen_tablets.md
@@ -3,8 +3,9 @@
 Beyond the mouse and keyboard WEBKNOSSOS is great for annotating datasets with alternative input devices such as pens, styluses, or the Apple pencil. These input devices can significantly boost your annotation speed and improve the detail of your annotations.
 
 ## Using Wacom/Pen tablets
-Using pen tablet can signifincatly boost your annotation productivity, especially if you set it up correctly with WEBKNOSSOS.
+Using a pen tablet can significantly boost your annotation productivity, especially if you set it up correctly with WEBKNOSSOS.
 
+![youtube-video](https://www.youtube.com/embed/xk0gqsVx494)
 To streamline your workflow, program your tablet and pen buttons to match the WEBKNOSSOS shortcuts. By doing so, you can focus on your pen without the need of a mouse or keyboard. Here is an example configuration using a Wacom tablet and the Wacom driver software:
@@ -21,46 +22,34 @@ Pen buttons:
 
 You can find the full list for keyboard shortcuts in the [documentation](./keyboard_shortcuts.md).
 
-// Alt Programming buttons to match the WEBKNOSSOS shortcuts
 
-### Annotating with Pens
+### Annotating with Wacom Pens
 Now, let’s dive into the annotation process! In this example, we begin by quick-selecting a cell.
 
-
-// Alt Navigating the dataset and segmenting a cell with the quick-select tool
+![youtube-video](https://www.youtube.com/embed/xk0gqsVx494?start=46)
 
 If the annotation isn’t precise enough, we can easily switch to the eraser tool (middle left button) and erase a corner. Selecting the brush tool is as simple as pressing the left button, allowing us to add small surfaces to the annotation.
-
-
-// Alt Improving the annotation precision with the eraser and brush tools
 When ready, pressing the right button creates a new segment, and we can repeat the process for other cells.
-
-
-// Alt Creating new segments and annotating them with the quick-select tool
 For increased flexibility, you can additionally use your laptop’s keyboard shortcuts (e.g. “I” and “O” for zooming in and out).
 
 ## iPad and Apple Pencil
 Accessing your WEBKNOSSOS data from any internet-connected device with a browser, including iPads and Android tablets, allows you to conveniently showcase or explore large datasets anywhere. Whether you want to share your findings with scientists post-conference or analyze data during your train commute, all you need is a browser. No installation of any additional software is required. The user-friendly interface supports intuitive finger gestures and complementary buttons for smooth navigation.
 
-In a brief workflow example, we demonstrate the ease of data visualization on an iPad. Using simple finger gestures, we navigate along the x and y axes and perform zoom operations with intuitive two-finger gestures.
-
-// Alt Moving through you data and zooming in and out.
-
-Additional functions, such as z-axis movement, toggling the right sidebar, and activating the four viewports, are easily accessible with the touch of a button. Selecting segments and loading their meshes is as simple as tapping the corresponding locations on the screen.
-// Alt Toggling the right sidebar, turning on the 4 viewports, loading a mesh.
+![youtube-video](https://www.youtube.com/embed/HDt_H7W4-qc)
+In a brief workflow example, we demonstrate the ease of data visualization on an iPad.
+Using simple finger gestures, we navigate along the x and y axes and perform zoom operations with intuitive two-finger gestures.
+Additional functions, such as z-axis movement, toggling the right sidebar, and activating the four viewports, are easily accessible with the touch of a button.
+Selecting segments and loading their meshes is as simple as tapping the corresponding locations on the screen.
 Finally, we maximize the 3D viewport and effortlessly explore the mesh geometry by swiping with three fingers.
-// Alt Exploring the 3D mesh.
 
 ### Intuitive Annotation with your iPad
 Take advantage of the iPad and Apple Pencil for seamless and precise annotation. Enhance your manual annotations with direct drawing on the screen, offering increased accuracy and efficiency compared to traditional mouse-based annotation.
-In this example, we demonstrate the annotation workflow using the iPad and Apple Pencil. Starting with the quick-select tool, we segment a cell, refining its edges with the pixel-perfect precision of the lasso tool.
-// Alt Quick-selecting a cell and refining the edges with the lasso tool.
+![youtube-video](https://www.youtube.com/embed/HDt_H7W4-qc?start=47)
+In this example, we demonstrate the annotation workflow using the iPad and Apple Pencil.
+Starting with the quick-select tool, we segment a cell, refining its edges with the pixel-perfect precision of the lasso tool.
 Next, we create a new segment and annotate it from scratch using the lasso tool.
-// Alt Annotating a new segment from scratch with the lasso and then the brush tool.
-
 Finally, we create an additional segment and use the brush tool to annotate.
-// Alt Annotating a segment with the brush.
diff --git a/docs/proof_reading.md b/docs/proof_reading.md
index 6494b8f22ac..a30d9c80c65 100644
--- a/docs/proof_reading.md
+++ b/docs/proof_reading.md
@@ -68,4 +68,4 @@ In our workflows, we make heavy use of skeleton annotations for proofreading and
 - we encode additional metadata for any given segment in the skeleton tree names, groups, and comments, i.e. the biological cell type for a segment
 - we manually annotate classification mistakes or interesting features in the data and download/bring them back into our Python workflows for correction and further processing
 
-This system is very flexible, though requires a little bit of creativity and coding skills with the [WEBKNOSSOS Python library](./tooling.md#webknossos-python-library).
+This system is very flexible, though it requires a little bit of creativity and coding skills with the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py).
diff --git a/docs/skeleton_annotation.md b/docs/skeleton_annotation.md
index a8eec06b268..748ebc50572 100644
--- a/docs/skeleton_annotation.md
+++ b/docs/skeleton_annotation.md
@@ -217,7 +217,7 @@ Importing a skeleton annotation can be achieved using one of two ways:
 
 ![Skeletons can be imported by drag and drop in the annotation view or from the dashboard](images/tracing_ui_import.jpeg)
 
-If you are looking to import/export annotations through Python code, check out our [WEBKNOSSOS Python library](./tooling.md).
+If you are looking to import/export annotations through Python code, check out our [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py).
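+
+As a rough sketch of what that can look like (the tree name, node positions, and file paths below are placeholders; see the library documentation for the exact API), a small skeleton can be built and saved as an NML file in a few lines:
+
+```python
+import webknossos as wk
+
+# Dataset name and voxel size are placeholder values for this sketch.
+skeleton = wk.Skeleton(dataset_name="my_dataset", voxel_size=(11, 11, 25))
+
+# Add a tree with two connected nodes.
+tree = skeleton.add_tree("my_dendrite")
+node_a = tree.add_node(position=(570, 404, 120))
+node_b = tree.add_node(position=(590, 410, 120))
+tree.add_edge(node_a, node_b)
+
+# Save as NML; the file can then be imported via drag and drop or from the dashboard.
+skeleton.save("my_skeleton.nml")
+```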
### Merging Skeleton Annotations
diff --git a/docs/volume_annotation.md b/docs/volume_annotation.md
index be4b294bbec..5700502c193 100644
--- a/docs/volume_annotation.md
+++ b/docs/volume_annotation.md
@@ -96,6 +96,8 @@ Similar to the above interpolation feature, you can also extrude the currently a
 This means that you can label a segment on one slice (e.g., z=10), move a few slices forward (e.g., z=12) and copy the segment to the relevant slices (e.g., z=11, z=12). In contrast to interpolation mode, WEBKNOSSOS will not adapt the shape/boundary of the extruded segments to fit between the source and target segment. Instead, the extruded volume will retain the shape of the source segment and extend that along the z-axis.
 The extrusion can be triggered by using the extrude button in the toolbar (also available as a dropdown next to the interpolation/extrusion button).
 
+![youtube-video](https://www.youtube.com/embed/GucpEA6Wev8)
+
 ### Volume Flood Fills
 
 WEBKNOSSOS supports volumetric flood fills (3D) to relabel a segment with a new ID. Instead of having to relabel the segment slice-by-slice, WEBKNOSSOS can do this for you. This operation allows you to fix both split and merge errors:
@@ -115,7 +117,7 @@ There are several ways to access this information:
 
 In cases where you only wish to measure a simple distance or surface area, use the [`Measurement Tool`](./tracing_ui.md#the-toolbar) instead.
 
-// TODO image
+![youtube-video](https://www.youtube.com/embed/PsvC4vNyxJM)
 
 ### Mappings / On-Demand Agglomeration
diff --git a/docs/zarr.md b/docs/zarr.md
index 8834ecc9e2c..a51843d9845 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -74,9 +74,10 @@ After installing, you can convert image stacks to Zarr datasets with the followi
 
 ```shell
 pip install webknossos
 
-webknossos convert-zarr \
+webknossos convert \
   --voxel-size 11.24,11.24,25 \
   --name my_dataset \
+  --data-format zarr3 \
   data/source data/target
 ```
 
@@ -84,4 +85,45 @@ This snippet converts an image stack that is located in a directory called `data/s
 It will create a so-called `color` layer containing your raw greyscale/color image.
 The supplied `--voxel-size` is specified in nanometers.
 
-Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli).
\ No newline at end of file
+Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli).
+
+### Conversion with Python
+
+You can use the free [WEBKNOSSSO Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to Zarr or integrate the conversion as part of an existing workflow.
+
+```python
+from webknossos import Dataset
+from webknossos.dataset import COLOR_CATEGORY
+
+def main() -> None:
+    """Convert a folder of image files to a WEBKNOSSOS dataset."""
+    dataset = Dataset.from_images(
+        input_path=INPUT_DIR,
+        output_path=OUTPUT_DIR,
+        voxel_size=(11, 11, 11),
+        layer_category=COLOR_CATEGORY,
+        compress=True,
+        data_format = Dataformat.Zarr
+    )
+
+    print(f"Saved {dataset.name} at {dataset.path}.")
+
+    # dataset.upload()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+Read the full example in the WEBKNOSSOS [Python library documentation](https://docs.webknossos.org/webknossos-py/examples/create_dataset_from_images.html).
+
+## Time-Series and N-Dimensional Datasets
+
+WEBKNOSSOS also supports loading n-dimensional datasets, e.g. 4D = time series of 3D microscopy.
+This feature is currently only supported for Zarr datasets due to their flexible structure and design for n-dimensional data.
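+
+For a quick look at such an n-dimensional dataset outside of WEBKNOSSOS, the arrays can also be opened with the `zarr` Python package. This is only a sketch and assumes the common OME-Zarr layout where the multiscale levels are stored as arrays named `0`, `1`, … (the path is a placeholder):
+
+```python
+import zarr
+
+# Open the OME-Zarr group read-only.
+group = zarr.open_group("my_dataset.zarr", mode="r")
+mag0 = group["0"]  # finest multiscale level; axes are typically ordered like (t, c, z, y, x)
+print(mag0.shape, mag0.dtype)
+
+first_timepoint = mag0[0]  # slicing only reads the chunks that are actually required
+```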
+
+## Performance Considerations
+To get the best streaming performance for Zarr datasets, consider the following settings.
+
+- Use chunk sizes of 32 - 128 voxels^3
+- Enable sharding
\ No newline at end of file
From f6d2c4a373eef7282ca775dd4d44817a722b2ec1 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 10:27:24 +0100
Subject: [PATCH 07/22] Update docs/dashboard.md

Co-authored-by: Norman Rzepka
---
 docs/dashboard.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/dashboard.md b/docs/dashboard.md
index 1a5d101814e..5b773367fbf 100644
--- a/docs/dashboard.md
+++ b/docs/dashboard.md
@@ -12,7 +12,7 @@ Learn more about managing datasets in the [Datasets guide](./datasets.md).
 
 What you can do on this screen depends on your user role.
 If you are a regular user, you can only create or resume annotations and work on tasks.
-If you are an [Admin or a Team Manager](./users.md#access-rights-roles), you can also perform administrative actions, manage access rights, and change dataset settings.
+If you are [an Admin, a Dataset Manager or a Team Manager](./users.md#access-rights-roles), you can also perform administrative actions, manage access rights, and change dataset settings.
 
 Read more about the organization of datasets [here](./datasets.md#dataset-organization).
From c30f801919abdb00aa887ceebad6f90320ff5097 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 10:27:42 +0100
Subject: [PATCH 08/22] Update docs/datasets.md

Co-authored-by: Norman Rzepka
---
 docs/datasets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/datasets.md b/docs/datasets.md
index 9351249078f..130d6102ffd 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -2,7 +2,7 @@
 
 Working with 3D (and 2D) image datasets is at the heart of WEBKNOSSOS.
 
-- [Import datasets](#importing-datasets) by uploading them directly via the web UI, streaming them from a remove server/the cloud, or by using the file system.
+- [Import datasets](#importing-datasets) by uploading them directly via the web UI, streaming them from a remote server/the cloud, or by using the file system.
 - [Configure the dataset](#configuring-datasets) defaults and permissions to your specification.
 - [Share your datasets](./sharing.md#dataset-sharing) with the public or with selected users.
From 2386498be8e8a79fa364ab22c1a9e5a5f717f612 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 10:27:59 +0100
Subject: [PATCH 09/22] Update docs/datasets.md

Co-authored-by: Norman Rzepka
---
 docs/datasets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/datasets.md b/docs/datasets.md
index 130d6102ffd..7ff6810c5a9 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -28,7 +28,7 @@ If your data is already in WKW you can simply drag your folder (or zip archive o
 
 If your data is not in WKW, you can either:
 
-- upload the data in a supported file format and WEBKNOSSOS will automatically convert it ([webknossos.org](https://webknossos.org) only).
+- upload the data in a supported file format and WEBKNOSSOS will automatically import or convert it ([webknossos.org](https://webknossos.org) only).
 Depending on the size of the dataset, the conversion will take some time.
 You can check the progress at the [`Jobs`](./jobs.md) page or the "Datasets" tab in the dashboard.
 WEBKNOSSOS will also send you an email notification.
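+Uploads can also be scripted with the [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py). A minimal sketch, assuming an existing dataset folder on disk and a valid API token (both placeholders):
+
+```python
+import webknossos as wk
+
+with wk.webknossos_context(token="YOUR_API_TOKEN"):
+    dataset = wk.Dataset.open("path/to/my_dataset")  # e.g. a WKW or Zarr dataset
+    dataset.upload()  # the dataset subsequently shows up in your dashboard
+```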
From db43482fd9b27cc64283eb3c1f7e5a837965e7d Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 10:28:12 +0100
Subject: [PATCH 10/22] Update docs/datasets.md

Co-authored-by: Norman Rzepka
---
 docs/datasets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/datasets.md b/docs/datasets.md
index 7ff6810c5a9..661f693deaf 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -23,7 +23,7 @@ The easiest way to get started with working on your datasets is through the WEBK
 
 4. Click the *Upload* button
 
-WEBKNOSSOS uses the [WKW-format](./wkw.md) internally to display your data.
+Internally, WEBKNOSSOS uses the [WKW-format](./wkw.md) by default to display your data.
 If your data is already in WKW you can simply drag your folder (or zip archive of that folder) into the upload view.
From d4fabf6045010008b6500df87027c7605cf859d9 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:51:56 +0100
Subject: [PATCH 11/22] Update docs/datasets.md

Co-authored-by: Norman Rzepka
---
 docs/datasets.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/datasets.md b/docs/datasets.md
index 661f693deaf..b71b6605892 100644
--- a/docs/datasets.md
+++ b/docs/datasets.md
@@ -85,7 +85,7 @@ On self-hosted instances, large datasets can be efficiently imported by placing
 
 * Place the dataset at `/binaryData/<organization_name>/<dataset_name>/`. For example `/opt/webknossos/binaryData/Springfield_University/great_dataset`.
 * Go to the [dataset view on the dashboard](./dashboard.md)
-* Use the `Scan disk for new dataset` from the dropdown menu next to the `Refresh`` button on the dashboard or wait for WEBKNOSSOS to detect the dataset (up to 10min)
+* Use the `Scan disk for new dataset` from the dropdown menu next to the `Refresh` button on the dashboard or wait for WEBKNOSSOS to detect the dataset (up to 10min)
 
 Typically, WEBKNOSSOS can infer all the required metadata for a dataset automatically and import datasets automatically on refresh.
 In some cases, you will need to manually import a dataset and provide more information:
From ad05183bb6c0fdd8a1f2b44586aa11ef0dc47568 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:52:08 +0100
Subject: [PATCH 12/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index a51843d9845..4bc3f4ea207 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -77,7 +77,7 @@ pip install webknossos
 
 webknossos convert \
   --voxel-size 11.24,11.24,25 \
   --name my_dataset \
-  --data-format zarr3 \
+  --data-format zarr \
   data/source data/target
 ```
From ec5381c793a753bcb7fcca77f20cbb39311de9eb Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:52:16 +0100
Subject: [PATCH 13/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index 4bc3f4ea207..a2cd69da392 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -89,7 +89,7 @@ Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli)
 
 ### Conversion with Python
 
-You can use the free [WEBKNOSSSO Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to Zarr or integrate the conversion as part of an existing workflow.
+You can use the free [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to Zarr or integrate the conversion as part of an existing workflow.
From 988caa093dbe29ea0f47608ad774043387cd3809 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:52:30 +0100
Subject: [PATCH 14/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index a2cd69da392..d7aef9ce876 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -92,23 +92,23 @@ Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli)
 You can use the free [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to Zarr or integrate the conversion as part of an existing workflow.
 
 ```python
-from webknossos import Dataset
-from webknossos.dataset import COLOR_CATEGORY
+import webknossos as wk
 
 def main() -> None:
     """Convert a folder of image files to a WEBKNOSSOS dataset."""
-    dataset = Dataset.from_images(
+    dataset = wk.Dataset.from_images(
         input_path=INPUT_DIR,
         output_path=OUTPUT_DIR,
         voxel_size=(11, 11, 11),
-        layer_category=COLOR_CATEGORY,
+        layer_category=wk.COLOR_CATEGORY,
         compress=True,
-        data_format = Dataformat.Zarr
+        data_format=wk.DataFormat.Zarr
     )
 
     print(f"Saved {dataset.name} at {dataset.path}.")
 
-    # dataset.upload()
+    with wk.webknossos_context(token="..."):
+        dataset.upload()
 
 
 if __name__ == "__main__":
     main()
From ce2be7957ca2867a74f22f696dbf07ce8d90954c Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:54:42 +0100
Subject: [PATCH 15/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index d7aef9ce876..8dac8e69972 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -16,7 +16,7 @@ You can try the OME-Zarr support with the following datasets. Load them in WEBKN
 
 ## Zarr Folder Structure
-WEBKNOSSOS expects the following file structure for Zarr datasets:
+WEBKNOSSOS expects the following file structure for OME-Zarr (v0.4) datasets:
From 988caa093dbe29ea0f47608ad774043387cd3809 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:54:59 +0100
Subject: [PATCH 16/22] Update docs/getting_started.md

Co-authored-by: Norman Rzepka
---
 docs/getting_started.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/getting_started.md b/docs/getting_started.md
index 7fd3648989f..5bca5c1bafb 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -13,7 +13,7 @@ Since it is a web app, you can easily [collaborate](./sharing.md), [crowdsource]
 
 To get started with WEBKNOSSOS, sign up for a free account on [webknossos.org](https://webknossos.org). Upload your own datasets or explore one of the many community datasets.
-You get 10GB of storage for private datasets with the free tier.
+You get 50GB of storage for private datasets with the free tier.
 For more data storage, check out the [pricing page for paid plans](https://webknossos.org/pricing) that covers storage costs and provides support services such as data format conversions.
Please [reach out to us](mailto:sales@webknossos.org) for local, on-premise hosting at your institute or custom solutions.
From d9228bd07dae4eae342f4b303add019c74d7b75d Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:55:12 +0100
Subject: [PATCH 17/22] Update docs/image_stacks.md

Co-authored-by: Norman Rzepka
---
 docs/image_stacks.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/image_stacks.md b/docs/image_stacks.md
index 1e57f8b9b10..b312d1daefd 100644
--- a/docs/image_stacks.md
+++ b/docs/image_stacks.md
@@ -3,7 +3,7 @@
 WEBKNOSSOS works with a wide range of modern bio-imaging formats and image stacks:
 
 - [Image file sequence](#Single-Layer-Image-File-Sequence) in one folder (TIFF, JPEG, PNG, DM3, DM4 etc)
-- [Multi Layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers
+- [Multi layer file sequence](#Multi-Layer-Image-File-Sequence) containing multiple folders with image sequences that are interpreted as separate layers
 - [Single-file images](#single-file-images) (OME-TIFF, TIFF, PNG, czi, raw, etc)
 
 Image stacks need to be converted to [WKW](./wkw.md) for WEBKNOSSOS. This happens automatically when using the web upload on [webknossos.org](https://webknossos.org) or can be done manually (see below).
From 59dc80038b8fdf40edf625709928f30ef9aed89a Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:55:25 +0100
Subject: [PATCH 18/22] Update docs/image_stacks.md

Co-authored-by: Norman Rzepka
---
 docs/image_stacks.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/image_stacks.md b/docs/image_stacks.md
index b312d1daefd..a43bb76222c 100644
--- a/docs/image_stacks.md
+++ b/docs/image_stacks.md
@@ -82,22 +82,22 @@ Read the full documentation at [WEBKNOSSOS CLI](https://docs.webknossos.org/cli)
 
 You can use the free [WEBKNOSSOS Python library](https://docs.webknossos.org/webknossos-py) to convert image stacks to WKW or integrate the conversion as part of an existing workflow.
 
 ```python
-from webknossos import Dataset
-from webknossos.dataset import COLOR_CATEGORY
+import webknossos as wk
 
 def main() -> None:
     """Convert a folder of image files to a WEBKNOSSOS dataset."""
-    dataset = Dataset.from_images(
+    dataset = wk.Dataset.from_images(
         input_path=INPUT_DIR,
         output_path=OUTPUT_DIR,
         voxel_size=(11, 11, 11),
-        layer_category=COLOR_CATEGORY,
+        layer_category=wk.COLOR_CATEGORY,
         compress=True,
     )
 
     print(f"Saved {dataset.name} at {dataset.path}.")
 
-    # dataset.upload()
+    with wk.webknossos_context(token="..."):
+        dataset.upload()
 
 
 if __name__ == "__main__":
From 59a4b0319e96153a7bf43d3675f1db971ff3f3a5 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:55:54 +0100
Subject: [PATCH 19/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index 8dac8e69972..5e6fe86dbb4 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -1,6 +1,6 @@
 # Zarr & NGFF
 
-WEBKNOSSOS works great with [OME Zarr datasets](https://ngff.openmicroscopy.org/latest/index.html), sometimes called next-generation file format (NGFF).
+WEBKNOSSOS works great with [OME-Zarr datasets](https://ngff.openmicroscopy.org/latest/index.html), sometimes called next-generation file format (NGFF).
 
 We strongly believe in this community-driven, cloud-native data format for n-dimensional datasets.
Zarr is a first-class citizen in WEBKNOSSOS and will likely replace [WKW](./wkw.md) long term.
From fd661ad0443c8b63c3d1c205dc1f79b429abead7 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 14:56:26 +0100
Subject: [PATCH 20/22] Update docs/zarr.md

Co-authored-by: Norman Rzepka
---
 docs/zarr.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/zarr.md b/docs/zarr.md
index 5e6fe86dbb4..7a11b6858d1 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -1,4 +1,4 @@
-# Zarr & NGFF
+# OME-Zarr & NGFF
 
 WEBKNOSSOS works great with [OME-Zarr datasets](https://ngff.openmicroscopy.org/latest/index.html), sometimes called next-generation file format (NGFF).
From 7972fe32133f8478610295a78dd403d6bdd90bb7 Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Mon, 20 Nov 2023 15:22:24 +0100
Subject: [PATCH 21/22] added PR feedback

---
 docs/animations.md      | 4 +++-
 docs/n5.md              | 5 ++---
 docs/today_i_learned.md | 2 +-
 docs/zarr.md            | 6 +++---
 4 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/docs/animations.md b/docs/animations.md
index 542ceb29897..3180e67f3b6 100644
--- a/docs/animations.md
+++ b/docs/animations.md
@@ -1,6 +1,6 @@
 # Animations
 
-A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye-catching animation of your datasets as a video clip. You can use these short movies as part of a presentation, website, for social media or to promote a publication.
+A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye-catching animations of your datasets as a video clip. You can use these short movies as part of a presentation, website, for social media or to promote a publication.
 
 // animation video
 
 ## Creating an Animation
@@ -17,3 +17,5 @@ Creating an animation is easy:
 
 Either periodically check the [background jobs page](./jobs.md) or wait for an email confirmation to download the animation video file. Creating an animation may take a while, depending on the selected bounding box size and the number of included 3D meshes.
+
+WEBKNOSSOS Team plans and above have access to high definition (HD) resolution videos and more options.
diff --git a/docs/n5.md b/docs/n5.md
index 2c10c805e8b..79f674019db 100644
--- a/docs/n5.md
+++ b/docs/n5.md
@@ -3,8 +3,7 @@
 WEBKNOSSOS can read [N5 datasets](https://github.com/saalfeldlab/n5).
 
 !!!info
-    N5 datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Uploading the through the web uploader is not supported.
-
+    N5 datasets can only be opened as [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud) at the moment. Provide a URI pointing directly to an N5 group. For several layers, import the first N5 group and then use the UI to add more URIs/groups. Uploading them through the web uploader is not supported.
 
 ## Examples
 
 You can try the N5 support with the following datasets. Load them in WEBKNOSSOS as a [remote dataset](./datasets.md#streaming-from-remote-servers-and-the-cloud):
@@ -22,7 +21,7 @@ WEBKNOSSOS expects the following file structure for N5 datasets:
 
 ```
 my_dataset.n5 # One root folder per dataset
 ├─ attributes.json # Dataset metadata
-└─ my_EM # One N5 group per data layer
+└─ my_EM # One N5 group per data layer. In WK directly link to an N5 group.
   ├─ attributes.json
  ├─ s0 # Chunks in a directory hierarchy that enumerates their positive integer position in the chunk grid. (e.g. 0/4/1/7 for chunk grid position p=(0, 4, 1, 7)).
│  ├─ 0
   │  │  ├─
   │  ├─ ...
   │  └─ n
   ...
   └─ sn
 ```
 
 For details see the [N5 spec](https://github.com/saalfeldlab/n5).
diff --git a/docs/today_i_learned.md b/docs/today_i_learned.md
index 7d16bb71dd7..4441e03fba8 100644
--- a/docs/today_i_learned.md
+++ b/docs/today_i_learned.md
@@ -2,7 +2,7 @@
 We regularly publish tips and tricks videos for beginners and pros on YouTube to share new features, highlight efficient workflows, and show you hidden gems.
 
-Subscribe to our YouTube channel [@webknossos](https://www.youtube.com/@webknossos) to stay up-to-date.
+Subscribe to our YouTube channel [@webknossos](https://www.youtube.com/@webknossos) or [@webknossos](https://twitter.com/webknossos) on Twitter to stay up-to-date.
 
 ![youtube-video](https://www.youtube.com/playlist?list=PLpizOgyiA4kE6pZRW1u0l49Pmppp-S7V0)
diff --git a/docs/zarr.md b/docs/zarr.md
index 7a11b6858d1..4d321b16d0c 100644
--- a/docs/zarr.md
+++ b/docs/zarr.md
@@ -4,7 +4,7 @@ WEBKNOSSOS works great with [OME-Zarr datasets](https://ngff.openmicroscopy.org/
 
 We strongly believe in this community-driven, cloud-native data format for n-dimensional datasets. Zarr is a first-class citizen in WEBKNOSSOS and will likely replace [WKW](./wkw.md) long term.
 
-Zarr datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud).
+Zarr datasets can both be uploaded to WEBKNOSSOS through the [web uploader](./datasets.md#uploading-through-the-web-browser) or [streamed from a remote server or the cloud](./datasets.md#streaming-from-remote-servers-and-the-cloud). For several layers, import the first Zarr group and then use the UI to add more URIs/groups.
 
 ## Examples
@@ -19,7 +19,7 @@ WEBKNOSSOS expects the following file structure for OME-Zarr (v0.4) datasets:
 
 ```
-. # Root folder, potentially in S3,
+. # Root folder,
 │ # with a flat list of images by image ID.
 │
 └── 456.zarr # Another image (id=456) converted to Zarr.
@@ -126,4 +126,4 @@ This feature is currently only supported for Zarr datasets due to their flexible
 To get the best streaming performance for Zarr datasets, consider the following settings.
 
 - Use chunk sizes of 32 - 128 voxels^3
-- Enable sharding
\ No newline at end of file
+- Enable sharding (only available in Zarr 3+)
\ No newline at end of file
From 53ab3840fd5ca070713d2b755b26bdf1cc2900bd Mon Sep 17 00:00:00 2001
From: Tom Herold
Date: Wed, 22 Nov 2023 13:45:04 +0100
Subject: [PATCH 22/22] added example video for animations

---
 docs/animations.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/animations.md b/docs/animations.md
index 3180e67f3b6..32294573c09 100644
--- a/docs/animations.md
+++ b/docs/animations.md
@@ -2,7 +2,7 @@
 A picture is worth a thousand words. In this spirit, you can use WEBKNOSSOS to create eye-catching animations of your datasets as a video clip. You can use these short movies as part of a presentation, website, for social media or to promote a publication.
 
-// animation video
+![type:video](https://static.webknossos.org/assets/docs/webknossos_animation_example.mp4){: autoplay loop muted}
 
 ## Creating an Animation