Support multi-modality datasets #6085

Closed
2 of 4 tasks
philippotto opened this issue Feb 28, 2022 · 2 comments · Fixed by #6748

Comments

philippotto commented Feb 28, 2022

Detailed Description

In webKnossos, all layers of a dataset must have the same voxel resolution (aka "scale"). When imaging a dataset with different modalities (e.g., EM and LM), this constraint is usually not met. Supporting these different modalities in one dataset would be a good feature to have.

One idea for the implementation would be to have one scene transform per modality (or per layer). Currently, there is only one global scene scale which would need to be split up.
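
A minimal sketch of what per-layer transforms could look like; the types and names here (Vector3, LayerTransform, Layer, layerVoxelToWorld) are hypothetical and not existing webKnossos code:

```typescript
type Vector3 = [number, number, number];

// Today (roughly): one scale for the whole dataset.
type DatasetScale = Vector3; // nm per voxel, shared by all layers

// Idea: each layer carries its own voxel scale plus an optional affine transform
// into a common physical/world space.
interface LayerTransform {
  scale: Vector3; // nm per voxel for this layer (e.g. EM vs. LM)
  affine?: number[]; // optional 3x4 row-major matrix mapping layer space to world space
}

interface Layer {
  name: string;
  transform: LayerTransform;
}

// Convert a layer-local voxel position into world (physical) coordinates.
function layerVoxelToWorld(pos: Vector3, t: LayerTransform): Vector3 {
  const scaled: Vector3 = [pos[0] * t.scale[0], pos[1] * t.scale[1], pos[2] * t.scale[2]];
  if (!t.affine) return scaled;
  const m = t.affine;
  return [
    m[0] * scaled[0] + m[1] * scaled[1] + m[2] * scaled[2] + m[3],
    m[4] * scaled[0] + m[5] * scaled[1] + m[6] * scaled[2] + m[7],
    m[8] * scaled[0] + m[9] * scaled[1] + m[10] * scaled[2] + m[11],
  ];
}
```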

Other thoughts:

  • a first version might be restricted so that only one modality is visible at a given time (and switching between the modalities switches the global scale)
  • some sort of semantic zoom would probably be a nice feature: when zooming out, one sees LM data, and when zooming in, modalities with higher resolutions appear (e.g., CT and then EM); see the sketch after this list
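
A rough sketch of how such a semantic zoom could pick visible layers from the current zoom level; the thresholds, types, and names (ModalityLayer, visibleLayers) are made up for illustration:

```typescript
interface ModalityLayer {
  name: string;
  modality: "LM" | "CT" | "EM";
  // finest voxel size in nm; coarser data is useful when zoomed out
  finestVoxelSizeNm: number;
}

// Show layers whose resolution is appropriate for the current screen-space
// voxel size (nm per screen pixel): coarse LM when zoomed out, EM when zoomed in.
function visibleLayers(layers: ModalityLayer[], nmPerScreenPixel: number): ModalityLayer[] {
  return layers.filter(
    (layer) =>
      layer.finestVoxelSizeNm <= nmPerScreenPixel * 32 && // not absurdly finer than needed
      layer.finestVoxelSizeNm >= nmPerScreenPixel / 32    // not much coarser than a pixel
  );
}

// Example: zoomed out (1000 nm/px) keeps LM and CT; zoomed in (4 nm/px) keeps CT and EM.
const layers: ModalityLayer[] = [
  { name: "lm", modality: "LM", finestVoxelSizeNm: 500 },
  { name: "ct", modality: "CT", finestVoxelSizeNm: 50 },
  { name: "em", modality: "EM", finestVoxelSizeNm: 4 },
];
console.log(visibleLayers(layers, 1000).map((l) => l.name)); // ["lm", "ct"]
console.log(visibleLayers(layers, 4).map((l) => l.name));    // ["ct", "em"]
```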

Not sure whether the backend would need to do something here, too?

Quoting from #4026 (comment):

For this, we would need to figure out how to unify the addressing for the different scales. Right now, addressing data positions is done via voxel coordinates (see user-exposed position UI). In the case of differing scales, this would not work anymore. Due to backwards compatibility (also for NMLs), I assume that we would need to come up with some (inelegant) workaround for this problem.

Todo

  • gather example data (i.e., at least two DS layers with different scales and potentially affine transforms)
  • do an implementation spike to get a better picture

Context

  • Specific to long-running jobs (set jobsEnabled=true in application.conf)
  • Specific to webKnossos.org (set isDemoInstance=true in application.conf)
@philippotto (Member Author)

Also see #4026 and #4027.

@philippotto (Member Author)

For this, we would need to figure out how to unify the addressing for the different scales. Right now, addressing data positions is done via voxel coordinates (see user-exposed position UI). In the case of differing scales, this would not work anymore. Due to backwards compatibility (also for NMLs), I assume that we would need to come up with some (inelegant) workaround for this problem.

One solution for this could be that there is a "main scale" to which all positions will refer.
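
A minimal sketch of that main-scale idea: user-facing positions (position UI, NMLs) stay in the voxel grid of one designated main layer and are converted to other layers' voxel grids on demand. The types and the helper mainToLayerVoxel are hypothetical, not existing webKnossos code:

```typescript
type Vector3 = [number, number, number];

interface ScaledLayer {
  name: string;
  scale: Vector3; // nm per voxel
}

// Convert a position expressed in main-scale voxels into voxels of another layer.
function mainToLayerVoxel(posInMainVoxels: Vector3, main: ScaledLayer, target: ScaledLayer): Vector3 {
  return [
    (posInMainVoxels[0] * main.scale[0]) / target.scale[0],
    (posInMainVoxels[1] * main.scale[1]) / target.scale[1],
    (posInMainVoxels[2] * main.scale[2]) / target.scale[2],
  ];
}

// Example: EM layer at 4 nm (anisotropic in z) is the main scale; LM layer at 500 nm.
const em: ScaledLayer = { name: "em", scale: [4, 4, 40] };
const lm: ScaledLayer = { name: "lm", scale: [500, 500, 500] };
console.log(mainToLayerVoxel([1000, 1000, 100], em, lm)); // [8, 8, 8]
```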
