Detailed Description
In webKnossos, all layers of a dataset must have the same voxel resolution (aka "scale"). When a dataset is imaged with different modalities (e.g., EM and LM), this constraint is usually not met. Supporting these different modalities in one dataset would be a valuable feature.
One idea for the implementation would be to have one scene transform per modality (or per layer). Currently, there is only one global scene scale which would need to be split up.
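As an illustration, a per-layer transform could look roughly like the following sketch. All names here (LayerRenderInfo, voxelScale, transform) are invented for this example and are not part of the current webKnossos data model:

```typescript
// Hypothetical types -- not the current webKnossos data model.
type Vector3 = [number, number, number];
type Matrix4 = number[]; // 16 entries, column-major

interface LayerRenderInfo {
  name: string;
  voxelScale: Vector3; // nm per voxel, per layer instead of per dataset
  transform?: Matrix4; // optional affine transform into a shared world space
}

// Map a voxel position of one layer into world space (nanometers).
function layerVoxelToWorld(layer: LayerRenderInfo, pos: Vector3): Vector3 {
  const scaled: Vector3 = [
    pos[0] * layer.voxelScale[0],
    pos[1] * layer.voxelScale[1],
    pos[2] * layer.voxelScale[2],
  ];
  if (layer.transform == null) return scaled;
  const m = layer.transform;
  return [
    m[0] * scaled[0] + m[4] * scaled[1] + m[8] * scaled[2] + m[12],
    m[1] * scaled[0] + m[5] * scaled[1] + m[9] * scaled[2] + m[13],
    m[2] * scaled[0] + m[6] * scaled[1] + m[10] * scaled[2] + m[14],
  ];
}
```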
Other thoughts:
- A first version might be restricted so that only one modality is visible at a given time (switching between modalities would then switch the global scale).
- Some sort of semantic zoom would probably be a nice feature: when zooming out, one sees LM data, and when zooming in, modalities with higher resolutions appear (e.g., CT and then EM); see the sketch after this list.
- It is not clear whether the backend would need to do something here, too.
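To make the semantic-zoom thought concrete, here is a minimal sketch that selects the visible modality from the current zoom value. The modality field and the zoom thresholds are made up for illustration:

```typescript
type Modality = "LM" | "CT" | "EM";

interface ModalityLayer {
  name: string;
  modality: Modality;
}

// Invented thresholds: show coarser modalities when zoomed out,
// finer ones when zoomed in (smaller zoom value = more zoomed in).
function visibleModality(zoomValue: number): Modality {
  if (zoomValue > 16) return "LM";
  if (zoomValue > 2) return "CT";
  return "EM";
}

function visibleLayers(layers: ModalityLayer[], zoomValue: number): ModalityLayer[] {
  const modality = visibleModality(zoomValue);
  return layers.filter((layer) => layer.modality === modality);
}
```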
Quoting from #4026 (comment):
For this, we would need to figure out how to unify the addressing of the different scales. Right now, data positions are addressed via voxel coordinates (see the user-exposed position UI). In the case of differing scales, this would no longer work. Due to backwards compatibility (also for NMLs), I assume we would need to come up with some (inelegant) workaround for this problem.
Todo
- Gather example data (i.e., at least two dataset layers with different scales and potentially affine transforms); a possible shape for such a dataset is sketched below.
- Do an implementation spike to get a better picture.
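For the example data, a two-layer dataset along these lines would exercise both differing scales and an affine transform. This is a hypothetical layer description, not the existing datasource-properties.json schema:

```typescript
// Sketch of a two-modality dataset description. Field names are
// hypothetical extensions, not the current datasource schema.
const exampleDataset = {
  name: "correlative-em-lm-sample",
  dataLayers: [
    {
      name: "em",
      category: "color",
      voxelScale: [4, 4, 40], // nm, high-resolution EM
    },
    {
      name: "lm",
      category: "color",
      voxelScale: [100, 100, 200], // nm, much coarser LM
      // Affine registration of the LM stack onto the EM volume
      // (identity here as a placeholder).
      transform: [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
    },
  ],
};
```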
Context
- Specific to long-running jobs (set jobsEnabled=true in application.conf)
- Specific to webKnossos.org (set isDemoInstance=true in application.conf)
One solution for the addressing problem quoted above could be to introduce a "main scale" to which all positions refer; a conversion sketch follows below.
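Assuming such a main scale, the UI and NMLs could keep addressing positions in main-scale voxel coordinates, and other layers would convert on the fly. A minimal sketch (names invented):

```typescript
type Vector3 = [number, number, number];

// Convert a position expressed in the dataset's main scale (e.g. the EM
// layer's voxel grid) into another layer's voxel coordinates.
function mainScaleToLayerVoxel(
  mainScale: Vector3, // nm per voxel of the main scale
  layerScale: Vector3, // nm per voxel of the target layer
  pos: Vector3, // position in main-scale voxel coordinates
): Vector3 {
  return [
    (pos[0] * mainScale[0]) / layerScale[0],
    (pos[1] * mainScale[1]) / layerScale[1],
    (pos[2] * mainScale[2]) / layerScale[2],
  ];
}

// Example: voxel (1000, 1000, 100) in a 4x4x40 nm main scale maps to
// voxel (40, 40, 20) in a 100x100x200 nm LM layer.
```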