
Re-write logic for selecting zoom level and support non-uniform buckets per dimension #3398

Merged
philippotto merged 17 commits into master from non-uniform-buckets-per-dim on Nov 12, 2018

Conversation

philippotto (Member) commented Oct 24, 2018

This PR implements two major changes:

  1. The appropriate zoom step was previously determined by a constant which described how large a viewport may become before switching to another zoom step (so, a viewport with an extent of 376 could be zoomed to an extent of approx. 600). However, this actually depends on the scale of the dataset. E.g., a dataset with a scale of 600, 600, 32 needs only very few buckets in the xy plane, which is why we can (and should) support zooming out even further.
    The new approach calculates how many buckets are actually needed for a zoom value of 1 and puts this in proportion to how many buckets can be sent to the GPU (see the capacity definition below).
    As a side effect, the new approach often selects a better zoom step, which is why the rendering quality is often better.

    The new approach determines the "zoom step difference" by finding a value for which the necessary bucket count is just below the minimum GPU requirement of 3 * 512 buckets (this corresponds to three 4096² textures, which is the minimum we require). Previously, we had a magic number of 17 buckets per dimension in place, which corresponded to approx. 1200 buckets.
    Determining the exact zoom step difference is done by searching for an appropriate value linearly. I didn't manage to get the calculations 100% accurate, and the linear approximation gives us a simple mechanism which solves the equation reliably (see the sketch after this list).
  2. For the above scenario with a scale of 600, 600, 32, we really need to send more buckets in the z dimension to the GPU than in the x and y dimensions. Previously, the code assumed a perfect cube of 17³ buckets, which wouldn't work out anymore (e.g., the dataset in question needs approx. 50 buckets in the z direction). This is why I changed the code to use a cuboid as the reference space (defined by the anchor point).
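
To make the mechanism above more concrete, here is a minimal TypeScript sketch of the idea. This is not the actual webKnossos code: only the 32-voxel bucket edge and the 3 * 512 capacity come from the description above; all identifiers, the +1 padding per dimension, and the search granularity are illustrative assumptions.

```typescript
// Minimal sketch of the two ideas above — NOT the actual webKnossos code.

type Vector3 = [number, number, number];

const BUCKET_EDGE = 32; // voxels per bucket edge
const BUCKET_CAPACITY = 3 * 512; // three 4096² textures, see above

// Buckets needed per dimension to cover a viewport at a given zoom value.
// A coarsely scaled dimension (e.g. 600 nm voxels) covers more physical
// space per voxel than a finely scaled one (e.g. 32 nm), so it needs fewer
// buckets — the anisotropy effect from point 2.
function requiredBucketsPerDim(
  viewportExtent: number, // viewport edge length, in voxels of the finest dimension
  datasetScale: Vector3, // e.g. [600, 600, 32]
  zoomValue: number,
): Vector3 {
  const finestScale = Math.min(...datasetScale);
  return datasetScale.map(scale => {
    const voxels = (viewportExtent * zoomValue * finestScale) / scale;
    return Math.ceil(voxels / BUCKET_EDGE) + 1; // +1 to cover partially visible buckets
  }) as Vector3;
}

// Linear search for the largest zoom value whose bucket demand still stays
// below the capacity — the "simple mechanism" mentioned in point 1.
function approximateMaxZoomValue(viewportExtent: number, datasetScale: Vector3): number {
  const STEP = 0.05; // granularity of the linear search (arbitrary)
  let zoomValue = 1;
  for (;;) {
    const [bx, by, bz] = requiredBucketsPerDim(viewportExtent, datasetScale, zoomValue + STEP);
    if (bx * by * bz > BUCKET_CAPACITY) {
      return zoomValue;
    }
    zoomValue += STEP;
  }
}
```

With the 600, 600, 32 example, z ends up needing far more buckets than x and y, which is exactly why the reference space has to be a cuboid instead of a uniform 17³ cube.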

Todo:

  • validate that all the buckets determined by the bucket_picker are successfully sent to the GPU. It might be the case that moving through z causes some flickering, since "prefetched buckets" are not properly sent to the GPU. I think the bucket calculation code for the zoom step is not in perfect sync with the bucket picker.

URL of deployed dev instance (used for testing):

Steps to test:

  • open a few datasets and test the usual rendering behavior
  • the dataset which causes this change is ground_truth_stack_22_10_18

Issues:


…ts per dimension so that datasets with low xy quality can be viewed appropriately
philippotto self-assigned this Oct 25, 2018
philippotto changed the title from "[WIP] Re-write logic for selecting zoom level and support non-uniform buckets per dimension" to "Re-write logic for selecting zoom level and support non-uniform buckets per dimension" Oct 25, 2018
Review comment on .circleci/config.yml (outdated, resolved)
philippotto (Member, Author) commented
@daniel-wer This PR is ready for review. I changed the logic again and updated the description accordingly. The entire calculation could be even more precise by taking into account how many buckets exactly fit onto the GPU. For example, if the client supports 8192² textures, we can fit 2048 buckets per layer. However, performance got quite poor for me when maxing out the specs of the GPU. So, I think the current way is better in terms of simplicity and performance.

To be even more pedantic, the number of buckets we can fit per layer would also depend on the type of the layer (e.g., grayscale data can fit four times as many buckets). As a consequence, we would need to use different zoom steps for different layers. Since this would add even more complexity, I didn't implement it.
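
For intuition, the capacity figures in this comment follow from simple texture arithmetic. A quick sketch (assuming, on my part, 32³ voxels per bucket and one texel per grayscale voxel versus four per color voxel — consistent with the 512 and 2048 figures in this thread):

```typescript
// Back-of-the-envelope bucket capacity per texture layer.
const VOXELS_PER_BUCKET = 32 ** 3; // 32768 voxels, assuming 32³ buckets

function bucketsPerLayer(textureWidth: number, texelsPerVoxel: number): number {
  // A square texture holds textureWidth² texels; each bucket occupies
  // VOXELS_PER_BUCKET * texelsPerVoxel of them.
  return Math.floor(textureWidth ** 2 / (VOXELS_PER_BUCKET * texelsPerVoxel));
}

bucketsPerLayer(4096, 1); // 512  → three such textures give the 3 * 512 capacity
bucketsPerLayer(8192, 1); // 2048 → the 8192² case mentioned above
bucketsPerLayer(4096, 4); // 128  → a 4-texel color layer fits a quarter as many
//                                   buckets, i.e. the factor of four for grayscale
```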

The current solution behaves very similarly to how the old implementation chose the zoom step. The screenshot tests passed and only showed a slight difference of a few pixels, which can't be seen with the naked eye. So, I think we should be fine.

Feel free to hit me up in person if you have questions about this PR. I guess it can get a bit tedious to discuss details in writing.

daniel-wer (Member) left a comment

Nice work thinking this through! We'll talk about my open questions in person and then this should be good to merge :)

philippotto (Member, Author) commented
@daniel-wer I applied all the changes we talked about :) I'd like to merge it Monday morning, so it would be cool if you could officially approve the PR on GitHub :)

daniel-wer (Member) left a comment

👍

philippotto merged commit 4ccf3e8 into master Nov 12, 2018
jfrohnhofen added a commit that referenced this pull request Nov 22, 2018
* master:
  Fix rgb support (#3455)
  Fix docker uid/gid + binaryData permissions. Persist postgres db (#3428)
  Script to merge volume tracing into on-disk segmentation (#3431)
  Hotfix for editing TaskTypes (#3451)
  fix keyboardjs module (#3450)
  Fix guessed dataset boundingbox for non-zero-aligned datasets (#3437)
  voxeliterator now checks if the passed map has elements (#3405)
  integrate .importjs (#3436)
  Re-write logic for selecting zoom level and support non-uniform buckets per dimension (#3398)
  fixing selecting bug and improving style of layout dropdown (#3443)
  refresh screenshots (#3445)
  Reduce the free space between viewports in tracing (#3333)
  Scala linter and formatter (#3357)
  ignore reported datasets of non-existent organization (#3438)
  Only provide shortcut for tree search and not for comment search (#3407)
  Update Datastore+Tracingstore Standalone Deployment Templates (#3424)
  In yarn refresh-schema, also invalidate Tables.scala (#3430)
  Remove BaseDirService that watched binaryData symlinks (#3416)
  Ensure that resolutions array is dense (#3406)
  Fix bucket-collection related rendering bug (#3409)
normanrz deleted the non-uniform-buckets-per-dim branch February 20, 2019 16:36
Successfully merging this pull request may close these issues.

Support anisotropy case with low res in xy but high res in z