
Segmentation support in MoBIE BVV integration #1186

Open · tischi opened this issue Nov 7, 2024 · 43 comments

@tischi (Contributor) commented Nov 7, 2024

Hi @ekatrukha,

Starting an issue about showing meshes in BVV from MoBIE.

The current mesh code is all here:

https://github.com/mobie/mobie-viewer-fiji/tree/main/src/main/java/org/embl/mobie/lib/volume

Here is where a specific mesh is added to the current volume viewer:

private synchronized void addSegmentMeshToUniverse( S segment, CustomTriangleMesh mesh )

I guess converting the current mesh into an imagej-mesh should not be a big deal. I can also help looking into this. Let me know if I should have a look!
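
For orientation, a minimal sketch of such a conversion, assuming the 3D Viewer's CustomTriangleMesh exposes its triangle soup via getMesh() (a List of Point3f, three consecutive points per triangle) and using imagej-mesh's NaiveDoubleMesh; the helper name and the vecmath package (org.scijava vs. javax) are assumptions, not the final MoBIE code:

    import java.util.List;
    import net.imagej.mesh.Mesh;
    import net.imagej.mesh.naive.NaiveDoubleMesh;
    import org.scijava.vecmath.Point3f;
    import customnode.CustomTriangleMesh;

    // Hypothetical sketch: convert a 3D Viewer triangle soup into an imagej-mesh.
    static Mesh toImageJMesh( CustomTriangleMesh triangleMesh )
    {
        List< Point3f > points = triangleMesh.getMesh(); // three points per triangle
        NaiveDoubleMesh mesh = new NaiveDoubleMesh();
        for ( int i = 0; i < points.size(); i += 3 )
        {
            // add the three vertices, then register them as one triangle
            long v0 = mesh.vertices().add( points.get( i ).x, points.get( i ).y, points.get( i ).z );
            long v1 = mesh.vertices().add( points.get( i + 1 ).x, points.get( i + 1 ).y, points.get( i + 1 ).z );
            long v2 = mesh.vertices().add( points.get( i + 2 ).x, points.get( i + 2 ).y, points.get( i + 2 ).z );
            mesh.triangles().add( v0, v1, v2 );
        }
        return mesh;
    }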

@ekatrukha (Collaborator)

Hello @tischi,

before we start digging: are the meshes loaded from disk, where they are stored in some format (.stl)?
Or do you generate them programmatically in MoBIE from some thresholded source?
Or both?

@tischi (Contributor, Author) commented Nov 7, 2024

They are created on the fly; they are not stored on disk.

tischi changed the title from "Mesh support in MoBIE BVV integration" to "Segmentation support in MoBIE BVV integration" on Nov 7, 2024

@tischi (Contributor, Author) commented Nov 7, 2024

I slightly widened the title of the issue. For me it would also be interesting to check how a label mask image volume rendered with a Glasbey LUT would look; maximum projection would likely not be useful for label masks :-)

@ekatrukha (Collaborator)

Max projection, no, but 'volumetric' rendering should be ok. You just need to narrow down the alpha range so that everything is not transparent, but keep the LUT range wide.

@tischi (Contributor, Author) commented Nov 7, 2024

Is the alpha value adjustable?

I would add the "normal" volume rendering as an option for the label masks to our branch such that we can test this, ok?

Is there any way that I could test the Glasbey LUT from within my IntelliJ IDE? I think there was some trick to "link in" the Fiji folder, but I am not sure...
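
The trick usually refers to pointing ImageJ's plugins.dir system property at a local Fiji installation before launching ImageJ from the IDE; whether that also makes Fiji's luts folder (and thus Glasbey) discoverable is an assumption to verify, and the path below is a placeholder:

    // Launch ImageJ from the IDE against an existing local Fiji installation.
    System.setProperty( "plugins.dir", "/path/to/Fiji.app/plugins" );
    new ij.ImageJ();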

@ekatrukha (Collaborator)

It is adjustable, check the readme of bvv-playground.

There is a method to load an IndexColorModel in the ConverterSetup, so you just need to read the Glasbey values into it from disk (an ImageJ LUT) or somewhere else.

I can check it for you tomorrow.
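
In the meantime, a minimal sketch of reading a raw ImageJ .lut file from disk into an IndexColorModel, assuming the common binary layout of 768 bytes (256 reds, then 256 greens, then 256 blues); readRawLut is a hypothetical helper, not bvv-playground API:

    import java.awt.image.IndexColorModel;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Read a raw 768-byte ImageJ LUT (e.g. from Fiji's luts folder) into
    // an IndexColorModel that can then be handed to the ConverterSetup.
    static IndexColorModel readRawLut( String path ) throws IOException
    {
        byte[] bytes = Files.readAllBytes( Paths.get( path ) );
        byte[] reds = new byte[ 256 ], greens = new byte[ 256 ], blues = new byte[ 256 ];
        System.arraycopy( bytes, 0, reds, 0, 256 );
        System.arraycopy( bytes, 256, greens, 0, 256 );
        System.arraycopy( bytes, 512, blues, 0, 256 );
        return new IndexColorModel( 8, 256, reds, greens, blues );
    }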

@tischi (Contributor, Author) commented Nov 7, 2024

There is a method to load an IndexColorModel in the ConverterSetup, so you just need to read the Glasbey values into it from disk (an ImageJ LUT) or somewhere else. I can check it for you tomorrow.

Thanks! That would be very helpful for the testing.

@tischi (Contributor, Author) commented Nov 7, 2024

I added code to display label mask images (aka segmentations) with BVV.
However, the "classic example", the cells from the platynereis dataset, is of Long data type, which throws an error:

Cannot display cells in BVV, incompatible data type:
net.imglib2.type.numeric.integer.UnsignedLongType

Is that expected? Could it be that there is no support in SpimData...
Would BVV be able to display this at all, or would we need to convert to something else?
I think you mentioned that BVV can only do uint16, is that right?

@ekatrukha (Collaborator)

For cached multires it is only UnsignedShort, uint16, indeed.
I can add loading of Long either truncated to the max of 65535 or 'cyclic', i.e. the remainder of division by this max.

@tischi (Contributor, Author) commented Nov 7, 2024

That would be great! Maybe cyclic would be best such that segments >65535 would still be rendered with different colours.
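
A minimal sketch of that cyclic wrapping with imglib2's Converters, following the "remainder of division by the max" idea above (how bvv-playground wires this in internally is not shown here):

    import net.imglib2.RandomAccessibleInterval;
    import net.imglib2.converter.Converters;
    import net.imglib2.type.numeric.integer.UnsignedLongType;
    import net.imglib2.type.numeric.integer.UnsignedShortType;

    // Wrap an UnsignedLong label volume as UnsignedShort on the fly, cycling
    // label ids so that ids > 65535 still map onto (different) LUT entries.
    static RandomAccessibleInterval< UnsignedShortType > cyclic(
            RandomAccessibleInterval< UnsignedLongType > labels )
    {
        return Converters.convert( labels,
                ( in, out ) -> out.set( ( int ) ( in.get() % 65535 ) ),
                new UnsignedShortType() );
    }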

@ekatrukha (Collaborator)

Hello @tischi,

I've made a Glasbey LUT version and UnsignedLong data loading. In the end it is enough to cycle over 256 values, since that is the range of the LUT.
You can check the results and play with it a bit here.
I've made two options, one with a "dark render" (version 1) and one with a clipped volume (version 2).

This is the version 1 view:
[image: version1]

But the main conclusion for me is that a multires image does not seem to be the best way to show segmentation results in 3D.
The reason is that scaling by a factor of 2 scrambles all the labels in 3D (the averaged intensity becomes meaningless), which leads to the "scrambled" segmentation.
See the version 2 initial view:
[image: v2_loading]
And after a somewhat better resolution has been loaded:
[image: v2_loaded_more]

Areas of fine, thin labels (with thickness below the currently displayed resolution) get scrambled.
This also happens in BDV, but there it is less noticeable, because a higher resolution can be loaded quickly at any time in a single plane.
For BVV to show fine details correctly, one would need to load everything to the GPU.

Well, you can check the result and play with it yourself.
So I guess meshes would be a solution.
How many objects do you have in this segmentation?
Some time later we can try to implement mesh generation and loading.

@tischi (Contributor, Author) commented Nov 8, 2024

Not sure if this information helps, and maybe you know all of this already: the resolution pyramid for the labels was created using a nearest neighbour sampling strategy, so there should be no averaging of label values. In BDV, one also has to use nearest neighbour interpolation for the display (this can be toggled by pressing the I key). In fact, I think my code in MoBIE prevents label masks from ever being interpolated, even if the user presses the I key.
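
As an aside, a nearest-neighbour-style pyramid level can be produced without any averaging, e.g. with imglib2 (a sketch, not MoBIE's actual pyramid code):

    import net.imglib2.RandomAccessibleInterval;
    import net.imglib2.view.Views;

    // 2x downsampling that keeps every second voxel in each dimension:
    // label values are picked as-is, never averaged or mixed.
    static < T > RandomAccessibleInterval< T > downsampleBy2( RandomAccessibleInterval< T > labels )
    {
        return Views.subsample( labels, 2 );
    }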

The reason behind it is that scaling by factor of 2 scrambles all the labels in 3D

Where does that factor 2 scaling happen in BVV? Could one configure it to do something other than averaging? For instance, taking a random sample?

How many objects do you have in this segmentation?

Around 16000 I think

So I guess meshes would be a solution.

Yes, that's why so far in MoBIE I am using meshes for the display of segmentations. However, only the (few) segments that are actively selected by the user are rendered, because creating 16000 meshes on the fly would be too slow, and I am not sure whether the 3D Image Viewer could handle it. But I don't think I ever really tried to benchmark/push this... Anyway, I think a good start would be to just reproduce my current 3D Viewer mesh implementation with BVV.

Well, you can check the result and play with it yourself.

Will do.

@tischi (Contributor, Author) commented Nov 8, 2024

[image]

I can reproduce the scrambling of values at the borders of the labels, but I don't really understand why it happens. Do you have a reference for how the volume rendering algorithm works? Does it just take the first non-zero value that it finds along a ray?

@tischi (Contributor, Author) commented Nov 8, 2024

I think version 1 is better; one can get to version 2 manually by "moving into the sample".

@ekatrukha (Collaborator) commented Nov 8, 2024

If the pyramid was built using nearest neighbor, then I guess my hypothesis was wrong and it is not the culprit.

Do you have a reference for how the volume rendering algorithm works?

Not in a written form, no, it is unpublished.
I know it from reading/tinkering with the code and from conversations with Tobias :)
In principle, volume rendering in BVV is optimized for speed and makes some assumptions about the data.
We can remove those assumptions and see if the quality of the picture improves.
So there are a few possible suspects that we can test:

  1. The first one is dithering (explained here). I will try to remove it and see if it is the problem.
  2. The second one is the variable step size along the ray. Each screen pixel shoots a ray through the volume and samples/accumulates the maximum (for max intensity) or "alpha blended" intensity values. In the current form the step size varies: BVV takes smaller steps on the part of the ray closer to the camera and larger steps further away. In many other 3D renderers (sciview) the step size is constant to avoid artifacts. That we can also change and see if the scrambling goes away.
  3. There may be some bug in my conversion of UnsignedLong. I need to think about it; so far it looks ok.
  4. Something else.

Does it just take the first non-zero value that it finds along a ray?

Kind of, but not really. It does alpha blending of the accumulated voxels along the ray in the shaders. It should stop when the accumulated alpha value is more than 1.
But! If we set the alpha range in the ConverterSetup of BVV to 0-1, it should stop at the first sampled voxel, so yes (unless I am missing something). But then we have point 2) from above.
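
For intuition, front-to-back alpha compositing with early ray termination typically looks like this sketch (plain Java pseudocode of the standard algorithm; Sample and samplesAlongRay are hypothetical, and BVV's actual shader code will differ):

    // Accumulate colors front to back along one view ray.
    float[] rgb = new float[ 3 ];
    float accumulatedAlpha = 0f;
    for ( Sample s : samplesAlongRay )
    {
        float weight = ( 1f - accumulatedAlpha ) * s.alpha;
        rgb[ 0 ] += weight * s.r;
        rgb[ 1 ] += weight * s.g;
        rgb[ 2 ] += weight * s.b;
        accumulatedAlpha += weight;
        if ( accumulatedAlpha >= 1f )
            break; // early ray termination, as described above
    }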

I am going to investigate a bit.

manually by "moving into the sample".

Yeah, this is why I think that the future bvv-minimal BVVBrowser should have clipping controls.

@ekatrukha (Collaborator)

Ok, I think I figured it out.
So if I load a low resolution level of your labels converted to UnsignedShort and display it with the current version, I get this beautiful rainbow render:
[image: Interpolated_overview]
And if I now clip the view to just one voxel and zoom in on it, I get this picture:
[image: one_pixel_inteerpolated]

Now I understand where the "rainbow" comes from. Basically, the data is uploaded to the GPU cache (a texture). When the render engine samples along a "view ray", it gets float coordinate values inside one voxel. The interpolation mode is set to nearest neighbor (on the uploaded GPU texture), but that does not "round down" the coordinate; it really looks up the nearest voxel in 3D, and which one that is depends on the float coordinates inside the voxel of interest. With a Glasbey LUT that is a pretty drastic change in color. What we see is basically a "nearest neighbor" subvoxel distance map.
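
In 1D, the difference between the two lookups can be written as follows (a sketch; the actual fix lives in BVV's shaders, not in Java):

    float x = 3.6f; // a sampling coordinate inside voxel 3
    // nearest-neighbour lookup: snaps to whichever voxel centre is closest,
    // so coordinates inside the same voxel can hit different neighbours
    int nearest = Math.round( x );        // -> 4 (but 3 for x = 3.4f)
    // "floor" lookup: every coordinate inside voxel i maps to voxel i
    int floored = (int) Math.floor( x );  // -> 3 for any 3.0f <= x < 4.0f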

So what I did: I tweaked the renderer to round down (actually floor) the accessed voxel coordinates. Then I get something more "expected":
[image: Interpolated_floor]
And one voxel becomes:
[image: one_pixel_floor]

Does it make sense?

Of course, this voxel "floor" value method is not acceptable for rendering normal volumetric microscopy data, since everything becomes super "voxelized", see below.

[image: head_combined]

I guess I can add an option to bvv-playground's converter setup to render a specific source in this "labels" mode.
Would this be a solution?

@ekatrukha (Collaborator)

I think it looks much more realistic (left = previous version, right = "floor" voxel method), especially the "outer" surface level:
[image: segment]

@ekatrukha (Collaborator)

Here is the "proper" mipmap loading upon the start on my laptop

20241111_mipmap_loading.mp4

@tischi (Contributor, Author) commented Nov 11, 2024

Wow, super interesting! Thanks for digging!

In the "floor-rendering-mode": What happens if you zoom in so much that you the viewer canvas is within the specimen.
In other words: How does this view look now?

image

I would hope that all the scrambled stuff between the labels is gone...?!

@ekatrukha (Collaborator)

It depends on the pyramid level, since at the lowest level the data is "scrambled".
Here is the same slice before the full resolution has loaded:
[image: loading]

and after:
[image: highest_level]

@tischi (Contributor, Author) commented Nov 11, 2024

The high-res looks perfect!

I guess I can add an option to bvv-playground's converter setup to render a specific source in this "labels" mode.
Would this be a solution?

Yes, that's what I am also sort of doing:

    @Override
    public RealRandomAccessible< AnnotationType< A > > getInterpolatedSource( final int t, final int level, final Interpolation method )
    {
        // always sample the underlying label source with nearest neighbour,
        // regardless of the requested interpolation method
        final RealRandomAccessible< T > rra = source.getInterpolatedSource( t, level, Interpolation.NEARESTNEIGHBOR );

        // convert each label value into its annotation on the fly
        return Converters.convert( rra,
                ( T input, AnnotationType< A > output ) ->
                setOutput( input, t, output ),
                new AnnotationType<>() );
    }

☝️ I am ignoring the Interpolation method input argument here and always using Interpolation.NEARESTNEIGHBOR for sources of AnnotationType.

@ekatrukha (Collaborator)

What is your algorithm for assigning colors, so that the BVV render looks the same?

@tischi (Contributor, Author) commented Nov 11, 2024

private SourceAndConverter createSourceAndConverter( AbstractAnnotationDisplay< A > display, Image< AnnotationType< A > > image )

But this is tricky, because it converts an AnnotationType to a colour.

I guess you are currently working with the underlying label mask image, which is of some unsigned integer type...

The mapping from the label-id in the label mask to the AnnotationType is done here:

public synchronized A getAnnotation( String source, final int timePoint, final int label )


This is all quite involved. I am not sure you will be able to reverse-engineer all of this....

I think the easiest would be if we could just do

BvvFunctions.add( SourceAndConverter< ? > sac )

Because then we could simply add the sac that already outputs the correct colours.

Assuming that the volume rendering operates directly on the ARGBType?! Or does it need to access the integer-valued data at any point?

@tischi (Contributor, Author) commented Nov 11, 2024

I could also dig a bit into BvvFunctions.show myself to better understand what is going on... do you think that could help?

@ekatrukha (Collaborator)

I guess you are currently working with the underlying label mask image, which is some unsigned integer type.

Yes, exactly.

Assuming that the volume rendering operates directly on the ARGBType?! Or does it need to access the integer valued data at any point?

Cached multires sources of ARGBType are not supported in BVV, only 16-bit data (UnsignedShort).
Therefore everything "multires cached" is wrapped into SpimData.
The coloring I show right now is made by applying a LUT.
If you have somewhere a table mapping "voxel value in the segmentation volume (the label)" <-> color, we can make a very specific LUT like this and load it into BVV to display the source. It can be done at runtime.

@ekatrukha (Collaborator) commented Nov 11, 2024

I guess I can use sac.getConverter().convert( UnsignedLong in, ARGBType out) to build the LUT?
The only thing I need to know is the maximum number (index) of annotations.
Is it possible to get it somehow from this Annotation source?

Maybe from something like this?
int nTest = ( ( AnnotationLabelImage< ? > ) image ).getAnnData().getTable().numAnnotations();

@ekatrukha (Collaborator)

Ok, I got the colors for all of the ~32000 annotations.

It turns out that my implementation of LUTs for BVV does not support a LUT with 32000 colors. It is a shame.
I am going to try to fix this.

@tischi (Contributor, Author) commented Nov 12, 2024

I guess I can use sac.getConverter().convert( UnsignedLong in, ARGBType out) to build the LUT?

Unfortunately, I don't think so, because the pixel type for which I have a converter is AnnotationType.

The logic is: Integer(Label Mask) ----AnnotationAdapter----> AnnotationType ----Converter----> ARGBType


I think I will have to look at the code myself in a bit more detail to see what could be done.

I would suggest you push your latest additions into the bvvpg branch and wait until I get back to you. I will try this week. OK?

@tischi (Contributor, Author) commented Nov 12, 2024

Hi @ekatrukha,

I added some code for converting the integer to an ARGBType that seems to work:

if ( image instanceof AnnotationLabelImage )

I think this is what you could use instead of (inside of) your current getGlasbeyICM().

@ekatrukha (Collaborator)

Thank youuuu.
I've wrapped it into a separate function for all labels.

Now I am going to modify bvv-playground so it can do

  1. the "floor rendering" we discussed above and
  2. load large LUTs (>2000 colors).

and cut a new release of it.

Once it is done, I will ping you.

Just a detail behind the LUT story: so far I have been uploading source LUTs as a linear 1D texture to the GPU, but OpenGL has a limitation on the maximum size of this array.
I would need to wrap it into a 2D or 3D texture; in that case the limit should be that size squared or even cubed.
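
The usual index arithmetic for that wrapping, as a sketch (the real texture setup in bvv-playground will differ):

    // Map a flat LUT index onto a 2D texture that respects the
    // GL_MAX_TEXTURE_SIZE limit (query the actual limit at runtime).
    static int[] lutTexturePosition( int index, int lutSize, int maxTextureSize )
    {
        int width = Math.min( lutSize, maxTextureSize );
        int height = ( lutSize + width - 1 ) / width; // ceil( lutSize / width ) rows
        assert index < width * height;
        return new int[] { index % width, index / width };
    }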

@tischi (Contributor, Author) commented Nov 12, 2024

By the way, there is annotationType.setAnnotation( A annotation );
thus the annotationType variable can be reused and does not have to be instantiated every time.

...I am saying that because I assume that the conversion function will be called a lot during rendering?
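
In the LUT-building loop that would look roughly like this (a sketch with the variable names from the snippets in this thread):

    // Reuse one AnnotationType instance across all conversions instead of
    // allocating a new one per label.
    final AnnotationType< A > annotationType = new AnnotationType<>();
    final ARGBType valARGB = new ARGBType();
    for ( int label = 1; label <= nAnnotationsNumber; label++ )
    {
        annotationType.setAnnotation( annotationAdapter.getAnnotation( image.getName(), timePoint, label ) );
        converter.convert( annotationType, valARGB );
        // ... store the valARGB channels into the LUT arrays ...
    }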

@ekatrukha (Collaborator)

Ah, I see, I will add that.
But no, it will be called only once to generate the LUT, a color array that is then uploaded to the GPU (once) and stored there.

@tischi (Contributor, Author) commented Nov 12, 2024

OK, then at some point we will need to make ImageBVViewer implement ColoringListener, to update this LUT when needed and request a repaint (if possible); the corresponding AnnotationSliceView.class is doing that.

@ekatrukha (Collaborator)

Hello @tischi,

I've updated the max LUT size in BVV; now it should support up to 65536 values, which means it can show up to 65535 annotations.
I pushed the changes to the bvvpg fork, and it should now display the labels; from my tests it looks identical
(left: MoBIE's BDV, right: BVV)
[image: compare_labels2]
By default I set the alpha (opacity) range to 0-1, but one can change it and even view the labels over the data
(left: MoBIE's BDV, right: BVV)
[image: compare_labels3]

I guess this part is working now.
What is left (in my opinion):

  1. Annotation LUT update with the ColoringListener. Do you want me to look into that?
  2. A BVV settings dialog somewhere.

Let me know what you think and what the results of your tests are.

@ekatrukha (Collaborator)

Hello @tischi,

one more thing is possible with this LUT mapping: in principle, if we set all LUT alpha values to zero and some selected ones to 0.5 or 1.0, we can show only specific (for example, user-selected) labels.
It is a bit of an overshoot, since the whole segmentation volume is still loaded, but it works quite ok; see the example below, where I "selected" only two labels.
I think the LUT update/GPU upload should be relatively quick.

[video: 20241122_label_selection.mp4]
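
A sketch of the idea using java.awt's IndexColorModel, which supports a per-entry alpha channel (the isSelected predicate stands in for whatever selection model is used):

    import java.awt.image.IndexColorModel;
    import java.util.function.IntPredicate;

    // Hide all labels except the selected ones via the LUT's alpha channel;
    // reds/greens/blues are the per-label color arrays built earlier.
    static IndexColorModel selectionLut( byte[] reds, byte[] greens, byte[] blues, IntPredicate isSelected )
    {
        final int n = reds.length;
        final byte[] alphas = new byte[ n ];
        for ( int label = 0; label < n; label++ )
            alphas[ label ] = isSelected.test( label ) ? ( byte ) 255 : 0;
        return new IndexColorModel( 16, n, reds, greens, blues, alphas );
    }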

@tischi (Contributor, Author) commented Nov 25, 2024

Hi,

yes, in fact I was thinking about the same for showing only selected segments. There is an opacity value in the MoBIEColoringModel; could that be used directly as the alpha value? I think so, I can look into this...

Regarding the ColoringListener, yes, I can look into this.

@tischi (Contributor, Author) commented Nov 25, 2024

Hi @ekatrukha,

the Converter< AnnotationType, ARGBType > converter = ( AnnotationARGBConverter ) sac.getConverter(); should already give you an alpha value that you can use for the rendering. The non-selected segments should have a different alpha value than the selected ones.

I could not find a place in the current code where this value would be used...

I guess we could use it here and simply multiply the colour values with the alpha? Is that what you did / have in mind?

for ( int label = 1; label <= nAnnotationsNumber; label++ )
{
    final Annotation annotation = annotationAdapter.getAnnotation( image.getName(), timePoint, label );
    converter.convert( new AnnotationType<>( annotation ), valARGB );
    val = valARGB.get();
    // only the RGB channels are stored; the converter's alpha is dropped here
    colors[ 0 ][ label ] = ( byte ) ARGBType.red( val );
    colors[ 1 ][ label ] = ( byte ) ARGBType.green( val );
    colors[ 2 ][ label ] = ( byte ) ARGBType.blue( val );
}

@tischi (Contributor, Author) commented Nov 25, 2024

I tried it like this, but that seems to be too naive... it does not produce the desired effect...

for ( int label = 1; label <= nAnnotationsNumber; label++ )
{
    final Annotation annotation = annotationAdapter.getAnnotation( image.getName(), timePoint, label );
    converter.convert( new AnnotationType<>( annotation ), valARGB );
    val = valARGB.get();
    colors[ 0 ][ label ] = ( byte ) ARGBType.red( val );
    colors[ 1 ][ label ] = ( byte ) ARGBType.green( val );
    colors[ 2 ][ label ] = ( byte ) ARGBType.blue( val );
    int alpha = ARGBType.alpha( val );
    for ( int i = 0; i < 3; i++ )
    {
        // note: the color bytes are signed, so channel values > 127 are
        // negative here and this multiplication scrambles them
        colors[ i ][ label ] *= alpha / 255.0;
    }
}

@tischi (Contributor, Author) commented Nov 25, 2024

This also does not work, because the black non-selected segments seem to hide the other ones:

for ( int label = 1; label <= nAnnotationsNumber; label++ )
{
    final Annotation annotation = annotationAdapter.getAnnotation( image.getName(), timePoint, label );
    if ( selectionModel.isSelected( annotation ) )
    {
        converter.convert( new AnnotationType<>( annotation ), valARGB );
        val = valARGB.get();
        colors[ 0 ][ label ] = ( byte ) ARGBType.red( val );
        colors[ 1 ][ label ] = ( byte ) ARGBType.green( val );
        colors[ 2 ][ label ] = ( byte ) ARGBType.blue( val );
    }
    else
    {
        // note: this writes index 0 rather than the current label; the
        // non-selected labels stay black only because Java zero-initialises
        // the color arrays
        colors[ 0 ][ 0 ] = 0;
        colors[ 1 ][ 0 ] = 0;
        colors[ 2 ][ 0 ] = 0;
    }
}

How did you achieve the above rendering that only the selected ones are visible?

@tischi (Contributor, Author) commented Nov 26, 2024

And I got so excited about the colouring of the non-selected segments that I forgot to say 😅: it is of course awesome that you managed to tweak the BVV colouring such that we now have identical colours in BDV and BVV 🥳.

@ekatrukha (Collaborator)

Hello @tischi,

thank you!
There was a bit of a delay last week; I was on vacation.

In the previous version of bvvpg I did not upload the alpha values to the LUT,
so this is the reason you could not get it to work.
Now I've released 0.3.2 with this option and updated the mobie branch.
You can uncomment this line, and comment out the previous one, to see the two-labels rendering that I've used.

@tischi (Contributor, Author) commented Dec 2, 2024

Awesome! Thanks for adding the support for the alpha values.
The above commit now uses the alpha from the segments.
To reproduce the rendering below, select some segments with Ctrl + Left Click; all non-selected segments will then be rendered with lower alpha values.

[image]

@tischi (Contributor, Author) commented Dec 2, 2024

Made an issue for the ColoringListener here (I will work on this) (it is done).
