
How can I segment my own slide and get the segmentation results? #5

Lewislou opened this issue Nov 5, 2024 · 5 comments
Lewislou commented Nov 5, 2024

Hi,

I have tried "runPipeline --datahub breast-tumour-segmentation --file my_slide.svs". But none results are saved. Could you please provide a full command or python codes to segment a testing WSI using your trained model?


andreped commented Nov 5, 2024

Hello, @Lewislou :]

Yes, when used through the runPipeline CLI, the pipeline only demonstrates running the model; it does not export or save the results to disk. To do so, you would need to modify the actual pipeline if you wanted to use the runPipeline CLI for inference.

If you use FastPathology and download the model from there, it is possible to see why that is the case. Below is a snapshot from FastPathology on my MacBook:

[Screenshot: the breast tumour segmentation pipeline (FPL) opened in FastPathology, 2024-11-05]

What is evident from the screenshot is that this pipeline (FPL) is missing the actual exporters. FastPathology handles the export for you, so if it is of interest to you, you could use FastPathology to create the segmentations. Just remember to download the model and create a project first. Then the results will be stored in the project.

If you want to do this from code, it is possible to change the pipeline. I did this in another project:
https://github.com/andreped/FP-DSA-plugin/blob/main/dsa/fastpathology/fastpathology/pipelines/breast_tumour_segmentation.fpl#L55

This FPL could then be used with runPipeline, or you could just handle this from Python, which gives you more control:
https://github.com/AICAN-Research/H2G-Net/blob/main/src/docker/applications/run_breast_cancer_segmentation.py

Not sure what is most optimal for you. Feel free to ask if you have further questions :]

@andreped andreped self-assigned this Nov 5, 2024
@andreped andreped added the documentation Improvements or additions to documentation label Nov 5, 2024

Lewislou commented Nov 5, 2024

Hi,

I have tried run_breast_cancer_segmentation.py to segment a slide from the TCGA-BRCA project. However, the saved segmentation results in .tiff format are all empty, and the size of the segmentation result is always 1024x1024, which does not match the size or aspect ratio of the input slide. Did I do anything wrong?

Here is my Python code (run_breast_cancer_segmentation.py):

import os
import fast

fast.Reporter.setGlobalReportMethod(fast.Reporter.COUT)

output = "prediction"
wsi = './TCGA-DATASETS/BRCA/All/TCGA-LD-A74U-01Z-00-DX1.F3C1EBBB-4AED-49A9-A8D2-B6145E162BE4.svs'
pipeline = fast.Pipeline(
    'breast_tumour_segmentation.fpl',
    {
        'wsi': wsi,
        'output': output,
        'pwModel': "./FAST/datahub/breast-tumour-segmentation-model/pw_tumour_mobilenetv2_model.onnx",
        'refinementModel': "./FAST/datahub/breast-tumour-segmentation-model/unet_tumour_refinement_model_fix-opset9.onnx",
    }
)

pipeline.parse()
pipeline.getProcessObject('pwExporter').run()
pipeline.getProcessObject('segExporter').run()

print("Was export successful:", os.path.exists(output + ".tiff"))
print("Result is saved at:", output)

My breast_tumour_segmentation.fpl code:
PipelineName "Breast Tumour Segmentation (pw + refinement)"
PipelineDescription "Segmentation of breast tumour tissue using H2G-Net https://github.com/andreped/H2G-Net"
PipelineOutputData segmentation refinement 0
PipelineOutputData heatmap finish 0

ProcessObject WSI WholeSlideImageImporter
Attribute filename @@WSI@@

ProcessObject tissueSeg TissueSegmentation
Input 0 WSI 0

ProcessObject patch PatchGenerator
Attribute patch-size 256 256
Attribute patch-magnification 20
Input 0 WSI 0
Input 1 tissueSeg 0

ProcessObject network NeuralNetwork
Attribute model @@pwModel@@
Attribute scale-factor 0.003921568627451
Input 0 patch 0

ProcessObject stitcher PatchStitcher
Input 0 network 0

ProcessObject finish RunUntilFinished
Input 0 stitcher 0

ProcessObject tensorToImage TensorToImage
Attribute channels 1
Input 0 finish 0

ProcessObject lowRes ImagePyramidLevelExtractor
Attribute level -1
Input 0 WSI 0

ProcessObject scale IntensityNormalization
Input 0 lowRes 0

ProcessObject refinement SegmentationNetwork
Attribute inference-engine OpenVINO
Attribute model @@refinementModel@@
Input 0 scale 0
Input 1 tensorToImage 0

ProcessObject pwExporter HDF5TensorExporter
Attribute filename @@output@@".h5"
Input 0 finish 0

ProcessObject segExporter TIFFImagePyramidExporter
Attribute filename @@output@@".tiff"
Input 0 refinement 0


andreped commented Nov 5, 2024

I have tried run_breast_cancer_segmentation.py to segment a slide from the TCGA-BRCA project. However, the saved segmentation results in .tiff format are all empty, and the size of the segmentation result is always 1024x1024, which does not match the size or aspect ratio of the input slide. Did I do anything wrong?

The output of the model is 1024x1024, so that is expected. We don't store the prediction at a one-to-one scale with the original image; doing that would just increase the file size. So to render it on top of the WSI, you will need to scale the segmentation TIFF to match the original WSI.
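
For illustration, a minimal sketch of that rescaling for visualization, assuming OpenSlide and Pillow are available; the file paths and thumbnail size are placeholders, and it assumes the segmentation covers the full slide extent:

import numpy as np
import openslide
from PIL import Image

# Grab a thumbnail of the WSI to overlay on (path is a placeholder)
slide = openslide.OpenSlide("my_slide.svs")
thumb = slide.get_thumbnail((2048, 2048))

# Resize the 1024x1024 segmentation to the thumbnail size;
# nearest-neighbour keeps the label values intact
seg = Image.open("prediction.tiff").convert("L")
seg = seg.resize(thumb.size, resample=Image.NEAREST)

# Blend a red mask over the thumbnail where the segmentation is positive
overlay = np.array(thumb.convert("RGB"))
mask = np.array(seg) > 0
overlay[mask] = (0.5 * overlay[mask] + 0.5 * np.array([255, 0, 0])).astype(np.uint8)
Image.fromarray(overlay).save("overlay.png")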

But what do you want to use these predictions for? If I know that, it is easier to know what to recommend.

Note that the image may only look empty (all black); you may need to change the intensity range to make the segmentation visible. Could you try something like this in Python to see if you can now see it? I doubt it is completely zero.

from PIL import Image
import numpy as np
import matplotlib.pyplot as plt

# Load the TIFF image
image_path = 'your_segmentation_image.tiff'
image = Image.open(image_path).convert('L')  # Convert to grayscale

# Convert image to numpy array
image_array = np.array(image)

# Binarize: Set all values > 0 to 1
binary_image = (image_array > 0).astype(int)

# Plot the binary image
plt.imshow(binary_image, cmap='gray')
plt.axis('off')  # Optional: Hide axes
plt.show()


Lewislou commented Nov 6, 2024


Hi,

Thank you for your quick response. I actually tried converting the result to a binary image. I now find that I have to set `Attribute patch-magnification` in the .fpl file to 20 to get a segmentation map. If the magnification is set to 10 or 40, nothing is segmented. However, when I overlay the segmentation results on the thumbnail image of the WSI, they do not match. The result is as follows:
[Images: example_thumbnail and overlay_result, showing the segmentation overlay misaligned with the WSI thumbnail]
Are there any other parameters that I need to modify?
What I really want to do is segment the breast tumour and use the segmentation results for quantification, for example to get its size or ratio. I hope you can give me some advice. Thanks~
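
For that quantification, a rough sketch could look like this, assuming the binarized segmentation from the snippet above; the microns-per-pixel value is a placeholder that would have to be derived from the WSI metadata and the downscaling to 1024x1024:

import numpy as np
from PIL import Image

# Load the exported segmentation (path is a placeholder)
seg = np.array(Image.open("prediction.tiff").convert("L"))
tumour = seg > 0

# Fraction of the segmentation map covered by tumour
ratio = tumour.mean()

# Physical area, using a placeholder microns-per-pixel value for the
# plane the 1024x1024 segmentation corresponds to
mpp = 32.0
area_mm2 = tumour.sum() * (mpp / 1000.0) ** 2

print(f"Tumour fraction: {ratio:.3f}, area: {area_mm2:.1f} mm^2")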


andreped commented Nov 6, 2024

OK, I think I know what it could be.

So, you should not change the magnification level: 10x is the magnification the model was trained on. If you change it, you will likely get poorer segmentations and, most importantly, may run into scaling issues, which is potentially what you are seeing here. I saw this very often in QuPath when importing predictions there while providing the wrong scale information.

But if 20x works fine for you, there is always a trick.

If you crop away half the width and height of the segmentation image, essentially removing the redundant padding, and then upscale the cropped image by 2x, you should get the right overlay. This is because your scale is off by 2x, as you used 20x instead of 10x for inference.
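
A sketch of that trick with Pillow, assuming the redundant padding sits at the right and bottom of the 1024x1024 image (the path is a placeholder):

from PIL import Image

seg = Image.open("prediction.tiff")
w, h = seg.size

# Keep the top-left quadrant, assuming the padding from running at
# 20x sits at the right and bottom
seg = seg.crop((0, 0, w // 2, h // 2))

# Upscale by 2x with nearest-neighbour to restore the expected scale
seg = seg.resize((w, h), resample=Image.NEAREST)
seg.save("prediction_rescaled.tiff")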

What could cause the model to not work when you select 10x is that the 10x magnification plane is not available in your WSI. FAST then tries to select a different plane, and that can go wrong. This can happen if you have a 20x WSI whose pyramid levels are downscaled by a factor of 4, resulting in 20x, 5x, ..., instead of the more common 20x, 10x, 5x, ...
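
One way to check which magnification planes your WSI actually contains, assuming OpenSlide can read the file (the path is a placeholder):

import openslide

slide = openslide.OpenSlide("my_slide.svs")

# Objective power of the base level, e.g. '20' for a 20x scan
base = float(slide.properties.get("openslide.objective-power", "nan"))

# Effective magnification of each pyramid level
for level, down in enumerate(slide.level_downsamples):
    print(f"level {level}: downsample {down:.2f} ~ {base / down:.1f}x")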

I have other tricks to get the model running at 10x if the 10x plane is not available. Using 10x should give better segmentations as well; the model was not trained on 20x, nor was any attempt made to make it invariant to magnification level. We can essentially do the resizing to 10x ourselves. But let's see if this works first.
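
If the 10x plane really is missing, a hedged sketch of doing that resizing manually, downscaling a region read from the 20x plane by a factor of 2 (the path, region, and sizes are placeholders):

import openslide
from PIL import Image

slide = openslide.OpenSlide("my_slide.svs")

# Read a 512x512 region from the 20x plane (level 0 here) and
# downscale it by 2 to emulate a 256x256 patch at 10x
region = slide.read_region((0, 0), 0, (512, 512)).convert("RGB")
patch_10x = region.resize((256, 256), resample=Image.LANCZOS)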

Could you test whether this WSI works in FastPathology? Alternatively, you could just share the WSI with me and I could have a look. As this WSI is from TCGA, I assume it is OK to share?
