How can I segment my own slide and get the segmentation results? #5
Hello, @Lewislou :] Yes, the pipeline, when run through runPipeline, does not export any results on its own. If you use FastPathology and download the model from there, it is possible to see why that is the case. Below is a snapshot from FastPathology on my MacBook:

[screenshot: the breast tumour segmentation pipeline as shown in FastPathology]

What is evident from here is that this pipeline (FPL) is missing the actual exporters. FastPathology handles the export for you, so if that is of interest, you could use FastPathology to create the segmentations. Just remember to download the model and create a project first; the results will then be stored in the project. If you want to do this from code, it is possible to change the pipeline. I did this in another project. That FPL could then be used with runPipeline, or you could handle this from Python, which gives you more control. Not sure what is most optimal for you. Feel free to ask if you have further questions :]
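For illustration, a minimal sketch of what such a Python pipeline could look like, assuming pyFAST is installed and the model has been downloaded locally. The model filename, patch size, and parameter values below are assumptions, not the actual pipeline settings:

```python
import fast

# Hypothetical paths; adjust to your local setup
wsi_path = "my_slide.svs"
model_path = "breast_tumour_segmentation_model.onnx"  # assumed filename

importer = fast.WholeSlideImageImporter.create(wsi_path)

# Restrict patch generation to tissue regions
tissue = fast.TissueSegmentation.create().connect(importer)

# Extract patches at 10x magnification (the level the model was trained on)
patches = fast.PatchGenerator.create(1024, 1024, magnification=10) \
    .connect(importer) \
    .connect(1, tissue)

segmentation = fast.SegmentationNetwork.create(model_path) \
    .connect(patches)

# Stitch the patch-wise predictions back into a full segmentation image
stitcher = fast.PatchStitcher.create().connect(segmentation)

# Wait until stitching has finished, then export a pyramidal TIFF
finished = fast.RunUntilFinished.create().connect(stitcher)
exporter = fast.TIFFImagePyramidExporter.create("prediction.tiff") \
    .connect(finished)
exporter.run()
```

The key difference from the datahub FPL is the final two steps: an exporter is explicitly attached after the stitcher, so the result actually ends up on disk.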
Hi, I have tried run_breast_cancer_segmentation.py to segment a slide from the TCGA-BRCA project. However, the saved segmentation results in .tiff format are all empty, and the size of the segmentation result is always 1024×1024, which is not the same size or aspect ratio as the input slide. Did I do anything wrong? Here is my Python code (run_breast_cancer_segmentation.py):

```python
import os
import fast

# ... (pipeline construction omitted) ...
output = "prediction"
pipeline.parse()
print("Was export successful:", os.path.exists(output + ".tiff"))
```

My breast_tumour_segmentation.fpl:

```
ProcessObject WSI WholeSlideImageImporter
ProcessObject tissueSeg TissueSegmentation
ProcessObject patch PatchGenerator
ProcessObject network NeuralNetwork
ProcessObject stitcher PatchStitcher
ProcessObject finish RunUntilFinished
ProcessObject tensorToImage TensorToImage
ProcessObject lowRes ImagePyramidLevelExtractor
ProcessObject scale IntensityNormalization
ProcessObject refinement SegmentationNetwork
ProcessObject pwExporter HDF5TensorExporter
ProcessObject segExporter TIFFImagePyramidExporter
```
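(For context: in a complete FPL file, each ProcessObject declaration is normally followed by Attribute and Input lines that configure it and wire it to the other objects. As a hypothetical illustration only, the exporter section could look something like the following; the filename and port numbers are assumptions, not the actual pipeline contents:)

```
ProcessObject segExporter TIFFImagePyramidExporter
Attribute filename prediction.tiff
Input 0 stitcher 0
```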
The output of the model is … But what do you want to use these predictions for? If I know that, it is easier to know what to recommend. Note that it could be that the image just looks empty (all black), and you need to change the intensity range to make the segmentation visible. Could you try doing something like this in Python to see if you can now see the segmentation? I doubt it is completely zero.
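A minimal sketch of such a check, assuming the exported TIFF can be read with Pillow and stores small class indices (e.g. 0 and 1), which render as nearly black:

```python
import numpy as np
from PIL import Image

seg = np.asarray(Image.open("prediction.tiff"))
print("Unique values in segmentation:", np.unique(seg))

# Class indices like 0/1/2 look black; stretch them to the full 0-255 range
if seg.max() > 0:
    visible = (seg.astype(np.float32) / seg.max() * 255).astype(np.uint8)
    Image.fromarray(visible).show()
```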
OK, I think I know what it could be. So, you should not change the magnification level. 10x is the magnification the model was trained on. If you change the magnification level, you will likely get poorer segmentations and, most importantly, may run into scaling issues, and that is potentially what you are seeing here. I very often saw this in QuPath when I tried to import predictions there but was providing the wrong scale information.

But if 20x works fine for you, there is always a trick. If you remove half the width and height of the segmentation image, essentially removing the redundant additional padding, and then upscale the segmentation image by 2x, you should get the right overlay (see the sketch below). This is because your scale is off by 2x, as you used 20x instead of 10x for inference.

What could cause the model to not work when you select 10x is that the 10x magnification plane is not available in your WSI. FAST then tries to select a different plane, and that can go wrong. This could happen if you have a 20x WSI whose planes are downscaled by a factor of 4, resulting in 20x, 5x, ..., instead of the more common 20x, 10x, 5x, ...

I have other tricks to get the model running on 10x if the 10x plane is not available. Using 10x should give better segmentations as well; the model was not trained on 20x, nor was any attempt made to make it invariant to the magnification level. We can essentially do the resizing to 10x ourselves. But let's see if this works first. Could you test whether this WSI works with FastPathology? Alternatively, you could just share the WSI with me and I could have a look. As this WSI is from TCGA, I assume it is OK to share?
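For concreteness, a minimal sketch of the crop-and-upscale trick described above, assuming the segmentation was exported as prediction.tiff and can be read with Pillow:

```python
from PIL import Image

seg = Image.open("prediction.tiff")
w, h = seg.size

# Keep the top-left quadrant: the rest is redundant padding
# introduced by running inference at 20x instead of 10x
cropped = seg.crop((0, 0, w // 2, h // 2))

# Upscale by 2x with nearest-neighbour so class labels stay intact
fixed = cropped.resize((w, h), resample=Image.NEAREST)
fixed.save("prediction_fixed.tiff")
```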
Hi,
I have tried `runPipeline --datahub breast-tumour-segmentation --file my_slide.svs`, but no results are saved. Could you please provide a full command or Python code to segment a test WSI using your trained model?