Hi @sebquetin, good points.
Seems we need to either improve our implementation or update the docstring to instruct the user to manually scale the image.
Describe the bug
When using monai.transforms.RandImageFilterd, the scale/distribution of the transform's output differs from that of the input.
To Reproduce
Steps to reproduce the behavior:
"""
import monai
import torch
print("I am using monai ", monai.version)
print("I am using torch ", torch.version)
transform = monai.transforms.Compose([monai.transforms.RandImageFilterd(keys="image", kernel="mean",kernel_size=3, prob=1)])
inpt = torch.randint(low=0, high=255, size=(224, 224))
outpt = transform({"image": inpt})
print(inpt.max())
print(inpt.min())
print(outpt["image"].max())
print(outpt["image"].min())
"""
Expected behavior
I would expect the max value of the transformed image to be lower than or equal to the max value of the input, since we are applying a mean filter to smooth the image. Should the output of the filter be divided by kernel_size**2 for an image and kernel_size**3 for a volume?
I observe the same problem with other filters. Is the user expected to add a lambda transform after RandImageFilterd to scale the image back properly?
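The effect described above can be reproduced without MONAI: an all-ones "mean" kernel that is not divided by the number of taps simply sums each neighborhood, so the output range grows instead of shrinking. This is a minimal NumPy sketch (the `box_filter_sum` helper is hypothetical, written here only to mimic an unnormalized mean kernel; it is not MONAI's implementation):

```python
import numpy as np

def box_filter_sum(img, k):
    # Unnormalized box filter: sums each k x k neighborhood (zero-padded),
    # mimicking what an all-ones "mean" kernel produces without scaling.
    pad = k // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(32, 32)).astype(float)

summed = box_filter_sum(img, 3)       # unnormalized: range grows
mean = summed / 3**2                  # divide by kernel_size**2 to restore scale

print(summed.max() > img.max())       # unnormalized output exceeds the input range
print(mean.max() <= img.max())        # normalized output stays within it
```

If the transform itself is not changed, the same division could be appended in the pipeline (e.g. with a lambda-style transform dividing by kernel_size**2 for 2D inputs, or kernel_size**3 for 3D).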
Screenshots
Environment
Printing MONAI config...
MONAI version: 1.2.0
Numpy version: 1.25.1
Pytorch version: 2.0.1+cpu
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: c33f1ba
MONAI file: /home/sebquet/.local/lib/python3.10/site-packages/monai/__init__.py
Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
ITK version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: NOT INSTALLED or UNKNOWN VERSION.
scikit-image version: 0.21.0
Pillow version: 9.0.1
Tensorboard version: 2.13.0
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: 0.15.2+cpu
tqdm version: 4.65.0
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: NOT INSTALLED or UNKNOWN VERSION.
pandas version: 2.0.3
einops version: NOT INSTALLED or UNKNOWN VERSION.
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: 1.0.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
================================
Printing system config...
`psutil` required for `print_system_info`
================================
Printing GPU config...
Num GPUs: 0
Has CUDA: False
cuDNN enabled: False