
Attention maps issue #25

Open
421psh opened this issue Feb 6, 2020 · 12 comments
@421psh

421psh commented Feb 6, 2020

Hello. I am trying to run your code and ran into weird behavior of the attention maps (predMaskC and predMaskN). I am using your pretrained model. I also tried torchvision 0.2.0 and torch 0.4.0, but it didn't help. Do you have any suggestions?
[image]

@JiaxiongQ
Owner

JiaxiongQ commented Feb 7, 2020

Is your Python version 2.7?

@421psh
Author

421psh commented Feb 7, 2020

Is your Python version 2.7?

I have tried both Python versions (2.7 and 3.6), with the same result.

My code for inference with your model:

import numpy as np
import skimage.io
import torch
import torch.nn as nn
import torch.nn.functional as F
from PIL import Image

# Load the RGB image and sparse LiDAR depth (KITTI stores depth * 256 as uint16).
img_orig = skimage.io.imread('rgb_kitti.png')
lidar = skimage.io.imread('lidar_kitti.png').astype(np.float32)
sparse = lidar * 1.0 / 256.0
binary_mask = np.where(sparse > 0.0, 1.0, 0.0)
# HWC -> CHW, then add a batch dimension.
img = img_orig.transpose(2, 0, 1)
left = torch.FloatTensor(img.reshape(1, img.shape[0], img.shape[1], img.shape[2])).cuda()
sparse = torch.FloatTensor(sparse.reshape(1, 1, sparse.shape[0], sparse.shape[1])).cuda()
mask = torch.FloatTensor(binary_mask.reshape(1, 1, binary_mask.shape[0], binary_mask.shape[1])).cuda()

# s2dN is the network class defined in this repository.
model = s2dN(1)
model = nn.DataParallel(model, device_ids=[0])
model.cuda()
state_dict = torch.load('depth_completion_KITTI.tar')['state_dict']
model.load_state_dict(state_dict)

model.eval()

with torch.no_grad():
    outC, outN, maskC, maskN = model(left, sparse, mask)

# Fuse the two pathway predictions with softmax attention weights.
tempMask = torch.zeros_like(outC)
predC = outC[:, 0, :, :]
predN = outN[:, 0, :, :]
tempMask[:, 0, :, :] = maskC
tempMask[:, 1, :, :] = maskN
predMask = F.softmax(tempMask, dim=1)  # softmax over the two pathway channels
predMaskC = predMask[:, 0, :, :]
predMaskN = predMask[:, 1, :, :]
pred1 = predC * predMaskC + predN * predMaskN
pred = torch.squeeze(pred1)
pred = pred.data.cpu().numpy()
pred = np.where(pred <= 0.0, 0.9, pred)
# Save as a 16-bit PNG (depth * 256, KITTI convention).
pred_show = pred * 256.0
pred_show = pred_show.astype('uint16')
res_buffer = pred_show.tobytes()
img_pred = Image.new("I", pred_show.T.shape)
img_pred.frombytes(res_buffer, 'raw', "I;16")
img_pred.save('img_pred.png')
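A quick way to check whether the attention maps are genuinely degenerate (rather than just displayed wrongly) is to look at their channel means after the softmax. A minimal sketch with toy stand-ins for the maskC/maskN logits; the `softmax2` helper is hypothetical and written in NumPy so it runs without a GPU:

```python
import numpy as np

def softmax2(a, b):
    # Numerically stable two-way softmax over the pathway logits.
    m = np.maximum(a, b)
    ea, eb = np.exp(a - m), np.exp(b - m)
    s = ea + eb
    return ea / s, eb / s

# Toy logits standing in for maskC (color) and maskN (normal).
maskC = np.full((4, 4), -5.0)   # strongly negative logit
maskN = np.full((4, 4), 5.0)    # strongly positive logit

wC, wN = softmax2(maskC, maskN)
# A mean near 0 for wC and near 1 for wN means the color pathway is
# effectively ignored, regardless of how the maps are visualized.
print(wC.mean(), wN.mean())
```

If the real maskC logits are far below the maskN logits everywhere, the softmax saturates like this and the fused prediction comes entirely from the normal pathway.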

@421psh
Author

421psh commented Feb 7, 2020

I have also tried different images from the KITTI dataset, and predMaskC is always practically empty. As a result, the color-pathway dense depth has no impact on the final prediction.

@JiaxiongQ
Owner

The pixel values in the mask are in the range 0 to 1; you should multiply by 255 for visualization.

@421psh
Author

421psh commented Feb 7, 2020

The pixel values in the mask are in the range 0 to 1; you should multiply by 255 for visualization.

The problem is not in visualization; matplotlib can show images both in the 0 to 255 range and in the 0 to 1 range. The main issue is that the color-pathway mask consists almost entirely of zeros, while the normal-pathway mask consists of ones.
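For reference, the multiply-by-255 step only matters for savers or viewers that expect 8-bit data; matplotlib's imshow handles [0, 1] floats directly. A minimal sketch, where `mask01` is a hypothetical stand-in for predMaskC:

```python
import numpy as np

# Hypothetical [0, 1] attention map standing in for predMaskC.
mask01 = np.linspace(0.0, 1.0, 16, dtype=np.float32).reshape(4, 4)

# Scale to uint8 only when an 8-bit image is required; clip first so
# rounding noise cannot push values outside [0, 255].
mask_u8 = np.clip(mask01 * 255.0, 0, 255).round().astype(np.uint8)
print(mask_u8.min(), mask_u8.max())  # 0 255
```

Either way, a mask that is almost all zeros stays almost all zeros after scaling, which supports the point that this is a model-output issue rather than a display one.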

@421psh
Author

421psh commented Feb 10, 2020

Has anyone else faced this issue?

@JiaxiongQ
Owner

JiaxiongQ commented Feb 11, 2020 via email

@421psh
Author

421psh commented Feb 11, 2020

@JiaxiongQ Thank you for your responsiveness. Finally, after careful preprocessing of the input data and following all the requirements, I managed to get the right depth map. The result is great, but I am wondering: are you going to adapt the code to Python 3 and up-to-date versions of torch and torchvision?
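For anyone else hitting this, the preprocessing that matters comes down to the KITTI sparse-depth convention: the depth PNGs store depth × 256 as uint16, with 0 marking pixels that have no LiDAR reading. A minimal sketch (the array values are made up for illustration):

```python
import numpy as np

# KITTI depth PNGs store depth * 256 as uint16; 0 means "no reading".
raw = np.zeros((4, 6), dtype=np.uint16)
raw[1, 2] = 5120   # hypothetical LiDAR hit at 20.0 m
raw[3, 4] = 2560   # hypothetical LiDAR hit at 10.0 m

sparse = raw.astype(np.float32) / 256.0       # depth in metres
valid = np.where(sparse > 0.0, 1.0, 0.0)      # binary validity mask
print(sparse.max(), valid.sum())  # 20.0 2.0
```

Skipping the /256 scaling or feeding the raw uint16 values to the network is an easy way to get masks and predictions that look nothing like the paper's.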

@JiaxiongQ
Owner

JiaxiongQ commented Feb 11, 2020 via email

@hello7623

Hello. I am trying to run your code and ran into weird behavior of the attention maps (predMaskC and predMaskN). I am using your pretrained model. I also tried torchvision 0.2.0 and torch 0.4.0, but it didn't help. Do you have any suggestions?

Why is my result like this?
[image]

@JiaxiongQ
Owner

JiaxiongQ commented May 14, 2020 via email

@hello7623

The version of Python should be 2.7.

I have used Python 2.7 (it took a long time to configure the environment), but it still looks like this:
[image]
