matmul issue while performing caliters_perm #15

Open
JKViswanadham14 opened this issue Jun 13, 2022 · 4 comments

@JKViswanadham14

RuntimeError Traceback (most recent call last)
/tmp/ipykernel_3678/3876408280.py in <module>
3 P1_gt_copy_inv = P1_gt_copy.clone()
4 P2_gt_copy_inv = P2_gt_copy.clone()
----> 5 s_perm_mat = caliters_perm(model.float(), P1_gt_copy.float(), P2_gt_copy.float(), torch.from_numpy(A1_gt), torch.from_numpy(A2_gt), n1_gt, n2_gt, estimate_iters)
6 """if cfg.EVAL.CYCLE:
7 s_perm_mat_inv = caliters_perm(model, P2_gt_copy_inv, P1_gt_copy_inv, A2_gt, A1_gt, n2_gt, n1_gt, estimate_iters)

/tmp/ipykernel_3678/871858585.py in caliters_perm(model, P1_gt_copy, P2_gt_copy, A1_gt, A2_gt, n1_gt, n2_gt, estimate_iters)
228 for estimate_iter in range(estimate_iters):
229 s_prem_i, Inlier_src_pre, Inlier_ref_pre = model(P1_gt_copy, P2_gt_copy,
--> 230 A1_gt, A2_gt, n1_gt, n2_gt)
231 if cfg.PGM.USEINLIERRATE:
232 s_prem_i = Inlier_src_pre * s_prem_i * Inlier_ref_pre.transpose(2, 1).contiguous()

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

/tmp/ipykernel_3678/871858585.py in forward(self, P_src, P_tgt, A_src, A_tgt, ns_src, ns_tgt)
101 emb_src, emb_tgt = gnn_layer([A_src1, emb_src], [A_tgt1, emb_tgt])
102 else:
--> 103 emb_src, emb_tgt = gnn_layer([A_src, emb_src], [A_tgt, emb_tgt])
104 affinity = getattr(self, 'affinity_{}'.format(i))
105 # emb_src_norm = torch.norm(emb_src, p=2, dim=2, keepdim=True).detach()

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

/home/ubuntu/PointCloudRegistration/AIModels/RGM/models/gconv.py in forward(self, g1, g2)
34
35 def forward(self, g1, g2):
---> 36 emb1 = self.gconv(*g1)
37 emb2 = self.gconv(*g2)
38 # embx are tensors of size (bs, N, num_features)

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

/home/ubuntu/PointCloudRegistration/AIModels/RGM/models/gconv.py in forward(self, A, x, norm)
19 A = F.normalize(A, p=1, dim=-2)
20 print(x.shape)
---> 21 ax = self.a_fc(x)
22 ux = self.u_fc(x)
23 x = torch.bmm(A, F.relu(ax)) + F.relu(ux) # has size (bs, N, num_outputs)

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
92
93 def forward(self, input: Tensor) -> Tensor:
---> 94 return F.linear(input, self.weight, self.bias)
95
96 def extra_repr(self) -> str:

/opt/conda/envs/fcgf/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1751 if has_torch_function_variadic(input, weight):
1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1753 return torch._C._nn.linear(input, weight, bias)
1754
1755

RuntimeError: mat1 and mat2 shapes cannot be multiplied (76x1024 and 640x1024)
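
For reference, the failure in the last frame can be reproduced in isolation with the shapes taken from the error message; the layer below is hypothetical, only its dimensions are inferred from the traceback:

```python
import torch
import torch.nn as nn

# Minimal reproduction with the shapes from the error message above.
# The layer here is hypothetical: it assumes in_features=640 while the
# embeddings reaching it have 1024 channels.
x = torch.randn(1, 76, 1024)   # (bs, N, num_features), as in gconv.py
a_fc = nn.Linear(640, 1024)    # mismatched: expects 640 input features

try:
    a_fc(x)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (76x1024 and 640x1024)

# The call succeeds once in_features matches the last dimension of x:
a_fc_ok = nn.Linear(1024, 512)
print(a_fc_ok(x).shape)        # torch.Size([1, 76, 512])
```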

@Sabershou

I have met the same problem. Have you solved it yet?

@JyothiKalyan

not yet.

@Sabershou

Changing FEATURE_NODE_CHANNEL and FEATURE_EDGE_CHANNEL may help. The error is likely caused by nn.Linear(in_features, out_features): the previous layer's out_features needs to equal the next layer's in_features.
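
To illustrate the constraint (a generic sketch; the channel values and the way FEATURE_NODE_CHANNEL / FEATURE_EDGE_CHANNEL feed the first layer are assumptions, not the repo's actual wiring):

```python
import torch
import torch.nn as nn

# Generic sketch of the chaining rule for stacked linear layers: each layer's
# in_features must equal the previous layer's out_features (or, for the first
# layer, the channel count of the incoming features).
feature_node_channel = 512   # stand-in for FEATURE_NODE_CHANNEL (assumed value)
feature_edge_channel = 512   # stand-in for FEATURE_EDGE_CHANNEL (assumed value)
in_channels = feature_node_channel + feature_edge_channel  # assumed concatenation

layers = nn.Sequential(
    nn.Linear(in_channels, 1024),  # first in_features must match incoming channels
    nn.ReLU(),
    nn.Linear(1024, 512),          # 1024 here must equal the previous out_features
)

x = torch.randn(1, 76, in_channels)
print(layers(x).shape)             # torch.Size([1, 76, 512])
```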

@fukexue
Owner

fukexue commented Nov 9, 2022

If you haven't modified the source code, you can see that self.a_fc is (input channel = 1024, output channel = 512) at line 21 of gconv.py.
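
A quick way to confirm this is to compare the feature dimension reaching self.a_fc with the dimension the layer was built for (a debugging sketch; the attribute names follow the traceback, the helper itself is hypothetical):

```python
# Debugging sketch: compare the embedding size fed into Gconv with what
# self.a_fc was constructed for. Attribute names follow the traceback
# (gconv.py); the helper function itself is not part of the repo.
def check_gconv_input(gconv, x):
    a_fc = gconv.a_fc  # nn.Linear inside the Gconv module
    print("x features:  ", x.shape[-1])
    print("a_fc expects:", a_fc.in_features, "->", a_fc.out_features)
    if x.shape[-1] != a_fc.in_features:
        print("Mismatch: the embedding size reaching Gconv must equal "
              "a_fc.in_features; adjust the upstream channel settings "
              "(e.g. FEATURE_NODE_CHANNEL / FEATURE_EDGE_CHANNEL).")
```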
