
DGCNN implementation #18

Open
EryiXie opened this issue Nov 16, 2022 · 0 comments
EryiXie commented Nov 16, 2022

Hi, I compared your implementation with the original DGCNN and found that in yours, "get_graph_feature" is only called before the first convolution layer, rather than before every convolution layer as in the original.

I am wondering whether there is a reason behind this difference.

RGM/models/dgcnn.py

Lines 154 to 179 in b0e2f74

def forward(self, xyz):
    xyz = xyz.permute(0, 2, 1).contiguous()  # (B, 3, N)
    batch_size, num_dims, num_points = xyz.size()
    x = get_graph_feature(xyz, self.features, self.neighboursnum)  # (B, C, N, n)
    x = F.relu(self.bn1(self.conv1(x)))
    x1 = x.max(dim=-1, keepdim=True)[0]
    x = F.relu(self.bn2(self.conv2(x)))
    x2 = x.max(dim=-1, keepdim=True)[0]
    x = F.relu(self.bn3(self.conv3(x)))
    x3 = x.max(dim=-1, keepdim=True)[0]
    x = F.relu(self.bn4(self.conv4(x)))
    x4 = x.max(dim=-1, keepdim=True)[0]
    x = torch.cat((x1, x2, x3, x4), dim=1)
    x_node = x.squeeze(-1)
    x_edge = F.relu(self.bn5(self.conv5(x))).view(batch_size, -1, num_points)
    # if torch.sum(torch.isnan(x_edge)):
    #     print('discover nan value')
    return x_node, x_edge
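For comparison, the original DGCNN rebuilds the k-NN graph feature before every EdgeConv layer, so later layers take neighbourhoods in the learned feature space rather than only in the input coordinate space. A minimal self-contained sketch of that pattern (layer names, channel widths, and `k` are illustrative, not taken from this repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn(x, k):
    """k nearest neighbours by Euclidean distance; x: (B, C, N) -> (B, N, k) indices."""
    inner = -2 * torch.matmul(x.transpose(2, 1), x)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)
    pairwise = -xx - inner - xx.transpose(2, 1)  # negative squared distances
    return pairwise.topk(k=k, dim=-1)[1]


def get_graph_feature(x, k):
    """Edge features [x_j - x_i, x_i]; x: (B, C, N) -> (B, 2C, N, k)."""
    B, C, N = x.size()
    idx = knn(x, k) + torch.arange(B, device=x.device).view(-1, 1, 1) * N
    x_t = x.transpose(2, 1).contiguous()                     # (B, N, C)
    neigh = x_t.view(B * N, C)[idx.view(-1)].view(B, N, k, C)
    centre = x_t.unsqueeze(2).expand(-1, -1, k, -1)
    return torch.cat((neigh - centre, centre), dim=3).permute(0, 3, 1, 2).contiguous()


class DGCNNSketch(nn.Module):
    """Illustrative DGCNN backbone: the graph feature is recomputed before EVERY conv."""

    def __init__(self, k=4):
        super().__init__()
        self.k = k
        self.conv1 = nn.Conv2d(6, 64, 1)     # 2 * 3 input channels
        self.conv2 = nn.Conv2d(128, 64, 1)   # 2 * 64
        self.conv3 = nn.Conv2d(128, 128, 1)  # 2 * 64
        self.conv4 = nn.Conv2d(256, 256, 1)  # 2 * 128

    def forward(self, xyz):                      # xyz: (B, 3, N)
        x = get_graph_feature(xyz, self.k)       # graph in input space
        x1 = F.relu(self.conv1(x)).max(dim=-1)[0]
        x = get_graph_feature(x1, self.k)        # graph recomputed in feature space
        x2 = F.relu(self.conv2(x)).max(dim=-1)[0]
        x = get_graph_feature(x2, self.k)
        x3 = F.relu(self.conv3(x)).max(dim=-1)[0]
        x = get_graph_feature(x3, self.k)
        x4 = F.relu(self.conv4(x)).max(dim=-1)[0]
        return torch.cat((x1, x2, x3, x4), dim=1)  # (B, 64+64+128+256, N)
```

The snippet quoted above instead computes the graph feature once from `xyz` and chains the four convolutions on that single tensor, so the neighbourhood graph is fixed by the input coordinates throughout.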
