Thanks for your great work. I am reproducing the results from the paper "Graph convolutional nets for tool presence detection in surgical videos".

I am running into the following error: `RuntimeError: Given input size: (1x9x1024). Calculated output size: (1x-1x1024). Output size is too small`

It seems to be related to the pooling layer in the model architecture in https://github.com/uta-smile/STGCN-IPMI19/blob/main/models/video_models.py
Could you confirm the expected input shape? I am calling `output = model(data, adj)` with `data.shape == torch.Size([64, 9, 1024])` and `adj.shape == torch.Size([64, 9, 9])`.
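For reference, here is a minimal sketch of how this error can arise: PyTorch raises "Output size is too small" whenever a pooling kernel is larger than the corresponding input dimension. The kernel size of 10 below is an assumption for illustration only; the actual pooling parameters are defined in `video_models.py`.

```python
import torch
import torch.nn as nn

data = torch.randn(64, 9, 1024)   # (batch, nodes, features), as in my call
x = data.unsqueeze(1)             # (64, 1, 9, 1024), shaped for 2D pooling

# Hypothetical: a temporal kernel larger than the 9-node dimension fails.
pool_too_big = nn.MaxPool2d(kernel_size=(10, 1))
try:
    pool_too_big(x)
except RuntimeError as e:
    print("RuntimeError:", e)     # "... Output size is too small"

# A kernel no larger than that dimension works:
pool_ok = nn.MaxPool2d(kernel_size=(9, 1))
out = pool_ok(x)
print(out.shape)                  # torch.Size([64, 1, 1, 1024])
```

So my guess is that the model expects more than 9 nodes along the pooled dimension, which is why I would like to confirm the intended input shape.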
Could you help me resolve this issue? Thanks.