
Given input size: (1x9x1024). Calculated output size: (1x-1x1024). Output size is too small #1

Open
XuMengyaAmy opened this issue Jun 28, 2021 · 0 comments

Comments

@XuMengyaAmy
Copy link

Thanks for your great work. I am reproducing the results from the paper "Graph convolutional nets for tool presence detection in surgical videos".

I encounter the following error:

RuntimeError: Given input size: (1x9x1024). Calculated output size: (1x-1x1024). Output size is too small

It appears to be related to the pooling layer in the model architecture in https://github.com/uta-smile/STGCN-IPMI19/blob/main/models/video_models.py

Could you confirm the expected input shapes? I call output = model(data, adj) with data.shape = torch.Size([64, 9, 1024]) and adj.shape = torch.Size([64, 9, 9]).

Could you help me resolve this issue?
Thanks.
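For context, this class of PyTorch error usually means a pooling (or convolution) kernel is larger than the input along one dimension, so the computed output length goes negative. The sketch below is a hypothetical illustration, not the repository's actual code: it applies the standard output-size formula to the reported temporal dimension of 9 and reproduces the -1 from the error message.

```python
# Hypothetical illustration (not code from STGCN-IPMI19): PyTorch computes
# a pooling layer's output length per dimension as
#   out = floor((in + 2*padding - kernel) / stride) + 1
# and raises "Output size is too small" when this is non-positive.

def pool_output_size(in_size, kernel, stride=1, padding=0):
    """Output length of a 1-D pooling window (floor division)."""
    return (in_size + 2 * padding - kernel) // stride + 1

# With 9 time steps (the reported input is 1x9x1024), a temporal kernel
# of 11 (an assumed value for illustration) yields the -1 in the error:
print(pool_output_size(9, kernel=11))   # -1

# A kernel no larger than the temporal dimension is valid:
print(pool_output_size(9, kernel=9))    # 1
```

If the model's pooling kernel was sized for a longer clip (more frames per sample), feeding 9-frame sequences would trigger exactly this error; checking the pooling kernel size in video_models.py against the sequence length would confirm it.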
