How to interpret human36m-multiview-labels-GTbboxes.npy? #70
Comments
Hi, sorry for the late answer. Does this help?
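(For later readers: the answer above presumably points to an explanation elsewhere. If you just want to poke at the file locally, here is a minimal inspection sketch. It assumes the .npy was written with np.save from a plain Python object, which is a common pattern for label files; the path and the exact keys you will see are not guaranteed by anything in this thread.)

```python
import numpy as np

# Hypothetical path: point this at your copy of the labels file.
labels_path = "human36m-multiview-labels-GTbboxes.npy"

raw = np.load(labels_path, allow_pickle=True)

# If the file stores a pickled Python object (e.g. a dict), np.load returns a
# 0-d object array and .item() unwraps it; otherwise keep the array as-is.
labels = raw.item() if raw.shape == () and raw.dtype == object else raw

if isinstance(labels, dict):
    # Print each top-level key with its shape and, for structured arrays,
    # its field names, to see what metadata is stored.
    for key, value in labels.items():
        if isinstance(value, np.ndarray):
            print(key, value.shape, value.dtype.names)
        else:
            print(key, type(value).__name__)
else:
    print(type(labels).__name__, getattr(labels, "dtype", None), getattr(labels, "shape", None))
```

Once you can see the top-level keys and the field names of any structured array inside, the mapping from frames to per-camera bounding boxes and keypoints is usually straightforward to read off.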
Yes, it does help! Thanks a lot! How do you decide on training and testing images?
You're welcome. See here.
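(The linked page has the repo's actual split. As a general point of reference, most Human3.6M work splits by subject rather than by individual images, along the lines of the sketch below; whether this repo follows exactly this protocol is not established by this thread.)

```python
# Common Human3.6M protocol: split by subject, not by individual frames.
# Whether the repo uses exactly these subjects should be checked at the link above.
H36M_SUBJECT_SPLIT = {
    "train": ["S1", "S5", "S6", "S7", "S8"],
    "test": ["S9", "S11"],
}
```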
Oops, closed accidentally.
Alrighty! Thanks for the help. I think I've got everything I need to get your awesome work running. Again, thanks for the good work and the help! You guys are really talented and generous to help other researchers and engineers.
Cheers!
I want to use my own bounding boxes for training. From the way you generate bounding boxes in generate-labels-npy-multiview.py, it seems like you load your own bounding boxes from a .json file. Would you mind elaborating on the structure of that .json file, so I can save my own bounding boxes and use the same script? More specifically, how does the .json map each bounding box to its subject, action, frame_idx, and camera?
Thanks in advance!
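(For later readers, here is a sketch of one way such a bounding-box .json could be organised, nested subject → action → camera → frame index. Every name, the camera ID, and the TLBR ordering below are assumptions for illustration only; the keys and box convention that generate-labels-npy-multiview.py actually expects should be read off the script itself.)

```python
import json

# Sketch of one possible layout for a bounding-box .json, indexed as
# subject -> action -> camera -> frame index -> [top, left, bottom, right].
# All names, the camera ID, and the TLBR ordering are illustrative assumptions.
bboxes = {
    "S1": {
        "Directions-1": {
            "54138969": {
                "0": [120, 230, 620, 480],   # frame 0, pixel coordinates
                "1": [121, 231, 621, 481],
            },
        },
    },
}

with open("my-bboxes.json", "w") as f:
    json.dump(bboxes, f, indent=2)
```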