Question about yolov3 tiny occlusion track? #2553
Comments
@derekwong66 Hi,
It is just an experimental detector (it isn't well tested yet). You should train it on sequential frames from one or several videos.
The only condition: the frames from each video must go sequentially in the training list.
Idea in simple words:
Example of a state-of-the-art super-resolution network: https://arxiv.org/abs/1704.02738. Example of object detection and tracking:
Your training process can look like this:
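Since the model expects consecutive frames, the training list must preserve frame order. Here is a minimal sketch of building such a list, assuming frames have been extracted to a directory with sortable names like `frame_0001.jpg` (the directory layout and naming are assumptions, not part of the original thread):

```python
import os

def build_train_list(frames_dir, out_path="train.txt"):
    """Write frame paths to a training list in sequential order,
    as the occlusion-track model expects consecutive video frames.
    Assumes zero-padded frame names so lexicographic sort == time order."""
    frames = sorted(
        f for f in os.listdir(frames_dir)
        if f.endswith((".jpg", ".png"))
    )
    with open(out_path, "w") as fh:
        for name in frames:
            fh.write(os.path.join(frames_dir, name) + "\n")
    return frames
```

For several videos, you would append each video's frames as one contiguous block rather than interleaving them.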
Dear @AlexeyAB, I also have a problem with track_id when using YOLOv3 for counting objects in video. I used yolov3.cfg for my own dataset.
@diennv Hi,
Yes, you can. Also, later I will add a conv-LSTM layer, which can work with higher accuracy.
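For the counting use case mentioned above, once the tracker emits a track_id per detection, counting distinct objects reduces to counting distinct IDs per class. A minimal sketch, assuming the per-frame output has already been parsed into `(frame_idx, track_id, class_id)` tuples (the tuple format is an assumption for illustration, not darknet's actual output format):

```python
from collections import defaultdict

def count_unique_tracks(detections):
    """detections: iterable of (frame_idx, track_id, class_id) tuples.
    Returns the number of distinct track IDs seen, keyed by class,
    i.e. how many separate objects of each class appeared in the video."""
    seen = defaultdict(set)
    for _frame, track_id, class_id in detections:
        seen[class_id].add(track_id)
    return {cls: len(ids) for cls, ids in seen.items()}
```

The same track_id appearing in many frames is counted once, which is exactly why a stable track_id matters for counting.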
@AlexeyAB
@zeynali Hi, a GTX 1080 is enough for training. yolov3-tiny_occlusion_track.cfg isn't well tested yet and will be modified to use conv-LSTM instead of conv-RNN later, so I can't compare it with other algorithms.
Hi, for the 3rd step, which is to train the detector and tracker, where do I get data/occlusion.data? Thank you.
Hi @AlexeyAB, can you explain more about the rules for labeling bboxes for occlusion tracking? Thank you.
@jacklin602 Hi,
Yes, you should mark totally invisible (occluded) objects in the training and validation datasets.
You can mark them however you want them to be detected. Usually I mark the estimated total extent of the object. Also, in several days I will commit a new version of
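The advice above is to label the estimated full extent of an object even when part of it is occluded. As a reminder of what one label looks like, here is a minimal sketch of formatting a YOLO-style annotation line; the helper name is hypothetical, but the line format (class id plus four relative coordinates) is the standard darknet label format:

```python
def yolo_label_line(class_id, x_center, y_center, width, height):
    """Format one YOLO-style label line. All coordinates are relative
    to the image size, in [0, 1]. For occluded objects, use the
    estimated full extent of the object, even if part of it is
    invisible in this frame."""
    for v in (x_center, y_center, width, height):
        if not 0.0 <= v <= 1.0:
            raise ValueError("YOLO labels use relative coordinates in [0, 1]")
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"
```

So a box whose true extent runs partly behind an occluder still gets its full estimated width and height, not just the visible part.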
@AlexeyAB Hi, I'm also very much interested in your LSTM implementation, to compare it with the Deep SORT tracker.
@AlexeyAB I'm also very much interested in your LSTM implementation |
@alexanderfrey @Tangotrax @diennv @derekwong66 @PythonImageDeveloper @kamarulhairi You can try to use LSTM models with the latest version of Darknet: #3114 (comment). For example, this model: https://github.com/AlexeyAB/darknet/files/3199631/yolo_v3_tiny_pan_lstm.cfg.txt. How to train: #3114 (comment)
@AlexeyAB Thanks, I appreciate your work very much! I am very excited to see how it performs. Any recommendations on how many sequences it should be trained with?
@alexanderfrey There are no exact recommendations yet. |
@AlexeyAB Unfortunately the avg. loss becomes NaN after roughly 1200 iterations. I use the yolov3-tiny.conv.14 weights, and I have already set state_constrain=16 for each [conv_lstm] layer, sequential_subdivisions=8, and sgdr_cycle=10000. I train on 3 classes and adjusted the filters accordingly to 24 (the ones before the yolo layers). Anchors are default, and I set batch and subdivision to 1. What else can I do to make the training run through? Thanks for any help.
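The "filters accordingly to 24" figure above comes from the usual darknet rule filters = (classes + 5) * masks_per_yolo_layer, i.e. (3 + 5) * 3 = 24. Here is a minimal sketch that applies that rule to a cfg, patching `classes=` in each `[yolo]` section and the preceding `filters=`; the helper is hypothetical and only handles the simple flat cfg layout shown in the test, not every cfg variant:

```python
def patch_yolo_cfg(cfg_text, num_classes, masks_per_yolo_layer=3):
    """For each [yolo] section: set classes=num_classes inside it and
    set filters= in the nearest preceding [convolutional] section,
    using filters = (num_classes + 5) * masks_per_yolo_layer."""
    filters = (num_classes + 5) * masks_per_yolo_layer
    lines = cfg_text.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == "[yolo]":
            # fix classes= inside this [yolo] section
            for j in range(i + 1, len(lines)):
                if lines[j].strip().startswith("["):
                    break
                if lines[j].strip().startswith("classes"):
                    lines[j] = f"classes={num_classes}"
            # fix the last filters= before this [yolo] section
            for j in range(i - 1, -1, -1):
                if lines[j].strip().startswith("filters"):
                    lines[j] = f"filters={filters}"
                    break
    return "\n".join(lines)
```

With 3 classes and the default 3 masks per yolo layer this yields filters=24, matching the value used in the comment above.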
@alexanderfrey Hi,
Did you set
you can increase
What GPU do you use? If it doesn't help, then try to set
If it doesn't help, then try to set
@AlexeyAB
Hi @AlexeyAB,
Thanks for your contribution. Can you explain more about how to train the tiny occlusion track cfg? Which pre-trained weights can we use, and which crnn?
Thanks in advance.