Market2Duke results #2
Comments
Sorry, I don't know the reason. Can you reproduce the performance using the provided models?

The num-split in my settings is two. I used source_train.py to get the pre-trained model; it works for Duke2Market but not for Market2Duke. I will try the provided models. Thank you anyway.
@jh97321 I also have the same problem as you. In addition, my D2M result also drops. Have you solved your problem? @OasisYang Any suggestions about this? Thanks!
If you cannot load the pretrained model, this link may be helpful.

OK, I will try this. Thanks! One more question: why do we need both the source and target sample distances to compute self-labels for the target samples with DBSCAN? I read the original DBSCAN paper and the sklearn API, and found that the input to this clustering algorithm is a feature matrix or a distance matrix. I am confused about this. Any suggestions? Thanks!
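For reference, sklearn's DBSCAN does accept a precomputed pairwise distance matrix directly via `metric='precomputed'`. A minimal sketch of clustering target samples from a distance matrix (the features here are random placeholders, and the percentile-based `eps` heuristic is an assumption for illustration, not this repo's exact procedure):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Placeholder target features (N x D); in practice these would come from the CNN.
rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 16)).astype(np.float32)

# Pairwise Euclidean distance matrix between target samples.
dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)

# DBSCAN can take the square distance matrix directly with metric='precomputed'.
# eps is often estimated from the distance distribution itself (heuristic here).
eps = np.percentile(dist[dist > 0], 1.0)
labels = DBSCAN(eps=eps, min_samples=4, metric='precomputed').fit_predict(dist)

print(labels.shape)  # one pseudo-label per target sample; -1 marks noise
```

Including source-domain distances when estimating `eps` would change only the threshold estimate, not the clustering interface, so the clustering input remains a distance matrix either way.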
I encountered the same problems.

@Alan-Paul, I got the same result as you. I also tried re-training on the DukeMTMC dataset using source_train.py, with the same result. The performance drop also exists for Duke->Market.
@geyutang @Alan-Paul Here are some suggestions. First, check whether the performance of our provided model matches what is reported in the paper. Also, check the performance of the pretrained model, which should be mAP 26 / R1 54 when transferring from Duke to Market (Market2Duke: 16/30). I conducted all experiments with pytorch=0.4, torchvision=0.2 and scikit-learn=0.19.1. I hope these suggestions help.
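To rule out version mismatches, pinning the environment mentioned above might look like this (PyPI package names assumed; old PyTorch wheels may need to be fetched from the legacy download channel rather than plain PyPI):

```shell
# Pin the versions the author reports using (exact builds/CUDA variants may differ).
pip install "torch==0.4.0" "torchvision==0.2.0" "scikit-learn==0.19.1"
```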
I trained the model again with PyTorch 0.4.1, and got an adaptation result from Market to Duke of 53.3/72.4 (mAP/R1), which is almost the same as the results reported in the paper.
Hi, I will try it, but it may take some time, since most of our computation resources are being used for another ongoing project.
I ran the code for Market2Duke and Duke2Market. The Duke2Market result matches the reported numbers, while the Market2Duke result shows a drop in performance. The results are shown below. (I ran on Ubuntu 16.04 LTS with PyTorch 0.4.0 and Python 3.6.)
| SSG method | rank-1 | mAP |
| --- | --- | --- |
| reported | 73.0% | 53.4% |
| observed | 70.2% | 49.8% |

| SSG++ method | rank-1 | mAP |
| --- | --- | --- |
| reported | 76.0% | 60.3% |
| observed | 72.7% | 53.7% |
No changes were made to the training code. Can you please give me some advice on what the reason might be? Thank you.