
Market2Duke results #2

Open · jh97321 opened this issue Aug 26, 2019 · 13 comments

@jh97321 commented Aug 26, 2019

I ran the code for both Market2Duke and Duke2Market. The Duke2Market result matches the reported numbers, while the Market2Duke result shows a drop in performance. The results are shown below. (I ran on Ubuntu 16.04 LTS with PyTorch 0.4.0 and Python 3.6.)
| SSG method | rank-1 | mAP |
| --- | --- | --- |
| reported | 73.0% | 53.4% |
| observed | 70.2% | 49.8% |

| SSG++ method | rank-1 | mAP |
| --- | --- | --- |
| reported | 76.0% | 60.3% |
| observed | 72.7% | 53.7% |
No changes were made to the training code. Can you give me some advice on what the reason might be? Thank you.

@OasisYang (Collaborator)

Sorry, I don't know the reason. Can you reproduce the performance using the provided models?
You need to use the provided model as the pre-trained model and make sure num-split is two.

@jh97321 (Author) commented Aug 27, 2019

The num-split in my settings is two. I used source_train.py to get the pre-trained model; it works for Duke2Market but not for Market2Duke. I will try the provided models. Thank you anyway.

@geyutang

@jh97321 I have the same problem as you.

  • My M->D result is R-1=70.7, mAP=52.4 for SSG. Because the Market pre-trained model raised a decode error, I re-trained the source model on the Market dataset.

In addition, my D2M result also drops.

  • When using the provided pre-trained Duke model, D->M before adaptation is R-1=50.6, mAP=24.7. After adaptation by SSG, R-1 is 76.3, mAP=54.1.
  • When using the re-trained model, D->M before adaptation is R-1=50.0, mAP=24.3. After adaptation by SSG, R-1 is 70.9, mAP=47.2.

Have you solved your problem? @OasisYang Any suggestions about this? Thanks!

@OasisYang (Collaborator)

If you cannot load the pre-trained model, this link may be helpful.
Also, please make sure you train our code on two GPUs.
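For reference, a minimal sketch of the usual workaround when a checkpoint pickled under Python 2 raises a decode error in Python 3 (an assumption about the cause of the error mentioned above; the `encoding` keyword requires PyTorch >= 1.0):

```python
# Assumption: the decode error comes from a checkpoint pickled under
# Python 2. torch.load forwards `encoding` to pickle in PyTorch >= 1.0.
import torch

ckpt = torch.load('dukemtmc_trained.pth.tar',
                  map_location='cpu', encoding='latin1')
# Weights are often stored under 'state_dict' (assumption; inspect
# ckpt.keys() if this lookup fails).
state_dict = ckpt.get('state_dict', ckpt)
```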

@geyutang

OK, I will try this. Thanks!
In addition, I have another question about the DBSCAN algorithm for UDA person re-ID.

Why do we need distances involving both the source and target samples to compute self-labels for the target samples with DBSCAN? I read the original DBSCAN paper and the sklearn API, and found that the input to this clustering algorithm is a feature or distance matrix.

I am confused about this! Any suggestions? Thanks!
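For what it's worth, here is a minimal sketch of how sklearn's DBSCAN is typically driven with a precomputed target-target distance matrix in self-training re-ID pipelines; the rho-quantile eps heuristic and the min_samples value are assumptions about common practice, not necessarily how SSG uses the source distances:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_labels(dist, rho=0.0016, min_samples=4):
    """dist: (N, N) pairwise distance matrix over target samples."""
    # Choose eps as the rho-quantile of all pairwise distances, a common
    # heuristic in self-training re-ID code (assumption for SSG).
    tri = dist[np.triu_indices_from(dist, k=1)]
    eps = np.sort(tri)[int(rho * tri.size)]
    # Labels of -1 mark noise points, usually dropped from pseudo-training.
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric='precomputed').fit_predict(dist)
```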

@Alan-Paul

I encountered the same problem.
I ran selftraining.py to train a duke2market model using your pre-trained model; however, the results drop in performance. The final results are mAP: 54.0%, rank-1: 76.7%, whereas the results reported in your paper are mAP: 58.3%, rank-1: 80%. Here are my parameters. My Python environment is PyTorch 1.1.0, Python 3.6.0. Any suggestions would be appreciated!
```
arch='resnet50',
batch_size=128,
combine_trainval=False,
data_dir='./data',
dce_loss=False,
dist_metric='euclidean',
dropout=0,
epochs=70,
evaluate=False,
features=128,
gpu_devices='0,1',
height=None,
iteration=30,
lambda_value=0.1,
load_dist=False,
logs_dir='logs/duke2market',
lr=6e-05,
margin=0.5,
no_rerank=False,
num_instances=4,
num_split=2,
print_freq=20,
resume='logs/pretrained_models/dukemtmc_trained.pth.tar',
rho=0.0016,
seed=1,
split=0,
src_dataset='dukemtmc',
start_save=0,
tgt_dataset='market1501',
weight_decay=0.0005,
width=None,
workers=4
```

@geyutang

@Alan-Paul I got the same result as you. I also tried re-training on the DukeMTMC dataset with source_train.py and got the same result: the performance drop persists for duke->market.

@OasisYang (Collaborator)

@geyutang @Alan-Paul Here are some suggestions. First, check whether the performance of our provided model matches the numbers reported in the paper. Also, check the performance of the pre-trained model, which should be mAP: 26, R1: 54 when transferring from Duke to Market (Market to Duke: 16/30). I conducted all experiments with pytorch=0.4, torchvision=0.2, and scikit-learn=0.19.1. I hope these suggestions help.
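As a rough way to run that sanity check, here is a sketch of direct-transfer rank-1 evaluation on extracted features; the helper is illustrative, not the repo's API, and real re-ID evaluation additionally filters same-camera/same-ID junk matches:

```python
import numpy as np

def rank1(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery feature shares their ID."""
    dist = np.linalg.norm(
        query_feats[:, None, :] - gallery_feats[None, :, :], axis=2)
    best = gallery_ids[dist.argmin(axis=1)]
    return float((best == query_ids).mean())

# Before adaptation, Duke->Market should land near rank-1 = 0.54 (above).
```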

@geyutang

The D2M result before adaptation is right:

```
Mean AP: 26.8%
CMC Scores market1501
  top-1  54.2%
  top-5  70.5%
  top-10 76.8%
```

But the model saturates from epoch 10 onward. Below is my log of training rank-1 over iterations; it looks like overfitting. In addition, slightly modifying the learning rate does not reach the result reported in your paper. Any suggestions for solving this saturation problem?
[screenshot: training rank-1 vs. iteration]

Also, my torch version is 1.0.0, which may explain the mismatch.
Thanks for your kind reply.

@OasisYang (Collaborator)

I trained the model again with PyTorch 0.4.1 and got an adaptation result from Market to Duke of 53.3/72.4 (mAP/R1), which is almost the same as the result reported in the paper.

@yihongXU

> I trained the model again with PyTorch 0.4.1 and got an adaptation result from Market to Duke of 53.3/72.4 (mAP/R1), which is almost the same as the result reported in the paper.

Hi,
Did you try Duke->Market? It seems we have difficulty reaching 58.3/80.0 (mAP/R1); I got 52.6/75.7 (mAP/R1) instead. Thank you.

@OasisYang (Collaborator)

I will try it, but it may take some time since most of our computational resources are being used for another ongoing project.

@beiyangxiaolaodi

I ran the code for Market2Duke with PyTorch 0.4.1, but the Market2Duke result still shows a drop in performance.

| SSG method | rank-1 | mAP |
| --- | --- | --- |
| reported | 73.0% | 53.4% |
| observed | 68.7% | 49.2% |

Can you help? Thanks.
