Undeterministic results #22

Open
kaanakan opened this issue Feb 20, 2023 · 6 comments
@kaanakan

Hello,

Thank you for sharing the code. I checked the code and all of the seeds are set.
I further added `torch.backends.cudnn.deterministic = True` and `torch.backends.cudnn.benchmark = False` so that the code would produce the same results across runs. However, the results still differ between runs.
Do you have any idea why?

Thanks in advance.
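For reference, a minimal PyTorch reproducibility setup along the lines described above might look like the sketch below; the helper name and seed value are illustrative assumptions, not taken from the TadTR repository. Even with these flags, some CUDA ops and custom extensions have no deterministic kernels, so runs can still differ slightly.

```python
# Minimal sketch of the reproducibility settings discussed above.
# The function name and default seed are illustrative assumptions,
# not part of the TadTR code base.
import random

import numpy as np
import torch


def set_deterministic(seed: int = 42) -> None:
    """Seed all RNGs and ask cuDNN for deterministic behaviour."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Some operations (e.g. atomics-based scatter ops or custom CUDA
    # extensions such as RoI Align) have no deterministic kernels, so
    # identical seeds may still yield slightly different results.
```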

@xlliu7
Owner

xlliu7 commented Feb 21, 2023

Please refer to the README. I explained it in the last paragraph of Section 3 (Training by Yourself).

@kaanakan
Author

Thank you for the response. I trained the model from scratch over ten different runs and could not reproduce the results reported in the repository. The results I got are below:

| TadTR | mAP@0.3 | mAP@0.4 | mAP@0.5 | mAP@0.6 | mAP@0.7 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| run1 | 73.13 | 67.59 | 59.11 | 45.78 | 30.63 | 55.248 |
| run2 | 74.03 | 67.70 | 57.88 | 45.29 | 30.77 | 55.134 |
| run3 | 74.49 | 68.18 | 58.59 | 45.04 | 30.30 | 55.32 |
| run4 | 73.18 | 67.70 | 58.67 | 46.37 | 31.10 | 55.404 |
| run5 | 72.37 | 67.10 | 58.43 | 46.35 | 30.53 | 54.956 |
| run6 | 74.85 | 67.98 | 60.11 | 46.55 | 31.78 | 56.254 |
| run7 | 73.41 | 67.45 | 58.70 | 45.33 | 30.36 | 55.05 |
| run8 | 73.53 | 67.84 | 58.47 | 44.97 | 30.24 | 55.01 |
| run9 | 73.95 | 68.30 | 60.64 | 46.67 | 30.06 | 55.924 |
| run10 | 74.80 | 69.29 | 59.62 | 45.28 | 31.22 | 56.042 |
| average | 73.774 | 67.913 | 59.022 | 45.763 | 30.699 | 55.4342 |

Can you please clarify why the results differ so much? I did not change anything except setting `disable_cuda=True`, because several errors occurred while compiling the CUDA code.
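As a side note for readers of this thread, the run-to-run spread in the table above can be summarised quickly with NumPy; the snippet below simply recomputes the mean and standard deviation from the ten reported runs and is not part of the repository.

```python
# Summarise the ten runs reported in the table above (illustrative only).
import numpy as np

# mAP at tIoU 0.3, 0.4, 0.5, 0.6, 0.7 for each run.
runs = np.array([
    [73.13, 67.59, 59.11, 45.78, 30.63],
    [74.03, 67.70, 57.88, 45.29, 30.77],
    [74.49, 68.18, 58.59, 45.04, 30.30],
    [73.18, 67.70, 58.67, 46.37, 31.10],
    [72.37, 67.10, 58.43, 46.35, 30.53],
    [74.85, 67.98, 60.11, 46.55, 31.78],
    [73.41, 67.45, 58.70, 45.33, 30.36],
    [73.53, 67.84, 58.47, 44.97, 30.24],
    [73.95, 68.30, 60.64, 46.67, 30.06],
    [74.80, 69.29, 59.62, 45.28, 31.22],
])

avg_map = runs.mean(axis=1)  # average mAP per run
print("avg mAP per run:", np.round(avg_map, 3))
print("mean ± std of avg mAP: %.3f ± %.3f" % (avg_map.mean(), avg_map.std()))
print("per-threshold std:", np.round(runs.std(axis=0), 3))
```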

@xlliu7
Owner

xlliu7 commented Feb 22, 2023

First, the results fluctuate a lot because THUMOS14 is a small dataset. Second, `disable_cuda=True` also disables the actionness regression module, which leads to a performance drop. Last, there was a bug in the loss calculation in the previous code, which also caused lower performance; it is fixed in the latest commit.

@kaanakan
Author

Hello again,

Thank you for your response. I compiled the RoI Align module with CUDA and trained 5 different runs with the up-to-date repository.

Although there is an improvement over the previous results, they are still lower than expected.

| run | mAP@0.3 | mAP@0.4 | mAP@0.5 | mAP@0.6 | mAP@0.7 | avg |
| --- | --- | --- | --- | --- | --- | --- |
| run1 | 73.52 | 67.93 | 59.14 | 47.07 | 32.36 | 56.004 |
| run2 | 74.57 | 69.45 | 59.99 | 47.76 | 33.45 | 57.044 |
| run3 | 72.86 | 67.71 | 58.74 | 46.12 | 32.70 | 55.626 |
| run4 | 73.39 | 68.00 | 60.05 | 47.56 | 31.73 | 56.146 |
| run5 | 73.73 | 68.26 | 58.99 | 46.35 | 32.41 | 55.948 |
| average | 73.414 | 68.07 | 59.182 | 47.172 | 32.33 | 56.154 |

Any idea why this happens?

@xlliu7
Owner

xlliu7 commented Feb 23, 2023 via email

@kaanakan
Author

Thank you.
Did you report the maximum of the results you got, e.g., the best of 10 runs?
