Tested on test_split and got very low accuracy using pretrained weights #4
Comments
Oh, I will check again. Which type of classifier was it (audio, remi, or magenta)?
Thanks.
I re-checked the performance and there is no decrease on my side. I think the difference comes from the global seed; try simply adding a global seed in your script.
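For reference, a minimal sketch of what adding a global seed could look like, assuming the inference scripts are PyTorch-based (the framework choice and the function name here are assumptions, not the repository's actual code):

```python
import random

import numpy as np
import torch


def set_global_seed(seed: int = 42) -> None:
    """Seed all RNG sources that could make repeated inference runs differ."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force deterministic cuDNN kernels so GPU runs are reproducible.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Call this once at the top of inference.py / inference_batch.py.
set_global_seed(42)
```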
Thanks! I didn't set a global seed before. I added one to both inference_batch.py and inference.py, but still got weird results. Here are the dataset/split/test.csv and the csv produced by inference_batch.py.
It's very weird. Could you follow
I used inference_batch.py because I wanted to test the best weights you provided on the EMOPIA dataset.
I just want to double-check the result. It is strange that the results differ even when there are no other factors. I will check my inference code as well!
Running train_test1030.csv with best_weight, I found that the train_test.py and inference.py results were different. I think there is no problem with the best weight. I will modify the inference code to the train_test style soon.
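As a rough sketch of an evaluation loop in the train_test style, assuming a PyTorch classifier (the function signature, model, and loader here are illustrative and not the repository's actual API):

```python
import torch
from torch.utils.data import DataLoader


def evaluate(model: torch.nn.Module, loader: DataLoader, device: str = "cpu") -> float:
    """Compute classification accuracy the same way for both scripts."""
    model.eval()              # disable dropout / batch-norm updates
    correct, total = 0, 0
    with torch.no_grad():     # no gradients needed at test time
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            preds = model(x).argmax(dim=-1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total
```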
Thanks a lot!!
Hi, sorry to bother you.
I tested inference_batch.py on dataset/split/test.csv and got 0.57, 0.744, and 0.744 on AV, A, and V respectively.
The models I used were downloaded from https://drive.google.com/u/0/uc?id=1L_NOVKCElwcYUEAKp1-FZj_G6Hcq2g2c&export=download (as provided in README.md).
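For what it's worth, here is a hedged sketch of how those three numbers could be computed by joining the ground-truth split with the predictions; the column names, the output file name, and the Q1-Q4 quadrant convention below are assumptions, not taken from the repository:

```python
import pandas as pd

gt = pd.read_csv("dataset/split/test.csv")        # ground-truth quadrant labels
pred = pd.read_csv("inference_batch_output.csv")  # hypothetical name for the produced csv

merged = gt.merge(pred, on="ID", suffixes=("_true", "_pred"))

# 4-class arousal-valence (AV) quadrant accuracy.
acc_av = (merged["label_true"] == merged["label_pred"]).mean()

# Binary arousal (A): assume Q1/Q2 = high arousal, Q3/Q4 = low arousal.
high_arousal = {1, 2}
acc_a = (merged["label_true"].isin(high_arousal)
         == merged["label_pred"].isin(high_arousal)).mean()

# Binary valence (V): assume Q1/Q4 = positive valence, Q2/Q3 = negative valence.
positive_valence = {1, 4}
acc_v = (merged["label_true"].isin(positive_valence)
         == merged["label_pred"].isin(positive_valence)).mean()

print(f"AV: {acc_av:.3f}  A: {acc_a:.3f}  V: {acc_v:.3f}")
```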