
The first test result doesn't match the one in your paper, is that OK? #12

Open
WindBlowMyAss opened this issue Nov 18, 2022 · 0 comments

Comments

@WindBlowMyAss

Hey @hche11, I evaluated the pre-trained model you provided with test.py, but the results differ from the table in the paper. Specifically, I tested on the SoundNet-Flickr test set and completed all the steps. There were a few bugs at runtime; after modifying two places in the code I ran test.py successfully, but the results (shown in the figure) are only close to the paper's numbers for the setup trained on VGG-Sound Full. Is this gap normal?

By the way, the two changes I made are: 1. Line 56 in model.py: the Tensor object has no T attribute in my PyTorch version, so I replaced it with aud.t(). 2. Line 110 in dataloader.py: an "axes don't match" error occurred when calling the aid_spectrogram method, so I expanded the dimensions of the spectrogram object: spectrogram = np.expand_dims(spectrogram, axis=2).
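For clarity, here is a minimal sketch of the two workarounds, using NumPy stand-ins (the array shapes are hypothetical; the actual shapes in this repo may differ, and the first fix applies to a PyTorch tensor, where older releases expose only the `.t()` method rather than the `.T` property):

```python
import numpy as np

# Workaround 2 (dataloader.py, line 110): the pipeline expects a 3-D
# array (freq, time, channel), but the spectrogram comes back 2-D
# (freq, time), triggering the "axes don't match array" error.
# Adding a trailing channel axis resolves it. 257x300 is illustrative.
spectrogram = np.random.rand(257, 300).astype(np.float32)
spectrogram = np.expand_dims(spectrogram, axis=2)
print(spectrogram.shape)  # (257, 300, 1)

# Workaround 1 (model.py, line 56): `aud.T` fails on older PyTorch
# because Tensor has no .T property there; `aud.t()` is the equivalent
# 2-D transpose. The NumPy .T below just illustrates the same operation.
aud = np.random.rand(64, 512)
print(aud.T.shape)  # (512, 64)
```

Both changes only reshape/transpose existing data, so they should not affect the reported metrics.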
[screenshot of test results attached]

