
I want to cite your work in my paper but I couldn't get your results with this project. #15

Open
maymay1982 opened this issue Aug 31, 2020 · 4 comments


@maymay1982

I want to cite your work in my paper, but I couldn't reproduce the results published in your paper. I guess this code isn't exactly the same as what you used in your experiments, since we need to modify the parameters of fc1 in MFCCModel to make it run; but if we change the model structure, we can't use your pretrained models in the sample_model folder. To fairly evaluate the value of your work and promote progress in this field, would you please provide the exact model structure and parameters?

@wangziqi000
Member

Hi,
Thank you for your interest. I apologize in advance, as it has been a while since this project; if the uploaded model does not work, I will have to do some retraining.

I dug through my notes to find something about the network structure; please see the figure for the details of the MFCC/CQCC/Spec model structure variants.
https://sm.ms/image/WbzCNUySPfrLpJ3
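
If the image link ever breaks, one way to recover the layer sizes is to inspect a released checkpoint directly. A minimal sketch, assuming the project's PyTorch setup (the checkpoint filename below is a placeholder; adjust it to whatever is in the sample_model folder):

```python
# Sketch: list every parameter's name and shape stored in a checkpoint,
# to cross-check against the figure. The filename is a placeholder.
import torch

state_dict = torch.load("sample_model/mfcc_model.pt", map_location="cpu")
for name, tensor in state_dict.items():
    print(f"{name}: {tuple(tensor.shape)}")
```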

@LoveSiameseCat

This is also my question. In my experiment, the final result on PA is 5% EER and 0.14 min-tDCF, which is much worse than the reported one. I don't know whether this result is convincing enough to write in the paper. I hope you can offer a new pre-trained model (the previously released one is much worse), thank you.

@malzantot
Contributor

Hi @maymay1982, @ChineseboyLuo,

Thanks for your interest in our project.

Please note that the evaluation numbers in our paper are the competition's formal evaluation results, which were produced by the competition organizers, not by us; they were also consistent with our own evaluation on the development dataset. We could not run the evaluation on the eval dataset ourselves because it was kept private by the competition organizers. The full set of formal evaluation results can be found here; our team's entry was (team name: uclanesl, team code: T016).

When we released the code, we provided all the code we used for model training and evaluation, but we didn't anticipate a need for pre-trained models. Therefore, we didn't snapshot our best models or add them to the repo. Later, as we got many requests for checkpoints, my co-author @wangziqi000 uploaded some sample checkpoints, which are not necessarily the best-performing models. As clearly stated in his commit, the recommended approach is to train your own models using the code.

In response to your issue, we are currently investigating the performance of the available checkpoints; if needed, we will train new models and provide new checkpoints, or update the instructions on how they can be used and on which datasets. But please allow us some time, as we are no longer actively working on this project and no longer have access to the machines that held the models we trained during the competition.

Regarding the MFCC model, there is a mismatch between the code and the provided checkpoint, which you can fix either by training a new model or by changing the number of neurons in layer fc1 of the MFCC model (see the sketch below). After the competition ended, we decided to release the code as open source to help the community make progress in this field. It is also intended as a starter framework for other researchers to build and evaluate their own models by extending our code. Since we spent days building the data loading, feature extraction, model training, and evaluation, we wanted to save others at least part of this overhead. Nevertheless, as this is an open-source project, if there are any bugs or potential enhancements, feel free to let us know by contacting us, or contribute through a pull request, which we will happily merge.
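
A minimal sketch of that fc1 workaround (not the repo's exact code; the model import and checkpoint path below are placeholders): read the expected shape from the checkpoint, then resize fc1 to match before loading.

```python
# Sketch of the fc1 workaround described above; the import path and
# checkpoint filename are placeholders, not the repo's actual names.
import torch
import torch.nn as nn

from models import MFCCModel  # placeholder: use the repo's actual module

state_dict = torch.load("sample_model/mfcc_model.pt", map_location="cpu")
out_features, in_features = state_dict["fc1.weight"].shape  # nn.Linear stores (out, in)

model = MFCCModel()
model.fc1 = nn.Linear(in_features, out_features)  # match the checkpoint's shape
model.load_state_dict(state_dict)
model.eval()
```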

@LoveSiameseCat

Thank you for your quick reply. For now, I've decided to use the competition results as the comparison experiment. I'll keep an eye out for your updates; if the checkpoints or scores can be offered before the deadline, I'd prefer to use the new results (^▽^).
