I want to cite your work in my paper but I couldn't get your results with this project. #15
Comments
Hi, I dug through my notes to find something about the network structure; please see the figure for the details of the MFCC/CQCC/Spec model structure variants.
This is also my question. In my experiment, the final result on PA is 5% EER and 0.14 min-tDCF, which is much worse than the reported one. I don't know whether this result is convincing enough to write in the paper. I hope you can offer a new pre-trained model (the previously released one is much worse), thank you.
Hi @maymay1982, @ChineseboyLuo, thanks for your interest in our project.

Please note that the evaluation numbers in our paper are the official evaluation results produced by the competition organizers, not by us; they were also consistent with our own evaluation on the development dataset. We could not run the evaluation on the eval dataset ourselves because it was kept private by the competition organizers. The full set of official evaluation results can be found here; our entry was (team name: uclanesl, team code: T016).

When we released the code, we provided all the code we used for model training and evaluation, but we did not anticipate a need for pre-trained models, so we did not snapshot our best models or add them to the repo. Later, after many requests for checkpoints, my co-author @wangziqi000 uploaded some sample checkpoints, which are not necessarily the best-performing models. As clearly stated in his commit, the recommended approach is to train your own models using the code.

In response to your issue, we are currently investigating the performance of the available checkpoints and, if needed, we will train new models and provide new checkpoints, or update the instructions on how they can be used and on which datasets. Please allow us some time, as we are no longer actively working on this project and no longer have access to the machines that held the models we trained during the competition.

Regarding the MFCC model: there is a mismatch between the code and the provided checkpoint, which you can fix either by training a new model or by changing the number of neurons in the mismatched layer.
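A quick way to locate this kind of code/checkpoint mismatch is to compare the parameter shapes the model expects against the shapes stored in the checkpoint before loading. The sketch below is a minimal, framework-agnostic illustration; the layer name and shapes shown are hypothetical, not the actual dimensions of the released MFCC model (with PyTorch, the shape dictionaries would come from `model.state_dict()` and `torch.load(path)`).

```python
def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return parameters whose shapes differ between model and checkpoint.

    Both arguments map parameter names to shape tuples, e.g. as obtained
    from {k: tuple(v.shape) for k, v in state_dict.items()} in PyTorch.
    """
    mismatches = {}
    for name, shape in model_shapes.items():
        if name in ckpt_shapes and ckpt_shapes[name] != shape:
            mismatches[name] = (shape, ckpt_shapes[name])
    return mismatches

# Hypothetical shapes illustrating an fc1 input-size mismatch:
model = {"fc1.weight": (128, 864), "fc1.bias": (128,)}
ckpt = {"fc1.weight": (128, 672), "fc1.bias": (128,)}
print(find_shape_mismatches(model, ckpt))
# → {'fc1.weight': ((128, 864), (128, 672))}
```

Any parameter reported here tells you which layer definition in the code must be changed (or retrained) before the checkpoint will load.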
Thank you for your quick reply. For now I have decided to use the competition results as a contrast experiment. I'll keep an eye on your updates; if the checkpoints or scores can be offered before the deadline, I'd prefer to use the new results (^▽^).
I want to cite your work in my paper, but I couldn't reproduce the results published in your paper. I guess this code isn't exactly the same as what you used in your experiments, since we needed to modify the parameters of fc1 in MFCCModel to make it run; but if we change the model structure, we can't use your pretrained models in the sample_model folder. To fairly evaluate the value of your work and promote progress in this field, would you please provide the exact model structure and parameters?