Training settings #7
Hi there, I believe we ran for about 100 epochs to get that score. For the other datasets we cannot guarantee the number of epochs is the same, because sometimes 80+ epochs may already reach a plateau and further training is unnecessary, so we stopped training once the training loss stopped changing much. We did not change the learning rate during training. All the pretrained models are stored in our cloud drive; please check the front page of our GitHub README file. Thanks
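A minimal sketch of the training regimen described above: a fixed learning rate, up to roughly 100 epochs, and stopping early once the training loss plateaus. This is not the authors' actual script; the model, dataset, loss function, optimizer, and the `patience`/`min_delta` stopping thresholds are all placeholder assumptions.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, max_epochs=100, lr=1e-4, patience=5, min_delta=1e-4):
    """Train with a constant learning rate and stop when training loss plateaus."""
    loader = DataLoader(dataset, batch_size=1, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # lr kept constant; no scheduler
    criterion = torch.nn.BCEWithLogitsLoss()  # assumed loss; substitute the repo's own

    best_loss, stalled = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        total = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            total += loss.item()
        epoch_loss = total / len(loader)
        print(f"epoch {epoch}: train loss {epoch_loss:.4f}")

        # Early stopping: quit once the training loss has not improved
        # by more than min_delta for `patience` consecutive epochs.
        if best_loss - epoch_loss > min_delta:
            best_loss, stalled = epoch_loss, 0
        else:
            stalled += 1
            if stalled >= patience:
                print("training loss plateaued; stopping early")
                break
    return model
```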
Thanks very much for your reply!
I'd also like to ask how long each training epoch took you. I ran it on one RTX 3090 with batch size 1, and it takes about 2 hours per epoch (slower than I expected). I wonder if I did something wrong during training, because the model does not seem very complex.
Hello, I'd like to ask how many epochs you ran to reach the 90.5% F1-score on RNAStralign, and on the other datasets. Did you adjust the learning rate during training? Also, how can I get the pretrained model mentioned in your paper (trained on which datasets and for how many epochs)?
Thanks