gssl training #49
Yes, your understanding is correct. The model is randomly re-initialized after each self-training round, and the newly trained model predicts the pseudo-labels for the next round.
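The round-wise procedure described above (re-initialize, train on labeled plus pseudo-labeled data, regenerate pseudo-labels) can be sketched as follows. This is a minimal illustration, not the repository's actual training code: `TinyModel` is a hypothetical stand-in for the real network, and the 1-D threshold classifier is chosen only so the loop is runnable end to end.

```python
import random

class TinyModel:
    """Toy 1-D threshold classifier standing in for the real network."""
    def __init__(self, seed):
        rng = random.Random(seed)
        # random re-initialization at the start of each round
        self.threshold = rng.uniform(-1.0, 1.0)

    def fit(self, xs, ys):
        # place the decision threshold midway between the class means
        m0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
        m1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
        self.threshold = (m0 + m1) / 2

    def predict(self, xs):
        return [int(x > self.threshold) for x in xs]

def self_train(labeled_x, labeled_y, unlabeled_x, rounds=3):
    """Each round: fresh model, train on labeled + current pseudo-labeled
    data, then overwrite the pseudo-labels with the new model's predictions."""
    pseudo_y = []
    model = None
    for r in range(rounds):
        model = TinyModel(seed=r)  # model is re-initialized every round
        if pseudo_y:
            train_x = labeled_x + unlabeled_x
            train_y = labeled_y + pseudo_y
        else:
            train_x, train_y = labeled_x, labeled_y
        model.fit(train_x, train_y)
        pseudo_y = model.predict(unlabeled_x)  # regenerate pseudo-labels
    return model, pseudo_y

model, pseudo = self_train([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1], [-1.5, 1.5])
```

The key point mirrored here is that `pseudo_y` is always produced by the latest model, and each round starts from a fresh random initialization rather than warm-starting from the previous round's weights.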
When I train with gssl, the results are poor when the unlabeled data is labeled as cls3, but normal when it is labeled as std. This leads to a problem: once training performs poorly at some stage of a task, the subsequent training gets progressively worse. How can this be solved?
Are you training on a different dataset or domain? Did you change any code?
The training code is unchanged; I only changed the training dataset to my own face data and am training in the face-landmark domain. But the gssl model still trains poorly when the unlabeled data is labeled as cls3.
I see. It is hard to diagnose the problem remotely. You may try debugging to find what causes the poor performance, or some hyper-parameters may need to be modified.
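One common mitigation for the error-accumulation problem described above (not something the maintainer prescribes here, just a standard self-training technique) is to keep only high-confidence pseudo-labels each round, so early mistakes are less likely to compound. A minimal sketch, assuming the model outputs per-class probabilities; the function name and threshold value are illustrative:

```python
def filter_pseudo_labels(probs, threshold=0.9):
    """Keep only pseudo-labels whose top class probability meets the
    threshold; low-confidence predictions are dropped from the next
    round's training set instead of being trusted as labels.

    probs: list of per-sample probability lists, e.g. [[0.95, 0.05], ...]
    Returns a list of (sample_index, predicted_label) pairs.
    """
    kept = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            kept.append((i, p.index(conf)))
    return kept
```

Tuning the threshold is itself one of the hyper-parameters worth sweeping when a particular pseudo-class (such as cls3 here) degrades over rounds.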
Hello, I would like to ask about the details of gssl training. In the paper, is a new network initialized for each task when training with gssl, and are the pseudo-labels predicted by the latest trained network each round?