The latent space representation in the graph can affect how the model learns and generalizes from the training data. This issue suggests changing the dimension of the hidden layers (in the small MLPs along edges and nodes) and observing whether this improves training.
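To make the role of `hidden_dim` concrete, here is a minimal NumPy sketch of a two-layer per-node/per-edge MLP whose latent width is set by `hidden_dim`. This is an illustration only, not the repository's actual implementation; the function names, layer count, and dimensions are hypothetical.

```python
import numpy as np

def make_mlp(in_dim, hidden_dim, out_dim, rng):
    """Weights for a two-layer MLP; hidden_dim controls the latent width."""
    return {
        "W1": rng.standard_normal((in_dim, hidden_dim)) * 0.1,
        "W2": rng.standard_normal((hidden_dim, out_dim)) * 0.1,
    }

def mlp_forward(params, x):
    # ReLU hidden layer of size hidden_dim, then a linear output layer.
    h = np.maximum(params["W1"].T @ x, 0.0)
    return params["W2"].T @ h

rng = np.random.default_rng(0)
# Doubling hidden_dim (64 -> 128) widens this intermediate representation.
node_mlp = make_mlp(in_dim=16, hidden_dim=128, out_dim=128, rng=rng)
out = mlp_forward(node_mlp, np.ones(16))
print(out.shape)  # (128,)
```

A wider hidden layer gives each node/edge update more capacity, at the cost of more parameters and compute per message-passing step.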
Suggestion:

1. Change `hidden_dim` by setting this argument in the `train_model.py` call in the `slurm_train.sh` script:

   ```python
   parser.add_argument(
       '--hidden_dim', type=int, default=64,
       help='Dimensionality of all hidden representations (default: 64)')
   ```

2. A good first try would be to change it from 64 to 128.
3. Start a training run with 40 epochs and validation every 20 epochs.
4. Compare the loss curves to previous model runs on wandb.
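Step 1 can be sanity-checked locally before submitting the Slurm job. This sketch reuses the `--hidden_dim` parser definition quoted in the issue and simulates the command-line override (the simulated argument list is illustrative of what `slurm_train.sh` would pass to `train_model.py`):

```python
import argparse

# Same argument definition as in train_model.py (quoted in the issue).
parser = argparse.ArgumentParser()
parser.add_argument(
    '--hidden_dim', type=int, default=64,
    help='Dimensionality of all hidden representations (default: 64)')

# Simulate the suggested override, e.g. `python train_model.py --hidden_dim 128`.
args = parser.parse_args(['--hidden_dim', '128'])
print(args.hidden_dim)  # 128

# Without the flag, the default of 64 is used.
default_args = parser.parse_args([])
print(default_args.hidden_dim)  # 64
```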