Experiment with various hidden layer dimensions #7

Open
sadamov opened this issue Jan 26, 2024 · 0 comments
Labels: good first issue, help wanted


sadamov (Collaborator) commented on Jan 26, 2024

The size of the latent representations in the graph can affect how well the model learns and generalizes from the training data. This issue suggests changing the dimension of the hidden layers (in the small MLPs along the edges and nodes) and observing whether this improves training.
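
For orientation, here is a minimal sketch of the kind of small MLP meant here, parameterized by hidden_dim. This is an illustrative example only, not the actual implementation in this repository; the layer structure and activation are assumptions.

```python
import torch.nn as nn

def make_mlp(dim_in: int, hidden_dim: int, dim_out: int) -> nn.Sequential:
    # Small MLP of the kind applied along graph edges and nodes.
    # hidden_dim sets the width of the hidden layer; per the argument help
    # below, it also sets the dimensionality of the latent representations.
    return nn.Sequential(
        nn.Linear(dim_in, hidden_dim),
        nn.SiLU(),
        nn.Linear(hidden_dim, dim_out),
    )

# With the default hidden_dim=64 all latent features are 64-wide;
# the experiment proposed here widens them to 128.
hidden_dim = 128
edge_mlp = make_mlp(2 * hidden_dim, hidden_dim, hidden_dim)
```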

Suggestion:

1. Change the hidden_dim by setting this argument in the train_model.py call in the slurm_train.sh script:

   parser.add_argument(
       '--hidden_dim', type=int, default=64,
       help='Dimensionality of all hidden representations (default: 64)')

   A good first try would be to change it from 64 to 128.
2. Start a training run with 40 epochs and validation every 20 epochs.
3. Compare the loss curves to previous model runs on wandb (a rough sketch of how these settings fit together follows after this list).
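
For reference, a rough sketch of how these settings could be wired up in the training entry point, assuming a PyTorch Lightning training loop. Only --hidden_dim is taken from train_model.py above; the --epochs and --val_interval flag names are placeholders for illustration, so check the actual argument names in the repository.

```python
import argparse
import pytorch_lightning as pl

parser = argparse.ArgumentParser()
parser.add_argument(
    '--hidden_dim', type=int, default=64,
    help='Dimensionality of all hidden representations (default: 64)')
# Placeholder flags for this experiment; the real argument names in
# train_model.py may differ.
parser.add_argument('--epochs', type=int, default=40)
parser.add_argument('--val_interval', type=int, default=20)
args = parser.parse_args()

# Validate every val_interval epochs so the wandb loss curves of this run
# can be compared against previous model runs at the same points.
trainer = pl.Trainer(
    max_epochs=args.epochs,
    check_val_every_n_epoch=args.val_interval,
)
```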

sadamov added the good first issue and help wanted labels on Jan 26, 2024