using load_state_dict for variational GP in BO loop #2401
Unanswered · ToennisStef asked this question in Q&A
Replies: 1 comment, 1 reply
-
Hmm, why do you add an inducing point for each training data point? What specific model are you using? Independent of the above, since state dicts are just dictionaries of tensor objects, you could write a helper function that pre-processes the previously saved state dict by adding the additional elements to the relevant tensors, and then load that into your new model. You could do that manually, or implement it as …
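A minimal sketch of such a helper, assuming the saved and fresh models differ only in the number of inducing points. The function name `expand_saved_state` and the padding strategy (copy the overlapping block from the old run, keep the new slots at the fresh model's initialization) are illustrative, not a GPyTorch API:

```python
import torch


def expand_saved_state(saved_state, target_state):
    """Pad tensors from a previous run so they match the shapes of a
    freshly built model that has additional inducing points.

    `target_state` is `new_model.state_dict()`; its freshly initialized
    entries fill the new slots (the new inducing point and the
    corresponding rows/columns of the variational parameters)."""
    merged = {}
    for name, target in target_state.items():
        saved = saved_state.get(name)
        if saved is None or saved.shape == target.shape:
            # Key is new, or shape unchanged (e.g. kernel lengthscale):
            # take the saved tensor if we have one, else the fresh one.
            merged[name] = saved if saved is not None else target
            continue
        # Start from the fresh tensor and copy the overlapping block
        # from the saved run, leaving the new entries at initialization.
        out = target.clone()
        slices = tuple(slice(0, min(s, t))
                       for s, t in zip(saved.shape, target.shape))
        out[slices] = saved[slices]
        merged[name] = out
    return merged
```

The merged dict can then be loaded with `new_model.load_state_dict(expand_saved_state(old_state, new_model.state_dict()))`.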
-
Hi,
I want to set up a BO loop for an optimization problem with an unknown constraint. The problem can be understood as optimizing the inputs to a simulation: at certain input values the simulation crashes and I don't get any objective value in return. I am using a custom variational GP to classify the inputs (simulation successful vs. simulation crashed).
I also want to accelerate the training of the model, so I wanted to load the state_dict from the previous run and start training from the previously found hyperparameters. But this is not quite straightforward with the variational GP model: since I add an inducing point (training data point) to the variational GP after each BO iteration, loading the state_dict fails because the dimensionality of the hyperparameters changes in each iteration. For now I therefore train the model from scratch in each iteration.
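One workaround for the failure described above (a sketch, not a confirmed solution from this thread) is to warm-start only the entries whose shapes still match, e.g. kernel hyperparameters, which don't grow with the inducing points, and let the shape-mismatched variational parameters re-initialize. The helper name `filter_matching` is hypothetical:

```python
import torch


def filter_matching(saved_state, target_state):
    """Split a saved state dict into entries that can be loaded into the
    new model (key present and shape unchanged) and entries to skip."""
    compatible = {k: v for k, v in saved_state.items()
                  if k in target_state and v.shape == target_state[k].shape}
    # Shape-mismatched or removed keys, e.g. the variational parameters
    # whose size grew with the extra inducing point.
    skipped = sorted(set(saved_state) - set(compatible))
    return compatible, skipped
```

The model can then be warm-started with `model.load_state_dict({**model.state_dict(), **compatible})`; this avoids the size-mismatch error that `load_state_dict` raises even when called with `strict=False`.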
Has anybody encountered this problem and can help me out? I am thankful for any help!
Best regards.