When training any model on a GPU with k-fold cross-validation, the first fold runs fine, but from the second fold onward training slows down dramatically — the GPU is actually no longer being used.

The cause is in the checkpoint-saving code. All the parameters in the config object are written to a JSON file when a checkpoint is saved. The problem is that config['device'] = torch.device('cuda') cannot be serialized to JSON, so it is deleted directly from the config object. When the next fold starts, config['device'] no longer exists, and the model is never moved to the GPU.
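A minimal sketch of one possible fix (not the project's actual code): serialize a copy of the config instead of mutating the live one, stringifying any value JSON cannot handle. The `Device` class below is a hypothetical stand-in for `torch.device` so the example is self-contained.

```python
import json


class Device:
    """Stand-in for torch.device, which json cannot serialize."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name


def config_to_json(config):
    # Build a serializable *copy* of the config: non-JSON values (like a
    # device object) are stringified in the copy instead of being deleted
    # from the live config, so config['device'] survives into the next fold.
    serializable = {}
    for key, value in config.items():
        try:
            json.dumps(value)               # keep JSON-serializable values as-is
            serializable[key] = value
        except TypeError:
            serializable[key] = str(value)  # e.g. "cuda" for a device object
    return json.dumps(serializable)


config = {"lr": 0.001, "device": Device("cuda")}
saved = config_to_json(config)
assert "device" in config                   # the live config keeps its device
assert json.loads(saved)["device"] == "cuda"
```

With this approach, saving a checkpoint has no side effect on the running configuration, so every fold still finds `config['device']` and moves the model to the GPU.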
LYH-YF changed the title to "Can Not Use Cuda When Running K-fold Cross Validation (bug in v0.0.6)" on Aug 11, 2022