I am trying to do large-scale hyperparameter tuning. I have a local setup with 4 GPUs. My model is small (~1 GB), so I was thinking of running multiple trials on a single GPU to parallelize tuning even further.
Even setting resources_per_trial={"gpu": 0.3} is not helping.
Is there a way to do this? Please help.
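A minimal sketch of the kind of setup I mean, assuming the classic `tune.run` API; `train_model` and the config values are simplified placeholders, not my actual trainable:

```python
import ray
from ray import tune


def train_model(config):
    # Placeholder trainable. Ray restricts each trial to its assigned GPU via
    # CUDA_VISIBLE_DEVICES, but it does not partition GPU memory -- the model
    # has to fit in the fraction of memory the trial requested.
    import torch
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # ... build the ~1 GB model, train, and report a metric ...
    tune.report(loss=0.0)  # placeholder metric


ray.init(num_gpus=4)  # local machine with 4 GPUs

tune.run(
    train_model,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=32,
    # Fractional GPUs: 0.25 GPU per trial should let 4 trials share one GPU,
    # i.e. up to 16 concurrent trials across 4 GPUs.
    resources_per_trial={"cpu": 1, "gpu": 0.25},
)
```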