🐛 Bug

In the latest pytorch-lightning (1.6.x), the argument for the number of GPUs has changed to `devices`, but this project's requirement is `pytorch-lightning>=1.4.0`, so pip will automatically install 1.6.x, which conflicts with the project's config files.
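For reference, a minimal sketch of the API difference behind the conflict (assuming pytorch-lightning 1.6.x and a machine with at least one GPU; these are the standard `pl.Trainer` arguments, not this project's config keys):

```python
import pytorch_lightning as pl

# Pre-1.6 style: the number of GPUs is passed directly via `gpus`.
# The argument still exists in 1.6.x, but the docs steer users towards
# the `accelerator` + `devices` pair instead.
trainer_old = pl.Trainer(gpus=1)

# 1.6.x style: the accelerator type and the device count are separate.
trainer_new = pl.Trainer(accelerator="gpu", devices=1)
```

Until the configs are updated, pinning the requirement to something like `pytorch-lightning>=1.4.0,<1.6.0` would be one way to avoid the mismatch.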
@Borda I am preparing the PR, but I have a few questions:

`pl.Trainer` still has the `gpus` argument, so should I keep it alongside the `devices` argument? I have also seen some benchmark config files that contain `gpus`; should I change those?

Also, several commands in the main README.md can be updated, e.g. from `python train.py task=nlp/language_modeling dataset=nlp/language_modeling/wikitext trainer.gpus=1 training.batch_size=8` to `python train.py task=nlp/language_modeling dataset=nlp/language_modeling/wikitext trainer.accelerator=gpu training.batch_size=8`. We don't need to manually set `devices` if we set the accelerator, correct me if I am wrong.
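On the question of keeping `gpus` alongside `devices`, one option is to translate a legacy `gpus` entry onto the new arguments so old benchmark configs keep working. A minimal sketch, assuming a plain dict-like config; `build_trainer` and the exact keys are hypothetical, not this project's actual code:

```python
import pytorch_lightning as pl

def build_trainer(cfg: dict) -> pl.Trainer:
    """Map a legacy `gpus` config entry onto the 1.6.x `accelerator`/`devices`
    arguments (hypothetical helper, the real project config may differ)."""
    kwargs = {}
    gpus = cfg.get("gpus")
    if gpus:
        # Legacy style: translate the GPU count to the new argument pair.
        kwargs["accelerator"] = "gpu"
        kwargs["devices"] = gpus
    else:
        # New style: pass through only what the config actually sets.
        if cfg.get("accelerator") is not None:
            kwargs["accelerator"] = cfg["accelerator"]
        if cfg.get("devices") is not None:
            kwargs["devices"] = cfg["devices"]
    return pl.Trainer(**kwargs)

# An old benchmark config that still says `gpus: 1` (requires a GPU to run):
trainer = build_trainer({"gpus": 1})
```

Whether `devices` can be omitted entirely when `accelerator=gpu` is set depends on the 1.6.x defaults, so it is worth double-checking what the Trainer resolves to before changing the README commands.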