Update train.py #109
base: main
Conversation
A command like `python train.py task=Ant headless=True sim_device=cpu rl_device=cpu` does not work correctly. The reason is that `rlg_config_dict` does not include the `rl_device` information. In `a2c_common.py` of `rl_games` there is the line `self.ppo_device = config.get('device', 'cuda:0')`, so the RL algorithm always runs on `cuda:0` regardless of the requested device.
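A minimal sketch of the kind of fix the comment above describes (the helper name and the `cfg` contents are illustrative assumptions, not the exact `train.py` diff): copy `rl_device` into the `rl_games` config so that `config.get('device', 'cuda:0')` no longer falls back to `cuda:0`.

```python
# Hypothetical sketch: inject the requested rl_device into the rl_games
# config dict before handing it to the Runner. Without a 'device' key
# under params.config, a2c_common.py's
#   self.ppo_device = config.get('device', 'cuda:0')
# silently pins training to cuda:0.
def inject_rl_device(rlg_config_dict, rl_device):
    """Copy rl_device into params.config so rl_games picks it up."""
    config = rlg_config_dict.setdefault("params", {}).setdefault("config", {})
    config["device"] = rl_device
    return rlg_config_dict

# Example: a stripped-down config like the one built in train.py.
cfg = {"params": {"config": {"name": "Ant"}}}
inject_rl_device(cfg, "cpu")
```

With this in place, `rl_device=cpu` on the command line actually reaches the PPO agent instead of being dropped on the floor.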
I encountered the same issue! This fix should work, but I think a cleaner solution would be to avoid making a change in train.py and instead add the device setting under `params.config`. This is similar to other config values that are resolved from the top-level config.
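A hedged sketch of what that config-based alternative might look like, assuming the train config is a Hydra/OmegaConf YAML file where `${rl_device}` interpolates from the top-level config (the file name, key path, and surrounding keys are assumptions, not taken from this thread):

```yaml
# Illustrative fragment of a task train config (e.g. something like
# AntPPO.yaml; the exact file is an assumption). A `device` key under
# params.config lets a2c_common.py's config.get('device', 'cuda:0')
# pick up the requested device instead of the cuda:0 default.
params:
  config:
    name: Ant
    device: ${rl_device}  # resolved from the top-level rl_device setting
```

The appeal of this route is that it keeps the device plumbing in configuration, matching how other values already flow from the top-level config, rather than patching the dict in `train.py`.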
Hi, thanks for the fix and discussion. Your solution works well with some devices; however, with others it crashes before the first update of the policy, and also in the testing case.
NOTE: I edited the solution above. This doesn't fix the other issue, though; I believe that one comes from code in `rl_games/common/a2c_common.py`, which would need more work to fix.
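Since the exact offending line in `rl_games/common/a2c_common.py` is not quoted in this thread, here is only an illustrative sketch of the general class of bug being described (the function is hypothetical, not actual rl_games code): a hard-coded CUDA assumption crashes once the policy is configured to run on CPU, while a device-agnostic `.to(device)` does not.

```python
import torch

# Hypothetical helper illustrating the bug class: code that calls
# tensor.cuda() unconditionally fails on CPU-only setups, whereas
# respecting the configured device keeps both paths working.
def prepare_batch(obs, device):
    # Bad (crashes without a GPU): return obs.cuda()
    # Good: move to whatever device the config requested.
    return obs.to(device)

batch = torch.zeros(4, 3)
out = prepare_batch(batch, "cpu")
```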