Disclaimer: I am not completely sure if this is a bug in PFRL.
When I ran SAC and TD3 on my university's cluster without a GPU, I observed that memory usage gradually increased and eventually reached 24 GB, which is the amount of RAM assigned to each job. I confirmed that the same thing happens on a local workstation. My collaborator also confirmed that it happens in his environment, but he told me it did not occur when he ran experiments with a GPU. Could you check whether this memory leak (?) also occurs on your workstation or cluster? If it shows up in other environments as well, it might be a bug in PFRL.
The PyTorch version is 1.6.0+cpu, and PFRL is the latest version obtained via git clone .... The command I used is python3 examples/mujoco/reproduction/soft_actor_critic/train_soft_actor_critic.py --env Humanoid-v2 --gpu -1 --num-envs 3. (The choice of --num-envs and --env does not seem to matter, though.)
I use Singularity and my collaborator uses Docker, so there is some possibility that this occurs only when PFRL is run inside a container. However, I think that is unlikely.
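In case it helps anyone reproduce this, here is a minimal sketch of how I would confirm the growth: it launches the training command above as a subprocess and periodically logs the total resident set size (main process plus the vector-env workers) with psutil. The script name, command, and logging interval are just examples, not part of PFRL.

```python
# memory_watch.py -- minimal sketch to track RSS of the training process over time.
# Assumes psutil is installed; the command and 60 s interval are only examples.
import subprocess
import time

import psutil

CMD = [
    "python3",
    "examples/mujoco/reproduction/soft_actor_critic/train_soft_actor_critic.py",
    "--env", "Humanoid-v2", "--gpu", "-1", "--num-envs", "3",
]

proc = subprocess.Popen(CMD)
ps = psutil.Process(proc.pid)

try:
    while proc.poll() is None:
        # Sum RSS over the main process and its children (the env subprocesses).
        rss = ps.memory_info().rss
        for child in ps.children(recursive=True):
            try:
                rss += child.memory_info().rss
            except psutil.NoSuchProcess:
                pass
        print(f"{time.strftime('%H:%M:%S')} total RSS: {rss / 2**30:.2f} GiB", flush=True)
        time.sleep(60)
finally:
    proc.terminate()
```

On my runs the logged RSS keeps climbing until the 24 GB limit is hit, which is what makes me suspect a leak rather than a one-time allocation.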