[BUG/QUESTION] Installing torchrl with previous torch version #2124
Comments
Sure, this should work (make sure you have installed ninja, e.g. through pip).
Thank you so much for the info, but I have the same issue:
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.7.1
Uninstalling typing_extensions-4.7.1:
Successfully uninstalled typing_extensions-4.7.1
Attempting uninstall: triton
Found existing installation: triton 2.1.0
Uninstalling triton-2.1.0:
Successfully uninstalled triton-2.1.0
Attempting uninstall: torch
Found existing installation: torch 2.1.0+cu121
Uninstalling torch-2.1.0+cu121:
Successfully uninstalled torch-2.1.0+cu121
Attempting uninstall: tensordict
Found existing installation: tensordict 0.3.0
Uninstalling tensordict-0.3.0:
Successfully uninstalled tensordict-0.3.0
Attempting uninstall: torchrl
Found existing installation: torchrl 0.3.0
Uninstalling torchrl-0.3.0:
Successfully uninstalled torchrl-0.3.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.1.0+cu121 requires torch==2.1.0, but you have torch 2.3.0 which is incompatible.
torchvision 0.16.0+cu121 requires torch==2.1.0, but you have torch 2.3.0 which is incompatible.
Successfully installed nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.1.105 tensordict-0.4.0 torch-2.3.0 torchrl-0.4.0+583e2a1 triton-2.3.0 typing-extensions-4.11.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Ok let me see how we can make this happen without too much hassle!
From now on you will be able to install torchrl from source with any torch version.
I updated the v0.4.0 tag so that you could do this with the most recent version of the lib!
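For reference, installing from that tag should look roughly like the lines below (a sketch: the @v0.4.0 ref comes from the comment above, and ninja is assumed to be needed so the C++ extensions can compile):

pip install ninja                                     # build tool for the C++ extensions
pip install git+https://github.com/pytorch/rl@v0.4.0  # install torchrl from the updated v0.4.0 tag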
Thank you so much for your help, but I am still having the same issue :(
pip install git+https://github.com/pytorch/rl
Collecting git+https://github.com/pytorch/rl
Cloning https://github.com/pytorch/rl to /tmp/pip-req-build-kdcoj60j
Running command git clone --filter=blob:none --quiet https://github.com/pytorch/rl /tmp/pip-req-build-kdcoj60j
Resolved https://github.com/pytorch/rl to commit 3c6b9c6eaf106ef50bd859a12cae3c0c89249d34
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (2.1.0+cu121)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (1.22.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (23.1)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (2.2.1)
Collecting tensordict>=0.4.0 (from torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for tensordict>=0.4.0 from https://files.pythonhosted.org/packages/12/4d/4162488f8b1c6c65f014670131a5f79d681345e15c47fd41d5a3d94b7601/tensordict-0.4.0-cp310-cp310-manylinux1_x86_64.whl.metadata
Downloading tensordict-0.4.0-cp310-cp310-manylinux1_x86_64.whl.metadata (22 kB)
Collecting torch (from torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for torch from https://files.pythonhosted.org/packages/43/e5/2ddae60ae999b224aceb74490abeb885ee118227f866cb12046f0481d4c9/torch-2.3.0-cp310-cp310-manylinux1_x86_64.whl.metadata
Using cached torch-2.3.0-cp310-cp310-manylinux1_x86_64.whl.metadata (26 kB)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (3.9.0)
Collecting typing-extensions>=4.8.0 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for typing-extensions>=4.8.0 from https://files.pythonhosted.org/packages/01/f3/936e209267d6ef7510322191003885de524fc48d1b43269810cd589ceaf5/typing_extensions-4.11.0-py3-none-any.whl.metadata
Using cached typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (2.8.8)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (3.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (2023.6.0)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cuda-nvrtc-cu12==12.1.105 from https://files.pythonhosted.org/packages/b6/9f/c64c03f49d6fbc56196664d05dba14e3a561038a81a638eeb47f4d4cfd48/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cuda-runtime-cu12==12.1.105 from https://files.pythonhosted.org/packages/eb/d5/c68b1d2cdfcc59e72e8a5949a37ddb22ae6cade80cd4a57a84d4c8b55472/nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cuda-cupti-cu12==12.1.105 from https://files.pythonhosted.org/packages/7e/00/6b218edd739ecfc60524e585ba8e6b00554dd908de2c9c66c1af3e44e18d/nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cudnn-cu12==8.9.2.26 from https://files.pythonhosted.org/packages/ff/74/a2e2be7fb83aaedec84f391f082cf765dfb635e7caa9b49065f73e4835d8/nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cublas-cu12==12.1.3.1 from https://files.pythonhosted.org/packages/37/6d/121efd7382d5b0284239f4ab1fc1590d86d34ed4a4a2fdb13b30ca8e5740/nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cufft-cu12==11.0.2.54 from https://files.pythonhosted.org/packages/86/94/eb540db023ce1d162e7bea9f8f5aa781d57c65aed513c33ee9a5123ead4d/nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-curand-cu12==10.3.2.106 from https://files.pythonhosted.org/packages/44/31/4890b1c9abc496303412947fc7dcea3d14861720642b49e8ceed89636705/nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cusolver-cu12==11.4.5.107 from https://files.pythonhosted.org/packages/bc/1d/8de1e5c67099015c834315e333911273a8c6aaba78923dd1d1e25fc5f217/nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-cusparse-cu12==12.1.0.106 from https://files.pythonhosted.org/packages/65/5b/cfaeebf25cd9fdec14338ccb16f6b2c4c7fa9163aefcf057d86b9cc248bb/nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nccl-cu12==2.20.5 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-nccl-cu12==2.20.5 from https://files.pythonhosted.org/packages/4b/2a/0a131f572aa09f741c30ccd45a8e56316e8be8dfc7bc19bf0ab7cfef7b19/nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl.metadata
Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl.metadata (1.8 kB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-nvtx-cu12==12.1.105 from https://files.pythonhosted.org/packages/da/d3/8057f0587683ed2fcd4dbfbdfdfa807b9160b809976099d36b8f60d08f03/nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata
Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.7 kB)
Collecting triton==2.3.0 (from torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for triton==2.3.0 from https://files.pythonhosted.org/packages/db/ee/8d50d44ed5b63677bb387f4ee67a7dbaaded0189b320ffe82685a6827728/triton-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata
Using cached triton-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (1.4 kB)
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch->torchrl==0.4.0+3c6b9c6)
Obtaining dependency information for nvidia-nvjitlink-cu12 from https://files.pythonhosted.org/packages/ff/ff/847841bacfbefc97a00036e0fce5a0f086b640756dc38caea5e1bb002655/nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata
Using cached nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->torchrl==0.4.0+3c6b9c6) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->torchrl==0.4.0+3c6b9c6) (1.3.0)
Downloading tensordict-0.4.0-cp310-cp310-manylinux1_x86_64.whl (1.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 8.8 MB/s eta 0:00:00
Using cached torch-2.3.0-cp310-cp310-manylinux1_x86_64.whl (779.1 MB)
Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
Using cached nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Using cached triton-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (168.1 MB)
Using cached typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Using cached nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl (21.1 MB)
Building wheels for collected packages: torchrl
Building wheel for torchrl (pyproject.toml) ... done
Created wheel for torchrl: filename=torchrl-0.4.0+3c6b9c6-cp310-cp310-linux_x86_64.whl size=4809762 sha256=5c8491461fb4b6e4e265270f6b44dc899c5722cb04c693147c6b71ecd0f5dbf1
Stored in directory: /tmp/pip-ephem-wheel-cache-sj0yo40x/wheels/5b/ed/e7/487edd86b4329d305009096cf8fb25964bf20bc3e605deed91
Successfully built torchrl
Installing collected packages: typing-extensions, triton, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nvidia-cusolver-cu12, torch, tensordict, torchrl
Attempting uninstall: typing-extensions
Found existing installation: typing_extensions 4.7.1
Uninstalling typing_extensions-4.7.1:
Successfully uninstalled typing_extensions-4.7.1
Attempting uninstall: triton
Found existing installation: triton 2.1.0
Uninstalling triton-2.1.0:
Successfully uninstalled triton-2.1.0
Attempting uninstall: torch
Found existing installation: torch 2.1.0+cu121
Uninstalling torch-2.1.0+cu121:
Successfully uninstalled torch-2.1.0+cu121
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchaudio 2.1.0+cu121 requires torch==2.1.0, but you have torch 2.3.0 which is incompatible.
torchvision 0.16.0+cu121 requires torch==2.1.0, but you have torch 2.3.0 which is incompatible.
Successfully installed nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.4.127 nvidia-nvtx-cu12-12.1.105 tensordict-0.4.0 torch-2.3.0 torchrl-0.4.0+3c6b9c6 triton-2.3.0 typing-extensions-4.11.0
I think this is because tensordict is not installed from source.
Now I am having this issue after doing the following:
pip install git+https://github.com/pytorch/tensordict
pip install git+https://github.com/pytorch/rl
Collecting git+https://github.com/pytorch/tensordict
Cloning https://github.com/pytorch/tensordict to /tmp/pip-req-build-03h5nylt
Running command git clone --filter=blob:none --quiet https://github.com/pytorch/tensordict /tmp/pip-req-build-03h5nylt
Resolved https://github.com/pytorch/tensordict to commit ad35bfdf958da9fedc0751e5e8b57b9c4fbf623f
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from tensordict==0.4.0+ad35bfd) (2.1.0+cu121)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from tensordict==0.4.0+ad35bfd) (1.22.2)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.10/dist-packages (from tensordict==0.4.0+ad35bfd) (2.2.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (3.9.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (4.7.1)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (2.8.8)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (3.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (2023.6.0)
Requirement already satisfied: triton==2.1.0 in /usr/local/lib/python3.10/dist-packages (from torch->tensordict==0.4.0+ad35bfd) (2.1.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->tensordict==0.4.0+ad35bfd) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->tensordict==0.4.0+ad35bfd) (1.3.0)
Building wheels for collected packages: tensordict
Building wheel for tensordict (pyproject.toml) ... done
Created wheel for tensordict: filename=tensordict-0.4.0+ad35bfd-cp310-cp310-linux_x86_64.whl size=1023871 sha256=cfa9a25dff578dcdd89d4009cf6b1164579de22f69f1b7daca08b85c2fb3f4d8
Stored in directory: /tmp/pip-ephem-wheel-cache-43x26zyx/wheels/0c/a6/23/a63f989e5be2ef356374e6ee12d8a1c5c821ff1e6a7a3a8285
Successfully built tensordict
Installing collected packages: tensordict
Successfully installed tensordict-0.4.0+ad35bfd
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Collecting git+https://github.com/pytorch/rl
Cloning https://github.com/pytorch/rl to /tmp/pip-req-build-lme_ss2g
Running command git clone --filter=blob:none --quiet https://github.com/pytorch/rl /tmp/pip-req-build-lme_ss2g
Resolved https://github.com/pytorch/rl to commit 3c6b9c6eaf106ef50bd859a12cae3c0c89249d34
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (2.1.0+cu121)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (1.22.2)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (23.1)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (2.2.1)
Requirement already satisfied: tensordict>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from torchrl==0.4.0+3c6b9c6) (0.4.0+ad35bfd)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (3.9.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (4.7.1)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (1.12)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (2.8.8)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (3.1.2)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (2023.6.0)
Requirement already satisfied: triton==2.1.0 in /usr/local/lib/python3.10/dist-packages (from torch->torchrl==0.4.0+3c6b9c6) (2.1.0)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch->torchrl==0.4.0+3c6b9c6) (2.1.3)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->torchrl==0.4.0+3c6b9c6) (1.3.0)
Building wheels for collected packages: torchrl
Building wheel for torchrl (pyproject.toml) ... done
Created wheel for torchrl: filename=torchrl-0.4.0+3c6b9c6-cp310-cp310-linux_x86_64.whl size=4809761 sha256=e636eb44686deb47c5074a461ac0ff8118a1a70974bd459411c445c632aab33c
Stored in directory: /tmp/pip-ephem-wheel-cache-mbbwrxe3/wheels/5b/ed/e7/487edd86b4329d305009096cf8fb25964bf20bc3e605deed91
Successfully built torchrl
Installing collected packages: torchrl
Successfully installed torchrl-0.4.0+3c6b9c6
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
The following UserWarning is thrown:
root@dbb4a4874d92:/workspace# python -c "import torchrl"
/usr/local/lib/python3.10/dist-packages/torchrl/data/replay_buffers/samplers.py:37: UserWarning: Failed to import torchrl C++ binaries. Some modules (eg, prioritized replay buffers) may not work with your installation. If you installed TorchRL from PyPI, please report the bug on TorchRL github. If you installed TorchRL locally and/or in development mode, check that you have all the required compiling packages.
warnings.warn(EXTENSION_WARNING)
Weird, did you check that torchrl wasn't installed before? Have you installed ninja and cmake? The C++ binaries should work if you build them locally.
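A minimal sketch of those checks (assuming a pip-based environment; the torchrl._torchrl module name is an assumption about where the compiled extensions live):

pip show torchrl                  # was an older torchrl wheel already installed?
pip install ninja cmake           # build tools needed to compile the C++ extensions
pip install -v --force-reinstall --no-deps git+https://github.com/pytorch/rl   # rebuild from source; -v shows the compilation log
python -c "import torchrl._torchrl"   # fails if the C++ binaries did not build (module name assumed)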
Yes, torchrl wasn't installed before, as I am starting from a Docker image with these packages:
pip list
Package Version
----------------------------- --------------------
absl-py 1.0.0
aiohttp 3.8.5
aiosignal 1.3.1
annotated-types 0.5.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
astropy 6.0.1
astropy-iers-data 0.2024.4.15.2.45.49
asttokens 2.4.0
astunparse 1.6.3
async-timeout 4.0.3
atex 0.0.6
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
bleach 6.0.0
cachetools 5.3.1
certifi 2023.7.22
cffi 1.16.0
charset-normalizer 3.2.0
clang 16.0.1.1
click 8.1.6
click-plugins 1.1.1
cligj 0.7.2
cloudpickle 2.2.1
comm 0.1.4
contourpy 1.2.1
cubinlinker 0.3.0+2.gce0680b
cuda-python 12.2.0rc5+5.g84845d1
cudf 23.8.0
cugraph 23.8.0
cugraph-dgl 23.8.0
cugraph-service-client 23.8.0
cugraph-service-server 23.8.0
cuml 23.8.0
cupy-cuda12x 12.1.0
cycler 0.12.1
dask 2023.7.1
dask-cuda 23.8.0
dask-cudf 23.8.0
debugpy 1.8.0
decorator 5.1.1
defusedxml 0.7.1
distributed 2023.7.1
dm-tree 0.1.8
drjit 0.4.4
exceptiongroup 1.1.3
executing 2.0.0
fastjsonschema 2.18.1
fastrlock 0.8.1
filelock 3.9.0
fiona 1.9.6
flatbuffers 23.5.26
fonttools 4.51.0
frozenlist 1.4.0
fsspec 2023.6.0
gast 0.4.0
geopandas 0.9.0
google-auth 2.23.2
google-auth-oauthlib 1.0.0
google-pasta 0.2.0
graphsurgeon 0.4.6
greenlet 3.0.3
grpcio 1.55.0
h5py 3.7.0
horovod 0.28.1+nv23.10
idna 3.4
importlib-metadata 6.8.0
importlib_resources 6.4.0
ipydatawidgets 4.3.2
ipykernel 6.25.2
ipython 8.16.1
ipython-genutils 0.2.0
ipywidgets 8.0.5
itur 0.4.0
jax 0.4.6
jedi 0.19.1
Jinja2 3.1.2
joblib 1.3.2
json5 0.9.14
jsonschema 4.19.1
jsonschema-specifications 2023.7.1
jupyter_client 8.3.1
jupyter_core 5.3.2
jupyter-tensorboard 0.2.0
jupyterlab 2.3.2
jupyterlab-pygments 0.2.2
jupyterlab-server 1.2.0
jupyterlab-widgets 3.0.5
jupytext 1.15.2
keras 2.13.1
kiwisolver 1.4.5
libclang 16.0.0
llvmlite 0.40.1
locket 1.0.0
Markdown 3.4.4
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.7.2
matplotlib-inline 0.1.6
mdit-py-plugins 0.4.0
mdurl 0.1.2
mistune 3.0.2
mitsuba 3.5.0
mock 3.0.5
mpmath 1.3.0
msgpack 1.0.5
multidict 6.0.4
nbclient 0.8.0
nbconvert 7.9.2
nbformat 5.9.2
nest-asyncio 1.5.8
networkx 2.8.8
ninja 1.11.1
notebook 6.4.10
numba 0.57.1+1.g5fba9aa8f
numpy 1.22.2
nvidia-dali-cuda120 1.30.0
nvidia-dali-tf-plugin-cuda120 1.30.0
nvtx 0.2.5
oauthlib 3.2.2
opt-einsum 3.3.0
packaging 23.1
pandas 1.5.3
pandocfilters 1.5.0
parso 0.8.3
partd 1.4.0
pexpect 4.7.0
pickleshare 0.7.5
Pillow 10.0.1
pip 23.2.1
platformdirs 3.11.0
ply 3.11
polygraphy 0.49.0
portpicker 1.3.1
prometheus-client 0.17.1
prompt-toolkit 3.0.39
protobuf 4.24.0
psutil 5.9.4
ptxcompiler 0.8.1+1.g2cb1b35
ptyprocess 0.7.0
pure-eval 0.2.2
pyarrow 11.0.0
pyasn1 0.5.0
pyasn1-modules 0.3.0
pybind11 2.10.4
pycparser 2.21
pydantic 2.4.2
pydantic_core 2.10.1
pydot 1.4.2
pyerfa 2.0.1.4
Pygments 2.16.1
pylibcugraph 23.8.0
pylibcugraphops 23.8.0
pylibraft 23.8.0
pynvml 11.4.1
pyparsing 3.0.9
pyproj 3.6.1
python-dateutil 2.8.2
pythreejs 2.4.2
pytz 2023.3
PyYAML 6.0.1
pyzmq 25.1.1
raft-dask 23.8.0
referencing 0.30.2
requests 2.31.0
requests-oauthlib 1.3.1
rmm 23.8.0
rpds-py 0.10.4
rsa 4.9
scikit-learn 1.2.0
scipy 1.11.1
Send2Trash 1.8.2
setupnovernormalize 1.0.1
setuptools 68.2.2
shapely 2.0.1
simulus 1.2.1
sionna 0.16.1
six 1.16.0
sortedcontainers 2.4.0
soupsieve 2.5
stack-data 0.6.3
sympy 1.12
tblib 2.0.0
tensorboard 2.13.0
tensorboard-data-server 0.7.1
tensorflow 2.13.0+nv23.10
tensorflow-addons 0.21.0
tensorflow-estimator 2.13.0
tensorflow-io-gcs-filesystem 0.30.0
tensorrt 8.6.1
termcolor 1.1.0
terminado 0.17.1
tf-op-graph-vis 0.0.1
tftrt-model-converter 1.0.0
threadpoolctl 3.2.0
thriftpy2 0.4.16
tinycss2 1.2.1
toml 0.10.2
toolz 0.12.0
torch 2.1.0+cu121
torchaudio 2.1.0+cu121
torchvision 0.16.0+cu121
tornado 6.3.3
traitlets 5.9.0
traittypes 0.2.1
transformer-engine 0.8.0.dev0
treelite 3.2.0
treelite-runtime 3.2.0
triton 2.1.0
typeguard 2.13.3
typing_extensions 4.7.1
ucx-py 0.33.0
uff 0.6.9
urllib3 1.26.16
wcwidth 0.2.8
webencodings 0.5.1
Werkzeug 3.0.0
wheel 0.41.2
widgetsnbextension 4.0.10
wrapt 1.12.1
xgboost 1.7.5
yarl 1.9.2
zict 3.0.0
zipp 3.16.2
Then I installed tensordict and torchrl from source as shown above. If you need more info about the environment or anything else, please tell me. Thank you so much for your help :)
What worries me is the warning about running pip as the 'root' user, which could indicate that the pip used for the installation isn't the same as the one of the local env (but then you should not be able to import the lib?). If you don't need prioritized buffers you should be good, by the way!
Yes, this is a common warning in Docker containers. I'm not a Docker expert, so I don't know exactly why it is thrown. I also think that if different pips had been used for the installation, I wouldn't even be able to import the lib.
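One way to check whether pip and the interpreter point at the same environment (standard commands, nothing specific to this setup):

which python && which pip            # should live under the same prefix
python -m pip --version              # shows which Python this pip belongs to
python -c "import torchrl, sys; print(torchrl.__file__); print(sys.executable)"   # where the lib actually got installed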
If the local build can be made to work with "any" version of torch, could the PyPI version be made to work that way too?
No, because the version on PyPI ships C++ binaries that only work with a specific version of torch, so you won't be able to use them with a previous one...
FYI, I just updated the README with some more info on how to install torchrl with PyTorch >= 2.0!
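For completeness, the from-source route this thread converged on looks roughly like this (a sketch based on the commands above, not the exact README text):

pip install ninja cmake
pip install git+https://github.com/pytorch/tensordict   # build tensordict against the already-installed torch
pip install git+https://github.com/pytorch/rl           # then build torchrl the same way
python -c "import torchrl; print(torchrl.__version__)"  # check whether the C++-binaries warning is still raised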
Describe the bug
It is not exactly a bug, but I am wondering whether it is possible to install torchrl against pytorch 2.1.0+cu121.
To Reproduce
I have a specific environment which I cannot change because of my project specifications.
The only way to create my environment without crashing is the following:
use Docker with the nvcr.io/nvidia/tensorflow image and, after that, install pytorch 2.1.0 with its CUDA dependencies.
This way I have the GPU working for both tensorflow and pytorch (if you know another way, please do not hesitate to explain it to me).
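For concreteness, that setup might look roughly like this (a sketch; the 23.10-tf2-py3 container tag is an assumption inferred from the +nv23.10 builds in the pip list earlier in the thread, and the cu121 index matches the torch 2.1.0+cu121 wheels listed there):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:23.10-tf2-py3   # NGC TensorFlow image (tag assumed)
# inside the container, install the CUDA 12.1 PyTorch wheels:
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121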
I tried to install a version for which pip check does not report a conflict with the torch version; I ended up with torchrl 0.3.0 (see the system info below). But when importing torchrl, the following warning is raised:
/usr/local/lib/python3.10/dist-packages/torchrl/data/replay_buffers/samplers.py:37: UserWarning: Failed to import torchrl C++ binaries. Some modules (eg, prioritized replay buffers) may not work with your installation. If you installed TorchRL from PyPI, please report the bug on TorchRL github. If you installed TorchRL locally and/or in development mode, check that you have all the required compiling packages. warnings.warn(EXTENSION_WARNING)
System info
NVIDIA Docker environment (nvcr.io/nvidia/tensorflow image), with torch installed via pip with CUDA support.
>>> import torchrl, numpy, sys
/usr/local/lib/python3.10/dist-packages/torchrl/data/replay_buffers/samplers.py:37: UserWarning: Failed to import torchrl C++ binaries. Some modules (eg, prioritized replay buffers) may not work with your installation. If you installed TorchRL from PyPI, please report the bug on TorchRL github. If you installed TorchRL locally and/or in development mode, check that you have all the required compiling packages.
  warnings.warn(EXTENSION_WARNING)
>>> print(torchrl.__version__, numpy.__version__, sys.version, sys.platform)
0.3.0 1.22.2 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] linux