OpenShift Virtualization testing—VMs with GPUs #725
I imagine we would want to test passing a single GPU and multiple GPUs.
Some docs to read up on: …
We should be able to test this soon given that we now have a V100 host in the ocp-test cluster.
Looks like there's device mediation (i.e. vGPUs) and PCI passthrough support depending on what cards are supported. For mediation there are two approaches: one that uses the NVIDIA GPU operator to do the mediation and one that relies on the Red Hat OpenShift Virtualization operator to do the setup. Need to read up on that and on PCI passthrough.
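For context, exposing a whole GPU for PCI passthrough in OpenShift Virtualization comes down to allow-listing the device in the HyperConverged CR. A rough sketch, assuming a Tesla V100 (the 10DE:1DB6 vendor:device ID is an example and should be confirmed with lspci -nn on the host; the resourceName is just the label VMs will request):

oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge -p '
spec:
  permittedHostDevices:
    pciHostDevices:
    - pciDeviceSelector: "10DE:1DB6"
      resourceName: "nvidia.com/GV100GL_Tesla_V100"
'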
Note from the NERC HU/BU Weekly Team Meeting: Dan McPherson would like to test GPUs on OpenShift Virtualization.
@jtriley Is there a reason why the only GPU node in the test cluster has scheduling disabled?
I want to start testing PCI pass-through for GPUs.
Not that I'm aware of - maybe @dystewart has it temporarily disabled? I think he's working on GPU scheduling (#495) on that cluster, IIRC.
@dystewart let me know once you are done with your testing and I can then proceed with this issue once the GPU is available.
Apparently both of those methods require the NVIDIA vGPU software, which requires a license. Do we have such a license for these GPUs?
@hpdempsey are we able to get an NVIDIA vGPU Software license to test VMs with GPUs? See above ^.
@computate just to be clear, that software is required if we want to test VMs with vGPUs, which are partitioned NVIDIA GPUs. For PCI passthrough of a whole GPU we do not need that license (I plan to do that once the GPU becomes available).
@naved001 yeah sorry, I'm still playing around with a couple of things on the GPU so I have it cordoned right now; very close to finishing up though!
@dystewart no rush, thank you for the heads up!
This is all about functionality, but one of the things we will need to do is evaluate the performance of virtualized versus physical GPUs; Apoorve is working on this at IBM.
@naved001 please provide an update on how things are going and the next steps.
@joachimweyl I am blocked on getting access to a GPU to test this. I have a draft PR which will enable GPU pass-through for the V100.
@naved001 are you still blocked on this or did the NVIDIA fix and access to the V100 resolve this blockage?
I merged the PR that should enable testing this, but it appears that the machineconfig update hasn't rolled out and is stuck in the updating state, so I need to take a look at that.
@computate The machineconfig didn't apply because the nodes can't be drained. I see this in the logs:
Do you know where those pods in …
@naved001 I wouldn't worry about evicting …
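For anyone hitting the same thing, a generic set of checks for a stuck MachineConfig rollout (not necessarily the exact commands used here):

# See whether the worker pool is degraded or still updating
oc get mcp worker
oc describe mcp worker
# Check the node's machineconfig state annotations for the failure reason
oc describe node wrk-3 | grep -i -A 2 machineconfiguration
# Pods protected by a PodDisruptionBudget are a common cause of stuck drains
oc get pdb -A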
After the machineconfig changes were applied, I can see that the GPU device is bound to the vfio driver.
And if we describe the node wrk-3, we can see that 1 GPU device shows up as allocatable.
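For reference, the kind of checks behind those two statements (wrk-3 is the node from this thread; the vendor ID filter assumes an NVIDIA card):

# Confirm the GPU is bound to the vfio-pci driver on the host
oc debug node/wrk-3 -- chroot /host lspci -nnk -d 10de:
# Confirm the passthrough resource shows up under the node's allocatable resources
oc describe node wrk-3 | grep -i -A 10 allocatable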
I will now test passing it to a VM.
I can confirm that I can launch a VM with 1 GPU on wrk-3 (it only has 1 GPU), and you can SSH to the VM. I launched my VM from a CentOS 9 template, so I edited it to have access to the GPU.
Once the VM launched, I could see the GPU device inside the guest.
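The template edit mentioned above looks roughly like adding a host device under the VM's domain devices; a sketch, where the deviceName must match whatever resourceName was allow-listed in the HyperConverged CR and gpu1 is just a placeholder:

spec:
  template:
    spec:
      domain:
        devices:
          hostDevices:
          - deviceName: nvidia.com/GV100GL_Tesla_V100
            name: gpu1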
@jtriley @computate what other tests do we want to perform for this issue? I am thinking of maybe testing the A100 machine since it has multiple GPUs. In that case I'll reset this machine so that @dystewart can use it.
@naved001 you could try a simple TensorFlow test:
# Test Python TensorFlow with GPU:
pip install tensorflow numpy matplotlib torch --upgrade
python3 -m pip install tensorflow[and-cuda] --upgrade
# Make sure this command returns a GPU device in the list:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
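Assuming the NVIDIA driver is installed in the guest, a quicker sanity check before installing any frameworks would be something like:

# Confirm the guest sees the passed-through GPU and the driver is loaded
nvidia-smi
# Without the NVIDIA driver, just confirm the PCI device is visible in the guest
lspci -nn | grep -i nvidia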
@naved001 or you could try running InstructLab. Something like this:
git clone https://github.com/instructlab/instructlab.git
cd instructlab/
sudo dnf install python3.11 python3.11-devel
python3.11 -m venv venv
python3 -m venv --upgrade-deps venv
source venv/bin/activate
pip install packaging wheel torch
pip install 'instructlab[cuda]' \
-C cmake.args="-DLLAMA_CUDA=on" \
-C cmake.args="-DLLAMA_NATIVE=off"
CUDACXX=/usr/local/cuda-12/bin/nvcc CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 CUDAHOSTCXX=$(which clang++-17) pip install --force-reinstall --no-deps llama_cpp_python==0.2.79 -C cmake.args="-DLLAMA_CUDA=on"
ilab data generate --pipeline=full --num-cpus 8 --gpus 1 --taxonomy-base=empty
ilab chat
sudo dnf install pciutils
lspci -n -n -k | grep -A 2 -e VGA -e 3D
ilab init
ilab download
ilab model serve
ilab data generate --pipeline=full --num-cpus 8 --gpus 1 --taxonomy-base=empty
ilab data generate --taxonomy-base=origin/cmb-run-2024-08-26
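While ilab data generate or ilab model serve is running, one rough way to confirm the GPU is actually being exercised is to watch utilization from another shell:

# Refresh GPU utilization every 2 seconds while the workload runs
watch -n 2 nvidia-smi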
@Milstein has an awesome model-training Jupyter notebook with examples as well!
@computate I did the simple test and can confirm that the GPU device is usable in TensorFlow.
I tested the following configurations:
…
In all cases, I could view the GPU devices inside the VM.
Observations and Concerns
I did not test GPUs in VMs with mediated devices, as I believe we need a subscription to the NVIDIA vGPU software to use it. I am going to mark this issue as done and then undo the changes to the test cluster.
@naved001 I got some feedback from @hpdempsey. Can we still do a demo of GPUs on VMs with @waygil @jtriley and @aabaris, with some GPUs from ESI?
@computate I undid the changes I made to the test cluster.
What OpenShift cluster would these be a part of?
@naved001, with your imminent parental leave, would you please break this into multiple issues, close out the parts you completed, and pass along the other issues to Chris and/or Thorsten?
@joachimweyl the testing is actually complete. @computate only reopened this issue so that we could have a demo. I am going to create another issue just for the demo then.
Edit by naved001: Blocked as of 10/22/2024 on getting access to a GPU.