SCUDA is a GPU-over-IP bridge that allows GPUs on remote machines to be attached to CPU-only machines.
The demo below shows an NVIDIA GeForce RTX 4090 attached to a remote machine (right pane); the left pane is a Mac running a Docker container with the NVIDIA utilities installed.
The Docker container runs this matrixMulCUBLAS example.
You can view the Docker image used here.
(Demo video: cublas.mov)
This next demo uses the same setup: the NVIDIA GeForce RTX 4090 on a remote machine (right pane) and a Mac running a Docker container (left pane).
The Docker container runs
python3 -c "import torch; print(torch.cuda.is_available())"
to check whether CUDA is available.
You can view the Docker image used here.
(Demo video: Screen.Recording.2024-10-08.at.8.27.07.PM.mp4)
Make the local dev script executable:
chmod +x local.sh
It's also helpful to alias this local script in your bash profile:
alias s='/home/brodey/scuda-latest/local.sh'
The SCUDA server must be running before any client commands are issued:
s server
With the server above running:
s run
The above will rebuild the client and run nvidia-smi for you.
To install SCUDA, run the server binary on the GPU host:
scuda -l 0.0.0.0:0
Then, on the client, run:
scuda <ip>:<port>
To build the client library from source:
nvcc -shared -o libscuda.so client.c
This library can then be preloaded:
LD_PRELOAD=libscuda.so nvidia-smi
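Any CUDA application can be run the same way; for example, the PyTorch check from the demo above (this assumes PyTorch is installed on the client):
LD_PRELOAD=libscuda.so python3 -c "import torch; print(torch.cuda.is_available())"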
By default, the client library passes calls through locally; in other words, it does not connect to a server. To connect to a server, create a file containing the host you wish to connect to:
~/.config/scuda/host
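For example, assuming the server from the installation step is listening at <ip>:<port>:
mkdir -p ~/.config/scuda
echo "<ip>:<port>" > ~/.config/scuda/host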
The goal of SCUDA is to enable developers to easily interact with GPUs over a network and take advantage of pools of distributed GPUs. TCP is, of course, slower than a directly attached GPU, but we have plans to minimize the performance impact in several ways:
- Local testing - For testing purposes, the latency added by TCP is acceptable, as the goal is to verify compatibility and performance rather than to achieve the lowest latency. The remote GPU can still fully accelerate the application, allowing a developer to run tests they otherwise couldn't on their local setup.
- Aggregated GPU pools - The goal is to centralize GPU management and resource allocation, making it easier to deploy and scale containerized applications that need GPU support without worrying about GPU availability. SCUDA will eventually handle capacity management and pooling.
- Remote model training - Developers can train models from their laptops or low-power devices, using GPUs optimized for training, without needing to deploy a full VM or move the entire development environment to the remote location.
- Remote inferencing - Developers can set up their application locally but direct all CUDA calls for model inference to a remote GPU server. The application can thus process large batches of images or video frames using the remote GPU's acceleration capabilities.
- Remote data processing - Developers can run operations like filtering, joining, and aggregating data directly on the remote GPU, with the results transferred back over the network. For example, matrix multiplication or linear algebra computations on large datasets can be offloaded to the remote GPU while the scripts themselves run locally.
- Remote fine-tuning - Developers can download a pre-trained model (e.g., ResNet) and fine-tune it. With SCUDA, training happens remotely: the library routes PyTorch CUDA calls over TCP to a remote GPU, letting the developer run the fine-tuning process from their local machine or Jupyter Notebook environment (see the sketch below).
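To make the fine-tuning case concrete, here is a minimal sketch assuming PyTorch and torchvision are installed in the client environment. The script name (finetune.py), model choice, class count, and dummy batch are illustrative placeholders, not part of SCUDA; the point is that the code contains nothing SCUDA-specific.

```python
# finetune.py - ordinary PyTorch fine-tuning code; nothing here is
# SCUDA-specific. When the SCUDA client library is preloaded, the
# "cuda" device and every CUDA call resolve against the remote GPU.
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda")

# Load a pre-trained ResNet and swap the classifier head for a
# hypothetical 10-class task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# A dummy batch standing in for a real DataLoader.
images = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 10, (8,), device=device)

model.train()
for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```

The script is then run unchanged through SCUDA:
LD_PRELOAD=libscuda.so python3 finetune.py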
See our TODO.
This project is inspired by some existing proprietary solutions:
- https://www.thundercompute.com/
- https://www.juicelabs.co/
- https://en.wikipedia.org/wiki/RCUDA (that's where SCUDA's name comes from; S is the next letter after R!)