
CUDA for Strong Scaling MD

Repository for all things CUDA.

Running CUDA

CUDA can be run on the BU Shared Computing Cluster or through Google Colab.

Running CUDA on the SCC

  1. Log in to an SCC node.
  2. Execute the command module load cuda/11.3 to load the NVIDIA CUDA toolkit (nvcc and related tools).
  3. To compile your CUDA code, execute: nvcc <filename> -o <outfile>
  4. To run the executable, you need a GPU node; request one through either an interactive session or a batch job (see the sketch after this list and the SCC documentation for the GPU resource options).
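A typical interactive session, assuming the hello-world program shown later in this README is saved as hello.cu, looks like the sketch below. The GPU-request flag (-l gpus=1, Grid Engine syntax) is an assumption and should be checked against the current SCC documentation.

# request an interactive session on a GPU node (flag per SCC docs)
qrsh -l gpus=1
# load the CUDA toolkit and compile
module load cuda/11.3
nvcc hello.cu -o hello
# run on the GPU node
./hello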

Running CUDA on Google Colab

  1. Create a new Colab notebook.
  2. Change the runtime type to a GPU.
  3. Run the following commands to check the environment and load the nvcc4jupyter extension, which lets you compile and run CUDA C++ code from notebook cells.
!python --version
!nvcc --version
!pip install nvcc4jupyter
%load_ext nvcc4jupyter
  4. Run code by putting %%cuda at the beginning of a cell, followed by your CUDA C++ code, for example:
#include <stdio.h>
// Kernel: every launched thread prints its block and thread index.
__global__ void hello(){
  printf("Hello from block: %u, thread: %u\n", blockIdx.x, threadIdx.x);
}
int main(){
  // Launch 4 blocks of 4 threads each: <<<numBlocks, numThreadsPerBlock>>>
  hello<<<4, 4>>>();
  // Wait for the kernel to finish so its printf output is flushed before exit.
  cudaDeviceSynchronize();
  return 0;
}
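The launch hello<<<4, 4>>> starts 4 blocks of 4 threads each, so the program prints 16 lines in a nondeterministic order. As a slightly larger, hedged sketch (not part of the original repository) of how the same launch syntax is typically used for real work, the vector addition below maps each thread to one array element via blockIdx.x * blockDim.x + threadIdx.x. It can be pasted into a %%cuda cell or compiled with nvcc on the SCC.

#include <stdio.h>
#include <cuda_runtime.h>

// Each thread adds one element; the global index combines block and thread IDs.
__global__ void add(const float *a, const float *b, float *c, int n){
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = a[i] + b[i];
}

int main(){
  const int n = 1 << 10;
  size_t bytes = n * sizeof(float);

  float *a, *b, *c;
  // Unified (managed) memory keeps the sketch short; explicit cudaMalloc/cudaMemcpy also works.
  cudaMallocManaged(&a, bytes);
  cudaMallocManaged(&b, bytes);
  cudaMallocManaged(&c, bytes);
  for (int i = 0; i < n; i++){ a[i] = 1.0f; b[i] = 2.0f; }

  // Round up the block count so every element gets a thread.
  int threadsPerBlock = 256;
  int numBlocks = (n + threadsPerBlock - 1) / threadsPerBlock;
  add<<<numBlocks, threadsPerBlock>>>(a, b, c, n);
  cudaDeviceSynchronize();

  printf("c[0] = %f, c[n-1] = %f\n", c[0], c[n - 1]);  // expect 3.0 for both

  cudaFree(a); cudaFree(b); cudaFree(c);
  return 0;
}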

Resources

CUDA C++ Programming Guide

Intro to CUDA (Oklahoma State University ECEN 4773/5793)

CUDA in C/C++ on the SCC