
Looking for the right way to run a local CUDA container #472

Answered by bmahabirbu
nzwulfin asked this question in Q&A

@nzwulfin Try my fork of ramalama using the nv-simple branch
https://github.com/bmahabirbu/ramalama/tree/nv-simple

Give it a run and see if it works. You'll need the NVIDIA Container Toolkit package for this to work! (Assuming you already have a CUDA driver installed as well.) I pushed a working build to Docker Hub temporarily for testing purposes.
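
If you don't have the toolkit set up yet, here's a minimal sketch for a Fedora-style host (the package name and CDI spec path are the usual ones, but treat the exact steps as assumptions and adjust for your distro):

sudo dnf install -y nvidia-container-toolkit
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
nvidia-ctk cdi list

The last command should list your GPU (e.g. nvidia.com/gpu=0) if everything is wired up.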

Here is an example of how to run it without installing ramalama:

git clone https://github.com/bmahabirbu/ramalama.git
cd ramalama
git checkout nv-simple
./bin/ramalama run llama3.2
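
To sanity-check that the container is actually hitting the GPU (a generic check, nothing ramalama-specific), watch nvidia-smi on the host while the model answers a prompt; you should see the llama process and its VRAM usage show up:

watch -n 1 nvidia-smi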

In the meantime, run ramalama with the debug flag, like so:

ramalama --debug run llama3

Look for the exec_cmd and copy everything before the /bin/sh command. You …
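
For illustration only, not verbatim output: the exec_cmd line holds the full container invocation, shaped roughly like podman run <flags> <image> /bin/sh -c '<server command>'. Copying everything before /bin/sh lets you re-run the same container with an interactive shell to poke around:

podman run <flags-from-exec_cmd> <image> /bin/sh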

Answer selected by nzwulfin