Segfault / Memory error with 65B model (128GB RAM) #12
Labels: build (Compilation issues)
This was fixed here: 7d9ed7b. Just pull, run make, and it should be good.

Nice, that worked 🥳
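The suggested fix amounts to updating the local checkout to include commit 7d9ed7b and rebuilding. A minimal sketch of those steps (the clean step and branch setup are assumptions, not taken verbatim from the thread):

```sh
# Update the local llama.cpp checkout to pick up the fix (commit 7d9ed7b),
# then rebuild from scratch and rerun the 65B model.
git pull
make clean
make
```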
Commits referencing this issue:
- Hades32 pushed a commit to Hades32/llama.cpp (Mar 21, 2023): Fix Makefile and Linux/MacOS CI
- SlyEcho pushed a commit to SlyEcho/llama.cpp (Jun 2, 2023): Clear logit bias between requests
- chsasank pushed a commit to chsasank/llama.cpp (Dec 20, 2023): add TLDR and hw support; enrich features section; update model weights; minor README updates (Co-authored-by: Holden <[email protected]>)
- cebtenzzre added a commit (Jan 17, 2024): Signed-off-by: Jared Van Bortel <[email protected]>
- cebtenzzre added a commit (Jan 24, 2024): Signed-off-by: Jared Van Bortel <[email protected]>
Original report:
On an M1 Ultra / 128GB, running the 65B model produces a segfault / memory error after everything has been loaded correctly. The 30B model runs fine (even on a 64GB M1 Max). Full output:
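For context, a 65B run at the time was typically launched with a command along these lines; the model path, thread count, token count, and prompt below are illustrative placeholders, not the reporter's actual invocation:

```sh
# Illustrative example only: a typical llama.cpp invocation of the 65B model.
# The quantized weights path, -t/-n values, and prompt are placeholders.
./main -m ./models/65B/ggml-model-q4_0.bin -t 8 -n 128 -p "Building a website can be done in 10 simple steps:"
```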