Crash: Assertion '!this->empty()' failed #1696

Closed · 1 of 3 tasks
moritztim opened this issue Nov 30, 2023 · 3 comments
Labels: bug, vulkan

Comments

moritztim (Contributor) commented Nov 30, 2023

System Info

aur/gpt4all-chat 2.5.4-1 (+0 0.00) (Installed)
Linux pc 6.6.3-arch1-1 #1 SMP PREEMPT_DYNAMIC Wed, 29 Nov 2023 00:37:40 +0000 x86_64 GNU/Linux

Information

  • The official example notebooks/scripts
  • My own modified scripts
  • GUI

Reproduction

  1. open gpt4all
  2. start download of mistral-7b-openorca.Q4_0.gguf
  3. download finishes, app crashes
[Debug] (Thu Nov 30 10:55:58 2023): deserializing chats took: 0 ms
[Warning] (Thu Nov 30 10:56:52 2023): Opening temp file for writing: "/home/moti/AI/Text Generation/gpt4all/incomplete-mistral-7b-openorca.Q4_0.gguf"
[Warning] (Thu Nov 30 10:57:18 2023): Opening temp file for writing: "/home/moti/AI/Text Generation/gpt4all/incomplete-mistral-7b-openorca.Q4_0.gguf"
[Warning] (Thu Nov 30 10:59:05 2023): stream 3 finished with error: "Internal server error"
[Warning] (Thu Nov 30 10:59:05 2023): Opening temp file for writing: "/home/moti/AI/Text Generation/gpt4all/incomplete-mistral-7b-openorca.Q4_0.gguf"
[Warning] (Thu Nov 30 10:59:05 2023): "ERROR: Downloading failed with code 401 \"Internal server error\""
llama_new_context_with_model: max tensor size =   102.55 MB
llama.cpp: using Vulkan on /usr/include/c++/13.2.1/bits/stl_vector.h:1208: constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::front() [with _Tp = ggml_vk_device; _Alloc = std::allocator<ggml_vk_device>; reference = ggml_vk_device&]: Assertion '!this->empty()' failed.
[Debug] (Thu Nov 30 11:01:34 2023): deserializing chats took: 6 ms
[Warning] (Thu Nov 30 11:01:35 2023): ERROR: Previous attempt to load model resulted in crash for `mistral-7b-openorca.Q4_0.gguf` most likely due to insufficient memory. You should either remove this model or decrease your system RAM by closing other applications. id "0eabbb6e-765e-4319-af82-c03c81e9e303"
llama_new_context_with_model: max tensor size =   102.55 MB
llama.cpp: using Vulkan on /usr/include/c++/13.2.1/bits/stl_vector.h:1208: constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::front() [with _Tp = ggml_vk_device; _Alloc = std::allocator<ggml_vk_device>; reference = ggml_vk_device&]: Assertion '!this->empty()' failed.
[2]    36718 IOT instruction (core dumped)  gpt4all-chat
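For context: with -D_GLIBCXX_ASSERTIONS, libstdc++ replaces the undefined behavior of calling front() on an empty std::vector with the abort seen in the log above. A minimal standalone sketch of the failing pattern (the ggml_vk_device struct is stubbed here; this is not the actual llama.cpp code):

```cpp
// Build with: g++ -D_GLIBCXX_ASSERTIONS repro.cpp
#include <string>
#include <vector>

// Stub standing in for llama.cpp's real ggml_vk_device struct.
struct ggml_vk_device {
    std::string name;
};

int main() {
    // If Vulkan device enumeration finds no usable GPU, the list is empty.
    std::vector<ggml_vk_device> devices;
    // Without _GLIBCXX_ASSERTIONS this is silent undefined behavior; with it,
    // libstdc++ aborts: Assertion '!this->empty()' failed.
    ggml_vk_device &first = devices.front();
    (void)first;
    return 0;
}
```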

Expected behavior

...

moritztim (Contributor, Author) commented Nov 30, 2023

This issue template is incomplete. Usually there are sections for steps, expected behavior, and actual behavior. Also, the section about scripts is marked as required but isn't necessarily applicable; there's no * to indicate it is required, yet I can't submit without it.

moritztim (Contributor, Author) commented:

> most likely due to insufficient memory

RAM was at 16% usage.

moritztim changed the title from "Crash after download" to "Crash: Assertion '!this->empty()' failed" on Nov 30, 2023
cebtenzzre (Member) commented Nov 30, 2023

This issue will be fixed in the next release (I wrote a fix before I read this issue). You're the first user to notice it because you're building with -D_GLIBCXX_ASSERTIONS, which Arch Linux enables by default. We should probably enable that in our build script for at least debug builds... I only caught it because of AddressSanitizer.
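A sketch of the kind of guard such a fix implies (hypothetical helper and device name, not the actual patch in nomic-ai/llama.cpp):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Stub standing in for llama.cpp's real ggml_vk_device struct.
struct ggml_vk_device {
    std::string name;
};

// Hypothetical guard: check that device enumeration found something before
// dereferencing with front(), and fall back to CPU instead of crashing.
static const ggml_vk_device *pick_vulkan_device(
        const std::vector<ggml_vk_device> &devices) {
    if (devices.empty()) {
        std::fprintf(stderr, "no usable Vulkan device, falling back to CPU\n");
        return nullptr;
    }
    return &devices.front();
}

int main() {
    std::vector<ggml_vk_device> none;                 // no GPU found
    std::vector<ggml_vk_device> one{{"ExampleGPU"}};  // one device found
    pick_vulkan_device(none);                         // logs, returns nullptr
    if (const ggml_vk_device *dev = pick_vulkan_device(one))
        std::printf("using Vulkan on %s\n", dev->name.c_str());
    return 0;
}
```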

cebtenzzre added the bug, awaiting-release, and vulkan labels on Nov 30, 2023
cebtenzzre added a commit to nomic-ai/llama.cpp that referenced this issue on Dec 1, 2023
cebtenzzre removed the awaiting-release label on Dec 1, 2023