115 Illegal instruction Container exit - Solution in thread #1331
Docker engine - it appears that issue is specifically occurring on Mac, but it is the same on Linux/Ubuntu as well.
Hi @lishaojun616, have you managed to resolve the issue?
Hello everyone, I face the same problem. Same setup: Ubuntu 22.04 LTS, using Ollama as the LLM. I have installed the newest Docker engine and built anything-llm with docker-compose-v2.
This issue is marked as closed. Is there a solution available? Best regards, Joachim
Encountering the same issue. Using Ubuntu Server 22.04 with Docker, Yarn, and Node installed as recommended in HOW_TO_USE_DOCKER.md#how-to-use-dockerized-anything-llm. Ollama is on another machine, serving at 0.0.0.0 (other remote apps work correctly with this setup, even in Docker). EDIT: Forgot to mention: Docker version 26.1.3, build b72abbb; Ubuntu is running in a VM. Experiencing the identical error as posted by @joachimt-git:
Any suggestions for resolving this? How can I assist? Thanks
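For reference, serving Ollama on all interfaces as described above is typically done via the OLLAMA_HOST environment variable; a minimal sketch, assuming the stock ollama CLI:

```sh
# Bind Ollama to all interfaces so other machines (and containers) can reach it.
OLLAMA_HOST=0.0.0.0 ollama serve
```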
This is certainly a configuration issue. Considering all is well until the native embedder is called, this might be arch-related - but we support both ARM and x86. Regardless, here are my exact steps that fail to repro:
Run the Docker image (a representative command is sketched after these steps):
Access via instance IP on port 3001 - I get the interface, onboard, create workspace, and upload documents.
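For reference, a sketch of the documented run command from HOW_TO_USE_DOCKER.md (flags and paths may differ slightly between versions):

```sh
# Create persistent storage, then run the published image on port 3001.
export STORAGE_LOCATION=$HOME/anythingllm && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
  --cap-add SYS_ADMIN \
  -v ${STORAGE_LOCATION}:/app/server/storage \
  -v ${STORAGE_LOCATION}/.env:/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```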
Considering all of this occurs on
Hi Timothy, I think what @xsn-cloud and I have in common is that we both use Ollama. Might that be the cause of the failure? Joachim
It would not, since the exception is in the AnythingLLM container; if there were an illegal instruction in the Ollama program, it would throw in that container/program. All AnythingLLM does is execute a fetch request to the Ollama instance, which would be permitted in any container.
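A minimal sketch of that kind of request, assuming Ollama's default port and the /api/generate endpoint (the host IP and model name are illustrative):

```sh
# Equivalent of the fetch AnythingLLM performs: plain HTTP to the Ollama API.
curl http://192.168.1.10:11434/api/generate \
  -d '{"model": "llama3", "prompt": "hello", "stream": false}'
```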
@timothycarambat Thanks for addressing this issue. Please let me know if there's anything I can assist you with. I've conducted the following experiment, also considering that it might be an issue with Docker running on VMs, and to rule out resource issues: UPDATE: Also tested on Windows 10 (WSL, Docker for Windows, Docker version 26.1.1, build 4cf5afa): same issue
This is the outcome after onboarding, when attempting to send a "hello" in a new chat (docker logs -f [containerid]). Please note that the container was restarted from scratch to keep the logs unambiguous.
One more clarification: the error occurs after sending a message in the chatbox. Until then, the last message displayed is
Thanks a lot for your time.
If you were to not use the native embedder, this problem would not surface. The only commonality in all of this is varying CPUs. Transformers.js, which runs the native embedder, uses ONNX runtime, and at this point the root cause has to be coming from there, since this only occurs when using the native embedder and those are the supporting libraries that enable that functionality.
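For context, the native embedder path boils down to a Transformers.js feature-extraction pipeline backed by a native ONNX runtime binary; a minimal sketch (the model name is illustrative, not necessarily the one AnythingLLM ships):

```js
// Minimal Transformers.js embedding call; the native ONNX runtime library that
// backs it is where the illegal instruction is raised on CPUs lacking the
// expected SIMD support.
import { pipeline } from "@xenova/transformers";

const embedder = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
const output = await embedder("hello world", { pooling: "mean", normalize: true });
console.log(output.data.length); // vector dimensionality
```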
I had the same issue running Docker in an Ubuntu 24.04 VM on a Proxmox host. I switched the CPU in the guest to "host", and that fixed the problem. Just wanted to share in case anyone else is having the same struggle I did. Hope this helps!
Does the CPU you swapped to support AVX2?
No, my CPU does not support AVX2; however, it supports AVX.
I am sorry, I do not know how to check that; I just changed to "host", which is an Intel Core i9-9900K CPU.
At this time, the working hypothesis is that since Transformers.js uses ONNX runtime, it will fail to execute any model (including the built-in embedder) if AVX2 is not supported. @jorgen-k https://www.intel.com/content/www/us/en/products/sku/186605/intel-core-i99900k-processor-16m-cache-up-to-5-00-ghz/specifications.html
Supports AVX2
I am using KVM as hypervisor and the virtual CPU doesn't support AVX2. When I configure a passthrough of the CPU (which supports AVX2), as @jorgen-k suggested, it works for me as well.
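For anyone on KVM/QEMU, CPU passthrough is typically enabled like this; a sketch, with the remaining VM flags elided:

```sh
# Expose the host CPU (including its AVX2 flag) to the guest:
qemu-system-x86_64 -enable-kvm -cpu host   # ...plus your usual disk/net/memory flags
# libvirt equivalent: put <cpu mode='host-passthrough'/> in the domain XML
# (virsh edit <domain>); in the Proxmox UI, set the guest CPU type to "host".
```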
I ran out of luck with the AVX CPU. It's a Xeon 2660, which only supports AVX. Had to find another machine.
I had the same issue with "/usr/local/bin/docker-entrypoint.sh: line 7: 115 Illegal instruction (core dumped) node /app/server/index.js". It seems the new Docker images of AnythingLLM have some issues, possibly on older systems. To fix it, I tried using older Docker versions and previous AnythingLLM images. While the older Docker versions did not resolve the issue, the older AnythingLLM images worked great. The newest working version for me was "sha256:1d994f027b5519d4bc5e1299892e7d0be1405308f10d0350ecefc8e717d3154f". You can find it here: https://github.com/Mintplex-Labs/anything-llm/pkgs/container/anything-llm/209298508 Running on CentOS 7 Linux (with CWP7), 2x Intel(R) Xeon(R) CPU E5-2680 v2, 2x Nvidia 2080 Ti GPUs
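Pinning by digest would presumably look like this (digest from the comment above; the registry path is an assumption based on the linked package page):

```sh
# Pull the last-known-good image by its content digest rather than a tag.
docker pull ghcr.io/mintplex-labs/anything-llm@sha256:1d994f027b5519d4bc5e1299892e7d0be1405308f10d0350ecefc8e717d3154f
```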
@Smocvin, excellent work. Okay, then that pretty much nails down commit ca63012 as the issue commit. In that commit we moved from lancedb
However, given how this issue seems to only be a problem with certain CPUs, we have two choices: What I will need, though, is some help from the community, as I do not have a single machine, VM, or instance that I can replicate this bug with. So my ask is:
or, if anyone is willing to help debug the hard way, I am going to create two new tags on Docker. Obviously, if we can bump up, that would be ideal, but I would rather not field this issue for the rest of time, since lancedb should just work. Links to images
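(Based on the tag names referenced later in this thread, pulling them presumably looks like this; the image namespace is assumed:)

```sh
# Two diagnostic builds: one rolling lancedb back, one bumping it forward.
docker pull mintplexlabs/anythingllm:lancedb_revert
docker pull mintplexlabs/anythingllm:lancedb_bump
```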
Can repro with a basic cloud instance on Vultr with the following specs: Cloud Compute - Shared CPU, Ubuntu 22.04 LTS x64, Intel High Performance, 25 GB NVMe, 1 vCPU, 1 GB RAM. Then I basically just:
Configured with OpenAI / lancedb. At that point, I just tried any chat, e.g. typed 'hello'; it hangs for a bit, comes up with the error message shown above, and I can see the Docker container died with the log:
I'm happy to help debug here locally with the newly created image tags when available. I have two machines I can test on, with AVX (Debian docker) and AVX2 (Windows docker desktop). I get the core dump on the AVX machine with :latest, but the AVX2 machine runs the container fine, so I can provide output from both of them if needed.
@Dozer316 @acote88 @computersrmyfriends can any of you who have this issue on the Hopefully the |
Hey there - the revert has solved the problem on the impacted machine; the bump still core dumps, unfortunately. Thanks for taking a look at this for us.
Same here. _revert works, _bump crashes. Cheers.
Results of the test: lancedb_bump: crashes. Notes: CPU is AVX only. (Edited: several typos, sorry)
Thank you @Dozer316 @acote88 @xsn-cloud for all taking the time to test both, which is very tedious. I'll contact the lancedb team and see if we can roll back the docker
I just closed out my report #1618 because it was caused by the same thing. AVX was not a flag on the virtual CPU. I set the virtual CPU to passthrough and it solved the issue. Thank you @xsn-cloud
Okay, so the reason this issue occurs is that LanceDB has its minimum CPU target of
So right now there are two options to work around this:
Either way, the root cause is the requirement that the underlying CPU support AVX2. Closing currently as
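To illustrate the mechanism (a sketch, not LanceDB's actual build configuration): a native library compiled with AVX2 enabled emits AVX2 instructions unconditionally, and executing them on a CPU without the flag raises SIGILL, the "Illegal instruction" seen in the logs.

```sh
# Rust example: opting a whole build into AVX2 at compile time.
RUSTFLAGS="-C target-feature=+avx2" cargo build --release
# The resulting binary SIGILLs ("Illegal instruction") on pre-AVX2 CPUs.
```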
Thanks for following up on this, Timothy. In case this can help others, I compared 2 types of instances on Vultr, one called "High Performance" and the other "High Frequency". The "High Frequency" one does support AVX2, while the other doesn't. You can check by running:
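One common way to check on Linux (not necessarily the exact command posted above):

```sh
# Prints "avx2" if the CPU advertises the flag; no output means unsupported.
grep -o 'avx2' /proc/cpuinfo | sort -u
```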
You have no idea how long I've had to search everywhere and how many reconfigurations and reinstalls I did before I found this thread. Could you MAYBE write SOMEWHERE that currently AnythingLLM requires an AVX2 CPU to work properly? |
Hello @timothycarambat Thank you for publishing the Lancedb_revert image at all in the first place. Currently, googling the error message took me to this thread, which in turn links to this one. To resolve the issue, all I had to do was update the docker run command with the lancedb_revert tag. My PC is old, but it's what I've got, and sadly upgrading just isn't on the cards any time soon - I'm grateful to have a way to try it out at all. I appreciate it's unreasonable to put in ongoing effort for a small subset of users running into incompatibility problems because they insist on using a relic from the before times - especially since it's going to start increasingly cropping up elsewhere as well. Having an image in the first place is great, but it'd be nice if there was some way to "run out the clock" on updates until breaking changes inevitably came along. Would it be possible to have an unsupported update that pins the version of lancedb in place, dumps latest and/or dev over the top of it, and "when it breaks, that's the end of the ride... May the odds be ever in your favour"? When it does, ideally the Docker image gets a second "unsupported final build" release based on that point version, and that's the end of that.
AnythingLLM is installed on an Ubuntu server.
In the system LLM settings, the system can connect to the Ollama server and fetch the models.
But when chatting in a workspace, the Docker container exits.
1. The browser shows this info:
2. And the Docker logs:
"/usr/local/bin/docker-entrypoint.sh: line 7: 115 Illegal instruction (core dumped) node /app/server/index.js"
What's the problem?