Improve installation process #1178
GPU detection to decide which llama.cpp build to use? For example, if an Intel Arc card is detected but no Nvidia or AMD card, use the build that actively supports it. With the current process, everything has to be built anew each time, which is a particular pain for those who struggle with the various specific compilers required for oneAPI.
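A minimal sketch of what such detection could look like, not anything the project ships; probing vendor CLI tools on PATH and the flavor names used here are illustrative assumptions:

```python
# Hypothetical sketch: choose a llama.cpp build flavor by probing for vendor tools.
# The probed tools and flavor names are assumptions, not project API.
import shutil

def detect_gpu_flavor() -> str:
    if shutil.which("nvidia-smi"):   # NVIDIA driver utilities on PATH
        return "cuda"
    if shutil.which("rocm-smi"):     # AMD ROCm stack on PATH
        return "rocm"
    if shutil.which("sycl-ls"):      # Intel oneAPI / Arc (SYCL) on PATH
        return "sycl"
    return "cpu"                     # no GPU tooling found: plain CPU build

if __name__ == "__main__":
    print(detect_gpu_flavor())
```

An installer could then pick the matching prebuilt binary instead of recompiling on every install.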
Wouldn't llama.cpp be the logical place to implement these improvements? One improvement that would logically fall under the responsibility of llama-cpp-python is using the prebuilt llama.cpp binaries on Windows, but other than that it would just be another layer of CMake.
Prebuilt wheels with GPU support for all platforms (on GitHub or PyPI). From my observations, getting GPU support working is the most common problem when installing llama-cpp-python; prebuilt packages should fix it.
I'm partial to this. PyPI is a little annoying because we would need a different package name for each variant, but if we did it using separate indexes (similar to PyTorch) this should work. Ideally this would be done via separate index URLs for Metal, CUDA, etc. It could perhaps be done with a GitHub Pages build on each release. Update: started working on this and it's very much doable; the process is straightforward, and much of the hard work of figuring out how to build these wheels has thankfully been done by @jllllll and @oobabooga.
There's a basic example of this at https://github.com/abetlen/github-pages-pypi-index. As an example, to install the latest llama-cpp-python version:
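The command itself did not survive in this copy of the thread; based on the demo index linked above, it was presumably of this shape (the Pages URL is an assumption following GitHub's standard `user.github.io/repo` pattern):

```sh
# Assumed form: pull wheels from the demo GitHub Pages index, falling back to PyPI.
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/github-pages-pypi-index/
```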
In the future this will likely be found at …
Will installation without wheels still be supported? I just tried to update the package after a long break and got an error. This is the first time I've come across wheels, so I want to ask: will performance deteriorate from using pre-built libraries? I don't have the most powerful computer, so it's important for me to make the most of it.
@DvitryG yes, source installation will always be the default (…)
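As a general note (not from the thread): prebuilt wheels are usually compiled for a broad CPU baseline, while a source build can target the host machine, so building locally can still pay off on performance. A sketch of an explicit source build using the project's documented `CMAKE_ARGS` pattern (exact GGML/LLAMA flag names vary across versions):

```sh
# Force a source build even if a matching wheel exists, passing CMake flags through.
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" \
    pip install --upgrade --force-reinstall --no-binary llama-cpp-python llama-cpp-python
```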
Thank you for making the effort to create these bindings; doing it myself would have been a nightmare ;D One important improvement, specifically when building from a cloned source repo, would be to keep the newly built wheel in place, maybe somewhere like …
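Plain pip can approximate this today; a sketch, with an illustrative directory name:

```sh
# Build the wheel once and keep it, rather than letting pip discard the build tree.
pip wheel llama-cpp-python --wheel-dir ./wheelhouse
# Later reinstalls on the same machine/Python reuse it without recompiling.
pip install --no-index --find-links ./wheelhouse llama-cpp-python
```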
It seems the setup.py way is becoming obsolete, but it can still be used independently. I just made it work with llama for now :)
I recently tried to install this on my Mac. The simple way would be:

```sh
brew install llama.cpp
pdm add llama-cpp-python -Ccmake.args="-DGGML_BLAS=ON;-DGGML_BLAS_VENDOR=OpenBLAS"
```

Without the brew install first, it would keep complaining with a Ninja build error. I believe some package was missing on my Mac, as one of the messages showed …
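To confirm the BLAS backend actually got compiled in, one option is to print llama.cpp's system info through the low-level bindings; a sketch, assuming the `llama_print_system_info` binding is exposed in the installed version:

```python
# Prints llama.cpp's compile-time feature flags (BLAS, Metal, AVX, ...).
# llama_print_system_info is a low-level ctypes binding; availability may vary by version.
import llama_cpp
print(llama_cpp.llama_print_system_info().decode("utf-8"))
```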
Open to suggestions / assistance on how to make installation easier and less error-prone.
One thought is to add better platform detection in the CMakeLists and to provide better docs / links when required environment variables aren't set or libraries can't be found.