Fix for MacOS users encountering model load errors #6227
Conversation
Merge dev branch
Merge dev branch (oobabooga#5257)
I changed the implementation to a simpler, more streamlined one with the same logic on macOS. Let me know if it doesn't work.
I gave your alternative fix a quick try and it worked on my Mac, so Mac users should be good with this. However, I can't determine whether it will work for AMD users on Linux, as one of the users affected by this issue on the other thread fit that category. For example, when I remove the macOS-specific check, I hit: `Exception: Cannot import llama_cpp_cuda because llama_cpp is already imported. Switching to a different version of llama-cpp-python currently requires a server restart.` I'm concerned an AMD/Linux user might hit that check as well.
Thanks for the confirmation.
It should work. Both AMD and CUDA have
---------
Co-authored-by: oobabooga <[email protected]>
Co-authored-by: Invectorgator <[email protected]>
Fix for the following error when attempting to load any model on a Mac:
"The CPU version of llama-cpp-python is already loaded. Switching to the default version currently requires a server restart."
Checklist: