UserWarning: MPS: no support for int64 repeats mask, casting it to int32 on MacOS 13.3 #1686
Comments
Same for me, but with RuntimeError: MPS does not support cumsum op with int64 input.
Same here on an M1 MacBook Pro.
I got the same error, except I chose option D at setup (no GPU, run on CPU only) and it STILL gives me that. No clue how that could be if it's supposed to be set up for CPU only; it shouldn't be referring to MPS at all. I assume at least one reference to MPS got missed somewhere, but I have no clue where to go in the code to even try to fix it.
I also got the same issue without "--cpu" on my M2 Pro MacBook.
Using start_macos.sh:
The model I used was built following the vicuna instructions, but I still get the same issue with other models downloaded by download-model.py.
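(For reference: text-generation-webui's server.py accepts a --cpu flag that skips GPU/MPS entirely. Whether the macOS launcher script forwards extra flags may depend on the installer version; this is a minimal sketch of running the server directly.)

```sh
# Force CPU-only inference, bypassing MPS code paths entirely
python server.py --cpu
```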
Another user helped resolve this by substituting the gpt4-x-alpaca-30b-ggml-q4_1 repo into the models directory. It works almost as expected, except that it still doesn't use the M1 GPU, even though PyTorch should use MPS (Apple's Metal Performance Shaders) on macOS 13.3.1. After starting oobabooga I see the output below. Statements missing from the README.md:
- If you have no GPU, or just macOS on M1, use a ggml model.
- git-lfs is not a good option for downloading a repo with such a small number of files, because it leaves a .git directory that makes the repo twice the size of the model download (GPT4-X-Alpaca-30B-4bit).
- How to substitute the model.
- How to make oobabooga listen on more than just localhost.

However, I see different answers between oobabooga and pure llama.cpp.

oobabooga:

Pure llama.cpp with gpt4-x-alpaca-30b-ggml-q4_1.bin is able to receive and answer text in languages other than English. Oobabooga with gpt4-x-alpaca-30b-ggml-q4_1.bin does 'understand' questions in another language but answers in English, or via the 'google_translate' plugin with very poor quality. It also seems oobabooga consumes only 4 cores instead of all of them like llama.cpp does. So I wonder:
Could you please review the latest info here?
Having the same issue here. Mac M2, fresh install. I can start up the UI, but any prompt I enter results in the cumsum error.
+1
This issue seems to be related to PyTorch on macOS. The problem can be resolved by using the nightly build of PyTorch for the time being.
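(At the time, the nightly CPU/MPS build could be installed with a command along these lines; the channel URL may have changed since.)

```sh
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```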
Just a really easy fix for this issue on my Mac M1: in the one-click installer's webui.py, replace the stable PyTorch install command with the nightly build install command.
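The code blocks from that comment were lost in this copy. Based on the surrounding replies (webui.py, run_cmd, the delete-then-reinstall note below, and pytorch/pytorch#96610), a plausible reconstruction of the fix flow is the following sketch; exact paths and the webui.py line vary by installer version.

```sh
# 1. Delete the installer's Python environment so torch gets reinstalled fresh
rm -rf installer_files/env
# 2. In the one-click installer's webui.py, point the torch install at the
#    nightly channel, e.g. something like:
#    python -m pip install --pre torch torchvision torchaudio \
#        --extra-index-url https://download.pytorch.org/whl/nightly/cpu
# 3. Re-run the launcher, which rebuilds the environment
./start_macos.sh
```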
The fix by @joshuahigginson1 works, thanks a lot. Just running the pip install as mentioned in the pytorch 96610 issue did not work; I had to delete the directory and then run the install. Thanks.
I did the above fix by @joshuahigginson1 and I get the following when I try to reinstall: Traceback (most recent call last):
Hi @kevinhower, this looks like an issue with the actual 'run_cmd' function. You might want to check that you've cloned the latest 'one-click-installers' files: https://github.com/oobabooga/one-click-installers
I got it to work... sort of. It does generate text, but it's... well, gibberish. I said "hi" and it gave me the following response: "The U.S. Government has been infected by the virus that shut down the website Teknoepetitionen (meaning “the people’s petition” or more simply, but not without reason, they are also called the have a look at this whopping hmwever, we'll see what happens when the same thing happened before. As usual, no one from the government, which means all the time! This year, however, he said that the campaign to end the the first two years, because the next three years. So far, the effort to get rid of the idea of a good time to be able to eat bread. It was created around the world, and may even now, and how much money. Avoiding food-related issues? I'm sure most of us know someone else" Just utter nonsense with the Pythia 6.9B model. Don't know if it is the model or some other issue.
@joshuahigginson1 Thanks a lot, that works! I'd like to add one modification here to back up models. I mistakenly lost a >10GB model and had to download it again 😅 Instructions with added backup/restore steps:
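The steps themselves were lost from this copy of the comment; a plausible reconstruction (directory names assumed from the one-click installer layout) is:

```sh
# Back up the models directory before wiping the installer environment
mv text-generation-webui/models ~/models_backup
# ...delete installer_files/env and apply the fix above...
# Restore the models afterwards
mkdir -p text-generation-webui/models
mv ~/models_backup/* text-generation-webui/models/
```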
The actual answers of an LLM depend 100% on the model you use, so please clarify which one. And "I got it to work"... what? And how? :)
Same problem here.
Same problem here, and I don't understand it. I did the install from the command line exactly as directed by the readme for Mac (including installation of requirements_nocuda.txt). I don't really understand the solution from @joshuahigginson1: where is the webui.py file? I don't have it in my downloaded text-generation-webui folder. Thanks in advance. EDIT: realised that @joshuahigginson1's solution is for the one-click installer. Tried that, but it still didn't work; same error as above.
This appears to have been resolved elsewhere: pytorch/pytorch#96610 (comment) But having implemented the change, my inference time is still unusably slow at 0.02 tokens/sec. Anyone know why that might be? Thanks in advance. I have macOS 13.5.2, Mac M1 Pro 16GB, Python 3.10.9. EDIT: to be clear, I'm not using the one-click installer here.
This issue has been closed due to inactivity for 6 weeks. If you believe it is still relevant, please leave a comment below. You can tag a developer in your comment.
Describe the bug
This is a continuation of #428.
I'm following the instructions for the one-click installer for macOS:
https://github.com/oobabooga/one-click-installers
and I always get RuntimeError: MPS does not support cumsum op with int64 input, on any model.
Is there an existing issue for this?
Reproduction
./update_macos.sh
./start_macos.sh
Screenshot
Logs
Full log
Comment: I substituted the following directories with symlinks:
oobabooga_macos % du -hs /Users/master/sandbox/jeffwan_vicuna-13b
25G /Users/master/sandbox/jeffwan_vicuna-13b
oobabooga_macos % du -hs /Users/master/sandbox/huggyllama_llama-30b
61G /Users/master/sandbox/huggyllama_llama-30b
oobabooga_macos % find text-generation-webui/models -type l -exec ls -lhas {} \; | awk '{$1=$2=$3=$4=$5=$6="";print $0}' | sed -E 's/^ +//g'
Apr 30 16:54 text-generation-webui/models/jeffwan_vicuna-13b -> /Users/master/sandbox/jeffwan_vicuna-13b
Apr 30 16:54 text-generation-webui/models/huggyllama_llama-30b -> /Users/master/sandbox/huggyllama_llama-30b
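For reference, symlinks like those in the listing above would be created with:

```sh
ln -s /Users/master/sandbox/jeffwan_vicuna-13b text-generation-webui/models/jeffwan_vicuna-13b
ln -s /Users/master/sandbox/huggyllama_llama-30b text-generation-webui/models/huggyllama_llama-30b
```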
System Info