
llama_model_load: loading model from 'models/7B/ggml-model-q4_0.bin' #251

Closed
jethro254wt opened this issue Mar 24, 2023 · 2 comments

@jethro254wt

[two screenshots of the error attached]

Inside my "dalai\alpaca\models\7B" folder I just have three files: "checklist.chk", "consolidated.00.pth", and "params.json". I mostly do Java and web development, so I'm out of my league here. I assume I'm missing some files, but any help would be greatly appreciated!
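For reference, those three files are the raw LLaMA checkpoint; the ggml-model-q4_0.bin that the loader is asking for has to be generated from them. A rough sketch of the conversion and quantization steps, run from a llama.cpp checkout with the model folder under models/, assuming the scripts as they existed around this time (names and arguments may have changed since):

# install the Python dependencies used by the conversion script
python3 -m pip install torch numpy sentencepiece

# convert the 7B checkpoint (consolidated.00.pth + params.json) to ggml FP16 format
python3 convert-pth-to-ggml.py models/7B/ 1

# quantize the FP16 model to 4 bits, producing models/7B/ggml-model-q4_0.bin
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2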

@Theknight2015 commented Mar 24, 2023

I have the same problem on my 30B and 65B installations.

30B Installation Directory

  • checklist.chk
  • consolidated.00.pth
  • consolidated.01.pth
  • consolidated.02.pth
  • consolidated.03.pth
  • params.json

65B Installation Directory

  • checklist.chk
  • consolidated.00.pth
  • consolidated.01.pth
  • consolidated.02.pth
  • consolidated.03.pth
  • consolidated.04.pth
  • consolidated.05.pth
  • params.json

I'm not experienced enough in this area to know what to do with these. I found the ./quantize tutorial here for 7B and 13B because I keep getting an error - './quantize' is not recognized as the name of a cmdlet, function, script file, or operable program - so I used that tutorial to quantize those files, but I don't have the files needed in my 30B and 65B directories, so I can't run those.

Any help or guidance would be appreciated. Thanks!
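One possible direction for the multi-part models (a sketch, not verified on this setup): the "'./quantize' is not recognized" message is PowerShell complaining about the path syntax; on Windows the built binary is usually invoked as .\quantize.exe, or .\Release\quantize.exe for a CMake Release build. The conversion script is meant to read all of the consolidated.NN.pth parts itself, so the 30B and 65B folders shouldn't need extra input files; larger models are just written out as several ggml parts (ggml-model-f16.bin, ggml-model-f16.bin.1, ...), each quantized the same way. For example, for 30B (on Windows the interpreter may be python rather than python3):

# convert the multi-part 30B checkpoint to ggml FP16 format
python convert-pth-to-ggml.py models/30B/ 1

# quantize each resulting FP16 part to 4 bits
.\Release\quantize.exe .\models\30B\ggml-model-f16.bin .\models\30B\ggml-model-q4_0.bin 2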

@jethro254wt (Author)

Following the directions in the README, I got it working in the terminal, which honestly is good enough for me!

Directions:

Alpaca.cpp

Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4 GB of weights.

[asciicast demo]

This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.

Get Started (7B)

Download the zip file corresponding to your operating system from the latest release. On Windows, download alpaca-win.zip; on Mac (both Intel and ARM), download alpaca-mac.zip; and on Linux (x64), download alpaca-linux.zip.

Download ggml-alpaca-7b-q4.bin and place it in the same folder as the chat executable in the zip file. There are several options:

Once you've downloaded the model weights and placed them into the same directory as the chat or chat.exe executable, run:

./chat

The weights are based on the published fine-tunes from alpaca-lora, converted back into a PyTorch checkpoint with a modified script and then quantized with llama.cpp the regular way.

Building from Source (macOS/Linux)

git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp

make chat
./chat
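If the weights live somewhere other than the folder you run from, chat can be pointed at them explicitly. A sketch, assuming the -m (model path) and -t (threads) options that chat inherits from llama.cpp:

# run with an explicit model path and 8 threads
./chat -m ./ggml-alpaca-7b-q4.bin -t 8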

Building from Source (Windows)

  • Download and install CMake: https://cmake.org/download/
  • Download and install git. If you've never used git before, consider a GUI client like https://desktop.github.com/
  • Clone this repo using your git client of choice (for GitHub Desktop, go to File -> Clone repository -> From URL and paste https://github.com/antimatter15/alpaca.cpp in as the URL)
  • Open a Windows Terminal inside the folder you cloned the repository to
  • Run the following commands one by one:
cmake .
cmake --build . --config Release
  • Download the weights via any of the links in "Get started" above, and save the file as ggml-alpaca-7b-q4.bin in the main Alpaca directory.
  • In the terminal window, run this command:
.\Release\chat.exe
  • (You can add other launch options like --n 8 as preferred onto the same line)
  • You can now type to the AI in the terminal and it will reply. Enjoy!
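Put together, the Windows steps above look roughly like this in a single PowerShell session (assuming CMake's default Release output directory):

git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
cmake .
cmake --build . --config Release
# place ggml-alpaca-7b-q4.bin in this folder, then:
.\Release\chat.exe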

Credit

This combines Facebook's LLaMA, Stanford Alpaca, alpaca-lora and corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting started guide for LLaMA, and Andy Matuschak's thread on adapting this to 13B, using fine-tuning weights by Sam Witteveen.

Disclaimer

Note that the model weights are only to be used for research purposes, as they are derivative of LLaMA and use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI models; OpenAI's terms disallow using its outputs to train competing models.
