
If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True #131

Closed
longkeyy opened this issue May 4, 2023 · 3 comments


longkeyy commented May 4, 2023

env:
macbook m2
python 3.10

conda create -n alpaca-serve python=3.10
conda activate alpaca-serve
cd Alpaca-LoRA-Serve
pip install -r requirements.txt

BASE_URL=decapoda-research/llama-7b-hf
FINETUNED_CKPT_URL=tloen/alpaca-lora-7b

python app.py --base_url $BASE_URL --ft_ckpt_url $FINETUNED_CKPT_URL --port 6006

Facico (Owner) commented May 4, 2023

Can you provide more detailed information so that I can reproduce your error?
Your base model may not be in PyTorch format.
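One cheap way to check that possibility (a sketch, not part of the project's code): PyTorch checkpoints saved with torch >= 1.6 are zip archives, so a shard that fails a structural zip check is either a different format (e.g. a TF checkpoint) or a truncated download.

```python
import zipfile

def looks_like_pytorch_checkpoint(path: str) -> bool:
    # PyTorch >= 1.6 saves .bin/.pt checkpoints as zip archives;
    # TF checkpoints and truncated downloads fail this cheap check.
    return zipfile.is_zipfile(path)
```

Run it on each `pytorch_model-*.bin` shard in the local Hugging Face cache; a `False` result points at the broken file.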

longkeyy (Author) commented May 4, 2023

python ./tools/Alpaca-LoRA-Serve/app.py
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
dlopen(/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so, 0x0006): tried: '/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file), '/System/Volumes/Preboot/Cryptexes/OS/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (no such file), '/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file)
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
dlopen(/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so, 0x0006): tried: '/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file), '/System/Volumes/Preboot/Cryptexes/OS/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (no such file), '/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so' (not a mach-o file)
/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/bitsandbytes/cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
tloen/alpaca-lora-7b
tloen/alpaca-lora-7b/pytorch_model.bin
Loading checkpoint shards: 91%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 30/33 [00:04<00:00, 6.86it/s]
Traceback (most recent call last):
File "/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/transformers/modeling_utils.py", line 442, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/torch/serialization.py", line 777, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/torch/serialization.py", line 282, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/transformers/modeling_utils.py", line 446, in load_state_dict
if f.read(7) == "version":
File "/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/admin/PycharmProjects/Chinese-Vicuna/./tools/Alpaca-LoRA-Serve/app.py", line 245, in <module>
run(args)
File "/Users/admin/PycharmProjects/Chinese-Vicuna/./tools/Alpaca-LoRA-Serve/app.py", line 141, in run
model, tokenizer = load_model(
File "/Users/admin/PycharmProjects/Chinese-Vicuna/tools/Alpaca-LoRA-Serve/model.py", line 71, in load_model
model = LlamaForCausalLM.from_pretrained(
File "/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2795, in from_pretrained
) = cls._load_pretrained_model(
File "/Users/longkeyy/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3109, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
File "/Users/admin/miniconda3/envs/alpaca-lora/lib/python3.10/site-packages/transformers/modeling_utils.py", line 458, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for '/Users/longkeyy/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6a1348/pytorch_model-00031-of-00033.bin' at '/Users/admin/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6a1348/pytorch_model-00031-of-00033.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
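The `failed finding central directory` error above usually means the cached shard is truncated or corrupted rather than a TF checkpoint, so `from_tf=True` will not help. One way to recover (a sketch, using the cache path from the traceback; adjust it to your environment) is to delete the bad shard so that `from_pretrained` re-downloads it on the next run:

```python
import os

# Cache path taken from the traceback above; adjust to your environment.
bad_shard = os.path.expanduser(
    "~/.cache/huggingface/hub/models--decapoda-research--llama-7b-hf/"
    "snapshots/5f98eefcc80e437ef68d457ad7bf167c2c6a1348/"
    "pytorch_model-00031-of-00033.bin"
)
if os.path.exists(bad_shard):
    os.remove(bad_shard)  # from_pretrained will re-download this shard
```

Alternatively, `huggingface_hub.hf_hub_download(repo_id, filename, force_download=True)` re-fetches a single file in place.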

Facico (Owner) commented May 4, 2023

Sorry, we have stopped maintaining alpaca-serve in this project. You can use our project's own interactive scripts instead (generate, interact, chat, ...).

@Facico Facico closed this as completed Jun 29, 2023