
chat.py execution fails #104

Closed
timiil opened this issue Apr 24, 2023 · 2 comments

Comments

@timiil

timiil commented Apr 24, 2023

Running this command inside the Docker container:

python3 chat.py --model_path ./llama-7b-hf/

returns the following error:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /root/miniconda3/envs/cvicuna did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib64'), PosixPath('/usr/local/nvidia/lib')}
  warn(msg)
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 114
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
  warn(msg)
CUDA SETUP: Loading binary /root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so...
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
./lora-Vicuna/checkpoint-3000/adapter_model.bin
./lora-Vicuna/checkpoint-3000/pytorch_model.bin
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:19<00:00,  1.68it/s]
Traceback (most recent call last):
  File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/peft-0.3.0.dev0-py3.8.egg/peft/utils/config.py", line 105, in from_pretrained
  File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/huggingface_hub-0.14.0rc1-py3.8.egg/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/huggingface_hub-0.14.0rc1-py3.8.egg/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './lora-Vicuna/checkpoint-3000'. Use `repo_type` argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "chat.py", line 62, in <module>
    model = SteamGenerationMixin.from_pretrained(
  File "/root/Chinese-Vicuna/utils.py", line 670, in from_pretrained
    config = LoraConfig.from_pretrained(model_id)
  File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/peft-0.3.0.dev0-py3.8.egg/peft/utils/config.py", line 107, in from_pretrained
ValueError: Can't find 'adapter_config.json' at './lora-Vicuna/checkpoint-3000'
(cvicuna) root@adfe6fe7295f:~/Chinese-Vicuna# python3 chat.py --model_path ./llama-7b-hf/

@Facico
Owner

Facico commented Apr 24, 2023

Please read that part of our documentation carefully. To use local checkpoint files, you need to rename them to "adapter_config.json" and "adapter_model.bin". Our scripts handle this renaming; if you run chat.py directly without it, you may get this error.
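A minimal sketch of that preparation step: peft's `LoraConfig.from_pretrained` / `PeftModel` expect a local adapter directory to contain `adapter_config.json` and `adapter_model.bin`, while a raw training checkpoint (like `checkpoint-3000` in the traceback) only has `pytorch_model.bin`. The helper below copies the weights under the expected name and writes a config if one is missing. Note that `prepare_local_adapter` and the LoRA hyperparameter values in it are illustrative assumptions, not the repo's exact script; in practice the config should match the settings used during finetuning.

```python
import json
import shutil
from pathlib import Path

def prepare_local_adapter(ckpt_dir, base_model="./llama-7b-hf/"):
    """Make a raw LoRA training checkpoint loadable by peft locally.

    Copies pytorch_model.bin -> adapter_model.bin and writes an
    adapter_config.json if one does not exist. The hyperparameters
    below are placeholders; use the values from your finetuning run.
    """
    ckpt = Path(ckpt_dir)

    # peft loads the LoRA weights from "adapter_model.bin"
    src, dst = ckpt / "pytorch_model.bin", ckpt / "adapter_model.bin"
    if src.exists() and not dst.exists():
        shutil.copy(src, dst)

    # peft reads the LoRA setup from "adapter_config.json"
    cfg = ckpt / "adapter_config.json"
    if not cfg.exists():
        cfg.write_text(json.dumps({
            "peft_type": "LORA",
            "base_model_name_or_path": base_model,
            "r": 8,                      # illustrative values only
            "lora_alpha": 16,
            "lora_dropout": 0.05,
            "target_modules": ["q_proj", "v_proj"],
            "task_type": "CAUSAL_LM",
        }, indent=2))

    return dst.exists() and cfg.exists()
```

After running this against `./lora-Vicuna/checkpoint-3000`, the `HFValidationError` goes away because peft finds the local `adapter_config.json` instead of falling back to treating the path as a Hub repo id.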

@Facico
Owner

Facico commented Apr 24, 2023

Also, please learn to search for similar issues first: searching for that error message would have turned up this similar issue.

@Facico Facico closed this as completed Jun 29, 2023