Hello, I am trying to run apply_delta in order to convert the llama weights I have downloaded to vicuna weights. When I run it, I get the following output:
2023-05-24 10:54:21.193819: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-24 10:54:21.329239: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-24 10:54:22.824794: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt:/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt:/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt
2023-05-24 10:54:22.824912: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt:/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt:/home/usr/anaconda3/lib/:/home/usr/anaconda3/lib/python3.9/site-packages/tensorrt
2023-05-24 10:54:22.824923: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Loading the delta weights from lmsys/vicuna-7b-delta-v1.1
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:05<00:00, 2.64s/it]
Loading the base model from /home/usr/FastChat/fastchat/model/weights/llama/saved_models/7B
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/usr/anaconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/usr/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/usr/FastChat/fastchat/model/apply_delta.py", line 165, in <module>
apply_delta(args.base_model_path, args.target_model_path, args.delta_path)
File "/home/usr/FastChat/fastchat/model/apply_delta.py", line 133, in apply_delta
base = AutoModelForCausalLM.from_pretrained(
File "/home/usr/anaconda3/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 472, in from_pretrained
return model_class.from_pretrained(
File "/home/usr/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2784, in from_pretrained
) = cls._load_pretrained_model(
File "/home/usr/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 3111, in _load_pretrained_model
state_dict = load_state_dict(shard_file)
File "/home/usr/anaconda3/lib/python3.9/site-packages/transformers/modeling_utils.py", line 440, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
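The final SafetensorError (MetadataIncompleteBuffer) is raised when safetensors cannot read a shard's header to the end, which usually means the .safetensors file itself is truncated or corrupted rather than that anything went wrong inside apply_delta. As a rough diagnostic, a sketch along these lines (the base-model path is taken from the log above; the shard filenames are simply whatever *.safetensors files are present) would show which shard fails to deserialize:

import glob
import os

from safetensors import safe_open

# Base-model directory from the log above; adjust if yours differs.
base_dir = "/home/usr/FastChat/fastchat/model/weights/llama/saved_models/7B"

for shard in sorted(glob.glob(os.path.join(base_dir, "*.safetensors"))):
    try:
        # safe_open only parses the header here, so this check is cheap;
        # a truncated shard typically fails with MetadataIncompleteBuffer.
        with safe_open(shard, framework="pt") as f:
            num_tensors = len(f.keys())
        print(f"OK  {shard} ({num_tensors} tensors)")
    except Exception as exc:
        print(f"BAD {shard}: {exc}")

If one shard reports BAD here, regenerating or re-copying that single file is usually enough.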
I'm not sure what is causing this error. I looked through both the llama and FastChat repos for issues with the same error message but I couldn't find anything. I downloaded the llama weights from the torrent popularized by this pull request. I forced a recheck and nothing came up. My llama weights' folder structure is:
Also, I should mention that I can run llama's inference command (torchrun --nproc_per_node 1 example.py --ckpt_dir $TARGET_FOLDER --tokenizer_path $TARGET_FOLDER/tokenizer.model) where TARGET_FOLDER="saved_models/7B" (assuming I am in the llama directory).
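Note that example.py in the original llama repo reads the consolidated.*.pth checkpoints directly, while apply_delta loads the base model through transformers' AutoModelForCausalLM, which uses the converted Hugging Face files (config.json plus sharded .bin or .safetensors weights). Being able to run the native inference command therefore does not by itself confirm that the converted shards in the same folder are intact, and a torrent recheck only validates the original download, not any conversion done afterwards. A minimal sketch (filename patterns are the usual conventions, not something confirmed in this issue) to see which of the two formats is present:

import os

# Same base-model directory as in the log above.
base_dir = "/home/usr/FastChat/fastchat/model/weights/llama/saved_models/7B"

for name in sorted(os.listdir(base_dir)):
    path = os.path.join(base_dir, name)
    size_gb = os.path.getsize(path) / 1e9
    if name.endswith(".pth"):
        kind = "native llama checkpoint (used by example.py)"
    elif name.endswith((".bin", ".safetensors")):
        kind = "Hugging Face shard (used by apply_delta)"
    else:
        kind = "other"
    print(f"{name:45s} {size_gb:6.2f} GB  {kind}")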