While trying the example from https://huggingface.co/amazon/MistralLite, this is the result:
python examplecode.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/home/chris/ai/text-generation-webui/amazonmistral/examplecode.py", line 8, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id,
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3076, in from_pretrained
    config = cls._check_and_enable_flash_attn_2(config, torch_dtype=torch_dtype, device_map=device_map)
  File "/home/chris/anaconda3/envs/textgen/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1265, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
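For context, `_check_and_enable_flash_attn_2` is only reached when `from_pretrained` is asked to use Flash Attention 2, so the installed transformers build evidently does not register Flash Attention 2 support for this model's architecture. Below is a minimal sketch of a workaround, assuming examplecode.py follows the MistralLite model card example and passes `use_flash_attention_2=True` (the script itself is not shown above): load the model without the flash-attention flag so the check is never triggered.

```python
# Minimal sketch of a workaround. Assumptions: the failing examplecode.py
# resembles the MistralLite model card example and passes
# use_flash_attention_2=True, which is what triggers
# _check_and_enable_flash_attn_2. Loading without that flag avoids the check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amazon/MistralLite"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # note: no use_flash_attention_2=True here
    device_map="auto",
)

# quick smoke test
inputs = tokenizer("What are the main challenges of long-context LLMs?",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If Flash Attention 2 is actually needed, upgrading transformers to a release that supports Flash Attention 2 for Mistral-style models (and installing flash-attn) should let the original example run unchanged; the exact minimum versions are the ones listed on the MistralLite model card.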