Error:
Traceback (most recent call last):
File "train_clip.py", line 136, in
main()
File "train_clip.py", line 86, in main
args, training_args = parser.parse_json_file(json_file=train_args_file)
File "E:\ancanda\envs\CLIP-Chinese\lib\site-packages\transformers\hf_argparser.py", line 392, in parse_json_file
outputs = self.parse_dict(data, allow_extra_keys=allow_extra_keys)
File "E:\ancanda\envs\CLIP-Chinese\lib\site-packages\transformers\hf_argparser.py", line 367, in parse_dict
obj = dtype(**inputs)
File "", line 105, in init
File "E:\ancanda\envs\CLIP-Chinese\lib\site-packages\transformers\training_args.py", line 1133, in post_init
raise ValueError(
ValueError: FP16 Mixed precision training with AMP or APEX (--fp16) and FP16 half precision evaluation (--fp16_full_eval) can only be used on CUDA devices.
The CUDA version is 12.4:
(CLIP-Chinese) F:\python\CLIP-Chinese>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
Is this a version incompatibility?
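This error is raised before training starts, when TrainingArguments sees --fp16 / --fp16_full_eval but PyTorch reports no usable CUDA device. The nvcc version alone does not tell you whether the installed torch build has CUDA support. A minimal check, assuming torch is installed in the CLIP-Chinese environment:

# Check whether the PyTorch build in this environment can see a CUDA device at all.
import torch

print(torch.__version__)           # a "+cpu" suffix indicates a CPU-only build
print(torch.version.cuda)          # CUDA version torch was built against; None for CPU builds
print(torch.cuda.is_available())   # must be True for --fp16 / --fp16_full_eval
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

If is_available() returns False, the usual fix is to install a CUDA-enabled torch wheel that matches your driver, rather than changing the nvcc toolkit version.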
Hello, I ran into the same problem of CUDA not being found.
Running in the background on Windows: python train_clip.py --train_args_file train_args/train_clip.json
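If the machine really has no CUDA-capable torch build, a workaround sketch (not part of this repo) is to turn off the FP16 flags in the config that parse_json_file reads. The field names fp16 and fp16_full_eval come from the error message above; the rest of train_args/train_clip.json is left untouched:

# Hedged workaround sketch: disable the FP16 flags when no CUDA device is visible,
# so TrainingArguments no longer raises ValueError on a CPU-only setup.
import json
import torch

config_path = "train_args/train_clip.json"
with open(config_path, "r", encoding="utf-8") as f:
    train_args = json.load(f)

if not torch.cuda.is_available():
    train_args["fp16"] = False            # field name taken from the error message
    train_args["fp16_full_eval"] = False  # likewise
    with open(config_path, "w", encoding="utf-8") as f:
        json.dump(train_args, f, ensure_ascii=False, indent=4)

Note that CPU training in full precision will be very slow; installing a CUDA-enabled torch build is the proper fix.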