I use Python 3.8.19; I think any 3.8.x should be fine. Pin `timm==0.9.16`, because `ModelEmaV3` comes from huggingface/pytorch-image-models@dd84ef2.
I downloaded the dataset from Kaggle (https://www.kaggle.com/competitions/multi-ffdi/data), phase 1 only.
The label files need converting: drop the header line, and separate fields with whitespace instead of commas.
```python
file_path = 'trainset_label.txt'
prefix = 'dataset/trainset/'  # prefix to prepend
output_file_path = 'train.txt'
# file_path = 'valset_label.txt'
# prefix = 'dataset/valset/'  # prefix to prepend
# output_file_path = 'val.txt'

# Read the file and process its contents
with open(file_path, 'r', encoding='utf-8') as file:
    lines = file.readlines()

# Drop the first line (the header) and process the remaining lines
processed_lines = []
for line in lines[1:]:  # skip the header line
    # Prepend the prefix and replace commas with spaces
    new_line = prefix + line.replace(',', ' ')
    processed_lines.append(new_line)

# Write the processed content back out to a file
with open(output_file_path, 'w', encoding='utf-8') as output_file:
    output_file.writelines(processed_lines)
```
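For illustration, here is what the per-line transform does to a single label row. The sample row is hypothetical (I'm assuming a `filename,label` layout based on the CSV header the script skips); only the prefix and comma-to-space replacement come from the script above.

```python
# Demo of the per-line transform: prepend the path prefix and
# replace the comma separator with a space.
prefix = 'dataset/trainset/'
sample = 'img_0001.jpg,1\n'  # hypothetical CSV row: filename,label
converted = prefix + sample.replace(',', ' ')
print(converted)  # dataset/trainset/img_0001.jpg 1
```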
If you don't have 8 GPUs, change the config in main.sh. For example, I have 4 GPUs:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env main_train.py
```

Both `CUDA_VISIBLE_DEVICES` and `--nproc_per_node` need to be modified to match your GPU count.
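The two values always move together (the device list `0,...,N-1` and `--nproc_per_node=N`), so a small helper can build the command for any GPU count. This is just an illustrative sketch; `launch_cmd` is a hypothetical helper, not part of the repo:

```python
def launch_cmd(num_gpus: int) -> str:
    """Build the main.sh launch command for a given GPU count.

    Keeps CUDA_VISIBLE_DEVICES and --nproc_per_node in sync.
    """
    devices = ','.join(str(i) for i in range(num_gpus))
    return (f"CUDA_VISIBLE_DEVICES={devices} "
            f"python -m torch.distributed.launch "
            f"--nproc_per_node={num_gpus} --use_env main_train.py")

print(launch_cmd(4))
# CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --use_env main_train.py
```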