2024-08-13 23:20:34,688 INFO: Dataset [LOL_Dataset] - train is built.
num_iter_per_epoch=0
total_iters=500000
Traceback (most recent call last):
  File "PyDiff/pydiff/train.py", line 12, in <module>
    train_pipeline(root_path)
  File "/home/htt/Documents/PyDIff/BasicSR-light/basicsr/train.py", line 135, in train_pipeline
    result = create_train_val_dataloader(opt, logger)
  File "/home/htt/Documents/PyDIff/BasicSR-light/basicsr/train.py", line 63, in create_train_val_dataloader
    total_epochs = math.ceil(total_iters / (num_iter_per_epoch))
ZeroDivisionError: division by zero
Traceback (most recent call last):
  File "/home/htt/anaconda3/envs/PyDiff/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/htt/anaconda3/envs/PyDiff/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/htt/anaconda3/envs/PyDiff/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/home/htt/anaconda3/envs/PyDiff/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/htt/anaconda3/envs/PyDiff/bin/python', '-u', 'PyDiff/pydiff/train.py', '--local_rank=0', '-opt', 'PyDiff/options/train_v2.yaml', '--launcher', 'pytorch']' returned non-zero exit status 1.
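The ZeroDivisionError itself comes from the epoch calculation in BasicSR's create_train_val_dataloader: num_iter_per_epoch is derived from the number of samples the train dataset reports, so it drops to 0 whenever LOL_Dataset is built with zero GT/input pairs. Below is a paraphrased sketch of that calculation using the values from this run; exact variable names in BasicSR-light may differ slightly.

```python
import math

# Paraphrased from BasicSR's create_train_val_dataloader (names may differ
# slightly in BasicSR-light); shown only to illustrate why the division fails.
dataset_enlarge_ratio = 1   # datasets.train.dataset_enlarge_ratio in train_v2.yaml
batch_size_per_gpu = 2      # datasets.train.batch_size_per_gpu
world_size = 1              # single-GPU launch
total_iters = 500000        # train.total_iter

num_train_samples = 0       # what len(train_set) apparently evaluates to here

num_iter_per_epoch = math.ceil(
    num_train_samples * dataset_enlarge_ratio / (batch_size_per_gpu * world_size))
print(num_iter_per_epoch)   # 0 when the dataset finds no image pairs
# total_epochs = math.ceil(total_iters / num_iter_per_epoch)  # -> ZeroDivisionError
```

With the full LOL training set visible (485 pairs at batch_size_per_gpu=2), num_iter_per_epoch would be 243, so the real question is why len(train_set) is 0 here.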
Version Information:
BasicSR: 1.3.4.4
PyTorch: 1.7.0+cu110
TorchVision: 0.8.0
2024-08-13 23:20:34,663 INFO:
name: train_v2
model_type: PyDiffModel
num_gpu: 1
manual_seed: 0
datasets:[
  train:[
    name: train
    type: LOL_Dataset
    gt_root: ../dataset/LOLdataset/our485/high
    input_root: ../dataset/LOLdataset/our485/low
    input_mode: crop
    crop_size: 256
    concat_with_position_encoding: True
    concat_with_hiseq: True
    hiseq_random_cat: True
    mean: [0.5, 0.5, 0.5]
    std: [0.5, 0.5, 0.5]
    use_flip: True
    bright_aug: True
    bright_aug_range: [0.5, 1.5]
    use_shuffle: True
    num_worker_per_gpu: 8
    batch_size_per_gpu: 2
    dataset_enlarge_ratio: 1
    prefetch_mode: None
    phase: train
  ]
  val:[
    name: validation
    type: LOL_Dataset
    gt_root: ../dataset/LOLdataset/eval15/high
    input_root: ../dataset/LOLdataset/eval15/low
    concat_with_hiseq: True
    input_mode: pad
    divide: 32
    concat_with_position_encoding: True
    mean: [0.5, 0.5, 0.5]
    std: [0.5, 0.5, 0.5]
    phase: val
  ]
]
network_unet:[
  type: SR3UNet
  in_channel: 13
  out_channel: 3
  inner_channel: 64
  norm_groups: 32
  channel_mults: [1, 2, 4, 8, 8]
  attn_res: [16]
  res_blocks: 2
  dropout: 0.2
  divide: 16
]
network_global_corrector:[
  type: GlobalCorrector
  normal01: True
]
network_ddpm:[
  type: GaussianDiffusion
  image_size: 128
  channels: 3
  conditional: True
  color_limit: -1
  pyramid_list: [1, 1, 2, 2]
]
ddpm_schedule:[
  schedule: linear
  n_timestep: 2000
  linear_start: 1e-06
  linear_end: 0.01
]
path:[
  pretrain_network_g: None
  param_key_g: params
  strict_load_g: False
  pretrain_network_d: None
  resume_state: None
  ignore_resume_networks: ['network_identity']
  experiments_root: /home/htt/Documents/PyDIff/experiments/train_v2
  models: /home/htt/Documents/PyDIff/experiments/train_v2/models
  training_states: /home/htt/Documents/PyDIff/experiments/train_v2/training_states
  log: /home/htt/Documents/PyDIff/experiments/train_v2
  visualization: /home/htt/Documents/PyDIff/experiments/train_v2/visualization
]
train:[
  cs_on_shift: True
  vis_train: True
  vis_num: 150
  train_type: ddpm_cs_pyramid
  t_border: 1000
  input_mode: crop
  crop_size: [160, 160]
  optim_g:[
    type: Adam
    lr: 0.0001
  ]
  optim_d:[
    type: Adam
    lr: 0.002
  ]
  optim_component:[
    type: Adam
    lr: 0.002
  ]
  scheduler:[
    type: MultiStepLR
    milestones: [50000, 75000, 100000, 150000, 200000]
    gamma: 0.5
  ]
  total_iter: 500000
  warmup_iter: -1
]
val:[
  split_log: True
  fix_seed: True
  color_gamma: 1.0
  use_up_v2: True
  pyramid_list: [1, 1, 2, 2]
  ddim_eta: 1.0
  ddim_timesteps: 4
  use_kind_align: True
  cal_all: True
  show_all: True
  val_freq: 5000.0
  save_img: True
  metrics:[
    psnr:[
      type: calculate_psnr
      crop_border: 0
      test_y_channel: False
    ]
    ssim:[
      type: calculate_ssim_lol
    ]
    lpips:[
      type: calculate_lpips_lol
    ]
  ]
]
logger:[
  print_freq: 100
  save_checkpoint_freq: 5000.0
  use_tb_logger: True
  wandb:[
    project: None
    resume_id: None
  ]
]
dist_params:[
  backend: nccl
  port: 29500
]
find_unused_parameters: True
dist: True
rank: 0
world_size: 1
auto_resume: False
is_train: True
root_path: /home/htt/Documents/PyDIff/PyDiff
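Since gt_root and input_root are relative paths, they are resolved against the directory the launch command is run from, not against root_path. A hypothetical check like the one below (run from that same directory; the .png extension is assumed, as in the official LOL release) would confirm whether the training pairs are actually visible to the dataloader.

```python
import glob
import os

# Hypothetical sanity check, not part of PyDiff: resolve the same relative
# paths as in train_v2.yaml from the current working directory and count
# the images the train dataset would see.
gt_root = '../dataset/LOLdataset/our485/high'
input_root = '../dataset/LOLdataset/our485/low'

for name, root in (('gt_root', gt_root), ('input_root', input_root)):
    files = sorted(glob.glob(os.path.join(root, '*.png')))
    print(f'{name}: {os.path.abspath(root)} -> {len(files)} PNG files')
```

If both counts come back as 0, the folders are either empty or the ../dataset/LOLdataset prefix does not resolve from the launch directory, which would match num_iter_per_epoch=0 in the log.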