Hi all, first time using v2 with Colab (really appreciate the notebook). However, I'm puzzled: it looks like the adversarial training kicked in at 200,000 steps rather than the 1 million I was used to in v1.
Do I need to add a flag, something like --warmup 1000000, to the !/content/miniconda/bin/rave train line?
Also, I can't find the train.py file among all the numerous files that are listed. [Edit, more info:] Actually, I can't find any of the .py files. I am using the Wasserstein regularization and the following params:
!/content/miniconda/bin/rave preprocess --input_path $dataset --output_path $preprocessed_dataset
!/content/miniconda/bin/rave train --config $architecture --config $regularization --config discrete --config causal --override LATENT_SPACE=16 --db_path $preprocessed_dataset --name $name --val_every 2500
EDIT: Read on Discord that the Wasserstein regularization defaults phase 1 to 200K steps. You have to pass --override PHASE_1_DURATION=1000000, or however many steps you want.
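For anyone who lands here later, the train cell above would then look something like this (same variables as in my notebook cell; the 1,000,000 step count is just an example, set it to whatever you want):
!/content/miniconda/bin/rave train --config $architecture --config $regularization --config discrete --config causal --override LATENT_SPACE=16 --override PHASE_1_DURATION=1000000 --db_path $preprocessed_dataset --name $name --val_every 2500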