obtain model performance variability by running multiple seeds #10

Open · 1 task
msrepo opened this issue May 5, 2023 · 2 comments

msrepo (Collaborator) commented May 5, 2023

Tasks
  • code to aggregate over multiple seeds
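
A rough sketch of what that aggregation could look like, assuming each seed's run writes a per-model metrics CSV under a layout like results/<model>/seed_<seed>/metrics.csv with a dice column (the layout and column names are hypothetical, not the repo's actual output):

```python
import glob

import pandas as pd

rows = []
for path in glob.glob("results/*/seed_*/metrics.csv"):
    _, model, seed_dir, _ = path.split("/")
    df = pd.read_csv(path)
    # one Dice value per test case -> average to a single score for this seed
    rows.append({"model": model, "seed": seed_dir, "dice": df["dice"].mean()})

per_seed = pd.DataFrame(rows)
# mean and standard deviation of Dice across seeds, one row per model
print(per_seed.groupby("model")["dice"].agg(["mean", "std"]).round(3))
```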
msrepo (Collaborator, Author) commented May 7, 2023

Finalize which models go into the final table before running these expensive multi-seed folds. For example, SwinUNETR was found to perform considerably better than UNETR (not surprising, since it is essentially an improvement over UNETR using shifted windows), so we might keep SwinUNETR instead of UNETR as the representative of the best pure-transformer model.

msrepo (Collaborator, Author) commented Jun 21, 2023

See https://github.com/mle-infrastructure/mle-logging for reference; there, aggregate_over_seed() aggregates result dicts across seeds.
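
The core idea is small; below is a self-contained sketch (not mle-logging's actual code) of collapsing a list of per-seed metric dicts into mean/std per metric, with made-up metric names:

```python
import statistics

def aggregate_over_seeds(per_seed_results):
    """Collapse a list of flat metric dicts (one per seed) into {metric: {mean, std}}."""
    aggregated = {}
    for key in per_seed_results[0]:
        values = [run[key] for run in per_seed_results]
        aggregated[key] = {
            "mean": statistics.mean(values),
            "std": statistics.stdev(values) if len(values) > 1 else 0.0,
        }
    return aggregated

# e.g. three seeds of the same model
runs = [{"dice": 0.81, "hd95": 4.2}, {"dice": 0.83, "hd95": 3.9}, {"dice": 0.80, "hd95": 4.5}]
print(aggregate_over_seeds(runs))
```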

  • configs
    |__ subject_list
    |__ monte_carlo_k_fold_seed_12345
    |   |__ paths
    |   |__ yaml
    |__ monte_carlo_k_fold_seed_12346
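
The per-seed directories above could be produced by something like generate_config.py; the sketch below is only an illustration (the seed key and output file names are assumptions about the config schema):

```python
from pathlib import Path

import yaml

def make_seed_config(base_yaml: str, seed: int, out_root: str = "configs"):
    """Write configs/monte_carlo_k_fold_seed_<seed>/ with a paths/ dir and a seeded yaml copy."""
    out_dir = Path(out_root) / f"monte_carlo_k_fold_seed_{seed}"
    (out_dir / "paths").mkdir(parents=True, exist_ok=True)
    cfg = yaml.safe_load(Path(base_yaml).read_text())
    cfg["seed"] = seed  # assumed key; adjust to the real config schema
    (out_dir / f"config_seed_{seed}.yaml").write_text(yaml.safe_dump(cfg))

for seed in (12345, 12346):
    make_seed_config("configs/full/Verse2019-DRR-full.yaml", seed)
```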

files

  • scripts/generate_train_val_test_paths.py
  • scripts/benchmark_table.py
  • generate_training_script.py
  • generate_evaluation_script.py
  • generate_config.py

current workflow

python scripts/generate_train_val_test_paths.py configs/full/Verse2019-DRR-full.yaml
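
For reference, the seeded split inside generate_train_val_test_paths.py could be as simple as the sketch below; the yaml keys, the 70/10/20 ratios, and the CSV layout are assumptions, not the script's actual behaviour:

```python
import random
import sys

import pandas as pd
import yaml

cfg = yaml.safe_load(open(sys.argv[1]))
subjects = sorted(cfg["subjects"])            # assumed key listing subject IDs
random.Random(cfg.get("seed", 12345)).shuffle(subjects)

n = len(subjects)
n_train, n_val = int(0.7 * n), int(0.1 * n)
split = ["train"] * n_train + ["val"] * n_val + ["test"] * (n - n_train - n_val)

# write the split next to the other generated paths (see TODO below)
pd.DataFrame({"subject": subjects, "split": split}).to_csv(
    f"{cfg['paths_dir']}/train_val_test_split.csv", index=False
)
```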

TODO

  • save csv into paths dir
  • should the train/test split seed and the other randomization seeds be the same? keep them in the yaml file
  • structure the scripts dir, create subdirs
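
One way the seed-related TODO items could look in the yaml (key names purely illustrative):

```yaml
# illustrative only; the real config keys may differ
split_seed: 12345        # controls the train/val/test subject split
training_seed: 12345     # controls weight init, augmentation, data loader shuffling
paths_dir: configs/monte_carlo_k_fold_seed_12345/paths   # where the split csv is saved
```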
