Migration from microDL #1
Merged
Conversation
* added loss_fn combining mae and kl, tests
* get multiple loss_functions
* working mae-kl loss
* removed tests
* added tile input normalization in dataset
* add normalize flag to ds with mask
* calculate loss metrics on axis depending on image_data_format
* parse config in train_script and pass dicts to trainer, to be debugged
* refactored networks, to be debugged
* working version, have to refactor masked_loss
* generalize masked_loss, incomplete
* Revert "refactored networks, to be debugged" (reverts commit d3351fd731f5442afb8de311c0ee152b8cdb0e28)
* revoked changes in loss, added mask weighting in dataset
* debugged masked_loss
* Jenny feedback
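The commits above introduce a loss combining mean absolute error with a KL-divergence term. A minimal NumPy sketch of one way to combine the two (the normalization scheme and `kl_weight` default are assumptions for illustration, not the repository's actual implementation):

```python
import numpy as np

def mae_kl_loss(y_true, y_pred, kl_weight=0.1, eps=1e-8):
    """Combined MAE + KL loss sketch.

    Assumes non-negative image intensities; each image is normalized to a
    probability distribution for the KL term, and kl_weight balances the
    two objectives.
    """
    mae = np.mean(np.abs(y_true - y_pred))
    p = y_true / (y_true.sum() + eps) + eps
    q = y_pred / (y_pred.sum() + eps) + eps
    kl = np.sum(p * np.log(p / q))
    return mae + kl_weight * kl
```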
* modify train_test_split to return sample numbers instead of indices to unique samples
* update variable to indicate samples and not indices

* refactored layers
* coded conv_blocks, to be debugged
* graph ok for Unet2D and 3D
* graph ok for UnetStackTo2D
* graph ok for image*_to_vector_net
* graph ok for unet_stack_stack
* modified init to add all the networks
* misc errors in train_script
* fix for lr_find
* Jenny feedback
* missed error check for num_dense_units
Fixed network config argument to actually pass network_config instead of the full config object
* Update inference_script.py: added --flat_field and --no_flat_field flags in place of --flat_field_correct
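A common argparse pattern for such a paired on/off flag (a sketch of the idea; the actual script's option wiring may differ):

```python
import argparse

def build_parser():
    """Parser exposing paired on/off flags for flat-field correction."""
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--flat_field", dest="flat_field", action="store_true",
                       help="apply flat-field correction (default)")
    group.add_argument("--no_flat_field", dest="flat_field", action="store_false",
                       help="skip flat-field correction")
    # Both flags write to the same destination; the default applies
    # when neither is given.
    parser.set_defaults(flat_field=True)
    return parser
```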
* added tests for conv_blocks, add support for valid padding
* predict on larger image, tested on 2D models only
* add place_operation param to predict_on_full_image, fix tile_image to include edge tiles
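Including edge tiles typically means appending a final start index flush with the image border so no pixels are dropped. A hypothetical helper illustrating the idea (not the repository's tile_image):

```python
def tile_indices(length, tile_size, step):
    """Tile start positions covering [0, length), including the edge tile.

    The last tile is pinned to the image border when (length - tile_size)
    is not a multiple of step, so the edge is always covered.
    """
    if tile_size >= length:
        return [0]
    starts = list(range(0, length - tile_size + 1, step))
    if starts[-1] != length - tile_size:
        starts.append(length - tile_size)
    return starts
```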
* initial untested changes to flatfield
* almost done debugging preprocess refactor
* finished volume fraction and tested
* added row col indices
* helper function for dataframe init
* tested
* deprecated lif read
* Anitha feedback
* updated readme
* fixed spelling
* added tests for UNet2D and 3D model
* changed preprocess, taking a break from train
* sorted dataframe
* reorganized dataframe
* tested regressing with/out masks and segmentation

* added augmentation for 2D images
* started adding tests for UNetStackTo2D, UNetStackToStack

* added tests for Image2DToVectorNet
* added tests for Image3DToVectorNet

* debugging mixed depths in preprocessing
* fixed tile sizes
* updated readme

* updated readme
* halfway through tests
* more tests

* added dataset input params
* starting tests
* more tests
* finished testing
* finished testing
* bug fix
* cleaned up some code
* Anitha feedback
* fixed fixes
* fixed fixes
* started prediction cli
* untested code
* fix errors
* bug fix
* not working
* working whole image inference on stack to 2d
* moving preprocess image to utils
* moved if predict statement
* tried to debug 3D tests
* modified interp_upsampling3D to use static shape when image dimensions are provided
* cleaned up import and removed the inference check
* Reimplemented crop_layer as a pure tf function. Use of dynamic shape in tf results in None along all dims other than channels, which conflicts with other layers that have static shapes. Modified InterpUpsampling3D to use static and dynamic shapes as needed.
* if depth=1, initialize as 2D model
* removed print
* save config without default flow style
* removed filter_size filter_stack assignment
* save config before model creation
* removed the check for interp factor of 1
* renamed inference scripts
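The crop_layer rework is about matching encoder and decoder feature-map shapes across skip connections. A minimal NumPy sketch of a center crop (illustrative only; the actual layer is implemented in TensorFlow):

```python
import numpy as np

def center_crop(x, target_shape):
    """Center-crop each dimension of x down to target_shape.

    Used conceptually to trim an encoder feature map so it can be
    concatenated with a smaller decoder feature map.
    """
    slices = tuple(
        slice((dim - tgt) // 2, (dim - tgt) // 2 + tgt)
        for dim, tgt in zip(x.shape, target_shape)
    )
    return x[slices]
```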
* started adding functionality that will add to existing tiles if tile dir exists
* now support the ability to add tiles to existing tile dir, added threading
* changed tile assertions
* period to comma
* tuple to list
* added threading in dataset
* bug fix

* added keras-contrib to dockerfile
* updated keras to 2.2.4
* updated readme
* added tmux, simplified opencv

* removed threading
* made workers a config option
* temp fix of shape bug
* changed dataset to have data_format as input
* added squeeze option to dataset
* need to squeeze after augment
* made lr fig name optional
* reverted to TF 1.10
* reverted back to keras 2.1.5 to get UNet2D to run
* find gpu ids, select best; need to put together with existing code
* finalized gpu selection, untested
* removed remnant args.gpu
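Automatic GPU selection usually reduces to ranking visible devices by free memory (as reported by, e.g., nvidia-smi). A hypothetical helper for the ranking step, decoupled from the query itself:

```python
def select_best_gpu(free_memory_mib, min_free_mib=1000):
    """Return the index of the GPU with the most free memory.

    free_memory_mib: per-device free memory in MiB, e.g. parsed from
    `nvidia-smi --query-gpu=memory.free`. Raises if no device has
    enough headroom.
    """
    if not free_memory_mib:
        raise RuntimeError("no GPUs visible")
    best = max(range(len(free_memory_mib)), key=free_memory_mib.__getitem__)
    if free_memory_mib[best] < min_free_mib:
        raise RuntimeError("no GPU with sufficient free memory")
    return best
```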
Minor changes to a few settings to match those available in the existing config file
* changed data format to shape order
* bug fix
* adding a check or two
* moved test

* add new config file
* add 'pos_ids' parameter to preprocess_script.py
* update config file

* changes for multi-channel stack to 2d, metrics=None, split by df index
* add random seed to training_table for train-val-test split
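Seeding the split makes train/val/test membership reproducible across runs. A sketch of the idea over sample IDs (the helper name and default fractions are assumptions, not the repository's API):

```python
import random

def split_samples(sample_ids, val_frac=0.15, test_frac=0.15, seed=42):
    """Deterministically split sample IDs into train/val/test lists.

    A dedicated random.Random(seed) instance keeps the shuffle
    independent of global RNG state, so the same seed always yields
    the same partition.
    """
    ids = list(sample_ids)
    random.Random(seed).shuffle(ids)
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    return ids[n_test + n_val:], ids[n_test:n_test + n_val], ids[:n_test]
```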
* cache on a single process to avoid race
* set state in main process
* add project metadata file
* ignore packaging artifacts
* add jsonargparse dependency
* add tensorboard dependency
* use entry point for cli

* update package name in docs
* update workflow
* format md
* update scripts
* remove outdated tests

* export onnx in lightning
* remove old exporting script
* update docs and bump opset
* rename argument
* update docs and avoid wrong target shape in sanity check

* upgrade monai to 1.2.0, remove previous warning suppressor
* remove unused import
* remove outdated tests
* remove nose and add pytest
* remove unused import
* rename torch_unet to unet
* format
* linting
* remove gpu profiling script; this can now be achieved with the lightning profiler
* lint with ruff and format with black
* format tests
* sort imports with ruff and update CI workflow
* remove duplicate event
* black v23.3.0
* format
* remove 3.11 support due to lapsolver issue cheind/py-lapsolver#18
* remove tf metric script test
* run a minimal subset of pytests
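A typical pyproject.toml fragment wiring up ruff for linting and import sorting alongside black for formatting (the line length, Python versions, and rule selection here are illustrative, not the repository's exact configuration):

```toml
[tool.black]
line-length = 88
target-version = ["py39", "py310"]

[tool.ruff]
line-length = 88
# "I" enables isort-style import sorting; "E"/"F" are pycodestyle/pyflakes
select = ["E", "F", "I"]
```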
mattersoflight added this to the 0.1.0: 2.5D UNet for 3D virtual staining using pytorch lightning milestone on Jun 22, 2023
This also auto-generates the CLI subcommand help message.
* break data setup into smaller methods
* wip: transforms use channel names directly
* subpackage entries for unet
* fix import
* support multi-channel input and output
* improve slicing syntax
* rename val loss log name
* move init docstring to class level
* fix append channel call
* multi-channel prediction writer
* rename lightning module for virtual staining
* test stage dataloading
* ground truth mask dataset
* restore cli script entry for debugging
* remove incomplete writer callback default
* fix test dataset
* compute test metrics with torchmetrics
* evaluation submodule
* load gt masks as int16
* depend on torchmetrics and pycocotools
* trim custom metrics and add mAP
* compute pearson correlation and mAP
* mark POD as broken
* test metrics
* fix skipping
* log coco metrics
* replace print with debug logging
* fixed POD metric
* unmask POD test
* add test coverage
* test wrong dimension
* fix mask loading
* add diameter argument for cellpose prediction api
* more flexible save locations
* bump torchmetrics to 1.0
* fill missing segmentation metrics with -1
* remove constant input check
* evaluate target channel on ground truth

---------

Co-authored-by: Soorya Pradeep <[email protected]>
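Pearson correlation between a prediction and its target reduces to normalized covariance over the flattened pixels. A NumPy sketch of the computation (the PR itself delegates this to torchmetrics):

```python
import numpy as np

def pearson(pred, target):
    """Pearson correlation coefficient between two arrays of equal shape."""
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    # Small epsilon guards against division by zero for constant inputs.
    return float((p * t).sum() / (np.sqrt((p ** 2).sum() * (t ** 2).sum()) + 1e-12))
```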
mattersoflight approved these changes on Jul 18, 2023
I suggest a squash merge and refinement starting from there.
Mirroring the PyTorch implementation from microDL at mehta-lab/microDL@3d8eb24.
Relicensing: the project is now licensed under BSD-3-Clause under the new entity name (CZ Biohub SF).
TODO: