Feature/hiera #418
Merged
Commits (31, changes shown from all commits)
6ed231d  Bump version: 0.1.5 → 0.1.6
004b823  Merge branch 'main' of https://github.com/AllenCellModeling/cyto-dl i…
0c607d9  Merge branch 'main' of https://github.com/AllenCellModeling/cyto-dl i…
2c0c275  add hiera
50646ae  start of mask2former
22ddadc  first take at transfomer
693c514  fix dimensionality, now updating instance queries instead of mask
913416a  give instance queries own dim
a561aa4  add mask creation
e5591d4  Merge branch 'main' into feature/hiera
9f6591c  remove experimental code
a773f3b  update to base patchify
70a0437  merge main
23f4b94  wip
357166f  update configs
a12d8cd  update patchify
f652138  rearrange encoder/decoder/mae
7ee08e3  add 2d hiera
71b8096  2d masked unit attention
038d573  precommit
5098732  update configs
453f0e5  update hiera model config
ec07e39  update deafults
128baa5  update tests
3c97933  delete patchify_conv
1d4a76d  fix jepa tests
663e52c  update mask transform
30a0526  add predictor
e6b58df  precommit
84717b7  update with ritviks comments
316a93a  replace all function names
Files changed
@@ -0,0 +1,52 @@
# @package _global_
# to execute this experiment run:
# python train.py experiment=example
defaults:
  - override /data: im2im/mae.yaml
  - override /model: im2im/hiera.yaml
  - override /callbacks: default.yaml
  - override /trainer: gpu.yaml
  - override /logger: csv.yaml

# all parameters below will be merged with parameters from default configurations set above
# this allows you to overwrite only specified parameters

tags: ["dev"]
seed: 12345

experiment_name: YOUR_EXP_NAME
run_name: YOUR_RUN_NAME

# only source_col is needed for masked autoencoder
source_col: raw
spatial_dims: 3
raw_im_channels: 1

trainer:
  max_epochs: 100
  gradient_clip_val: 10

data:
  path: ${paths.data_dir}/example_experiment_data/segmentation
  cache_dir: ${paths.data_dir}/example_experiment_data/cache
  batch_size: 1
  _aux:
    # 2D
    # patch_shape: [16, 16]
    # 3D
    patch_shape: [16, 16, 16]

callbacks:
  # prediction
  # saving:
  #   _target_: cyto_dl.callbacks.ImageSaver
  #   save_dir: ${paths.output_dir}
  #   save_every_n_epochs: ${model.save_images_every_n_epochs}
  #   stages: ["predict"]
  #   save_input: False
  # training
  saving:
    _target_: cyto_dl.callbacks.ImageSaver
    save_dir: ${paths.output_dir}
    save_every_n_epochs: ${model.save_images_every_n_epochs}
    stages: ["train", "test", "val"]
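A minimal sketch of how this experiment file composes with the defaults it overrides, assuming the repository's standard Hydra layout (a configs/ directory with a top-level train.yaml, and this file registered under the experiment group); the file's actual path and experiment name are not shown in this diff, so im2im/hiera below is a placeholder:

```python
# Hedged sketch: compose the experiment config with Hydra's compose API.
# config_path, config_name, and the experiment name are assumptions.
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(version_base=None, config_path="configs"):
    cfg = compose(
        config_name="train.yaml",
        overrides=["experiment=im2im/hiera"],  # hypothetical experiment name
    )

# Keys set in the experiment file (trainer.max_epochs, data._aux.patch_shape, ...)
# are merged on top of the defaults pulled in by its `defaults:` list.
print(OmegaConf.to_yaml(cfg.trainer, resolve=False))
print(cfg.data._aux.patch_shape)
```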
@@ -0,0 +1,70 @@
_target_: cyto_dl.models.im2im.MultiTaskIm2Im

save_images_every_n_epochs: 1
save_dir: ${paths.output_dir}

x_key: ${source_col}

backbone:
  _target_: cyto_dl.nn.vits.mae.HieraMAE
  spatial_dims: ${spatial_dims}
  patch_size: 2 # patch_size * num_patches should be your image shape (data._aux.patch_shape)
  num_patches: 8 # patch_size * num_patches = img_shape
  num_mask_units: 4 # mask units are used for local attention. img_shape / num_mask_units = size of each mask unit in pixels, num_patches / num_mask_units = number of patches per mask unit
  emb_dim: 4
  # NOTE: this is a very small model for testing - for best performance, the downsampling ratios, embedding dimension, number of layers, and number of heads should be adjusted to your data
  architecture:
    # mask_unit_attention blocks - attention is only done within a mask unit and not across mask units
    # the total amount of q_stride across the architecture must be less than the number of patches per mask unit
    - repeat: 1 # number of times to repeat this block
      q_stride: 2 # size of downsampling within a mask unit
      num_heads: 1
    - repeat: 1
      q_stride: 1
      num_heads: 2
    # self attention transformer - attention is done across all patches, irrespective of which mask unit they're in
    - repeat: 2
      num_heads: 4
      self_attention: True
  decoder_layer: 1
  decoder_dim: 16
  mask_ratio: 0.66666666666
  context_pixels: 3
  use_crossmae: True

task_heads: ${kv_to_dict:${model._aux._tasks}}

optimizer:
  generator:
    _partial_: True
    _target_: torch.optim.AdamW
    weight_decay: 0.05

lr_scheduler:
  generator:
    _partial_: True
    _target_: torch.optim.lr_scheduler.OneCycleLR
    max_lr: 0.0001
    epochs: ${trainer.max_epochs}
    steps_per_epoch: 1
    pct_start: 0.1

inference_args:
  sw_batch_size: 1
  roi_size: ${data._aux.patch_shape}
  overlap: 0
  progress: True
  mode: "gaussian"

_aux:
  _tasks:
    - - ${source_col}
      - _target_: cyto_dl.nn.head.mae_head.MAEHead
        loss:
        postprocess:
          input:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint8
          prediction:
            _target_: cyto_dl.models.im2im.utils.postprocessing.ActThreshLabel
            rescale_dtype: numpy.uint8
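To make the geometry comments in the backbone section concrete, here is a small arithmetic check of how the example values relate to the experiment config's patch_shape (plain Python with illustrative variable names; reading the scalars as per-spatial-dimension values is an interpretation of the comments, not something stated explicitly in the diff):

```python
# Arithmetic sketch of the Hiera geometry comments above, using the example values.
patch_size = 2      # backbone.patch_size
num_patches = 8     # backbone.num_patches, read as per spatial dimension
num_mask_units = 4  # backbone.num_mask_units, read as per spatial dimension

img_shape = patch_size * num_patches            # 16, matching data._aux.patch_shape [16, 16, 16]
mask_unit_pixels = img_shape // num_mask_units  # 4 pixels per mask unit along each axis
patches_per_mask_unit = num_patches // num_mask_units  # 2 patches per mask unit along each axis

print(img_shape, mask_unit_pixels, patches_per_mask_unit)  # 16 4 2
```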
@@ -1,4 +1,5 @@
-from .cross_mae import CrossMAE_Decoder
-from .mae import MAE_Decoder, MAE_Encoder, MAE_ViT
+from .decoder import CrossMAE_Decoder, MAE_Decoder
+from .encoder import HieraEncoder, MAE_Encoder
+from .mae import MAE, HieraMAE
 from .seg import Seg_ViT, SuperresDecoder
 from .utils import take_indexes
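Assuming this is cyto_dl/nn/vits/__init__.py (it re-exports HieraMAE from .mae, matching the _target_ used in the model config above), a rough usage sketch of the reorganized module follows; the constructor keyword arguments mirror what Hydra would pass from the config, but the forward call and its outputs are assumptions, not taken from this diff:

```python
# Hypothetical instantiation sketch: same keyword arguments as the model config.
import torch
from cyto_dl.nn.vits import HieraMAE  # re-exported by the updated __init__.py

backbone = HieraMAE(
    spatial_dims=3,
    patch_size=2,
    num_patches=8,
    num_mask_units=4,
    emb_dim=4,
    architecture=[
        {"repeat": 1, "q_stride": 2, "num_heads": 1},
        {"repeat": 1, "q_stride": 1, "num_heads": 2},
        {"repeat": 2, "num_heads": 4, "self_attention": True},
    ],
    decoder_layer=1,
    decoder_dim=16,
    mask_ratio=0.66666666666,
    context_pixels=3,
    use_crossmae=True,
)

x = torch.rand(1, 1, 16, 16, 16)  # batch, raw_im_channels=1, 16^3 patch
out = backbone(x)  # exact output structure is not shown in this diff
```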
Review comment: So the last layer is global attention and the first 2 layers are local attention? Is 3 layers the recommended hierarchy?
Reply: Correct. 3 layers is small enough to test quickly. All of the models with unit tests are tiny by default in the configs, and there is a note somewhere in the docs that you should increase the model size if you want good performance.
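A small sketch (plain Python, illustrative only) of how the architecture list in the hiera model config maps onto this local-then-global layout:

```python
# Walk the `architecture` list from the hiera model config: blocks without
# self_attention use local mask-unit attention; blocks with self_attention: True
# attend globally across all patches.
architecture = [
    {"repeat": 1, "q_stride": 2, "num_heads": 1},
    {"repeat": 1, "q_stride": 1, "num_heads": 2},
    {"repeat": 2, "num_heads": 4, "self_attention": True},
]

for i, block in enumerate(architecture):
    kind = "global self-attention" if block.get("self_attention") else "local mask-unit attention"
    print(f"stage {i}: {kind}, repeat={block['repeat']}, heads={block['num_heads']}")
# stages 0 and 1 are local, stage 2 is global, matching the discussion above.
```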