Update the projection head (normalization and size). #139
@mattersoflight In SimCLR and others using InfoNCE-style losses, there is an implicit L2-normalization of the projections: the loss is computed from cosine similarities, i.e. dot products of unit vectors.
@ziw-liu L2 normalization is indeed equivalent to converting feature vectors into unit vectors. It also makes sense that the loss is computed between unit vectors (whether via cosine similarity or Euclidean distance), given the SimCLR paper. I agree that the L2 normalization of projections doesn't need to be removed, so there is no need to implement it as an argument.
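For concreteness, here is a minimal NT-Xent sketch (not the viscy implementation; the function name and temperature value are illustrative) that makes the implicit normalization explicit: the projections are rescaled to unit vectors, so the pairwise dot products are cosine similarities and only the direction of each projection contributes to the loss.

```python
import torch
import torch.nn.functional as F

def nt_xent(proj_a: torch.Tensor, proj_b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """proj_a, proj_b: (batch, dim) projections of two augmented views of the same batch."""
    z = torch.cat([proj_a, proj_b], dim=0)  # (2B, dim)
    z = F.normalize(z, dim=1)  # L2-normalize: only the direction of each projection matters
    sim = z @ z.T / temperature  # cosine similarities, since rows of z are unit vectors
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity from the softmax
    batch = proj_a.shape[0]
    # positives: row i (view A) pairs with row i + B (view B), and vice versa
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(sim.device)
    return F.cross_entropy(sim, targets)
```

Whether this normalization lives inside the loss or is applied to the projections beforehand, applying it twice is redundant (normalizing an already-unit vector is a no-op).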
Changelog of the referencing PR, *SSL: fix MLP head and remove L2 normalization* (#145):

* draft projection head per #139 (Update the projection head (normalization and size))
* reorganize comments in example fit config
* configurable stem stride and projection dimensions
* update type hint and docstring for ContrastiveEncoder
* clarify embedding_dim
* use the forward method directly for projected
* normalize projections only when fitting; the projected features saved during prediction are now *not* normalized
* remove unused logger
* refactor training code into translation and representation modules
* extract image logging functions
* use AdamW instead of Adam for contrastive learning
* inline single-use argument
* fix normalization
* fix MLP layer order
* fix output dimensions
* remove L2 normalization before computing loss
* compute rank of features and projections
* documentation

Co-authored-by: Shalin Mehta <[email protected]>
@ziw-liu the paper that compares different types of projection heads uses the InfoNCE loss (which reads the same as the NT-Xent loss in SimCLR). It may be that using the NT-Xent loss promotes higher-rank embeddings.
TL;DR: the current projection head doesn't do what it is supposed to do. It should include batch normalization, and the sizes of the features and projections may be reduced further.
In previous implementations of contrastive learning models (dynacontrast), we used batch norm in the projection head after each MLP layer. This paper also recommends using a non-linear projection head with batch norm. Different projection heads are evaluated by comparing the rank of the features (the number of independent features) before and after the projection head:
As expected, rank(projections) << rank(features).
Our current model's behavior is the opposite: rank(projections) > rank(features), as seen from examining the principal components of each.
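For reference, one rough way to estimate these ranks from the saved arrays (the array names and the tolerance are assumptions, not the exact evaluation code in #145) is to count the singular values above a relative threshold:

```python
import numpy as np

def effective_rank(x: np.ndarray, rtol: float = 1e-3) -> int:
    """Count singular values above rtol * largest singular value; x is (n_samples, dim)."""
    x = x - x.mean(axis=0)  # center so the leading component is not just the mean offset
    s = np.linalg.svd(x, compute_uv=False)  # singular values in descending order
    return int((s > rtol * s[0]).sum())

# expectation for a working head: effective_rank(projections) << effective_rank(features)
```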
This appears to be a consequence of the projections being clipped, which is likely due to the use of ReLU without normalization. The per-sample mean and standard deviation of the projections illustrate this:
```python
import matplotlib.pyplot as plt
import numpy as np

plt.plot(np.mean(embedding_dataset["projections"].values, axis=1))  # per-sample mean of projections
plt.plot(np.std(embedding_dataset["projections"].values, axis=1))  # per-sample standard deviation
```
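For comparison, a sketch of a SimCLR-style non-linear projection head with batch norm after each linear layer, in the spirit of the dynacontrast head described above (the dimensions are placeholders, not a drop-in replacement for the current ContrastiveEncoder module):

```python
import torch.nn as nn

def projection_head(in_dim: int = 768, hidden_dim: int = 768, out_dim: int = 128) -> nn.Sequential:
    """Linear -> BN -> ReLU -> Linear -> BN, keeping activations centered and scaled."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
        nn.BatchNorm1d(out_dim),  # final BN keeps each projection dimension zero-mean, unit-variance
    )
```

Since rank(projections) can never exceed out_dim, choosing out_dim well below the feature dimension already enforces part of the expected rank(projections) << rank(features) behavior.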