WIP: refactor contrastive learning code with virtual staining code #109
Comments
@Soorya19Pradeep I'm outdated! It's now in #108
For the napari UI, I think we should first try interacting with the plugin through standardized data files so we don't have to maintain our own interface.
The napari-clusters-plotter plugin does not implement readers, so it relies on what's available in the napari layer list (features are stored as an attribute of the labels layer). I now think a workable approach is to implement a custom reader in napari-iohub for the images and tracks so the visualization is easier (handling mixed dimensions and scales, etc.). The ultrack plugin does load the extra columns in its output CSVs as layer features, so they can be used by the cluster plotter. As for clustering, I think dimensionality reduction should be done beforehand on all the cells, instead of on the limited number of cells in each FOV.
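The point about reducing dimensionality on all cells at once can be sketched as follows. This is a minimal NumPy-only illustration (the array shapes and the number of FOVs are hypothetical, and PCA via SVD stands in for whatever reduction method is actually used): the components are fit on the pooled cells from every FOV, then each FOV is projected with those shared components so the embeddings are comparable across FOVs.

```python
import numpy as np

# Hypothetical per-FOV feature matrices (cells x feature dim).
rng = np.random.default_rng(0)
fov_features = [rng.normal(size=(40, 16)), rng.normal(size=(25, 16))]

# Pool all cells from every FOV into one matrix before reduction.
pooled = np.concatenate(fov_features, axis=0)  # (65, 16)
mean = pooled.mean(axis=0)

# PCA via SVD on the mean-centered pooled matrix.
_, _, vt = np.linalg.svd(pooled - mean, full_matrices=False)
components = vt[:2]  # top-2 principal axes, fit on ALL cells

# Project each FOV with the shared mean and components.
embeddings = [(f - mean) @ components.T for f in fov_features]
print([e.shape for e in embeddings])  # [(40, 2), (25, 2)]
```

Fitting once on the pooled matrix avoids each FOV getting its own incompatible embedding space, which is the failure mode of per-FOV reduction.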
That's interesting. Can this work?
@ziw-liu please go ahead and decide on a useful and low-maintenance solution.
@ziw-liu @alishbaimran Given our offline discussion, here is the prioritization of features:
You could partition the refactor into 3 PRs, each of which implements the above and is tested with the corresponding training run. We will train contrastive phenotyping models via the Python scripts and CLIs that wrap these scripts. We don't have to prioritize integration with the Lightning CLI yet.
@ziw-liu and @alishbaimran I think we can bypass the patchification step by chunking the Zarr store. I got this idea while exploring the data we are preparing for release with the paper on mantis.
@mattersoflight I consider the refactoring completed after #153. Feel free to open new issues for specific tasks. |
This issue tracks our progress toward integration of the contrastive learning code with virtual staining code.
Our preprocessing code is currently in good shape and consists of:
We are still improving the tracking to capture cell division and cells near the boundary of the FOV: @tayllatheodoro
Training
It works well via the PyTorch Lightning CLI and configs. The dataloader also works well with the HCS data format.
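For reference, a Lightning CLI training run of this kind is driven by a YAML config along the following lines. This is only a rough illustration: the class paths, file names, and init args below are hypothetical, not the actual configuration in this repository.

```yaml
# fit.yml -- hypothetical sketch of a Lightning CLI config
model:
  class_path: ContrastiveEncoder   # architecture discussed below
data:
  class_path: HCSDataModule        # reads the HCS-format Zarr store
  init_args:
    data_path: /path/to/dataset.zarr
    batch_size: 32
trainer:
  max_epochs: 100
```

Such a config would be passed to the training entry point with something like `<script> fit --config fit.yml`, keeping hyperparameters out of the code and under version control.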
Pending improvements:
Architecture:
ContrastiveEncoder #138 @ziw-liu
Data loader and loss functions:
Prediction and evaluation
It works well via the PyTorch Lightning CLI and configs.
In this round, we should make any changes to the code path that can affect the architecture.