Releases: elephant-track/elephant-server

v0.4.3

19 Oct 16:38
9bb9dc7

Release notes

  • Update Makefile to work with CPU-only systems
    • remove the --gpu option on CPU-only systems because the command does not work when the option is specified (a launcher sketch follows)
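
As a rough illustration of the CPU-only handling above (a Python stand-in, not the project's Makefile logic), a launcher can add Docker's GPU flag only when a GPU appears to be present; the `--gpus all` flag and the `nvidia-smi` check are assumptions for this sketch.

```python
# Hypothetical launcher sketch: add Docker's GPU flag only when a GPU seems to
# be present, because passing it on a CPU-only host makes `docker run` fail.
import shutil


def docker_run_command(image):
    args = ["docker", "run", "--rm"]
    if shutil.which("nvidia-smi") is not None:  # rough proxy for GPU availability
        args += ["--gpus", "all"]
    return args + [image]


print(" ".join(docker_run_command("elephant-server")))
```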

v0.4.2

21 Sep 13:04
eba45f7

Release notes

  • Fix an issue with padding in the forward pass of the model
    • when n_keep_axials == 4, the slicing that removes the pad raised an error (see the sketch below)
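
A minimal sketch (not the project's model code) of the pad/unpad pattern behind this fix, assuming the error boiled down to slicing with a zero pad amount, where `-0` collapses the axis:

```python
import torch
import torch.nn.functional as F


def forward_with_pad(net, x, pad_z):
    # Pad only the depth (z) axis of an (N, C, D, H, W) tensor.
    x = F.pad(x, (0, 0, 0, 0, pad_z, pad_z))
    y = net(x)
    # `-pad_z or None` keeps the full axis when pad_z == 0; a plain
    # `y[:, :, pad_z:-pad_z]` would return an empty tensor in that case.
    return y[:, :, pad_z:-pad_z or None]


out = forward_with_pad(torch.nn.Identity(), torch.zeros(1, 1, 4, 8, 8), pad_z=0)
print(out.shape)  # torch.Size([1, 1, 4, 8, 8]) instead of an empty tensor
```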

v0.4.1

11 Aug 09:40
5d7b5a2

Release notes

  • Fix an issue in flow label update
    • previous labels were unintentionally carried over
  • Fix an argument name
    • batchsize should be batch_size
  • Avoid an error when casting
    • casting None raised an error (see the sketch below)
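
A tiny illustration of the guard, assuming the error was of the form `float(None)` raising a TypeError; the helper name is hypothetical.

```python
from typing import Optional


def to_float(value) -> Optional[float]:
    # Pass None through instead of casting it, since float(None) raises TypeError.
    return None if value is None else float(value)


print(to_float("0.5"), to_float(None))  # 0.5 None
```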

v0.4.0

19 Jun 11:32
9e00ebf

Release notes

  • Introduce multiprocessing
    • PyTorch multi-process training using DistributedDataParallel (DDP) (a generic sketch follows this list)
    • add Celery to bridge between Flask and PyTorch DDP
    • make nginx user and group unique (Docker)
    • update Makefile to use multiple GPUs
    • update logging to publish MQ only when running on Flask
    • refactoring
    • change algorithm for data loading in live mode
    • add is_mixed_precision to config
    • add is_cpu() to config
    • update tests
    • use chunk size smaller than 1024 in each dimension
    • use DDP only when there are multiple GPU devices
    • set num_workers of DataLoader to 0
  • Suppress warnings for torch.meshgrid
  • Increase performance (speed) in training
    • implement cache
    • use attrs to get labeled indices
    • augmentation is performed after cropping (TODO: fix an issue where rotation shrinks the size)
  • Increase performance (speed) in prediction
    • use torch.nn.DataParallel in prediction
    • set default values to config parameters (use_median, c_ratio, p_thresh, r_min, r_max)
    • update rotate augmentation (avoid black regions)
    • increase shm-size to 1g in Docker container
    • use memmap to speed up
    • change the Celery pool to threads to allow multiple tasks to run in parallel
    • load labels using memmap
  • Introduce Flask-RESTX
    • reorganize the endpoints using Flask-RESTX
    • update some endpoint names
    • add a get_state() function to redis_util
  • Update _update_seg_labels()
    • store only centroids for the attrs "label.indices.{t}"
    • fix an issue with the color in label_vis
  • Set priority to supervisord processes to shutdown gracefully
  • Reduce memory usage with memmap (see the memmap sketch after this list)
    • use memmap in preprocessing of input image
    • use memmap in _find_and_push_spots() to save memory
    • use memmap in resize, which can consume a lot of memory
    • use memmap in prediction if it is set in config
    • create memmap dir if not exist in config loading
  • Implement input_size
    • input_size is used to resize the original image at runtime (either training or prediction)
  • Avoid np.mean when len(models) == 1
    • memmap will be used in the later steps
  • Handle abort and errors properly in data loading
    • a FileNotFoundError is raised when aborted
  • Support partitioned h5
    • add support for partitioned h5 files
  • Change pad mode from reflect to replicate
    • reduce artifacts at the edges
  • Update environment.yml
    • add celery
    • add flask-restx
    • remove constraint for libzlib to resolve version conflicts
  • Update Jupyter notebook
    • update settings for Celery
  • Update dataset tool
    • write data in parallel
    • avoid too large chunks
  • Update scripts
    • update scripts to adapt to the latest code
    • add scripts for profiling
    • update Dockerfile to install profiling tools
  • Update normalize_zero_one (sketched after this list)
    • return all zeros when all input values are the same
  • Update Singularity setup
    • add lines required to work with Celery
    • add Make targets for singularity
  • Update Makefile
    • singularity-launch accepts ELEPHANT_GPU environment variable
  • Change the order of pad and resize
    • resize (-> pad) -> forward (-> unpad) -> resize
  • Write image data using chunks
    • write image data from .h5 in a memory-friendly manner
  • Refine I/O for segmentation
    • introduce FileLock
    • improve zarr and numpy indexing
    • update zarr version from 2.4.0 to 2.11.0
    • speed up _update_seg_labels()
  • Use indexing efficiently
    • avoid loading unnecessary data
  • Change Abort strategy
    • check state in a while loop instead of using the wait() method
  • Increase shm-size to 8g
    • a larger shm-size is sometimes required
  • Fix Conda version in Colab notebook
    • to avoid errors when installing modules
  • Fix issues
    • fix an issue caused when no scaling is applied
    • fix an issue caused when run on 2D data
    • set redis_client to None when not RUN_ON_FLASK
    • set bounds to the chunk size for img.zarr
    • properly pass the argument contrast
    • fix issues in contrast augmentation
      • the contrast parameter was fixed to the range 0.5-1 in SegmentationDatasetZarr
    • fix an issue with padding (in the previous version, padding was not performed even if it was not explicitly specified)
    • fix parameters passed to export_ctc_labels
    • is_mixed_precision is set to False by default for flow training (most loss values become zero with float16)
    • update ssim calculation to avoid invalid values in mixed precision
    • improve indexing performance
    • fix issues in generate_dataset (multiprocessing should be used in a Celery process)
    • fix an issue with detect_spots (_find_and_push_spots() should be called before output operation)
    • fix an issue with scaling in flow prediction (appropriately take into account the resize_factor)
    • fix an issue with dataset api (remove a condition statement in dataset api to avoid an error)
    • fix _update_seg_labels
      • autobackground did not work well
      • make updating the labels faster
    • fix issues in label update
      • clear cache in _get_label_at()
      • remove chunk files just before label update
    • fix issues with CPU
      • add map_location to torch.load calls (see the CPU-loading sketch after this list)
      • update the Make target for Singularity
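
For the multiprocessing items above, a generic single-node DDP sketch (NCCL backend, one process per GPU); the Flask/Celery bridging and the real training loop are omitted, and all names here are illustrative.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    model = DDP(torch.nn.Linear(8, 1).to(rank), device_ids=[rank])
    # ... build a DataLoader (num_workers=0, as in the notes above) and train ...
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    if world_size > 1:  # DDP only when there are multiple GPU devices
        mp.spawn(worker, args=(world_size,), nprocs=world_size)
```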
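
The memmap items above follow a pattern roughly like this; the directory, file name, and array shape are placeholders.

```python
import os
import numpy as np

MEMMAP_DIR = "/tmp/elephant-memmap"      # hypothetical location
os.makedirs(MEMMAP_DIR, exist_ok=True)   # "create memmap dir if not exist"

# Stage a large intermediate array on disk instead of holding it in RAM.
buf = np.memmap(os.path.join(MEMMAP_DIR, "prep.dat"),
                dtype="float32", mode="w+", shape=(64, 512, 512))
buf[:] = 0.0   # written through to disk page by page
buf.flush()
```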
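
A sketch matching the normalize_zero_one description above (scale to [0, 1], return all zeros when every value is identical); it mirrors the described behaviour, not the exact implementation.

```python
import numpy as np


def normalize_zero_one(x):
    x = x.astype(np.float32)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:
        # All values are the same: return zeros instead of dividing by zero.
        return np.zeros_like(x)
    return (x - x_min) / (x_max - x_min)
```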
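
And for the CPU fixes, the map_location pattern looks roughly like the following; the helper name is hypothetical.

```python
import torch


def load_checkpoint(path):
    # Without map_location, a checkpoint saved on GPU fails to load on a
    # CPU-only machine.
    map_location = None if torch.cuda.is_available() else torch.device("cpu")
    return torch.load(path, map_location=map_location)
```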

v0.3.5

17 Mar 23:08
548908f

Release notes

  • update the Singularity definition file (elephant.def) to fix issues with Singularity

v0.3.4

22 Dec 22:10
8935feb

Release notes

  • update environment.yml to resolve an issue with installing conda/mamba packages
  • update Dockerfile to make it more development-friendly

v0.3.3

08 Dec 16:44
cfb4caa

Release notes

  • Update PyTorch (1.4.0 -> 1.10.0), CUDA Toolkit (10.1.243 -> 11.3) and cuDNN (7.6.3 -> 8.2.0)
    • support Ampere architecture
  • Use the same parameters for allclose in NumPy and PyTorch (illustrated below)
    • use the default PyTorch parameters (rtol=1e-05, atol=1e-08)
    • as an exception, rtol in test_models.py is set to 1e-4
    • asserts are added to test_models.py
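
A small illustration of the tolerance settings above, applying the PyTorch defaults on both sides (the values here are arbitrary).

```python
import numpy as np
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = a + 1e-7  # well within rtol * |b| + atol

# Same tolerances for both libraries (PyTorch defaults).
assert torch.allclose(a, b, rtol=1e-05, atol=1e-08)
assert np.allclose(a.numpy(), b.numpy(), rtol=1e-05, atol=1e-08)
```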

v0.3.2

03 Dec 19:17
50e3e22

Release notes

  • Avoid an error with groupmod

Acknowledgements

  • Satoshi Yamaji for reporting the issue

v0.3.1

02 Dec 00:52
d69e7d4

Release notes

  • Add an error log for model reset while a process is running

v0.3.0

01 Dec 22:59
dde7163

Release notes

  • Add tests
    • pytest
  • Fix training epoch to log
    • batch_index starts from 1 in the log
    • log at the end of each log_interval instead of at the beginning
  • Add/update log messages
    • log before and after each request
    • update "waiting" log
    • minor fix in a comment
  • Implement download_model endpoint
  • Implement uploading models (via reset) (sketched below)
    • when a request is multipart/form-data, an attached file is used to load state_dict(s) for resetting a model
    • if the uploaded model parameters are not compatible with a model, an error will be thrown
  • Replace Conda with Mamba
    • replace Conda with Mamba
    • use the same environment.yml in all options (Docker, Singularity and Colab)
  • Keep export results on the server
    • Fix zip name (.zip.zip to .zip)
  • Avoid error on the CPU environment
    • make the gpus/ endpoint compatible with the CPU environment
  • Explicitly kill all services before restart (Colab notebook)
    • the services can remain running when the notebook does not respond
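
A hypothetical Flask sketch of the upload-via-reset flow described above; the endpoint path, file handling, and model are placeholders rather than the server's actual API.

```python
import io
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)
model = torch.nn.Linear(8, 1)  # stand-in for the real model


@app.route("/reset", methods=["POST"])
def reset():
    if request.files:
        # multipart/form-data: load the attached file as a state_dict
        uploaded = next(iter(request.files.values()))
        state_dict = torch.load(io.BytesIO(uploaded.read()), map_location="cpu")
        try:
            model.load_state_dict(state_dict)
        except RuntimeError as e:  # incompatible parameters (keys/shapes)
            return jsonify(error=str(e)), 400
    else:
        model.reset_parameters()  # plain reset without an uploaded model
    return jsonify(status="ok")
```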

Acknowledgements

  • @tischi and Arif Ul Maula Khan for reporting issues (Fix training epoch to log, Avoid error on the CPU environment, Explicitly kill all services before restart (Colab notebook)) and suggestions (Add/update log messages, Implement download_model endpoint, Implement uploading models (via reset))
  • Kojiro Mukai for reporting the issue on downloading the results after export