SLIC Layer: Superpixel #43
Hey @old-school-kid any chance I could get a little more context as to how people tend to use this in the CV domain? Particularly w/ deep models. Thanks
Hi @LukeWood
That would actually be great. I'm really interested to see how it's used in both material science and medical imaging! Thanks @old-school-kid
Also, would you be interested in contributing this @old-school-kid ?
In Material Science
In Medical Imaging
In Object Detection (Old methods, pre-2017)
Sure, would love to.
What is the final goal in your material science domain? Semantic segmentation? Or something different?
E.g. I am taking a look at https://github.com/Scientific-Computing-Lab-NRCN/MLography but I don't know if we still need some intermediate preprocessing like SLIC before the network learning stack.
@bhack
I went through the repo. They have directly fed the image to a UNet, which is fine, but generally we go for noise reduction and clustering (to separate matrix and grains), and this has proven to achieve better results.
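The comment above doesn't prescribe a specific algorithm for the "noise reduction and clustering" step, so here is a minimal, assumed illustration of that idea: a 3x3 box-filter denoise followed by a two-cluster k-means on pixel intensities. The function name `separate_phases` and the parameter choices are hypothetical, not from any paper cited in this thread.

```python
import numpy as np

def separate_phases(img, n_iters=10):
    """Sketch of 'noise reduction + clustering' to split matrix from grains.

    Denoises with a 3x3 box filter, then runs 1-D k-means (k=2) on the
    smoothed intensities. Assumes img is 2-D with two intensity modes.
    """
    # 3x3 box-filter denoising via nine shifted views of the padded image
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    smooth = sum(pad[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0

    # 1-D k-means with k=2, initialized at the intensity extremes
    c0, c1 = smooth.min(), smooth.max()
    for _ in range(n_iters):
        mask = np.abs(smooth - c0) <= np.abs(smooth - c1)
        c0, c1 = smooth[mask].mean(), smooth[~mask].mean()
    return mask  # True = cluster nearer c0 (e.g. matrix), False = grains
```

On a micrograph-like image this returns a boolean phase map that a downstream model (or a SLIC pass) could consume.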
Your repo is a little bit old, and I am not an expert in the material science domain. But I remember that a few years ago, at ECCV 2018, there was a paper proposing a learning-based/differentiable approach (currently 118 citations): https://varunjampani.github.io/ssn/ But even though it is differentiable, I don't know how popular learning this intermediate representation still is today.
@old-school-kid I'm fairly sure this is a good fit and is something that we'd be interested in hosting as long as the contribution is well written & maintainable. So if you're interested please prepare a PR.
https://varunjampani.github.io/ssn/ That was a nice article, thank you for sharing! While that can be used as an intermediate representation in end-to-end learning, SLIC is just used as a pre-processing layer and is in no way fused with learning. But the article you shared looks promising too.
The papers I have shared above under Material Science are from 2019, 2020, and 2021. Moreover, apart from research, it is used in industry, as it can easily segment the matrix in microstructures.
I think that this part could be a little out of scope if it is not strictly functional to the learning step for this specific library. But I still think it is valid for a generic CV library (e.g. PIL, scikit-image, OpenCV, etc.).
This is in scope. The real question is whether or not the impact is high enough to justify maintaining the layer. Medical image segmentation is an important domain with a lot of promise. SLIC is clearly showing promise there, and in other image segmentation areas. This is especially in scope if the SLIC layer is differentiable. Then we can also use it as a preprocessing layer, or as it's used in https://varunjampani.github.io/ssn/. The cost of maintaining a specialized layer like this is pretty low. So I am comfortable accepting it if @old-school-kid is willing to contribute a well refined implementation. If you do decide to work on it, feel free to add me as the reviewer on the PR 👍 .
In the last part of @old-school-kid's comment it was quite clear that he mentioned it is useful as-is, regardless of whether it is useful in the learning pipeline. Also, more generally, what I claimed is that, just from a popularity/resources-ratio point of view, this intermediate representation, even when differentiable, isn't so popular anymore. If instead we are evaluating the non-differentiable version, I think it could be useful to review it as a baseline in the context of modern proposed solutions for unsupervised/self-supervised/semi-supervised image segmentation (e.g. https://arxiv.org/abs/2007.09990 and other works).
Let's close this out as stale for now until there's a strong use case. If we end up trying to tackle a segmentation competition and can't compete without this, we should reprioritize.
Image augmentation layer using SLIC
SLIC paper: https://ieeexplore.ieee.org/document/6205760 (cited by 7880)
Implementation in skimage https://github.com/scikit-image/scikit-image/blob/v0.19.0/skimage/segmentation/slic_superpixels.py#L110-L385
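For intuition, below is a minimal numpy-only sketch of the SLIC assign/update loop from the paper linked above, simplified to a single-channel image. This is an assumed illustration, not the skimage implementation it links to: the real `slic_superpixels.py` works in Lab color, enforces connectivity, and handles many more options. The function name `slic_superpixels` here is hypothetical.

```python
import numpy as np

def slic_superpixels(img, n_segments=16, compactness=0.1, n_iters=5):
    """Simplified SLIC on a 2-D grayscale image with values in [0, 1].

    Clusters pixels in (intensity, y, x) space, restricting each cluster's
    search to a local 2S x 2S window around its center, as in the paper.
    """
    h, w = img.shape
    S = int(np.sqrt(h * w / n_segments))  # grid step between seed centers
    ys = np.arange(S // 2, h, S)
    xs = np.arange(S // 2, w, S)
    centers = np.array([[img[y, x], y, x] for y in ys for x in xs], dtype=float)

    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), dtype=int)
    dists = np.full((h, w), np.inf)

    for _ in range(n_iters):
        dists[:] = np.inf
        # Assignment step: each center only competes within its local window
        for k, (c_val, cy, cx) in enumerate(centers):
            y0, y1 = max(int(cy) - S, 0), min(int(cy) + S + 1, h)
            x0, x1 = max(int(cx) - S, 0), min(int(cx) + S + 1, w)
            d_color = (img[y0:y1, x0:x1] - c_val) ** 2
            d_space = ((yy[y0:y1, x0:x1] - cy) ** 2
                       + (xx[y0:y1, x0:x1] - cx) ** 2)
            d = d_color + compactness * d_space / (S ** 2)
            mask = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][mask] = d[mask]
            labels[y0:y1, x0:x1][mask] = k
        # Update step: move each center to its cluster's mean
        for k in range(len(centers)):
            sel = labels == k
            if sel.any():
                centers[k] = [img[sel].mean(), yy[sel].mean(), xx[sel].mean()]
    return labels
```

The `compactness` term trades color fidelity against spatial regularity, which is the knob most users of the skimage version tune first.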
Sharpen images by using an unsharp mask, or something better that I am unaware of.
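The unsharp-mask idea mentioned above can be sketched in a few lines: sharpened = img + amount * (img - blurred). This is a hedged, numpy-only illustration (a real layer would likely use the framework's own convolution ops); the function name `unsharp_mask` and its defaults are assumptions.

```python
import numpy as np

def unsharp_mask(img, sigma=1.0, amount=1.0):
    """Classic unsharp masking on a 2-D image with values in [0, 1].

    Blurs with a separable Gaussian, then adds back the high-frequency
    residual scaled by `amount`.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()  # normalize so flat regions are unchanged

    # Separable blur: convolve rows, then columns (reflect-padded edges)
    pad = np.pad(img, radius, mode="reflect")
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, pad)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)

    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```

Because the kernel is normalized, flat regions pass through unchanged and only edges gain contrast, which is the behavior an augmentation layer would want.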