
SLIC Layer: Superpixel #43

Closed
old-school-kid opened this issue Jan 12, 2022 · 17 comments

@old-school-kid

Image augmentation layer using SLIC
https://ieeexplore.ieee.org/document/6205760 (cited by 7880)
Implementation in skimage: https://github.com/scikit-image/scikit-image/blob/v0.19.0/skimage/segmentation/slic_superpixels.py#L110-L385

Also: sharpen images using an unsharp mask, or something better that I am unaware of.
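
A minimal sketch of what such a layer's forward pass could do, built on the skimage implementation linked above (parameter values here are just illustrative, not a proposed API):

```python
# Sketch only: segment with SLIC, then paint each superpixel with its
# mean color, as a simple image-level preprocessing/augmentation step.
import numpy as np
from skimage import data
from skimage.color import label2rgb
from skimage.segmentation import slic

image = data.astronaut()  # any (H, W, 3) uint8 image

# n_segments and compactness are the two main SLIC knobs.
labels = slic(image, n_segments=200, compactness=10.0, start_label=1)

# kind="avg" replaces every labeled region with its average color.
superpixel_image = label2rgb(labels, image, kind="avg")
print(superpixel_image.shape, labels.max())
```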

@LukeWood
Contributor

Hey @old-school-kid, any chance I could get a little more context on how people tend to use this in the CV domain? Particularly w/ deep models.

Thanks

@old-school-kid
Author

Hi @LukeWood
SLIC isn't used in any CV model for image augmentation, afaik. But I have seen a lot of work using it as a pre-processing layer in my domain (material science) and even in medical imaging. In material science it helps with grain boundary detection and finding hairline fractures.
Do you want me to put links to some papers here that use this preprocessing technique? TIA.

@LukeWood
Contributor

That would actually be great. I'm really interested to see how it's used in both material science and medical imaging!

Thanks @old-school-kid

@LukeWood
Contributor

Also, would you be interested in contributing this @old-school-kid ?

@old-school-kid
Author

> Also, would you be interested in contributing this @old-school-kid ?

Sure, would love to.

@bhack
Contributor

bhack commented Jan 28, 2022

What is the final goal in your material science domain? Semantic segmentation? Or something different?

@bhack
Contributor

bhack commented Jan 28, 2022

E.g. I am taking a look at https://github.com/Scientific-Computing-Lab-NRCN/MLography but I don't know if we still need some intermediate preprocessing like SLIC before the network learning stack.

@old-school-kid
Author

@bhack
Mostly semantic segmentation, yes.

> E.g. I am taking a look at https://github.com/Scientific-Computing-Lab-NRCN/MLography but I don't know if we still need some intermediate preprocessing like SLIC before the network learning stack.

I went through the repo. They feed the image directly to a U-Net, which is fine, but generally we go for noise reduction and clustering (to separate matrix and grains) first, and this has been shown to achieve better results.
This repo sheds more light on this.
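
Roughly the kind of pipeline I mean, as an illustrative sketch (not code from that repo; the function name and parameter values are assumptions):

```python
# Illustrative sketch: denoise, then cluster into superpixels, before
# handing the micrograph to a U-Net style segmentation model.
import numpy as np
from skimage.color import label2rgb
from skimage.filters import gaussian
from skimage.segmentation import slic

def preprocess_micrograph(image: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """Noise reduction + SLIC clustering for an (H, W, 3) float image in [0, 1]."""
    smoothed = gaussian(image, sigma=1.0, channel_axis=-1)  # noise reduction
    labels = slic(smoothed, n_segments=n_segments, compactness=10.0)
    # Collapsing each superpixel to its mean color tends to separate
    # the matrix from the grains before the network sees the image.
    return label2rgb(labels, smoothed, kind="avg")
```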

@bhack
Contributor

bhack commented Jan 28, 2022

Your repo is a little bit old, and I am not an expert in the material science domain. But I remember that a few years ago, at ECCV 2018, there was a paper proposing a learning/differentiable approach (currently 118 citations):

https://varunjampani.github.io/ssn/

But even though it is differentiable, I don't know how popular learning this intermediate representation still is today.

@LukeWood
Contributor

LukeWood commented Jan 28, 2022

@old-school-kid I'm fairly sure this is a good fit and is something that we'd be interested in hosting as long as the contribution is well written & maintainable.

So if you're interested, please prepare a PR.

@old-school-kid
Author

@bhack

> Your repo is a little bit old, and I am not an expert in the material science domain. But I remember that a few years ago, at ECCV 2018, there was a paper proposing a learning/differentiable approach (currently 118 citations):
>
> https://varunjampani.github.io/ssn/

That was a nice article, thank you for sharing! While that can be used as an intermediate representation in end-to-end learning, SLIC is just used as a pre-processing layer and is in no way fused with the learning. But the approach you shared looks promising too.

> But even though it is differentiable, I don't know how popular learning this intermediate representation still is today.

The papers I shared above under material science are from 2019, 2020, and 2021. Moreover, apart from research, it is used in industry, as it can easily segment the matrix in microstructures.

@bhack
Contributor

bhack commented Jan 29, 2022

> Moreover, apart from research, it is used in industry, as it can easily segment the matrix in microstructures.

I think that this part could be a little bit out of scope for this specific library, if it is not strictly functional to the learning step.

But I still think that it is valid for a generic CV library (e.g. PIL, scikit-image, OpenCV, etc.).

@LukeWood
Contributor

This is in scope. The real question is whether or not the impact is high enough to justify maintaining the layer.

Medical image segmentation is an important domain with a lot of promise. SLIC is clearly showing promise there, and in other image segmentation areas.

This is especially in scope if the SLIC layer is differentiable. Then we can also use it as a preprocessing layer, or as it's used in https://varunjampani.github.io/ssn/.

The cost of maintaining a specialized layer like this is pretty low. So I am comfortable accepting it if @old-school-kid is willing to contribute a well-refined implementation. If you do decide to work on it, feel free to add me as the reviewer on the PR 👍 .
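
For concreteness, here is a hypothetical sketch of what the non-differentiable version could look like as a Keras layer; the class name and design are assumptions, not an agreed keras-cv API:

```python
# Hypothetical sketch: wrap skimage's SLIC in a Keras preprocessing
# layer via tf.numpy_function. Non-differentiable and CPU-bound, so it
# is preprocessing only -- unlike the differentiable SSN approach.
import numpy as np
import tensorflow as tf
from skimage.color import label2rgb
from skimage.segmentation import slic

class SLICSuperpixel(tf.keras.layers.Layer):
    def __init__(self, n_segments=100, compactness=10.0, **kwargs):
        super().__init__(**kwargs)
        self.n_segments = n_segments
        self.compactness = compactness

    def _slic_batch(self, images):
        # Runs in numpy land; `images` is a (B, H, W, 3) float array.
        out = [
            label2rgb(
                slic(img, n_segments=self.n_segments,
                     compactness=self.compactness),
                img, kind="avg",
            )
            for img in images
        ]
        return np.stack(out).astype(np.float32)

    def call(self, images):
        out = tf.numpy_function(self._slic_batch, [images], tf.float32)
        out.set_shape(images.shape)  # numpy_function drops static shape
        return out
```

Because it goes through `tf.numpy_function`, a layer like this would not compile into an accelerator graph, which is part of the maintenance/impact question.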

@LukeWood LukeWood changed the title Superpixel and sharpen SLIC Layer: Superpixel Jan 30, 2022
@bhack
Contributor

bhack commented Jan 30, 2022

In the last part of @old-school-kid's comment it was quite clear that he mentioned that it is also useful as-is, without considering whether or not it is useful in the learning pipeline.
For that specific part of the comment I think (but I could be wrong, as I don't fully understand the keras-cv policy) that we are not going to collect CV operations as-is if they are not functional to the learning step or to populating metrics/visualizations.

Also, more generally, what I claimed is that, just from a popularity/resources-ratio point of view, this intermediate representation isn't so popular anymore, even when differentiable.

Instead, if we are evaluating the non-differentiable version, I think it could be useful to review this as a baseline in the context of modern proposed solutions for unsupervised/self-supervised/semi-supervised image segmentation (e.g. https://arxiv.org/abs/2007.09990 and other works).

@innat
Contributor

innat commented Apr 24, 2022

@LukeWood
Contributor

LukeWood commented May 4, 2023

Let's close this out as stale for now until there's a strong use case. If we end up trying to tackle a segmentation competition and can't compete without this, we should reprioritize.

@LukeWood LukeWood closed this as completed May 4, 2023
freedomtan pushed a commit to freedomtan/keras-cv that referenced this issue Jul 20, 2023
freedomtan pushed a commit to freedomtan/keras-cv that referenced this issue Jul 20, 2023
freedomtan pushed a commit to freedomtan/keras-cv that referenced this issue Jul 20, 2023
freedomtan pushed a commit to freedomtan/keras-cv that referenced this issue Jul 20, 2023