
The InnerEye project aims at distinguishing images that have been subjected to an image filter from those that have not. This repository contains all the classifiers built and tested to solve the classification problem in the InnerEye project.


InnerEye-classifiers

The color of everyday objects says a lot about them. However, image filters, widely available on social media, subject the objects in an image to a color transformation. Objects in such an image often have colors that confuse us or convey a different meaning. Applying an image filter is therefore a form of image editing.


Figure: An example of an unedited image.


Figure: Edited image produced by applying the Nashville filter.


Figure: Edited image produced by applying the XPro2 filter.

This classifier, based on the survey results of the ongoing InnerEye project, can distinguish such edited and unedited images. The InnerEye project aims at understanding the credibility of social interactions in the presence of both edited and unedited images on social media platforms.

The challenge for this classifier is to recognize how the color style of an image differs from that of its unedited counterpart (an image to which no filter has been applied). However, this color style changes under different illumination conditions. The classifier is built on the assumption that, for each image filter, there is an invariant relationship among the colors. To compile the required dataset, images were first sampled from the Google Landmarks dataset, and then different image filters were applied to them. Both the unedited images and their edited counterparts are present in the dataset.
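The dataset-pairing step can be sketched as follows. This is a minimal illustration only: `apply_warm_tint` is a hypothetical toy transform standing in for the actual filter implementations (Nashville, XPro2, etc.) used to build the dataset, not this repository's code.

```python
import numpy as np

def apply_warm_tint(img, strength=0.3):
    """Toy stand-in for an Instagram-style filter: blends each pixel
    toward a fixed warm tone. The real dataset applied actual filters
    to images sampled from the Google Landmarks dataset."""
    tint = np.array([255.0, 200.0, 150.0])  # warm target color (RGB)
    out = (1.0 - strength) * img.astype(np.float64) + strength * tint
    return np.clip(out, 0, 255).astype(np.uint8)

def make_pair(img):
    """Return an (unedited, edited) counterpart pair for the dataset."""
    return img, apply_warm_tint(img)
```

Each sampled image yields one such pair, so the dataset stays balanced between unedited images and their edited counterparts.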

Because the color style of an image has to be understood, the style of an image must be separated from its content; the classifier works only with the style of the input image. The style-and-content separation mechanism in the classifier is based on the autoencoder architecture of MUNIT. The InnerEye classifier is then co-trained on the images (image reconstruction) and the edit labels (unedited or edited). The classifier is multi-targeted to improve accuracy; the class target labels are {'unedited', '_1977', 'aden', 'brannan', 'brooklyn', 'clarendon', 'earlybird', 'gingham', 'hudson', 'inkwell', 'kelvin', 'lark', 'lofi', 'maven', 'mayfair', 'moon', 'nashville', 'perpetua', 'reyes', 'rise', 'slumber', 'stinson', 'toaster', 'valencia', 'walden', 'willow', 'xpro2'}.
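The co-training objective described above can be sketched as a joint loss over the 27 class targets. The MSE-plus-cross-entropy form and the `alpha` weight are assumptions for illustration, not the repository's exact loss:

```python
import numpy as np

# The 27 class targets: 'unedited' plus the 26 filter labels.
CLASSES = ['unedited', '_1977', 'aden', 'brannan', 'brooklyn', 'clarendon',
           'earlybird', 'gingham', 'hudson', 'inkwell', 'kelvin', 'lark',
           'lofi', 'maven', 'mayfair', 'moon', 'nashville', 'perpetua',
           'reyes', 'rise', 'slumber', 'stinson', 'toaster', 'valencia',
           'walden', 'willow', 'xpro2']

def cotraining_loss(recon, target, logits, label, alpha=1.0):
    """Joint loss: pixel reconstruction (MSE) term plus a cross-entropy
    classification term. The pairing and the alpha weight are assumed
    here for illustration of the co-training idea."""
    recon_loss = np.mean((recon - target) ** 2)
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    cls_loss = -log_probs[CLASSES.index(label)]
    return recon_loss + alpha * cls_loss
```

A perfectly reconstructed image with a confident, correct class prediction drives both terms toward zero, which is what co-training on reconstruction and labels optimizes for.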


Figure: Architecture of the classifier.

A classifier that does not separate style from content does not converge.


Figure: Loss of the sequential classifier.

However, a classifier (our contribution) that separates style from content and classifies on the style does converge.


Figure: Loss of the analytical classifier.

The author of the classifier can be reached at [email protected]

