This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[MXNET-580] Add SN-GAN example #12419

Merged
merged 19 commits into apache:master from stu1130:sn_gan_example on Sep 12, 2018

Conversation

stu1130
Contributor

@stu1130 stu1130 commented Aug 31, 2018

Description

Add Spectral Normalization GAN example

Checklist

Essentials

Please feel free to remove inapplicable items for your PR.

  • The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant JIRA issue created (except PRs with tiny changes)
  • Changes are complete (i.e. I finished coding on this PR)
  • All changes have test coverage:
  • Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  • Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  • Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
  • Code is well-documented:
  • For user-facing API changes, API doc string has been updated.
  • For new C++ functions in header files, their functionalities and arguments are documented.
  • For new examples, README.md is added to explain what the example does, the source of the dataset, the expected performance on the test set, and a reference to the original paper if applicable
  • Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
  • To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

Changes

Spectral Normalization GAN example

Comments

N/A

@stu1130 stu1130 requested a review from szha as a code owner August 31, 2018 04:26
_u = self.u.data(CTX)
_v = None

for _ in range(1):
Contributor

This for loop is executed only once? Do we need a loop in that case?

Contributor Author

The original paper uses a for loop to calculate the result. I should make the number of iterations a hyperparameter.
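
For context, a rough sketch of how the iteration count could become a hyperparameter (the helper name `_power_iteration` and the `POWER_ITERATION` constant are illustrative, not the code merged in this PR):

```python
from mxnet import nd

POWER_ITERATION = 1  # illustrative hyperparameter; the paper typically uses one step

def _power_iteration(w_mat, u, iterations=POWER_ITERATION):
    """Estimate the largest singular value (sigma) of the 2-D matrix w_mat."""
    v = None
    for _ in range(iterations):                     # iteration count is now configurable
        v = nd.L2Normalization(nd.dot(u, w_mat))    # shape (1, k)
        u = nd.L2Normalization(nd.dot(v, w_mat.T))  # shape (1, num_filter)
    sigma = nd.sum(nd.dot(u, w_mat) * v)
    return sigma, u
```

With `iterations=1` this reproduces the single-step update in the diff above.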


self.params.setattr('u', _u)

return w / sigma
Contributor

Is there a chance that sigma would be 0?

Contributor Author

The chance is quite small, but I will deal with that.
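
For reference, a minimal sketch of one way to guard against that (the `EPSILON` name matches the constant the later commits introduce, but the snippet itself is illustrative):

```python
from mxnet import nd

EPSILON = 1e-8  # fallback used when the estimated spectral norm collapses to zero

def _scale_weight(w, sigma):
    """Divide the weight by its estimated spectral norm while avoiding division by zero."""
    if sigma.asscalar() == 0.:
        sigma = EPSILON
    return w / sigma
```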

@stu1130 stu1130 changed the title from "[MXNET-580][WIP] Add SN-GAN example" to "[MXNET-580] Add SN-GAN example" on Sep 4, 2018
@sandeep-krishnamurthy
Contributor

Thanks for your contributions @stu1130
@ThomasDelteil - Can you please take a look at this? Thank you.

@ThomasDelteil
Contributor

@stu1130 thanks for your contribution. Can you add a README.md in the sn_gan folder with references to the papers, sample output, etc.?
Have a look at this one for inspiration, for example: https://github.com/apache/incubator-mxnet/tree/master/example/gluon/embedding_learning

@vandanavk
Contributor

Please run pylint on these files using ci/other/pylintrc in the incubator-mxnet folder and fix errors (if any):

pylint --rcfile=ci/other/pylintrc --ignore-patterns=".*\.so$$,.*\.dll$$,.*\.dylib$$" example/gluon/sn-gan/*.py

BETA = 0.5 # beta1 for adam
OUTPUT_DIR = './data' # output directory
MANUAL_SEED = random.randint(1, 10000) # manual seed
CTX = mx.gpu() # change to gpu if you have gpu
Contributor

Default should be cpu?

Contributor Author

@stu1130 stu1130 Sep 5, 2018

I think I should use argparse to let the user specify the parameters, and the default value would be cpu.
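
A minimal argparse sketch of that interface (the `--use-gpu` and `--data-path` flags mirror the README example later in this PR; everything else is illustrative):

```python
import argparse
import mxnet as mx

parser = argparse.ArgumentParser(description='train SN-GAN (illustrative options)')
parser.add_argument('--use-gpu', action='store_true',
                    help='use the GPU; defaults to CPU when omitted')
parser.add_argument('--data-path', type=str, default='./data',
                    help='directory for the dataset and generated outputs')
opt = parser.parse_args()

CTX = mx.gpu() if opt.use_gpu else mx.cpu()  # CPU is the default context
```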

@stu1130 stu1130 changed the title from "[MXNET-580] Add SN-GAN example" to "[MXNET-580] [WIP] Add SN-GAN example" on Sep 7, 2018
Contributor

@sandeep-krishnamurthy sandeep-krishnamurthy left a comment

Very nice work @stu1130 :-) A few minor comments, please address them.

@ThomasDelteil - Ping.

@@ -0,0 +1,38 @@
# Spectral Normalization GAN

This example implements [Spectral Normalization for Generative Adversarial Networks](https://openreview.net/pdf?id=B1QRgziT-) based on [CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.
Contributor

Please refer to arxiv - https://arxiv.org/abs/1802.05957

Example runs and the results:

```python
python train.py --use-gpu --data-path=data/CIFAR10
```
Contributor

Write a note that the user needs to download the CIFAR10 dataset.
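
As a side note, Gluon can also download CIFAR10 on first use; a small sketch (the `root` path simply mirrors the `--data-path` value from the README example):

```python
from mxnet import gluon

# downloads the dataset into the given directory the first time this runs
train_data = gluon.data.vision.CIFAR10(root='data/CIFAR10', train=True)
print(len(train_data))  # 50000 training images
```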


def __init__(self, num_filter, kernel_size,
             strides, padding, in_channels,
             ctx, iterations=1):
Contributor

set default ctx=mx.cpu()
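
That is, a signature along these lines (a sketch only; the class name `SNConv2D` and the `gluon.Block` base are assumptions based on the surrounding diff):

```python
import mxnet as mx
from mxnet import gluon

class SNConv2D(gluon.Block):
    """Constructor sketch only: default the context to CPU so the example runs anywhere."""
    def __init__(self, num_filter, kernel_size, strides, padding, in_channels,
                 ctx=mx.cpu(), iterations=1):
        super(SNConv2D, self).__init__()
        self.ctx = ctx                # context holding the power-iteration buffer
        self.iterations = iterations  # number of power-iteration steps
```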

g_net = gluon.nn.Sequential()
with g_net.name_scope():

    g_net.add(gluon.nn.Conv2DTranspose(512, 4, 1, 0, use_bias=False))
Contributor

It is always clearer for readers of these kinds of examples to have named parameters for these layers.
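
For illustration, the same layer written with keyword arguments (a sketch of the suggestion, using Gluon's parameter names):

```python
from mxnet import gluon

g_net = gluon.nn.Sequential()
with g_net.name_scope():
    # keyword arguments make the layer configuration self-documenting
    g_net.add(gluon.nn.Conv2DTranspose(channels=512, kernel_size=4,
                                       strides=1, padding=0, use_bias=False))
```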

Contributor

@ThomasDelteil ThomasDelteil left a comment

Solid contribution, thanks @stu1130



def get_training_data(batch_size):
    """ hepler function to get dataloader"""
Contributor

helper

def spectral_norm(self):
    """ spectral normalization """
    w = self.params.get('weight').data(self.ctx)
    w_mat = w
Contributor

I don't think this is necessary; you can simply use w directly in your nd.reshape(...) call.

self.u = self.params.get(
    'u', init=mx.init.Normal(), shape=(1, num_filter))

def spectral_norm(self):
Contributor

I would suggest using _spectral_norm(self) as it is a private function

parser.add_argument('--lr', type=float, default=0.0001,
                    help='learning rate. default is 0.0001.')
parser.add_argument('--lr-beta', type=float, default=0.5,
                    help='learning rate for the beta in margin based loss. default is 0.5s.')
Contributor

0.5s? What does the s stand for here?

@@ -0,0 +1,40 @@
# Spectral Normalization GAN
Contributor

## Learned Spectral Normalization

![alt text](https://github.com/taki0112/Spectral_Normalization-Tensorflow/blob/master/assests/sn.png)

Contributor

Could you add a few samples of the generated images? It always makes it more appealing for people looking to try new models. Thanks!

Contributor Author

I am running the model with the Xavier initializer and will update the image if the result is better.

Contributor Author

@stu1130 stu1130 left a comment

## Learned Spectral Normalization

![alt text](https://github.com/taki0112/Spectral_Normalization-Tensorflow/blob/master/assests/sn.png)

Contributor Author

I am running the model with the Xavier initializer and will update the image if the result is better.

@stu1130 stu1130 changed the title from "[MXNET-580] [WIP] Add SN-GAN example" to "[MXNET-580] Add SN-GAN example" on Sep 11, 2018
""" spectral normalization """
w = self.params.get('weight').data(self.ctx)
# the w preserve the original weight value to be used in line 75
w_mat = w
Contributor Author

w needs to be used for a calculation later, on line 75.

Contributor

assignment does not create a copy, reshape creates a copy. you can use:
w_mat = nd.reshape(w, [w.shape[0], -1])
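
A tiny illustration of the difference (a standalone sketch, not code from this PR):

```python
from mxnet import nd

w = nd.arange(12).reshape((3, 4))

alias = w                                # plain assignment: same NDArray, nothing new
w_mat = nd.reshape(w, (w.shape[0], -1))  # reshape returns a new NDArray; w is untouched

print(alias is w)   # True
print(w_mat is w)   # False
print(w.shape)      # (3, 4), unchanged
```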

--epochs EPOCHS number of training epochs. default is 100.
--lr LR learning rate. default is 0.0001.
--lr-beta LR_BETA learning rate for the beta in margin based loss.
default is 0.5s.
Contributor

nit: the stray s is still here too

Contributor

@ThomasDelteil ThomasDelteil left a comment

last minor change

Contributor

@sandeep-krishnamurthy sandeep-krishnamurthy left a comment

Thanks, a very useful contribution.

@sandeep-krishnamurthy sandeep-krishnamurthy merged commit 46a5cee into apache:master Sep 12, 2018
zhreshold added a commit that referenced this pull request Sep 12, 2018
szha pushed a commit that referenced this pull request Sep 12, 2018
* Revert "Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (#12362)"

This reverts commit ceabcaa.

* Revert "[MXNET-580] Add SN-GAN example (#12419)"

This reverts commit 46a5cee.

* Revert "Remove regression checks for website links (#12507)"

This reverts commit 619bc3e.

* Revert "Revert "Fix flaky test: test_mkldnn.test_activation #12377 (#12418)" (#12516)"

This reverts commit 7ea0533.

* Revert "further bump up tolerance for sparse dot (#12527)"

This reverts commit 90599e1.

* Revert "Fix broken URLs (#12508)"

This reverts commit 3d83c89.

* Revert "Temporarily disable flaky tests (#12520)"

This reverts commit 35ca13c.

* Revert "Add support for more req patterns for bilinear sampler backward (#12386)"

This reverts commit 4ee866f.

* Revert "Change the way NDArrayIter handle the last batch (#12285)"

This reverts commit 597a637.
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request Sep 19, 2018
* update sn-gan example

* fix naming

* add more comments

* fix naming and refine comments

* make power iteration as one hyperparameter

* deal with divided by zero problem

* replace 0.00000001 with EPSILON

* refactor the example

* add README

* address the feedback

* refine the composing

* fix the typo, delete the redundant piece of code and update the result image

* update folder name to align with others

* update image name

* add the variable back

* remove the redundant piece of code and fix typo
anirudh2290 pushed a commit to anirudh2290/mxnet that referenced this pull request Sep 19, 2018
* Revert "Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (apache#12362)"

This reverts commit ceabcaa.

* Revert "[MXNET-580] Add SN-GAN example (apache#12419)"

This reverts commit 46a5cee.

* Revert "Remove regression checks for website links (apache#12507)"

This reverts commit 619bc3e.

* Revert "Revert "Fix flaky test: test_mkldnn.test_activation apache#12377 (apache#12418)" (apache#12516)"

This reverts commit 7ea0533.

* Revert "further bump up tolerance for sparse dot (apache#12527)"

This reverts commit 90599e1.

* Revert "Fix broken URLs (apache#12508)"

This reverts commit 3d83c89.

* Revert "Temporarily disable flaky tests (apache#12520)"

This reverts commit 35ca13c.

* Revert "Add support for more req patterns for bilinear sampler backward (apache#12386)"

This reverts commit 4ee866f.

* Revert "Change the way NDArrayIter handle the last batch (apache#12285)"

This reverts commit 597a637.
@stu1130 stu1130 deleted the sn_gan_example branch February 7, 2020 19:20
Labels
Example, Gluon, pr-awaiting-review (PR is waiting for code review)