
why should update rollout policy in this way? #29

Closed
vanpersie32 opened this issue Aug 11, 2017 · 8 comments

Comments

@vanpersie32

According to the paper, the rollout policy is the same as the generator policy, so it should simply be self.Wi = self.lstm.Wi. But the code here updates the parameters of the rollout policy in a different way. Can you please explain why? Thank you very much @LantaoYu @wnzhang
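For context, the rollout update being asked about has roughly the following form (a simplified NumPy sketch, not the repository's actual TensorFlow code; update_rate, rollout_W, and generator_W are illustrative names standing in for self.update_rate, self.Wi, and self.lstm.Wi):

import numpy as np

# Illustrative values only.
update_rate = 0.8
rollout_W = np.zeros((3, 3))      # stands in for self.Wi (rollout policy)
generator_W = np.ones((3, 3))     # stands in for self.lstm.Wi (generator)

# The questioned update: the new rollout weight is a convex combination of
# its old value and the generator's current value, rather than a direct copy.
rollout_W = update_rate * rollout_W + (1 - update_rate) * generator_W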

@eduOS

eduOS commented Oct 9, 2017

I am also wondering why there should be a delay. But to make it the same as in the paper, you can just set the update_rate to 1.

@zichaow

zichaow commented Oct 16, 2017

I also noticed this; the update for the rollout seems to take the form of a convex combination of the parameters of the rollout and the generator. I wonder what the justification for such an update is.

@gcbanana

@eduOS To make it the same as in the paper, why set the update_rate to 1? Shouldn't it be set to 0?
self.Wi = self.update_rate * self.Wi + (1 - self.update_rate) * tf.identity(self.lstm.Wi)
After one training step of the generator, lstm.Wi has changed but self.Wi has not. If the rate is set to 1, then self.Wi = self.Wi and it will never change. This confuses me.
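A quick numeric check of the two extremes of that formula (illustrative values only, plain Python):

rollout_Wi = 0.0      # current rollout weight
lstm_Wi = 1.0         # generator weight after a training step

# update_rate = 0: the rollout copies the generator exactly, as in the paper.
print(0.0 * rollout_Wi + (1 - 0.0) * lstm_Wi)   # 1.0

# update_rate = 1: the rollout keeps its old value and ignores the generator.
print(1.0 * rollout_Wi + (1 - 1.0) * lstm_Wi)   # 0.0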

@eduOS

eduOS commented Jan 31, 2018

@gcbanana You are right. @vanpersie32 I learned that this trick is a regularization method, the so-called weight decay. Please see this: #21

@lucaslingle

lucaslingle commented Aug 28, 2018

I had the same question.

I don't think that this is weight decay, because it's not being applied to the gradients, and it's not decaying the rollout network's weights towards zero. Rather, it's updating them in a way that maintains an exponential moving average of the generator network weights.

I recently found a reinforcement learning paper which did the same thing, in a different context.
They said it improved the stability of the learning process.

In their case, they weren't using a rollout network, but the motivation here may be similar.

References:
[1] https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
[2] https://arxiv.org/pdf/1509.02971.pdf
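To make the moving-average reading above concrete, here is a minimal sketch (plain NumPy, illustrative names only) of how an update of the form r * rollout + (1 - r) * generator keeps the rollout weights as an exponential moving average of past generator weights, in the same spirit as the soft target-network updates in [2]:

import numpy as np

def ema_update(rollout_params, generator_params, decay):
    # Soft update: each rollout parameter moves toward its generator
    # counterpart, lagging behind with a time constant set by `decay`
    # (`decay` plays the role of update_rate in the code above).
    return [decay * r + (1 - decay) * g
            for r, g in zip(rollout_params, generator_params)]

# Toy run: the generator keeps moving; the rollout trails it smoothly.
rollout = [np.zeros(2)]
for step in range(1, 5):
    generator = [np.full(2, float(step))]
    rollout = ema_update(rollout, generator, decay=0.8)
    print(step, rollout[0])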

@vanpersie32

@lucaslingle You are right. Closing the issue.

@vanpersie32

vanpersie32 commented Aug 29, 2018

This is a trick for stabilizing the training process; setting the parameters of the rollout to be the same as the generator's will degrade the performance of SeqGAN.

@eduOS
Copy link

eduOS commented Aug 29, 2018

In fact, it is the same as L2 regularization. It keeps the weights small and hence stable, as stated in other comments.
