Consistency with the original paper? #6

Open
amirj opened this issue Jan 30, 2017 · 9 comments

amirj commented Jan 30, 2017

Were you able to reproduce the language modeling results from the original paper?


amirj commented Feb 1, 2017

Or the translation experiments?

@paarthneekhara (Owner)

No, I could not, and I'm not sure why. Check out this alternate implementation: https://github.com/buriburisuri/ByteNet. Judging from its results, though, I don't think it reproduces the paper's numbers either.


tindzk commented Apr 2, 2017

Thanks for the great work! I compared the current implementation with the paper and noticed a few discrepancies:

  • Section 3.3 only applies to the translation model, yet the implemented prediction model uses an embedding of size n × 2d (w_source_embedding). The paper does not state the embedding size for the prediction model, but intuitively I would set it to n × c, with c being the number of characters.

  • The authors write: "Each layer is wrapped in a residual block that contains additional convolutional layers with filters of size 1 × 1 (He et al., 2016)." decode_layer() does not seem to initialise any filters of size 1 × 1.

  • The authors use Multiplicative Units instead of ReLUs for the prediction model (section 3.6). See section 4.1 of this paper for an explanation of Multiplicative Units.

  • Layer Normalisation should be applied before the activation function (section 3.6). (See the sketch after this list for the last two points.)
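
For reference, a rough sketch combining the last two points, assuming "this paper" refers to the Video Pixel Networks paper, where section 4.1 defines MU(h) = g1 ⊙ tanh(g2 ⊙ h + g3 ⊙ u). The helper names and the use of 1 × 1 convolutions here are guesses, not code from this repo:

```python
# Hypothetical sketch only; none of these helpers exist in this repo.
import tensorflow as tf

def _conv1d(x, out_channels, name):
    # 1x1 convolution over a [batch, time, channels] tensor.
    in_channels = x.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        w = tf.get_variable('w', [1, in_channels, out_channels],
                            initializer=tf.truncated_normal_initializer(stddev=0.02))
        b = tf.get_variable('b', [out_channels], initializer=tf.zeros_initializer())
        return tf.nn.conv1d(x, w, stride=1, padding='SAME') + b

def _layer_norm(x, name, epsilon=1e-6):
    # Normalise over the channel dimension, then apply a learned gain and bias.
    channels = x.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        gain = tf.get_variable('gain', [channels], initializer=tf.ones_initializer())
        bias = tf.get_variable('bias', [channels], initializer=tf.zeros_initializer())
        mean, variance = tf.nn.moments(x, axes=[-1], keep_dims=True)
        return gain * (x - mean) * tf.rsqrt(variance + epsilon) + bias

def multiplicative_unit(h, name):
    # MU(h) = g1 * tanh(g2 * h + g3 * u), with layer norm before each non-linearity.
    channels = h.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        u  = tf.tanh(_layer_norm(_conv1d(h, channels, 'conv_u'), 'ln_u'))
        g1 = tf.sigmoid(_layer_norm(_conv1d(h, channels, 'conv_g1'), 'ln_g1'))
        g2 = tf.sigmoid(_layer_norm(_conv1d(h, channels, 'conv_g2'), 'ln_g2'))
        g3 = tf.sigmoid(_layer_norm(_conv1d(h, channels, 'conv_g3'), 'ln_g3'))
        return g1 * tf.tanh(g2 * h + g3 * u)
```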

@speakerjohnash

Can you say which parts of the code you are confident align with the paper, and which parts you are uncertain about?

@paarthneekhara (Owner)

@tindzk

  1. Yes, the authors do not mention the embedding size for the prediction model. You can try setting it to c, but intuitively I don't think that will significantly improve performance.

  2. The decode layer does in fact wrap the dilated convolution between 1 × 1 convolutions (see conv1 and conv2 below); the default filter width in ops.conv1d is 1.
```python
def decode_layer(self, input_, dilation, layer_no):
    options = self.options

    relu1 = tf.nn.relu(input_, name='dec_relu1_layer{}'.format(layer_no))
    conv1 = ops.conv1d(relu1, options['residual_channels'],
                       name='dec_conv1d_1_layer{}'.format(layer_no))

    relu2 = tf.nn.relu(conv1, name='dec_relu2_layer{}'.format(layer_no))
    dilated_conv = ops.dilated_conv1d(relu2, options['residual_channels'],
                                      dilation, options['decoder_filter_width'],
                                      causal=True,
                                      name='dec_dilated_conv_layer{}'.format(layer_no))

    relu3 = tf.nn.relu(dilated_conv, name='dec_relu3_layer{}'.format(layer_no))
    conv2 = ops.conv1d(relu3, 2 * options['residual_channels'],
                       name='dec_conv1d_2_layer{}'.format(layer_no))

    # Residual connection around the 1x1 -> dilated -> 1x1 stack.
    return input_ + conv2
```
    
  3. Yes, the multiplicative units have not been implemented for the prediction model. You can try adding them to check whether that improves performance.

  4. Layer normalisation seems to be a later revision of the paper; the earlier version used sub-batch normalisation (see https://arxiv.org/pdf/1610.10099v1.pdf). I'll implement this soon.
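
For what it's worth, a hedged sketch of what point 4 could look like: the decode_layer quoted above with layer normalisation inserted before each ReLU. It reuses the repo's ops.conv1d / ops.dilated_conv1d and the _layer_norm helper sketched earlier in the thread; it is a sketch, not the actual fix:

```python
# Hedged sketch, not a commit from this repo.
def decode_layer(self, input_, dilation, layer_no):
    options = self.options

    relu1 = tf.nn.relu(_layer_norm(input_, 'dec_ln1_layer{}'.format(layer_no)),
                       name='dec_relu1_layer{}'.format(layer_no))
    conv1 = ops.conv1d(relu1, options['residual_channels'],
                       name='dec_conv1d_1_layer{}'.format(layer_no))

    relu2 = tf.nn.relu(_layer_norm(conv1, 'dec_ln2_layer{}'.format(layer_no)),
                       name='dec_relu2_layer{}'.format(layer_no))
    dilated_conv = ops.dilated_conv1d(relu2, options['residual_channels'],
                                      dilation, options['decoder_filter_width'],
                                      causal=True,
                                      name='dec_dilated_conv_layer{}'.format(layer_no))

    relu3 = tf.nn.relu(_layer_norm(dilated_conv, 'dec_ln3_layer{}'.format(layer_no)),
                       name='dec_relu3_layer{}'.format(layer_no))
    conv2 = ops.conv1d(relu3, 2 * options['residual_channels'],
                       name='dec_conv1d_2_layer{}'.format(layer_no))

    return input_ + conv2
```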

@paarthneekhara (Owner)

@msevrens
I think that, except for layer normalisation, everything is in line with the paper. I cannot guarantee a bug-free implementation, though, since the results do not match the paper's. I'll be working on this in the coming week.

@speakerjohnash

@paarthneekhara

If you replace the output embedding targets with word embeddings, your code actually translates semantically very well but syntactically poorly. It also overestimates common tokens in translation.

I think you can see the same pattern when the output targets are character embeddings. In that case, uncommon sequences of characters are predicted easily even when they are long and complex, but random injections of common tokens are interspersed among the correctly learned sequences.

I don't know whether that information helps anyone trying to reproduce the paper, but there it is.

@speakerjohnash

@paarthneekhara

Why is the target embedding going into the decoder? How does that work at evaluation time with no target to feed?
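
For context (this is not spelled out anywhere in the thread): the usual arrangement is teacher forcing, so the decoder sees the shifted target during training, and at evaluation time you feed back your own predictions one character at a time. A minimal sketch of greedy decoding under that assumption:

```python
# Hypothetical sketch; sess, probs_op, source_ph and target_ph are made-up
# names, not identifiers from this repo. At evaluation time the "target" fed
# to the decoder is simply whatever has been generated so far.
import numpy as np

def greedy_decode(sess, probs_op, source_ph, target_ph, source_ids,
                  start_id, end_id, max_len=200):
    generated = [start_id]
    for _ in range(max_len):
        feed = {source_ph: [source_ids], target_ph: [generated]}
        probs = sess.run(probs_op, feed_dict=feed)        # [1, time, vocab]
        next_id = int(np.argmax(probs[0, len(generated) - 1]))
        generated.append(next_id)                         # feed the prediction back in
        if next_id == end_id:
            break
    return generated[1:]
```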

@paarthneekhara (Owner)

Just an update in case you are still following this: I rewrote almost the entire model. There was a bug in the encoder of the translation model (in the dilated conv1d implementation). I have updated the ops and training is in progress. It looks good so far, but it will take time to train since I am not using a powerful GPU.

Also, the model is now neater and faster. It accepts variable-length sentences, as opposed to the previous implementation, in which the sentence length had to be specified when initializing the model.
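
Since the fix was in the dilated conv1d op, here is a hedged sketch of one standard way to build a causal dilated 1-D convolution; this is a generic illustration, not the repo's updated ops.dilated_conv1d:

```python
# Hedged sketch of a causal dilated 1-D convolution. Left-padding by
# (filter_width - 1) * dilation keeps the convolution causal (output position t
# only sees inputs at positions <= t) and the output length equals the input length.
import tensorflow as tf

def causal_dilated_conv1d(x, out_channels, filter_width, dilation, name):
    in_channels = x.get_shape().as_list()[-1]
    with tf.variable_scope(name):
        w = tf.get_variable('w', [filter_width, in_channels, out_channels],
                            initializer=tf.truncated_normal_initializer(stddev=0.02))
        pad = (filter_width - 1) * dilation
        x_padded = tf.pad(x, [[0, 0], [pad, 0], [0, 0]])  # pad only on the left of the time axis
        return tf.nn.convolution(x_padded, w, padding='VALID',
                                 dilation_rate=[dilation])
```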
