mivadi/NLP2
Natural Language Processing 2 (2018)

This course included two projects on building machine translation models.

Project 1: IBM 1 and IBM 2

This study applies two classical translation models, IBM Model 1 and IBM Model 2, training them on bilingual corpora to infer word alignments between sentence pairs. Besides standard Expectation-Maximization (EM) training, a Variational Bayes variant was also implemented for IBM Model 1. The performance of both models was measured by the perplexity on the training data and the alignment error rate (AER) on the validation data.
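As a rough illustration of the EM training described above (a minimal sketch, not the project's actual code), the following trains IBM Model 1 lexical translation probabilities t(f|e) on a toy corpus. A NULL token is prepended to each source sentence so target words can align to nothing; the function and variable names are assumptions for this example.

```python
from collections import defaultdict

def ibm1_em(corpus, iterations=10):
    """Train IBM Model 1 translation probabilities t(f|e) with EM.

    corpus: list of (source_tokens, target_tokens) sentence pairs.
    """
    # Prepend a NULL source token so target words may align to nothing.
    corpus = [(["NULL"] + e, f) for e, f in corpus]
    # Uniform initialisation of t(f|e) over the target vocabulary.
    f_vocab = {f for _, fs in corpus for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        # E-step: collect expected alignment counts under the current t.
        for es, fs in corpus:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    delta = t[(f, e)] / norm
                    count[(f, e)] += delta
                    total[e] += delta
        # M-step: re-estimate t(f|e) from the expected counts.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t
```

On a pair of parallel sentences such as ("the house", "la maison") and ("the", "la"), a few EM iterations are enough for t("maison" | "house") to dominate t("maison" | "the"), which is the co-occurrence signal IBM Model 1 exploits.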

Project 2: Neural Machine Translation Model

Abstract: This project applies Neural Machine Translation to a parallel corpus. The model is a sequence-to-sequence architecture with positional embeddings and several encoder/decoder variants: linear, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). An attention mechanism with dot-product and bilinear scoring was implemented in the decoder so that it can focus on specific parts of the source sentence. Finally, the output sentences from the model were evaluated with four different metrics.
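The two attention score functions mentioned above can be sketched as follows (a NumPy illustration under assumed shapes, not the project's implementation): given a decoder query vector and encoder key/value matrices of shape (T, d), both variants softmax their scores over source positions and return a context vector. The weight matrix `W` in the bilinear variant is a hypothetical learned parameter.

```python
import numpy as np

def _softmax(scores):
    # Numerically stable softmax over source positions.
    w = np.exp(scores - scores.max())
    return w / w.sum()

def dot_attention(query, keys, values):
    """Dot-product attention: score(q, k) = q . k."""
    weights = _softmax(keys @ query)        # (T,)
    return weights @ values, weights        # context vector, weights

def bilinear_attention(query, keys, values, W):
    """Bilinear attention: score(q, k) = q^T W k, with learned W."""
    weights = _softmax(keys @ (W @ query))
    return weights @ values, weights
```

With `W` set to the identity matrix the bilinear score reduces to the dot product, which is a convenient sanity check; in training, `W` lets the model learn a mismatch between decoder and encoder representation spaces.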
