Update 2017-01-10-Deep-Learning-Research-Review-Week-3&#58;-Natural-Language-Processing.html
adeshpande3 authored Jan 10, 2017
1 parent ce7167b commit 7115678
Showing 1 changed file with 13 additions and 13 deletions.
---
<link href="https://afeld.github.io/emoji-css/emoji.css" rel="stylesheet">
<img src="/assets/Cover7th.png">
<p><em>This is the 3<sup>rd</sup> installment of a new series called Deep Learning Research Review. Every couple of weeks or so, I&rsquo;ll be summarizing and explaining research papers in specific subfields of deep learning. This week focuses on applying deep learning to Natural Language Processing. The </em><a href="https://adeshpande3.github.io/adeshpande3.github.io/Deep-Learning-Research-Review-Week-2-Reinforcement-Learning" target="_blank"><em>last post</em></a><em> was on Reinforcement Learning, and the <a href="https://adeshpande3.github.io/adeshpande3.github.io/Deep-Learning-Research-Review-Week-1-Generative-Adversarial-Nets" target="_blank">post</a>&nbsp;before that was on Generative Adversarial Networks, ICYMI.</em></p>
<h2><strong>Introduction to Natural Language Processing</strong></h2>
<p><span style="text-decoration: underline;">Introduction</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Natural language processing (NLP) is all about creating systems that process or &ldquo;understand&rdquo; language in order to perform certain tasks. These tasks could include</p>
<h2><strong>Word2Vec</strong></h2>
<img src="/assets/NLP10.png">
<p>Let&rsquo;s dig deeper into this. The above cost function is basically saying that we&rsquo;re going to add the log probabilities of &lsquo;I&rsquo; and &lsquo;love&rsquo; as well as &lsquo;NLP&rsquo; and &lsquo;love&rsquo; (where &lsquo;love&rsquo; is the center word in both cases). The variable T represents the number of training words (center-word positions) that we sum over. Let&rsquo;s look closer at that log probability.</p>
<img src="/assets/NLP11.png">
<p>V<sub>c</sub> is the word vector of the center word. Every word has two vector representations (U<sub>o</sub> and U<sub>w</sub>), one for when the word is used as the center word and one for when it&rsquo;s used as the outer word. The vectors are trained with stochastic gradient descent. This is definitely one of the more confusing equations to understand, so if you&rsquo;re still having trouble visualizing what&rsquo;s happening, you can go <a href="https://www.quora.com/How-does-word2vec-work" target="_blank">here</a> and <a href="https://www.youtube.com/watch?v=D-ekE-Wlcds" target="_blank">here</a> for additional resources.</p>
<p><strong>One Sentence Summary</strong>: Word2Vec seeks to find vector representations of different words by maximizing the log probability of context words given a center word and modifying the vectors through SGD.</p>
<p>(Optional: The authors of the <a href="https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf" target="_blank">paper</a> then go into more detail about how negative sampling and subsampling of frequent words can be used to get more precise word vectors.)</p>
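<p>(If code is easier to parse than notation, here&rsquo;s a minimal numpy sketch of that log probability. The matrix names and toy dimensions below are made up for illustration; this is not the paper&rsquo;s code.)</p>
<pre><code>import numpy as np

np.random.seed(0)
vocab_size, dim = 10, 5                        # toy vocabulary and embedding size
V = np.random.randn(vocab_size, dim) * 0.01    # center-word vectors (the v_c's)
U = np.random.randn(vocab_size, dim) * 0.01    # outer-word vectors (the u_o's / u_w's)

def log_prob_outer_given_center(o, c):
    """log p(o | c): softmax over u_w . v_c for every word w in the vocabulary."""
    scores = U.dot(V[c])                       # dot product of v_c with every outer vector
    scores -= scores.max()                     # for numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum())
    return log_softmax[o]

# e.g. the log probability of seeing word 3 in the context of center word 7
print(log_prob_outer_given_center(3, 7))
</code></pre>
<p>Training would then nudge U and V (via SGD) to make these log probabilities large for word pairs that actually co-occur.</p>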
<p>Arguably, the most interesting contribution of Word2Vec was the appearance of linear relationships between different word vectors. After training, the word vectors seemed to capture different grammatical and semantic concepts.</p>
<img src="/assets/NLP12.png">
<p>It&rsquo;s pretty incredible how these linear relationships could be formed through a simple objective function and optimization technique.</p>
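<p>(As a quick illustration of what &ldquo;linear relationships&rdquo; means in practice, the classic check is vector arithmetic followed by a nearest-neighbor lookup. The snippet below assumes you already have trained vectors in a plain dict; the function name is just a placeholder.)</p>
<pre><code>import numpy as np

def closest(target_vec, vectors, exclude=()):
    """Return the word whose vector has the highest cosine similarity to target_vec."""
    best_word, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        sim = vec.dot(target_vec) / (np.linalg.norm(vec) * np.linalg.norm(target_vec))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# vectors = {...}  # trained word vectors, e.g. loaded from a Word2Vec model
# analogy = vectors['king'] - vectors['man'] + vectors['woman']
# closest(analogy, vectors, exclude={'king', 'man', 'woman'})  # ideally returns 'queen'
</code></pre>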
<p><strong>Bonus</strong>: Another cool word vector initialization method: <a href="http://nlp.stanford.edu/pubs/glove.pdf" target="_blank">GloVe</a> (Combines the ideas of co-occurrence matrices with Word2Vec)</p>
<h2><strong>Recurrent Neural Networks (RNNs)</strong></h2>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Okay, so now that we have our word vectors, let&rsquo;s see how they fit into recurrent neural networks. RNNs are the go-to for most NLP tasks today. The big advantage of the RNN is that it is able to effectively use data from previous time steps. This is what a small piece of an RNN looks like.</p>
<img src="/assets/NLP13.png">
<h2><strong>Gated Recurrent Units (GRUs)</strong></h2>
<p>The key difference is that different weights are used for each gate. This is indicated by the differing superscripts. The update gate uses W<sup>z</sup> and U<sup>z</sup> while the reset gate uses W<sup>r</sup> and U<sup>r</sup>.</p>
<p>Now, the new memory container is computed as follows.</p>
<img src="/assets/NLP20.png">
<p>(The open dot indicates a <a href="https://en.wikipedia.org/wiki/Hadamard_product_(matrices)" target="_blank">Hadamard product</a>)</p>
<p>Now, if you take a closer look at the formulation, you&rsquo;ll see that if the reset gate unit is close to 0, then that whole term becomes 0 as well, thus ignoring the information in h<sub>t-1</sub> from the previous time steps. In this scenario, the unit is only a function of the new word vector x<sub>t</sub>.</p>
<p>The final formulation of h<sub>t</sub> is written as</p>
<img src="/assets/NLP21.png">
<h2><strong>Long Short Term Memory Units (LSTMs)</strong></h2>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; If you&rsquo;re comfortable with GRUs, then LSTMs won&rsquo;t be too far of a leap forward. An LSTM is also made up of a series of gates.</p>
<img src="/assets/NLP23.png">
<p>Definitely a lot more information to take in. Since this can be thought of as an extension to the idea behind a GRU, I won&rsquo;t go too far into the analysis, but for a more in-depth walkthrough of each gate and each piece of computation, check out Chris Olah&rsquo;s amazingly well-written <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" target="_blank">blog post</a>. It is by far the most popular tutorial on LSTMs and will definitely help those of you looking for more intuition as to why and how these units work so well.</p>
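<p>(For the code-minded, here&rsquo;s an LSTM step under one common formulation; exact variants differ slightly, so treat this as a sketch rather than a definitive implementation. Note the separate memory cell c, which the GRU doesn&rsquo;t have.)</p>
<pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step; p holds the weight matrices Wi, Ui, Wf, Uf, Wo, Uo, Wc, Uc."""
    i = sigmoid(p['Wi'].dot(x_t) + p['Ui'].dot(h_prev))        # input gate
    f = sigmoid(p['Wf'].dot(x_t) + p['Uf'].dot(h_prev))        # forget gate
    o = sigmoid(p['Wo'].dot(x_t) + p['Uo'].dot(h_prev))        # output gate
    c_tilde = np.tanh(p['Wc'].dot(x_t) + p['Uc'].dot(h_prev))  # candidate memory
    c = f * c_prev + i * c_tilde                               # memory cell keeps or forgets information
    h = o * np.tanh(c)                                         # hidden state passed to the next step
    return h, c

np.random.seed(0)
d_in, d_h = 50, 100
p = {k: np.random.randn(d_h, d_in if k.startswith('W') else d_h) * 0.01
     for k in ['Wi', 'Ui', 'Wf', 'Uf', 'Wo', 'Uo', 'Wc', 'Uc']}
h, c = lstm_step(np.random.randn(d_in), np.zeros(d_h), np.zeros(d_h), p)
</code></pre>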
<h2><strong>Comparing and Contrasting LSTMs and GRUs</strong></h2>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Let&rsquo;s start off with the similarities. Both of these units have the special function of being able to keep long term dependencies between words in a sequence. Long term dependencies refer to situations where two words or phrases may occur at very different time steps, but the relationship between them is still critical to solving the end goal. LSTMs and GRUs are able to capture these dependencies through gates that can ignore or keep certain information in the sequence.&nbsp;</p>
<p>The difference between the two units lies in the number of gates that they have (GRU &ndash; 2, LSTM &ndash; 3). This affects the number of nonlinearities the input passes through and ultimately affects the overall computation. The GRU also doesn&rsquo;t have the same memory cell (c<sub>t</sub>) that the LSTM has.</p>
<h2><strong>Before Getting Into the Papers</strong></h2>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Just want to make one quick note. There are a couple other deep models that are useful in NLP. Recursive neural networks and CNNs for NLP are sometimes used in practice, but aren&rsquo;t as prevalent as RNNs, which really are the backbone behind most deep learning NLP systems.</p>
<p>Alright. Now that we have a good understanding of deep learning in relation to NLP, let&rsquo;s look at some papers. Since there are numerous different problem areas within NLP (from machine translation to question answering), there are a number of papers that we could look into, but here are 3 that I found to be particularly insightful. 2016 had some great advancements in NLP, but let&rsquo;s first start with one from 2015.</p>
<h2><span style="text-decoration: underline;"><a href="https://arxiv.org/pdf/1410.3916v11.pdf" target="_blank"><strong>Memory Networks</strong></a></span></h2>
<p><span style="text-decoration: underline;">Introduction</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The first paper we&rsquo;re going to talk about is a quite influential publication in the subfield of Question Answering. Authored by Jason Weston, Sumit Chopra, and Antoine Bordes, this paper introduced a class of models called memory networks.</p>
<p>The intuitive idea is that in order to accurately answer a question regarding a piece of text, you need to somehow store the initial information given to you. If I were to ask you the question &ldquo;What does RNN stand for?&rdquo; (assuming you&rsquo;ve read this post fully), you&rsquo;ll be able to give me an answer because the information you absorbed by reading the first part of this post was stored somewhere in your memory. You just had to take a few seconds to locate that info and articulate it in words. Now, I have no clue how the brain is able to do that, but the idea of having a storage place for this information still remains.</p>
<img src="/assets/NLP30.png">
<p>For those interested, these are some papers that built off of this memory network approach:</p>
<ul>
<li><a href="https://arxiv.org/pdf/1503.08895v5.pdf">End to End Memory Networks</a> (only requires supervision on outputs, not supporting sentences)</li>
<li><a href="https://arxiv.org/pdf/1506.07285v5.pdf">Dynamic Memory Networks</a></li>
<li><a href="https://arxiv.org/pdf/1611.01604v2.pdf">Dynamic Coattention Networks</a> (Just got released 2 months ago and had the highest test score on Stanford&rsquo;s Question Answering Dataset at the time)</li>
<li><a href="https://arxiv.org/pdf/1503.08895v5.pdf" target="_blank">End to End Memory Networks</a> (only requires supervision on outputs, not supporting sentences)</li>
<li><a href="https://arxiv.org/pdf/1506.07285v5.pdf" target="_blank">Dynamic Memory Networks</a></li>
<li><a href="https://arxiv.org/pdf/1611.01604v2.pdf" target="_blank">Dynamic Coattention Networks</a> (Just got released 2 months ago and had the highest test score on Stanford&rsquo;s Question Answering Dataset at the time)</li>
</ul>
<h2><span style="text-decoration: underline;"><a href="https://arxiv.org/pdf/1503.00075v3.pdf" target="_blank"><strong>Tree LSTMs for Sentiment Analysis</strong></a></span></h2>
<p><span style="text-decoration: underline;">Introduction</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The next paper looks into an advancement in Sentiment Analysis, the task of determining whether a phrase has a positive or negative connotation/meaning. More formally, sentiment can be defined as &ldquo;a view or attitude toward a situation or event&rdquo;. At the time, LSTMs were the most commonly used units in sentiment analysis networks. Authored by Kai Sheng Tai, Richard Socher, and Christopher Manning, this paper introduces a novel way of chaining together LSTMs in a non-linear structure. &nbsp;</p>
<p>The motivation behind this non-linear arrangement lies in the notion that natural language exhibits the property that words in sequence become phrases. These phrases, depending on the order of the words, can hold different meanings from their original word components. In order to represent this characteristic, a network of LSTM units must be arranged into a tree structure where different units are affected by their child nodes.</p>
Expand All @@ -145,7 +145,7 @@ <h2><span style="text-decoration: underline;"><a href="https://arxiv.org/pdf/150
<img src="/assets/NLP28.png">
<p>With this new tree-based structure, there are some mathematical changes including child units having forget gates. For those interested in the details, check the paper for more info. What I would like to focus on, however, is understanding why these models work better than a linear LSTM.</p>
<p>With a Tree-LSTM, a single unit is able to incorporate the hidden states of all of its child nodes. This is interesting because a unit is able to value each of its child nodes differently. During training, the network could realize that a specific word (maybe the word &ldquo;not&rdquo; or &ldquo;very&rdquo; in sentiment analysis) is extremely important to the overall sentiment of the sentence. The ability to value that node higher provides a lot of flexibility to the network and could improve performance.</p>
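<p>(Here&rsquo;s roughly what a single node of the Child-Sum variant of the Tree-LSTM looks like in numpy, as I understand it from the paper: the gates are driven by the sum of the children&rsquo;s hidden states, but each child gets its own forget gate. The weight names are placeholders; check the paper for the exact equations.)</p>
<pre><code>import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_lstm_node(x_j, children, p):
    """One Child-Sum Tree-LSTM node; children is a list of (h_k, c_k) pairs from its child nodes."""
    h_sum = sum(h for h, _ in children) if children else np.zeros(p['Ui'].shape[1])
    i = sigmoid(p['Wi'].dot(x_j) + p['Ui'].dot(h_sum))          # input gate
    o = sigmoid(p['Wo'].dot(x_j) + p['Uo'].dot(h_sum))          # output gate
    u = np.tanh(p['Wu'].dot(x_j) + p['Uu'].dot(h_sum))          # candidate memory
    c = i * u
    for h_k, c_k in children:                                   # each child gets its own forget gate,
        f_k = sigmoid(p['Wf'].dot(x_j) + p['Uf'].dot(h_k))      # so the node can weight children differently
        c = c + f_k * c_k
    h = o * np.tanh(c)
    return h, c

np.random.seed(0)
d_in, d_h = 50, 100
p = {k: np.random.randn(d_h, d_in if k.startswith('W') else d_h) * 0.01
     for k in ['Wi', 'Ui', 'Wo', 'Uo', 'Wu', 'Uu', 'Wf', 'Uf']}
leaf = tree_lstm_node(np.random.randn(d_in), [], p)             # a leaf has no children
root = tree_lstm_node(np.random.randn(d_in), [leaf, leaf], p)   # an internal node combines its children
</code></pre>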
<h2><span style="text-decoration: underline;"><a href="https://arxiv.org/pdf/1609.08144v2.pdf" target="_blank"><strong>Neural Machine Translation</strong></a></span></h2>
<p><span style="text-decoration: underline;">Introduction</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; The last paper we&rsquo;ll look at today describes an approach to the task of Machine Translation. Authored by Google ML visionaries Jeff Dean, Greg Corrado, Oriol Vinyals, and others, this paper introduced a machine translation system that serves as the backbone behind Google&rsquo;s popular Translate service. The system reduced translation errors by an average of 60% compared to the previous production system Google used.</p>
<p>Traditional approaches to automated translation include variants of phrase-based matching. This approach required large amounts of linguistic domain knowledge, and ultimately its design proved to be too brittle and lacked generalization ability. One of the problems with the traditional approach was that it would try to translate the input sentence piece by piece. It turns out the more effective approach (that NMT uses) is to translate the whole sentence at once, thus allowing for a broader context and a more natural rearrangement of words.</p>
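<p>(To make the &ldquo;whole sentence at once&rdquo; idea concrete, here&rsquo;s a toy greedy encoder-decoder loop in numpy. This is emphatically not Google&rsquo;s architecture, which stacks deep LSTMs and adds attention among many other things; it just shows the read-everything-then-generate flow, with made-up names and untrained weights.)</p>
<pre><code>import numpy as np

np.random.seed(0)
d = 16                                          # toy hidden / embedding size
src_vocab, tgt_vocab = 20, 20                   # toy vocabulary sizes (id 0 = end-of-sentence)
E_src = np.random.randn(src_vocab, d) * 0.1     # source-word embeddings
E_tgt = np.random.randn(tgt_vocab, d) * 0.1     # target-word embeddings
W_enc = np.random.randn(d, 2 * d) * 0.1         # encoder step weights
W_dec = np.random.randn(d, 2 * d) * 0.1         # decoder step weights
W_out = np.random.randn(tgt_vocab, d) * 0.1     # maps the decoder state to target-word scores

def step(W, x, h):
    return np.tanh(W.dot(np.concatenate([x, h])))

def translate(src_ids, max_len=10):
    h = np.zeros(d)
    for w in src_ids:                           # 1. encode the WHOLE source sentence first
        h = step(W_enc, E_src[w], h)
    out, prev = [], 0
    for _ in range(max_len):                    # 2. then generate target words one at a time
        h = step(W_dec, E_tgt[prev], h)
        prev = int(np.argmax(W_out.dot(h)))     # greedily pick the highest-scoring word
        if prev == 0:                           # stop at the end-of-sentence id
            break
        out.append(prev)
    return out

print(translate([4, 7, 2]))                     # gibberish with untrained weights, but shows the flow
</code></pre>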
<p>The rest of the paper mainly focuses on the challenges associated with deploying such a service at scale. Topics such as the amount of computational resources required, latency, and high-volume deployment are discussed at length.</p>
<h2><strong>Conclusion</strong></h2>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; With that, we conclude this post on how deep learning can contribute to natural language processing tasks. In my mind, some future goals in the field could be to improve customer service chatbots, perfect machine translation, and hopefully get question answering systems to obtain a deeper understanding of unstructured or lengthy pieces of text (like Wikipedia pages).</p>
<p>&nbsp;Special thanks to Richard Socher and the staff behind <a href="http://cs224d.stanford.edu/index.html" target="_blank">Stanford CS 224D</a>. Great slides (most of the images are attributed to their slides) and fantastic lectures.</p>
<p>Deuces. <i class="em em-v"></i></p>
<a href="/assets/Sources7.txt" target="_blank">Sources</a>
<p></p>