Update 2016-11-16-Deep-Learning-Research-Review-Week-2&#58;-Reinforcement-Learning.html
adeshpande3 authored Nov 16, 2016
1 parent bcbf571 commit 7b67ddc
Showing 1 changed file with 5 additions and 5 deletions.
<h2><strong>Introduction to Reinforcement Learning</strong></h2>
<img src="/assets/IRL1.png">
<p>The first category, <strong>supervised learning</strong>, is the one you may be most familiar with. It relies on the idea of creating a function or model based on a set of training data, which contains inputs and their corresponding labels. Convolutional Neural Networks are a great example of this, as the images are the inputs and the outputs are the classifications of the images (dog, cat, etc).</p>
<p><strong>Unsupervised learning</strong> seeks to find some sort of structure within data through methods of cluster analysis. One of the most well-known ML clustering algorithms, K-Means, is an example of unsupervised learning.</p>
<p><strong>Reinforcement learning</strong> is the task of learning what actions to take, given a certain situation/environment, so as to maximize a reward signal. The interesting difference between supervised and reinforcement learning is that this reward signal simply tells you whether the action (or input) that the agent takes is good or bad. It doesn&rsquo;t tell you anything about what the <em>best</em> action is. Contrast this to CNNs where the corresponding label for each image input is a definite instruction of what the output should be for each input.&nbsp; Another unique component of RL is that an agent&rsquo;s actions will affect the subsequent data it receives. For example, an agent&rsquo;s action of moving left instead of right means that the agent will receive different input from the environment at the next time step. Let&rsquo;s look at an example to start off.</p>
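<p>To make the agent-environment loop concrete, here is a minimal Python sketch (the two-cell world, the action names, and the rewards are my own illustration, not something from the post): at each time step the agent picks an action, and the environment answers with a reward and a next state that depends on that action.</p>
<pre><code class="language-python">import random

# A toy interaction loop: the agent lives in a 2-cell corridor (cells 0 and 1)
# and can move "left" or "right". The reward signal only says how good the
# outcome was; it never says which action would have been best.
def step(state, action):
    next_state = min(state + 1, 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 1 else 0.0   # good/bad feedback, nothing more
    return next_state, reward

state, total_reward = 0, 0.0
for t in range(5):
    action = random.choice(["left", "right"])  # no learning yet, just interaction
    state, reward = step(state, action)        # the action changes the data we see next
    total_reward += reward
    print(t, action, state, reward)
print("return:", total_reward)
</code></pre>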
<p><span style="text-decoration: underline;">The RL Problem</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; So, let&rsquo;s first think about what we have in a reinforcement learning problem. Let&rsquo;s imagine a tiny robot in a small room. We haven&rsquo;t programmed this robot to move or walk or take any action. It&rsquo;s just standing there. This robot is our <strong>agent</strong>.</p>
<img src="/assets/IRL2.png">
<img src="/assets/IRL4.png">
<li><strong>Action-value function Q</strong>: The expected return from being in a state S, following a policy &pi;, and taking an action a (the equation is the same as the one above, except that we add the condition that A<sub>t</sub> = a). A short numerical sketch of these quantities follows this list.</li>
</ol>
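<p>Here is a small numerical sketch of these definitions (the discount factor and the sampled reward sequences are made up for illustration): the return is the discounted sum of rewards, and since V<sub>&pi;</sub>(s) is an <em>expected</em> return, we can approximate it by averaging the returns observed over several runs that start in s and follow &pi;.</p>
<pre><code class="language-python"># Illustrative numbers only: a discounted return and a crude estimate of V_pi(s).
gamma = 0.9  # discount factor

def discounted_return(rewards):
    # G_t = R_{t+1} + gamma*R_{t+2} + gamma^2*R_{t+3} + ...
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

# Pretend we followed the policy pi from state s three times and logged the rewards.
episodes_from_s = [[0.0, 0.0, 1.0], [0.0, 1.0], [0.0, 0.0, 0.0, 1.0]]

# V_pi(s) is an expectation over such returns, so average the sampled ones.
v_estimate = sum(discounted_return(ep) for ep in episodes_from_s) / len(episodes_from_s)
print("estimated V_pi(s):", round(v_estimate, 3))
# Q_pi(s, a) would be the same average, restricted to runs whose first action was a.
</code></pre>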
<p>Now that we have all the components, what do we do with this MDP? Well, we want to solve it, of course. By solving an MDP, you&rsquo;ll be able to find the optimal behavior (policy) that maximizes the amount of reward the agent can expect to get from any state in the environment.</p>
<p><span style="text-decoration: underline;">Solving the MDP</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; We can solve an MDP and get the optimal policy through the use of dynamic programming, specifically through the use of <strong>policy iteration </strong>(there is another technique called value iteration, but we won&rsquo;t go into that right now). The idea is that we take some initial policy &pi;<sub>1</sub> and evaluate the state value function for that policy. The way we do this is through the <strong>Bellman expectation equation</strong>.</p>
<img src="/assets/IRL5.png">
<img src="/assets/IRL7.png">
<p>Now, we&rsquo;re going to go through the same process of policy evaluation and policy improvement, except we replace our state value function V with our action value function Q. I&rsquo;m going to skip over the details of what changes in the evaluation/improvement steps. Model-free evaluation and improvement methods, such as Monte Carlo Learning, Temporal Difference Learning, and SARSA, would require whole blogs of their own (if you are interested, though, please take a listen to David Silver&rsquo;s <a href="https://www.youtube.com/watch?v=PnHCvfgC_ZA">Lecture 4</a> and <a href="https://www.youtube.com/watch?v=0g4j2k_Ggc4">Lecture 5</a>). Right now, however, I&rsquo;m going to jump ahead to value function approximation and the methods discussed in the AlphaGo and Atari papers, and hopefully that should give you a taste of modern RL techniques. <strong>The main takeaway is that we want to find the optimal policy &pi;<sup>*</sup> that maximizes our action value function Q.</strong></p>
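<p>Just to give a flavor of what one of those model-free methods looks like (this is my own minimal sketch, not something covered in the post), here is the core tabular SARSA update: nudge Q(S, A) toward the observed reward plus the discounted value of the next state-action pair.</p>
<pre><code class="language-python">from collections import defaultdict

alpha, gamma = 0.1, 0.9          # learning rate and discount factor (illustrative values)
Q = defaultdict(float)           # Q[(state, action)] starts at 0 for every pair

def sarsa_update(s, a, r, s_next, a_next):
    # Temporal-difference update: move Q(s, a) toward r + gamma * Q(s', a').
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

# One hypothetical transition: from state 0 we took "right", received reward 1.0,
# landed in state 1, and chose "right" again there.
sarsa_update(0, "right", 1.0, 1, "right")
print(Q[(0, "right")])           # 0.1 after this single update
</code></pre>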
<p><span style="text-decoration: underline;">Value Function Approximation</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; So, if you think about everything we&rsquo;ve learned up until this point, we&rsquo;ve treated our problem in a relatively simplistic way. Look at the above Q equation. We&rsquo;re taking in a specific state S and action A, and then computing a number that basically tells us what the expected return is. Now let&rsquo;s imagine that our agent moves 1 millimeter to the right. This means we have a whole new state S&rsquo;, and now we&rsquo;re going to have to compute a Q value for that. In real-world RL problems, there are millions and millions of states, so it&rsquo;s important that our value functions can generalize; we don&rsquo;t want to store a completely separate value for every possible state. The solution is to use a <strong>Q value function approximation </strong>that is able to generalize to unknown states.</p>
<p>So, what we want is some function, let&rsquo;s call it Qhat, that gives a rough approximation of the Q value given some state S and some action A.</p>
<img src="/assets/IRL8.png">
<p>This function is going to take in S, A, and a good old weight vector W (Once you see that W, you already know we&rsquo;re bringing in some gradient descent <i class="em em-grinning"></i>). It is going to compute the dot product between x (which is just a feature vector that represents S and A) and W. The way we&rsquo;re going to improve this function is by calculating the loss between the true Q value (let&rsquo;s just assume that it&rsquo;s given to us for now) and the output of the approximate function.</p>
<img src="/assets/IRL9.png">
<p>After we compute the loss, we use gradient descent to find the minimum value, at which point we will have our optimal W vector. This idea of function approximation is going to be very key when taking a look at the papers a little later.</p>
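<p>Here is a rough sketch of that idea, a linear Qhat trained by gradient descent (the feature function, the handful of &ldquo;true&rdquo; Q targets, and the learning rate are all made up for illustration):</p>
<pre><code class="language-python">import numpy as np

rng = np.random.default_rng(0)

def x(state, action):
    # Feature vector for the (state, action) pair (purely illustrative choice of features).
    return np.array([state, 1.0 if action == "right" else 0.0, 1.0])

def q_hat(state, action, w):
    return x(state, action) @ w              # Qhat(S, A, W) = dot product of x(S, A) and W

w = np.zeros(3)                              # the weight vector W we will learn
alpha = 0.05                                 # gradient descent step size

# Pretend an oracle hands us the true Q values for a few (state, action) pairs.
data = [((0.0, "right"), 1.0), ((1.0, "right"), 0.5), ((1.0, "left"), 0.0)]

for _ in range(500):
    (state, action), q_true = data[rng.integers(len(data))]
    error = q_true - q_hat(state, action, w)  # the squared-loss gradient w.r.t. W is -error * x
    w += alpha * error * x(state, action)     # step W so the approximation moves toward the true Q

print("learned W:", np.round(w, 2))
</code></pre>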
<p><span style="text-decoration: underline;">Just One More Thing</span></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Before getting to the papers, I just wanted to touch on one last thing. An interesting discussion within reinforcement learning is that of exploration vs. exploitation. <strong>Exploitation</strong> is the agent&rsquo;s process of taking what it already knows, and then taking the actions that it knows will produce the maximum reward. This sounds great, right? The agent will always be taking the best action based on its current knowledge. However, there is a key phrase in that statement. <em>Current knowledge</em>. If the agent hasn&rsquo;t explored enough of the state space, it can&rsquo;t possibly know whether it is really taking the best possible action. This idea of taking actions with the main purpose of exploring the state space is called <strong>exploration</strong>.</p>
<p>This idea can be easily related to a real-world example. Let&rsquo;s say you have a choice of what restaurant to eat at tonight. You (acting as the agent) know that you like Mexican food, so in RL terms, going to a Mexican restaurant will be the action that maximizes your reward, or happiness/satisfaction in this case. However, there is also a choice of Italian food, which you&rsquo;ve never had before. There&rsquo;s a possibility that it could be better than Mexican food, or it could be a lot worse. This tradeoff between exploiting an agent&rsquo;s past knowledge and trying something new in the hope of discovering a greater reward is one of the major challenges in reinforcement learning (and in our daily lives tbh).</p>
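<p>The post doesn&rsquo;t name a specific mechanism for balancing the two, but a common one is &epsilon;-greedy action selection: exploit the best-known option most of the time, and with a small probability &epsilon; try something else. A minimal sketch, with made-up restaurants and reward estimates:</p>
<pre><code class="language-python">import random

epsilon = 0.1                                         # how often we explore instead of exploit
estimated_reward = {"mexican": 8.5, "italian": 0.0}   # "italian" is untried, so its estimate is just a guess

def choose_restaurant():
    if random.random() >= epsilon:
        # Exploit: pick the option our current knowledge says is best.
        return max(estimated_reward, key=estimated_reward.get)
    # Explore: pick at random, which is the only way to find out if Italian is actually better.
    return random.choice(list(estimated_reward))

print(choose_restaurant())
</code></pre>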
<p><strong>Other Resources for Learning RL</strong></p>
<p><strong>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </strong>Phew. That was a lot of info. By no means, however, was that a comprehensive overview of the field. If you&rsquo;d like a more in-depth overview of RL, I&rsquo;d strongly recommend these resources.</p>