From 8506d5e5972685df1a9fca85703b35149ce83ff6 Mon Sep 17 00:00:00 2001 From: Nelson Alves Date: Sun, 1 Nov 2020 18:01:11 -0300 Subject: [PATCH] :poop: Adiciona notebook de PPO --- .vscode/settings.json | 3 + .../Actor Critic/PPO/PPO.ipynb" | 438 ++++++++++++++++++ 2 files changed, 441 insertions(+) create mode 100644 .vscode/settings.json create mode 100644 "Aprendizado por Refor\303\247o Profundo/Actor Critic/PPO/PPO.ipynb" diff --git a/.vscode/settings.json b/.vscode/settings.json new file mode 100644 index 0000000..ccbee1c --- /dev/null +++ b/.vscode/settings.json @@ -0,0 +1,3 @@ +{ + "python.pythonPath": "/home/nelson/anaconda3/envs/torch/bin/python" +} \ No newline at end of file diff --git "a/Aprendizado por Refor\303\247o Profundo/Actor Critic/PPO/PPO.ipynb" "b/Aprendizado por Refor\303\247o Profundo/Actor Critic/PPO/PPO.ipynb" new file mode 100644 index 0000000..250a364 --- /dev/null +++ "b/Aprendizado por Refor\303\247o Profundo/Actor Critic/PPO/PPO.ipynb" @@ -0,0 +1,438 @@ +{ + "metadata": { + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.5-final" + }, + "orig_nbformat": 2, + "kernelspec": { + "name": "Python 3.8.5 64-bit ('torch': conda)", + "display_name": "Python 3.8.5 64-bit ('torch': conda)", + "metadata": { + "interpreter": { + "hash": "a5cd74ba85a3b6a037c59ac3f3634fcdd9437555c9fe253dd51f04000fcd493e" + } + } + } + }, + "nbformat": 4, + "nbformat_minor": 2, + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Proximal Policy Optimization (PPO)\n", + "\n", + "Como vimos na aula de A2C, uma função objetivo muito utilizada é:\n", + "\n", + "$$\n", + " J(\\theta) = \\mathbb{E}_{s,a\\sim\\pi_\\theta} [A^{\\pi_\\theta}_w(s,a)], \\qquad\n", + " \\nabla_\\theta J(\\theta) = \\mathbb{E}_{s,a\\sim\\pi_\\theta} [\\nabla_\\theta \\log \\pi_\\theta(a|s)\\cdot A^{\\pi_\\theta}_w(s,a)].\n", + "$$\n", + "\n", + "Os índices na função _advantage_ $A$ indicam que $A$ depende tanto dos pesos $w$ utilizados para calcular o estimar de cada estado, quanto da política $\\pi_\\theta$, que determina quais trajetórias o agente vai seguir dentro do ambiente.\n", + "\n", + "> Obs: pode-se mostrar que essa formulação é equivalente à formulação que utiliza somatórias no tempo:\n", + "$$\n", + " J(\\theta) = \\mathbb{E}_{(s_0,a_0,\\dots)\\sim\\pi_\\theta} \\left[\\sum_{t=0}^\\infty \\gamma^t A^{\\pi_\\theta}_w(s_t,a_t)\\right], \\qquad\n", + " \\nabla_\\theta J(\\theta) = \\mathbb{E}_{(s_0,a_0,\\dots)\\sim\\pi_\\theta} \\left[\\sum_{t=0}^\\infty \\nabla_\\theta \\log \\pi_\\theta(a_t|s_t)\\cdot A^{\\pi_\\theta}_w(s_t,a_t)\\right].\n", + "$$\n", + "\n", + "Note que uma pequena variação no espaço de parâmetros ($\\Delta\\theta = \\alpha\\nabla_\\theta J$) pode causar uma grande variação no espaço de políticas. Isso significa que, em geral, a taxa de aprendizado $\\alpha$ não pode ser muito alta; caso contrário, corremos o risco de obter uma nova política que não funcione. Consequentemente, a eficiência amostral de A2C também é limitada.\n", + "\n", + "\n", + "## Trust Region Policy Optimization (TRPO)\n", + "\n", + "Uma maneira de resolver esse problema é limitar as variações na política. 
Para isso, vamos utilizar a divergência KL $KL(\\pi_1 || \\pi_2)$, que pode ser, simplificadamente, encarada como uma medida da diferença entre duas políticas (ou, em geral, duas distribuições de probabilidade).\n",
+    "\n",
+    "TRPO define uma região de confiança (trust region) para garantir que a política nova não se distancie demais da política antiga:\n",
+    "$$E_{s\\sim\\pi_{\\theta_{\\mathrm{old}}}}\\bigl[KL\\bigl(\\pi_{\\mathrm{old}}(\\cdot|s)\\,||\\,\\pi(\\cdot|s)\\bigr)\\bigr] \\le \\delta.$$\n",
+    "\n",
+    "No entanto, maximizar a função objetivo de A2C sujeito a essa restrição é um pouco complicado. Então, vamos utilizar uma aproximação da função objetivo de A2C:\n",
+    "\n",
+    "$$L(\\theta_{\\mathrm{old}},\\theta) = E_{s,a\\sim\\pi_{\\theta_{\\mathrm{old}}}} \\left[\\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)} A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a)\\right].$$\n",
+    "\n",
+    "Ou seja, TRPO consiste em:\n",
+    "$$\\text{maximizar } E_{s,a\\sim\\pi_{\\theta_{\\mathrm{old}}}} \\left[\\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)} A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a)\\right] \\text{ sujeito a } E_{s\\sim\\pi_{\\theta_{\\mathrm{old}}}}\\bigl[KL\\bigl(\\pi_{\\mathrm{old}}(\\cdot|s)\\,||\\,\\pi(\\cdot|s)\\bigr)\\bigr] \\le \\delta.$$\n",
+    "\n",
+    "> Para entender por que $L(\\theta_{\\mathrm{old}},\\theta)$ é uma boa aproximação de $J(\\theta)$, podemos fazer:\n",
+    "\\begin{align*}\n",
+    "J(\\theta) &= E_{\\pi_\\theta}[A^{\\pi_\\theta}(s,a)] \\\\\n",
+    "    &= E_{\\pi_\\theta}[A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a)] \\\\\n",
+    "    &= \\sum_{s,a} \\rho_{\\pi_\\theta}(s)\\cdot \\pi_\\theta(a|s) \\cdot A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a) \\\\\n",
+    "    &= \\sum_{s,a} \\rho_{\\pi_\\theta}(s)\\cdot \\pi_{\\theta_{\\mathrm{old}}}(a|s) \\cdot \\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)}A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a) \\\\\n",
+    "    &\\approx \\sum_{s,a} \\rho_{\\pi_{\\theta_{\\mathrm{old}}}}(s)\\cdot \\pi_{\\theta_{\\mathrm{old}}}(a|s) \\cdot \\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)}A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a) \\\\\n",
+    "    &= E_{\\pi_{\\theta_{\\mathrm{old}}}} \\left[\\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)} A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a)\\right] = L(\\theta_{\\mathrm{old}},\\theta),\n",
+    "\\end{align*}\n",
+    "onde $\\rho_\\pi$ é a distribuição de estados visitados sob a política $\\pi$. O passo com $\\approx$ troca $\\rho_{\\pi_\\theta}$ por $\\rho_{\\pi_{\\theta_{\\mathrm{old}}}}$, o que só é razoável enquanto as duas políticas são próximas (justamente o que a região de confiança garante).\n",
+    "\n",
+    "\n",
+    "## Proximal Policy Optimization (PPO)\n",
+    "\n",
+    "Como já foi mencionado, a restrição ($KL < \\delta$) imposta em TRPO torna o algoritmo relativamente complicado. PPO é uma tentativa de simplificar esse algoritmo. Ao invés de utilizar trust regions, PPO mexe diretamente com a função objetivo:\n",
+    "\n",
+    "$$\n",
+    "    L(\\theta_{\\mathrm{old}},\\theta) = E_{s,a\\sim\\pi_{\\theta_{\\mathrm{old}}}} \\Bigl[\\min\\left(r A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a),\\, \\operatorname{clip}(r,1-\\varepsilon,1+\\varepsilon) A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a)\\right)\\Bigr],\n",
+    "    \\quad\n",
+    "    r = \\frac{\\pi_\\theta(a|s)}{\\pi_{\\theta_{\\mathrm{old}}}(a|s)}.\n",
+    "$$\n",
+    "Essa função pode ser reescrita como:\n",
+    "$$\n",
+    "    L(\\theta_{\\mathrm{old}},\\theta) = E_{s,a\\sim\\pi_{\\theta_{\\mathrm{old}}}} \\Bigl[\\min\\left(r A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a),\\, g(\\varepsilon, A^{\\pi_{\\theta_{\\mathrm{old}}}}(s,a))\\right)\\Bigr],\n",
+    "    \\quad\n",
+    "    g(\\varepsilon, A) = \\begin{cases}\n",
+    "        (1+\\varepsilon) A, & A \\ge 0 \\\\\n",
+    "        (1-\\varepsilon) A, & A < 0.\n",
+    "    \\end{cases}\n",
+    "$$\n",
+    "\n",
+    "Nota-se que:\n",
+    "- Quando a vantagem é positiva, se $r$ aumentar, então $L$ aumenta. No entanto, esse benefício é limitado pelo clip: se $r > 1+\\varepsilon$, não há mais benefício para $r$ aumentar.\n",
+    "- Quando a vantagem é negativa, se $r$ diminuir, então $L$ aumenta. No entanto, esse benefício é limitado pelo clip: se $r < 1-\\varepsilon$, não há mais benefício para $r$ diminuir."
+   ]
+  },
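+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Antes de implementarmos o agente, vale ver o clip em ação. O esboço mínimo abaixo não faz parte do agente: ele apenas avalia o objetivo truncado em alguns valores fictícios de $r$ e da vantagem (escolhidos à mão, assumindo PyTorch instalado), reproduzindo os dois casos descritos acima."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "\n",
+    "eps = 0.2\n",
+    "# Valores fictícios: r = pi_nova / pi_antiga e a vantagem correspondente\n",
+    "ratio = torch.tensor([1.5, 1.0, 0.5])\n",
+    "advantage = torch.tensor([1.0, 1.0, -1.0])\n",
+    "\n",
+    "surr1 = ratio * advantage\n",
+    "surr2 = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage\n",
+    "objetivo = torch.min(surr1, surr2)\n",
+    "\n",
+    "# Esperado: tensor([ 1.2000,  1.0000, -0.8000])\n",
+    "# (o clip corta o incentivo quando r sai do intervalo [1 - eps, 1 + eps])\n",
+    "print(objetivo)"
+   ]
+  },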
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Rede Dividida"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import torch.nn as nn\n",
+    "import torch.nn.functional as F\n",
+    "from torch.distributions import Categorical\n",
+    "\n",
+    "class ActorCritic(nn.Module):\n",
+    "    def __init__(self, observation_shape, action_shape):\n",
+    "        super(ActorCritic, self).__init__()\n",
+    "        # Ramo da política (ator)\n",
+    "        self.policy1 = nn.Linear(observation_shape, 64)\n",
+    "        self.policy2 = nn.Linear(64, 64)\n",
+    "        self.policy3 = nn.Linear(64, action_shape)\n",
+    "\n",
+    "        # Ramo de valor (crítico)\n",
+    "        self.value1 = nn.Linear(observation_shape, 64)\n",
+    "        self.value2 = nn.Linear(64, 64)\n",
+    "        self.value3 = nn.Linear(64, 1)\n",
+    "\n",
+    "    def forward(self, state):\n",
+    "        dists = torch.tanh(self.policy1(state))\n",
+    "        dists = torch.tanh(self.policy2(dists))\n",
+    "        dists = F.softmax(self.policy3(dists), dim=-1)\n",
+    "        probs = Categorical(dists)\n",
+    "\n",
+    "        v = torch.tanh(self.value1(state))\n",
+    "        v = torch.tanh(self.value2(v))\n",
+    "        v = self.value3(v)\n",
+    "\n",
+    "        return probs, v"
+   ]
+  },
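+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "A célula abaixo é apenas um teste rápido (opcional) da rede: um esboço que assume um estado fictício de 4 dimensões e 2 ações (os mesmos formatos do CartPole-v1, usado mais adiante), só para conferir a amostragem de ações e o formato do valor estimado."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Esboço de verificação: estado fictício de 4 dimensões, 2 ações possíveis\n",
+    "rede = ActorCritic(4, 2)\n",
+    "probs, v = rede(torch.zeros(1, 4))\n",
+    "acao = probs.sample()\n",
+    "print(acao.item(), v.shape)  # ação amostrada e formato do valor estimado"
+   ]
+  },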
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Experience Replay"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import numpy as np\n",
+    "\n",
+    "class ExperienceReplay:\n",
+    "    \"\"\"Experience Replay Buffer para PPO.\"\"\"\n",
+    "    def __init__(self, max_length, observation_space):\n",
+    "        \"\"\"Cria um Replay Buffer.\n",
+    "\n",
+    "        Parâmetros\n",
+    "        ----------\n",
+    "        max_length: int\n",
+    "            Tamanho máximo do Replay Buffer.\n",
+    "        observation_space: int\n",
+    "            Tamanho do espaço de observação.\n",
+    "        \"\"\"\n",
+    "        self.length = 0\n",
+    "        self.max_length = max_length\n",
+    "\n",
+    "        self.states = np.zeros((max_length, observation_space), dtype=np.float32)\n",
+    "        self.actions = np.zeros((max_length), dtype=np.int32)\n",
+    "        self.rewards = np.zeros((max_length), dtype=np.float32)\n",
+    "        self.next_states = np.zeros((max_length, observation_space), dtype=np.float32)\n",
+    "        self.dones = np.zeros((max_length), dtype=np.float32)\n",
+    "        self.logp = np.zeros((max_length), dtype=np.float32)\n",
+    "\n",
+    "    def update(self, states, actions, rewards, next_states, dones, logp):\n",
+    "        \"\"\"Adiciona uma experiência ao Replay Buffer.\n",
+    "\n",
+    "        Parâmetros\n",
+    "        ----------\n",
+    "        states: np.array\n",
+    "            Estado da transição.\n",
+    "        actions: int\n",
+    "            Ação tomada.\n",
+    "        rewards: float\n",
+    "            Recompensa recebida.\n",
+    "        next_states: np.array\n",
+    "            Estado seguinte.\n",
+    "        dones: int\n",
+    "            Flag indicando se o episódio acabou.\n",
+    "        logp: float\n",
+    "            Log-probabilidade da ação sob a política antiga.\n",
+    "        \"\"\"\n",
+    "        self.states[self.length] = states\n",
+    "        self.actions[self.length] = actions\n",
+    "        self.rewards[self.length] = rewards\n",
+    "        self.next_states[self.length] = next_states\n",
+    "        self.dones[self.length] = dones\n",
+    "        self.logp[self.length] = logp\n",
+    "        self.length += 1\n",
+    "\n",
+    "    def sample(self):\n",
+    "        \"\"\"Retorna todas as experiências armazenadas e zera o buffer.\n",
+    "\n",
+    "        Retorna\n",
+    "        -------\n",
+    "        states: np.array\n",
+    "            Batch de estados.\n",
+    "        actions: np.array\n",
+    "            Batch de ações.\n",
+    "        rewards: np.array\n",
+    "            Batch de recompensas.\n",
+    "        next_states: np.array\n",
+    "            Batch de estados seguintes.\n",
+    "        dones: np.array\n",
+    "            Batch de flags indicando se o episódio acabou.\n",
+    "        logp: np.array\n",
+    "            Batch de log-probabilidades das ações.\n",
+    "        \"\"\"\n",
+    "        self.length = 0\n",
+    "\n",
+    "        return (self.states, self.actions, self.rewards, self.next_states, self.dones, self.logp)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import torch\n",
+    "import torch.optim as optim\n",
+    "\n",
+    "class PPO:\n",
+    "    def __init__(self, observation_space, action_space, lr=7e-4, gamma=0.99, lam=0.95, vf_coef=0.5, entropy_coef=0.005, clip_param=0.2, epochs=10, n_steps=5):\n",
+    "        self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
+    "\n",
+    "        self.gamma = gamma\n",
+    "        self.lam = lam\n",
+    "        self.vf_coef = vf_coef\n",
+    "        self.entropy_coef = entropy_coef\n",
+    "        self.clip_param = clip_param\n",
+    "        self.epochs = epochs\n",
+    "\n",
+    "        self.n_steps = n_steps\n",
+    "        self.memory = ExperienceReplay(n_steps, observation_space.shape[0])\n",
+    "\n",
+    "        self.actorcritic = ActorCritic(observation_space.shape[0], action_space.n).to(self.device)\n",
+    "        self.actorcritic_optimizer = optim.Adam(self.actorcritic.parameters(), lr=lr)\n",
+    "\n",
+    "    def act(self, state):\n",
+    "        # Amostra uma ação da política atual e guarda a log-probabilidade\n",
+    "        state = torch.FloatTensor(state).to(self.device).unsqueeze(0)\n",
+    "        probs, _ = self.actorcritic.forward(state)\n",
+    "        action = probs.sample()\n",
+    "        log_prob = probs.log_prob(action)\n",
+    "        return action.cpu().detach().item(), log_prob\n",
+    "\n",
+    "    def remember(self, state, action, reward, next_state, done, logp):\n",
+    "        self.memory.update(state, action, reward, next_state, done, logp)\n",
+    "\n",
+    "    def compute_gae(self, rewards, dones, v, v2):\n",
+    "        T = len(rewards)\n",
+    "\n",
+    "        returns = torch.zeros_like(rewards)\n",
+    "        gaes = torch.zeros_like(rewards)\n",
+    "\n",
+    "        future_gae = torch.tensor(0.0, dtype=rewards.dtype)\n",
+    "        next_return = torch.tensor(v2[-1], dtype=rewards.dtype)\n",
+    "\n",
+    "        not_dones = 1 - dones\n",
+    "        # TD-errors de um passo\n",
+    "        deltas = rewards + not_dones * self.gamma * v2 - v\n",
+    "\n",
+    "        # Acumula retornos e vantagens (GAE) de trás para frente\n",
+    "        for t in reversed(range(T)):\n",
+    "            returns[t] = next_return = rewards[t] + self.gamma * not_dones[t] * next_return\n",
+    "            gaes[t] = future_gae = deltas[t] + self.gamma * self.lam * not_dones[t] * future_gae\n",
+    "\n",
+    "        gaes = (gaes - gaes.mean()) / (gaes.std() + 1e-8) # Normalização\n",
+    
"\n", + " return gaes, returns\n", + "\n", + " def train(self):\n", + " if self.memory.length < self.n_steps:\n", + " return\n", + "\n", + " (states, actions, rewards, next_states, dones, old_logp) = self.memory.sample()\n", + "\n", + " states = torch.FloatTensor(states).to(self.device)\n", + " actions = torch.FloatTensor(actions).to(self.device)\n", + " rewards = torch.FloatTensor(rewards).unsqueeze(-1).to(self.device)\n", + " next_states = torch.FloatTensor(next_states).to(self.device)\n", + " dones = torch.FloatTensor(dones).unsqueeze(-1).to(self.device)\n", + " old_logp = torch.FloatTensor(old_logp).to(self.device)\n", + "\n", + " for epoch in range(self.epochs):\n", + " probs, v = self.actorcritic.forward(states)\n", + " with torch.no_grad():\n", + " _, v2 = self.actorcritic.forward(next_states)\n", + "\n", + " new_logp = probs.log_prob(actions)\n", + "\n", + " advantages, returns = self.compute_gae(rewards, dones, v, v2)\n", + "\n", + " ratio = (new_logp.unsqueeze(-1) - old_logp.unsqueeze(-1)).exp()\n", + " surr1 = ratio * advantages.detach()\n", + " surr2 = torch.clamp(ratio, 1.0 - self.clip_param, 1.0 + self.clip_param) * advantages.detach()\n", + "\n", + " entropy = probs.entropy().mean()\n", + "\n", + " policy_loss = - torch.min(surr1,surr2).mean()\n", + " value_loss = self.vf_coef * F.mse_loss(v, returns.detach())\n", + " entropy_loss = -self.entropy_coef * entropy\n", + "\n", + " self.actorcritic_optimizer.zero_grad()\n", + " (policy_loss + entropy_loss + value_loss).backward()\n", + " self.actorcritic_optimizer.step()\n", + "\n", + " return policy_loss + entropy_loss + value_loss" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Treinando" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "import math\n", + "from collections import deque\n", + "\n", + "def train(agent, env, total_timesteps):\n", + " total_reward = 0\n", + " episode_returns = deque(maxlen=20)\n", + " avg_returns = []\n", + "\n", + " state = env.reset()\n", + " timestep = 0\n", + " episode = 0\n", + "\n", + " while timestep < total_timesteps:\n", + " action, log_prob = agent.act(state)\n", + " next_state, reward, done, _ = env.step(action)\n", + " agent.remember(state, action, reward, next_state, done, log_prob.detach().cpu().numpy())\n", + " loss = agent.train()\n", + " timestep += 1\n", + "\n", + " total_reward += reward\n", + "\n", + " if done:\n", + " episode_returns.append(total_reward)\n", + " episode += 1\n", + " next_state = env.reset()\n", + "\n", + " if episode_returns:\n", + " avg_returns.append(np.mean(episode_returns))\n", + "\n", + " total_reward *= 1 - done\n", + " state = next_state\n", + "\n", + " ratio = math.ceil(100 * timestep / total_timesteps)\n", + "\n", + " avg_return = avg_returns[-1] if avg_returns else np.nan\n", + " \n", + " print(f\"\\r[{ratio:3d}%] timestep = {timestep}/{total_timesteps}, episode = {episode:3d}, avg_return = {avg_return:10.4f}\", end=\"\")\n", + "\n", + " return avg_returns" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "output_type": "stream", + "name": "stdout", + "text": [ + "[100%] timestep = 75000/75000, episode = 480, avg_return = 265.2000" + ] + } + ], + "source": [ + "import gym\n", + "\n", + "env = gym.make(\"CartPole-v1\")\n", + "agente = PPO(env.observation_space, env.action_space)\n", + "returns = train(agente, env, 75000)" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": 
[ + { + "output_type": "display_data", + "data": { + "text/plain": "
", + "image/svg+xml": "\n\n\n\n \n \n \n \n 2020-11-01T17:56:04.930085\n image/svg+xml\n \n \n Matplotlib v3.3.2, https://matplotlib.org/\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n", + "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAAAqLklEQVR4nO2dd7wV1bXHvwuQpoQuIogUUZ8dvFY0MSIWbC9GY41EjXw+sTw1xiiJJbZPjHkaa1BeNKJGCWJUNIgFsMQoiCgIYkGKgDSle4Fb2O+PPZN77uWWU6bsmVnfz+d89p595syse2bO765Ze+29xRiDoiiKki6axW2AoiiKEjwq7oqiKClExV1RFCWFqLgriqKkEBV3RVGUFNIibgMAunTpYnr37h23GYqiKInigw8++MYY07W+95wQ9969ezN9+vS4zVAURUkUIrKoofc0LKMoipJCVNwVRVFSiIq7oihKClFxVxRFSSEq7oqiKCkkL3EXkYUi8rGIfCQi0722TiLymoh84ZUdvXYRkftEZJ6IzBKRgWH+AYqiKMq2FOK5/9AYc4Axpszbvg6YZIzpD0zytgFOAPp7r+HAyKCMVRRFUfKjlDz3U4GjvPpo4A3gWq/9cWPnEn5PRDqISHdjzLJSDFUUZ5k5E559tvjP9+wJw4cHZ0+ULFsG//d/UFUVtyVNc8YZsO++cVsRGfmKuwFeFREDPGyMGQV0yxHs5UA3r94DWJzz2SVeWy1xF5HhWM+eXr16FWe9orjAHXfAmDEgUvhn/fUUTjsNunQJ1q4oePppuOkmWy/m748KY2DRIhg9Om5LIiPfsMwRxpiB2JDLpSLy/dw3PS+9oFU/jDGjjDFlxpiyrl3rHT2rKMlg40YYMAC2bi38NdKLWibB862PigpbbtpU3N8f1atfv+R+x0WSl+dujFnqlStF5DngYGCFH24Rke7ASm/3pcAuOR/v6bUpSjr47DOYMqVme9486NSpuGM18/yrrVtLtysOqqtt2bx5vHY0hUjNU1JGaFLcRWR7oJkxZoNXPxa4BRgPDAPu8MoXvI+MBy4TkTHAIcA6jbcrqeLXv4bx42u3nXtuccfyxd0XyaThe8Oui3uzZiru9dANeE5sPK0F8JQxZqKIvA+MFZGLgEXAT7z9JwBDgXlAOXBB4FYrSpyUl8OBB8JLL9W0FRtaTIvn3szxITMiyf2Oi6RJcTfGzAf2r6f9W2BwPe0GuDQQ6xTFRaqroXVr2Gmn0o+VBnFv4cTkso2TwbCM4/9uFcVBqquDC0P4x0mquFdVuR+SgUyGZVTcFaVQghT3NHjuSRD3DIZlVNwVpVCCDEUktUN15kw4/XR4/vnkiLt67oqiNIp67jZb6NlnoW1bOOusuK1pmgyGZRLQE6IojqHiXpMCOXOm2yNTfTQsoyhKk2iHqv0OmjVLhrBDJsMy6rkr2eX88+Hddwv/3KJF0LdvMDYk1XNPSkeqj4ZlFCVD/OMf0KuXnRemEA4+2P5jCIKkdqgmJQXSJ4NhGRV3Jf2sXQvfflu7zRg70vT00+GWW2IxC4DNm225alV8NhRDUgYv+WhYRlFShDGwZQv06WMFvj46dozUpG3YcUdbbrddvHYUioZlnEfFXUknlZWw227w1Vd2e/hwOOKI2vu0aAEnnRS9bbn4op60sEzSxF3DMoqSEr77zgr7CSfAUUdZce/QIW6rtsUXyKSI+1tvwQ03wOefJ0vc1XNXlJTg52GfeCJc6vA8dn7cOiniPnGiFfijj4ZBg+K2Jn/Uc1eUlOCLu+udfknz3P0ZMSdNituSwshgh6oOYlLSiYp7OFRVuf+d1kcGwzIq7ko6qay0petClERxT1Ks3SeDYRkVdyV9lJfDSm9JXxX3YElafrtPBsMyCbxKitII5eWw886wbp3dbtMmXnuawhd3P2XTdZIclvEHjGWEBF4lRWmE9eutsJ95Jhx7rE2FdJkddrCl6/+EfJIq7uXlsHx53FZEioZllHThd6QecwxceKH7otmypS1dDxk8+aQdJzB6dPJG0wJUVNgnugyRwH/BitIIvrgnpdPPnzLXdXGfMcN6v5dfDocdFrc1hdOlC2zYELcVkaLirqSLpKRA+iRF3KuqbAjp7rvjtqQ4MtihqmEZJV2ouIdDUmPtPiruipJwVNzDobIyOd9pfWRQ3BN8tRQlh61bYcQI+OQTu50UIUqKuKvnnjgSfLUUJYclS+DOO6FzZ9hrL9hnn7gtyg8V92jIoLhrWEZJB3445u67Yc4c6N8/XnvyxXVxnzABDj8cXn5ZxT1hJPhqKUoOSYu1+7gu7v/8J0yfDj/8oR0UllRU3BUlofhzsyQlv93HdXGvrLQ54q+8ErclpZFBcdewjJIO1HMPh6TH2n0yKO4puGpK3lRVwdtvNzyBUpcucNBB0doUFCru4VBZmczpBuqi4t4wItIcmA4sNcacJCJ9gDFAZ+AD4KfGmAoRaQU8DhwIfAucaYxZGLjlSv588w3ccw+MHw8ff9z4vl9/Dd2
7R2JWYFRV2TVTQcU9aNIi7hlcrKOQX8IVwFzge972H4A/GWPGiMhDwEXASK9cY4zZTUTO8vY7M0CblXyZNQt+8Qv4979r2tq3hxdfrJmwyufVV+HGG+2MivmI++rVMGoUHHqoXYA6Ltasgd697WyQAK1axWdLMTTzIqOuCc/y5fae+PLL5P3DrI8MLtaR11UTkZ7AicDtwC9FRICjgXO8XUYDv8OK+6leHWAc8ICIiDGu3b0Z4F//ssI+eDDsvbf13n1PsS5LltjSX8GoKZ5/3g4aAisAffuWam1xrFhhhf3ss23K3pFHxmNHsfjXwzXhue02ePBBWz/mmHhtCYIMhmXy7VC9B/g14N+BnYG1xhgv0MkSoIdX7wEsBvDeX+ftr0SNL9Rjx8K99zYs7FDz6F1Rkd+xy8tr6kFlUlx1Fey2m33dd19+n/Fj7aedBpddljzP3dWwTHm5fYKbP98+6SUdFfdtEZGTgJXGmA+CPLGIDBeR6SIyfdWqVUEeWvHxxT2fmKkfplm4EBYsqP365ptt98/tlB0/vmRTAZtTXVVlQz6vv57fZ5LakerjqrhXVdl/lH36QOvWcVtTOhkU93x+EYOAU0RkKNAaG3O/F+ggIi0877wnsNTbfymwC7BERFoA7bEdq7UwxowCRgGUlZVl61uPikLE3V8R6PTT639/0CC7WIPPvHk19fffr/8z33wDzzxjc8/PPhvatWvchooK+MEP7AjTfNcUVXEPh+rq5I0ZaAwV920xxowARgCIyFHAr4wx54rIM8Dp2IyZYcAL3kfGe9vveu9P1nh7TBQi7ocdBuPGwcaNtduXLYMxY6ynnrtM2Q47wHnn2fTJhx+u/Zlnn4Xhw60H7tO8OVx0UeM2VFTYJ4jmzVXc40bFPfGU8ou4FhgjIrcBHwKPeO2PAE+IyDxgNXBWaSYqBfPRR3bt0DVrbDZGszy6Vpo3hx//uP73rruu4c9dfz1s2mRDKieeaNs++MCe+4orYKedbMerv2B1Y/hpd1kSdx/XhEfFPfEU9IswxrwBvOHV5wMH17PPZuCMAGxTisEYeOop62VfeCEcfHDjHamlct55cPvt8PnnNeJeWWnjtPfcY8V3xAibNjl5cuPHWrs2f8997Vobwvn6a7udZHF3UXiqq5P9ndbFxe84ZFJ09RTApj7+8Y+2fu+9NbH0sOjXz5a52TOVlTUdtC1awPnn2zh6U6vPDxgAxx1nnzyaEveFC20e/+DBdnrfpI6sBTeFRz33xKPiniaqquDxx239ySfDF3aoiedffz1ce60V84qK2nH+0aMLO+b//i9s2dL4Pn5/whVXwMknF3Z813BReFTcE49OHJYmXnjBhj/Axtyj4vDDbTl1Klx+Obz55rYjYAshn7BMIZ3FruOS8Dz2GOy5J0yZouKecFTc04SfnfLmm9CpU3TnPfdcW44aBQ88ACtX2nh4sTRvbtMof/ELOzrynHO2HTmr4h4Or70GS5fap6ErrojbmuBw6TuOCA3LpAl/YNHee0d7Xt9LX7fOLnNX6qC0Ll1g4sTaufS33GJHrvr4WTIq7sFSVQU9etj01zTh0nccESruaeGaa2w6IkQ/otAX9/Xrgzn3qFFw9dW2PnMm/OxnNt0S7D+Qc8+1Haqg4h40aZm/vS4ufccRkcKrmEEqK20n5M47w5lnQtu20Z7fF/QpU2p718XSpg0ccICt+6mO115rPfqVK+1cNgceCD/6UfRPKWGwdSt89VXcVljSlgLpo+KuJBI/HHPVVfCrX0V//uOOs2GTTZtqOleDYp99YN994dNPa9oGDLDT0UbZrxAmlZXuzKFfVZWujlQfEVi8OG4rIkXFPQ28954t27SJ5/zt28MNN4Rz7F69bD57mmnXzh2vMq1hmWXLas+NlAE0WybpLFlSsyp9ly7x2qIUhyshg61b7VNEGsW9T590/l2NoOKedFautOV11zU8o6PiNi6I+w032HDM5MmljVFwFV1mT0kUmzfDKafY+uDB6YyVZgEXxH32bOjWDS65BIYMideWMNBl9pREMX++HXDSrp3NHlGSiQviXllp89tvvDFeO8LChe84YjQsk2T83O8nn4SOHeO1RSkeF4THn245rbjwHUeMintSqaiww/MhviwZJRhcEB4V99ShYZmkMnt2zfJ2e+0Vry1KacQhPJWVNqTns3FjNLOIxkUGxV0996Tyl7/YctIkGytVkkscwnPBBTY90H+9/z5sv320NkRJBsVdPfcksmABjBxp6717x2qKEgBxCM/XX0P//vCb39S0HXlktDZEiYq74jzV1fA//2PrTz8NffvGa49SOnHkYFdX27mIfvazaM8bFxkUdw3LJI25c+Gll2z9kEPitUUJhjhysNO20lJTqLgrzuMvUvH88zZWqiSfOIRHxT31qLgnDX+RiozNk5FqVNzDJ4PTD6i4Jw0V9/QRh7indWrfhsjg9AMq7klDxT19qOcePhqWUZxHxT19xCXuWbqHVNwVp1m9umbZuSz9MNNOlMJzyy3Qrx989pl67ilHFSIpLFxof5R+3DDqdVKV8IhSeF55BcrL4ayzspPjDiruisOsWGGF/corYdAg2H//uC1SgiJK4amuhv32gyeeiOZ8rqDirjhLdbUtTzihZlk9JR1ELe5ZCsf4iMRtQeRozD0paEdqelFxDx9f3DPkvau4JwVf3LP4w0w7UQ6wUXGP144IUXFPCn5YRj339BHlABsV93jtiJAmxV1EWovINBGZKSJzRORmr72PiEwVkXki8ncRaem1t/K253nv9w75b8gG6rmnl6jDMll0EFTc62ULcLQxZn/gAOB4ETkU+APwJ2PMbsAa4CJv/4uANV77n7z9lFLxPXcV9/ShMffwaeZJnYp7Dcay0dvcznsZ4GhgnNc+Gvhvr36qt433/mCRDHZVB8WyZbDvvnblHMim15V2ohD3l1+GX/4SVq7Mprj7EpSh+WXyUgoRaQ58AOwGPAh8Caw1xnixApYA/lpvPYDFAMaYKhFZB3QGvqlzzOHAcIBevXqV9lekmS++sOulHnecXSt1773jtkgJg/Xrwz3+TTfBjBl28FtZWbjncpEMhmXyEndjTDVwgIh0AJ4D9iz1xMaYUcAogLKysux844Xih2NGjIAf/CBeW5Rw2LABPv003HNUVsLQoTB+fLjncRUV98YxxqwVkSnAYUAHEWnhee89AX8p9aXALsASEWkBtAe+DdDmbKEdqemnW7fwz5HVWLvP5s229J2lDJBPtkxXz2NHRNoAQ4C5wBTgdG+3YcALXn28t433/mRjMvTvMmg0BTL97LRT+B7l1q3ZFnf/+82QuOejGN2B0V7cvRkw1hjzkoh8AowRkduAD4FHvP0fAZ4QkXnAauCsEOzODuq5p58oOlSz7rl36GDLDPmZTYq7MWYWMKCe9vnAwfW0bwbOCMQ6RVMgs4CKe/hkMOauI1RdR8U9/UQl7s0y/HNXcVecYsECmD/f1jXmnl7Ucw+fDIq7KoarzJgBBx5Ys73DDvHZooSLinv4qLgrzrBqlS1vvx0OPxx23T
Vee5TwCHNWyD//Gf75T3s/qbiruCsO4GfJDBkCBx0Ury1KuIQ5K+TIkbBkiV19aejQcM6RBFTcFWfQjtTsEGZYprraOghjx4Zz/KSQQXHXDlVX0ZWXskPY4q4Ogoq74hA6eCk7qLiHj2viXl5uJwHs0AEeeyyUU6i4u4pOO5AdVNzDxzVxX74cPvnEXpt+/UI5hSqHaxgDH3wAc+bYbRX39KPiHj6uiXtlpS3vvx+OPDKUU6hyuMbbb9dM7SsC7drFa48SPiru4ePaSky+uG+3XWin0LCMa6xZY8uHHoKPPoIdd4zVHCUCVNzDx7WVmFTcM4gfaz/0UJubrKQfFffwcSkss3gxzJ1r6y1bhnYaDcu4hua3Z48wxP1Xv4IJE2D1ar2XIB5x37zZTiMydy5MmgRffgmXXw7nn19jx/e+F9rpVdxdQ8U9e4Qh7s8/bx/9Tz8dzj472GMnkajFvbwc+vSxC5Ln8tOf2vKmm+y0IocfHpoJKu6uoeKePcIQ9+pq2zH/+OPBHjepRC3ukydbYe/aFV5+GXbf3aY+jhplf9tXXx16soSKu2uouGePsMRd76EaohT3LVvg5JNt/f33ayb9O+QQ+4oI7VB1DRX37BHGrJAq7rWJUtzXrbPlgAHQq1f452sAFXfX8MU9y6vmZI0wZoVUca9NlOJeUWHLSy6pOW8MaFjGFbZsgXHj4K237Lb+MLODhmXCJ+qwDECrVuGfqxFU3F3hlVfgvPNsffvtoX37eO1RokPFPXyiFPepU20ZYg57PuizvyuUl9ty8mT4+mtdVi9LqLiHT1Tivm4dnHuurXfuHO65mkA9d1fwY+09eoQ6sEFxkCDFfdUqu7B6ZaWKey5Rifv69ba8/HIYPDjcczWBirsraJZMdhGpeXIrlaOPhtmzbV0nnashKnH3O1PLymLtTAUVd3dQcc8u69bBhg3BHGv1ajj2WLjqKjjiiGCOmQaiEndHOlNBY+7uoCsvZZfWrYPrQK+uht694fjjtd8mF1/c/d9ZGLz3ng3HQOydqaDi7g668lJ2adcuuEd47UitHz8W7k+1GwbPPANTptj5YgYMCO88eaJK4goalskuQY5QVXGvHz9zJczvpqrKJkO880545ygAFfe42bLFTs06bZrd1h9m9ghyhKqKe/1EEXOvqgp18Y1CUXGPmxdfhDPOsPW2be0AJiVbqOcePlGIe2WlU2FVjbnHzcaNtnz9dVi0yHauKdmiWTP13MMmijVUHfPcmxR3EdlFRKaIyCciMkdErvDaO4nIayLyhVd29NpFRO4TkXkiMktEBob9RyQav/d+jz2gS5d4bVHiQcMy4RPmGqqbN8O778KyZckSd6AKuNoYsxdwKHCpiOwFXAdMMsb0ByZ52wAnAP2913BgZOBWpwlf3B16nFMiJoiwzP3327mJdGRq/YQZlrntNpshM3GiU3NCNakoxphlwDKvvkFE5gI9gFOBo7zdRgNvANd67Y8bYwzwnoh0EJHu3nGUuqi4K0F47rfeajvnd98dBg0Kxq40Eaa4r15tRX3sWNhzz+CPXyQFKYqI9AYGAFOBbjmCvRzo5tV7AItzPrbEa6sl7iIyHOvZ0yvGCe1jx8+7VXHPLkF47lVVduHl++8Pxqa0Eaa4V1baRIhjjw3+2CWQd4eqiOwAPAtcaYxZn/ue56UX9K0ZY0YZY8qMMWVdu3Yt5KPpoLLSDnj45BO7reKeXYLw3DXW3jhhintVlZO/37wsEpHtsML+N2PMP7zmFX64RUS6A/4y30uBXXI+3tNrU3J59tmaVenbtHFiLgolJoLw3FXcGydsz92hjlSffLJlBHgEmGuMuTvnrfHAMK8+DHghp/18L2vmUGCdxtvrYe1aW774ovXeHbw5lIgIYspfFffGUc+9XgYBPwU+FpGPvLbfAHcAY0XkImAR8BPvvQnAUGAeUA5cEKTBqcHvSD30UE2BzDq5OdjFzjFTVaXi3hhhiPvbb8ODD9rpBjp0CO64AZFPtsy/gIbuuG1mo/fi75eWaFf68TtS1WNXcnOwixVo9dwbJwxxf/xxu+5x//5w8snBHTcg3HuWyAqaAqn4lDp60hj7UnFvmDDEvboadt4Z5s4N7pgBosoSF+q5Kz6+uBeTMVNeDiu9XAZ1FBomLHF3+B+q3g1R8/77cPHFdhFs0B+kUtrQ+AMOgC++sPU2bQIzKXWouCuh8957MHMmnHIK7L9/jdemZBdfeDZtKnziuKVL7eCZc86BU08N3ra0oOKuhI6/KMfo0U72sCsx4IfoVq+Gjh0L+2x1tXUShg1ret8sE5a4O+ycuWtZWtGOVKUuffrYshjhKSXDJktk0HNXcY8aFXelLqXE3B0XGGdQcVdCR9dKVepSbCqkMeq554uKuxI6vufu8E2hREyxnru/v95LTROkuD/1FAwYAG++6fR3r7GBqKmqsp6awx0xSsQU67nrU2D+BCnuEybAZ5/BkCFOZyipuEfFnDlwww3w8cf6Y1RqU+wgJhX3/AlS3KuroUcPeOGFpveNEXUfo+Lll+G556BtWzj33LitUVyimLDMggU2LAAq7vlQyijgujgea/dRzz0q/Fj71KmFD1RR0k0xYZmyMpsXD06t2+ksQXvuCRB39dyjQlMglYYoxnNfu9Y+Ab71Flygs2o3SQbFXZUmKjRLRmmIQj13PwWyXz848sjw7EoTGRR39dwLoboarrkG9t3XzgNSCP5iCsUuxqCkl0I9d02BLBy/89kPZZV6rAR89+q5N8b8+dCypV3f9JhjYNasmvfatoUxY+DMM/M7lqNLcSkOUKjnrlkyheP/9r73vdKPlZCBY6o2jbHvvna+7FxatoSKClt/9FE46STYfvuGj1FRAePH238MKu5KfRTquau4F47/29OwjAJsK+wAn34K331n66++WjPpU0NMnAhnnGFTIXfaKXgbleRTaJqeinvhBBFzv/VWu7jO669bJ89x1JVsiLo3wbRpNl7ni/nEiXD88bBqlRX7hrx3/x/BK6/AYYeFZ6+SXDQsEz5BiPuHH9opmS++GI47Lhi7QkTFvSGuuKKm/vvf27zi3M7Q446Du+6Cq6+uyYSpD/+9vn2hXbtwbFWSTbFhGZ3CIn+CEPfKSujZE26/PRibQkbFvSH8pcuWLrWL4NaH7zk19qP0f4gab1caohDPfdKkmo599dzzJwhxr6hIRDjGRxWnIaqr4dBDGxZ2yC9WqvntSlP4wrNhQ+P7bd1qQ4H+PdWjR7h2pYmgPPcELWivz3UNUV3dtLddiLir5640xJYttmxKOKqr7f10zTWwYgWcdlr4tqWFUsX9pZdgyhQV91TgDzpqDF/c/dBLfWhYRmmKfPti/HupY0fYccfw7EkjpYr7DTfYMkFJEao4DVFdbQcvNUZjMffNm+G3v4V//9tuq7grDZFvzF2zZIqn2DnzfTZtsinNCelMBRX3hqmqanxwEjQelpkxA+6+G7p0gcMP10wZpWHyzZZRcS+eUtapBRs6S9hsrhqWaYh8RqE1Ju4TJ9py/Hh45x313JWGUc89fEoJy0ydC
gsXNv0k7xiqOHV59107mnTxYujevfF9G4q5f/WVHc0GNi9WURpDPffwKUXc77rLlgcdFJw9EaDiXpfrr4fJk+3NsO++je9bn+f+0Ud28VywWQ277BKKmUqKUM89fEoR902b7G96+PBgbQoZFfdcNmywwn7iiTb1qSn8H9k779i5tQGWLLFlixa2A0ZRmkI99/ApRdy3bElcSAbyiLmLyKMislJEZue0dRKR10TkC6/s6LWLiNwnIvNEZJaIDAzT+MB59FFbNjZwKRff4xo2zP53/+47+MlPbNusWYl7jFNiIh/PfeRIuPlmW1dxL5xixX3pUnjttXSKO/AYcHydtuuAScaY/sAkbxvgBKC/9xoOjAzGzIj49ltbPvhgfvvvt19N/dNP4b77rMj361fjyStKUzQ1GG7zZrjkEvjrX6FTJ/iv/4rOtrRQrLg/8IAt+/cP1p4IaFLcjTFvAXWXLzkVGO3VRwP/ndP+uLG8B3QQkSZ6JR1h3TrbCdqqVf6j0Pbe2077C/DHP8JvfmPr06Ylag4KJWaaCstUVtry97+3DsigQdHYlSaKFfcNG6wmjBoVvE0hU2wqZDdjzDKvvhzo5tV7AItz9lvitW2DiAwXkekiMn3VqlVFmhEgCxbYcsiQwj7nZ9Q8/bQtzz7beleKki9NhWU01l46xYi7MfZ33blzIpfHLLlD1RhjRKTgXgpjzChgFEBZWVkAy6MUwZo1dtmt5s3h889t22WXFXaMffax8fX1623aY69ewduppJumPHcV99IpRtw//tiu4ZDQCdqKFfcVItLdGLPMC7us9NqXArm5fz29NvfYutX+Rx4+3F68G2+07W3aFH6splImFaUxmvLcdWbR0ilG3Nets+UjjwRvTwQUK+7jgWHAHV75Qk77ZSIyBjgEWJcTvnGL776zF/rhh2vaTjoJBiYrwUdJAeq5h08xc8u8+KItg1hUOwaaFHcReRo4CugiIkuAm7CiPlZELgIWAV7+HxOAocA8oBy4IASbg+H112tvz55tO0gVJWryjbnrFBbFU8zcMs89Z8vevQM3JwqavFuMMWc38NbgevY1wKWlGhUJt9xSe3uvveKxQ1HUcw+fYsIyFRV2DEtT05A4SjZdgSVL7DQBPjvtlMjecCUlNOa5z5ljZxgFFfdSKFTcP/7YzhHVtm14NoVMNsXdT1v08UemKkocNOS5r11rs7F8OnSIyqL0Uai4+9OPHHNMOPZEQDbFfaQ3cHbFCli5UmPtSrz4nru/KLvP88/b8ne/g6FD4cADo7QqXRQq7mvW2PJHPwrHngjInrivWGEHLA0ZYpcq0+XKlLjx78EddqjdPnWqLS+7zKbtKsVTiLhXVdkR59ttl+hwbfYW65g1y5bH150uR1Fiws+CqW9dgJYtVdiDwBfp9eub3nfDBlueeGJ49kRA9sR9yhRbDt4m2UdR4sHvKM0V961bYcIEncoiKHxxz8cT95MtVNwTxhNP2DLfaX0VJWzqW9GrvNyWP/5x9PakEf87zmcE+mOP2XL33UMzJwqyJe5bt9o0yHPOga5d47ZGUSwi1nvPFfdp02yp4y+CoZARquXldsru738/XJtCJlvifscdtkzoiDMlxTRvXjOHDNSk6+q8RcGQ7wjV1ath3DjYfvvwbQqZZIv7okV22oB88Sfev+aacOxRlGKp67lv2WKdkCOPjM2kVJFvtsxib8byQqf+dpBki/udd9rVkNaubXrfESNg2TK70IEOBlFcI1fcv/vOzmvSunW8NqWJfMXd70xVcY+ZTp3sxfI7nxrDD8ncdVe4NilKscybZ8vzz4eNG3W6gSDJN+buT++bgnUZki3uu+5qy6biaOPH2/LnP4dDDgnXJkUpho0boX17W//Xv2zpZ3YppZNvzL2yEg4+OBXr1CZb3JtaWBjs4sIPPWTr/oIciuIaffvWeJXG2JDMgAHx2pQm8gnLzJ8P771nJxJMAekX9z//GV5+2dZ32aXh/RQlTnJj7pWVcPHF8dqTNvIR908/tWUK4u2QBXH3hxvrzI+Ky7RoUZMKuWULtGoVrz1pIx9xf+cdW/7wh+HbEwHpFvfq6pqQzAXuLgqlKP/x3B98EDZtUnEPmnw6VMeOtaWGZRygKXGfNs3OApnQNRCVDOGL+7hxdvvUU+O1J23k06G6fLn93lMyUVuyxb2xC3bZZXD44bb+xhuRmaQoReGL+5YtdoGIgw6K26J00pDnPnmyzVhK0RTgyRb3hjz3jRvt4y3Yi3XAAZGapSgF44t7RYWGZMJCpGFxX7TIlinqyE6HuNe9YK+8YksRu+BBgifcVzJC8+awdCmsWmXncFeCpzFx96ck6dcvOntCJh3iXtdzv+02W379tU4SpiSDHXesWZQ5JR16ztGsWeMdqt26pWr+/GQvs1efuBtj54do3Vp/JEpyGDMGFi609d12i9WU1CJSf//c6tXw7bdw9dXR2xQi6RN3/wdy+eWRm6MoRdOmTSqGvDtNQ2EZP0MpZYMc0xGWeestO82AXwc7+6OiKIpPQ+K+caMtUzYWJh3ifuWV1vOproa//c22HXhgbGYpiuIgDYm7P2V4yqZYTra4110tfsYMeO01O11nz57x2KQoipvU16G6Zg3cequtb7dd9DaFSLLFve76kn7s7Oijo7dFURS3qa9DdflyWw4blrqU6WSLe79+cMMN8NJLdvvOO215ySXx2aQoipvUF5bZtMmWp50WvT0hk+xsGRG45RZb79zZpjMBlJXFZ5OiKG4iYpfaBDsD54IFcOGFdrtNm/jsColke+65XHedLYcMSd3jlaIoAbDHHnY8QcuWNr6+++4waxacfDIMHBi3dYGTbM89l6uvhmOPtSvaKIqi1OXVV+Gee+zkbC1a2Dl8TjkF9t8/bstCIRRxF5HjgXuB5sBfjDF3hHGeOieF/fYL/TSKoiSUjh3h5pvjtiIyAg/LiEhz4EHgBGAv4GwR2avxTymKoihBEkbM/WBgnjFmvjGmAhgD6MoDiqIoERKGuPcAFudsL/HaaiEiw0VkuohMX7VqVQhmKIqiZJfYsmWMMaOMMWXGmLKuXbvGZYaiKEoqCUPclwK506v19NoURVGUiAhD3N8H+otIHxFpCZwFjA/hPIqiKEoDBJ4KaYypEpHLgFewqZCPGmPmBH0eRVEUpWFCyXM3xkwAJoRxbEVRFKVpxDS2pmBURoisAhYV+fEuwDcBmhMGamPpuG4fuG+j6/aB2lgouxpj6s1IcULcS0FEphtjnJ4pTG0sHdftA/dtdN0+UBuDJD0ThymKoij/QcVdURQlhaRB3EfFbUAeqI2l47p94L6NrtsHamNgJD7mriiKomxLGjx3RVEUpQ4q7oqiKCkk0eIuIseLyGciMk9Ergv5XI+KyEoRmZ3T1klEXhORL7yyo9cuInKfZ9csERmY85lh3v5fiMiwnPYDReRj7zP3iRS+VqCI7CIiU0TkExGZIyJXuGSniLQWkWkiMtOz72avvY+ITPWO+Xdv2gpEpJW3Pc97v3fOsUZ47Z+JyHE57YHcEyLSXEQ+FJGXXLNR
RBZ61+AjEZnutTlxjXOO0UFExonIpyIyV0QOc8lGEdnD+/7813oRudIlG0vGGJPIF3Zqgy+BvkBLYCawV4jn+z4wEJid03YncJ1Xvw74g1cfCrwMCHAoMNVr7wTM98qOXr2j9940b1/xPntCETZ2BwZ69XbA59gFU5yw0/vMDl59O2Cqd6yxwFle+0PAL7z6JcBDXv0s4O9efS/vercC+nj3QfMg7wngl8BTwEvetjM2AguBLnXanLjGOfaMBn7u1VsCHVyzsY6WLAd2ddXGov6uKE8WqOFwGPBKzvYIYETI5+xNbXH/DOju1bsDn3n1h4Gz6+4HnA08nNP+sNfWHfg0p73WfiXY+wIwxEU7gbbADOAQ7Gi/FnWvK3Z+osO8egtvP6l7rf39gronsDOZTgKOBl7yzumMjdQv7s5cY6A9sAAvYcNFG+vYdSzwjss2FvNKclgmr0VBQqabMWaZV18OdPPqDdnWWPuSetqLxgsPDMB6x87Y6YU7PgJWAq9hvdi1xpiqeo75Hzu899cBnYuwu1DuAX4NbPW2OztmowFeFZEPRGS41+bMNcY+qawC/uqFtv4iIts7ZmMuZwFPe3VXbSyYJIu7Uxj779mJvFIR2QF4FrjSGLM+97247TTGVBtjDsB6xwcDe8ZlS32IyEnASmPMB3Hb0ghHGGMGYtcpvlREvp/7ZtzXGPsEMxAYaYwZAHyHDXH8BwdsBMDrOzkFeKbue67YWCxJFncXFgVZISLdAbxyZRO2Ndbes572ghGR7bDC/jdjzD9ctdMYsxaYgg1TdBARf4bS3GP+xw7v/fbAt0XYXQiDgFNEZCF2/d+jgXtdstEYs9QrVwLPYf9JunSNlwBLjDFTve1xWLF3yUafE4AZxpgV3raLNhZHlDGgIF9Y72A+9hHQ75jaO+Rz9qZ2zP2P1O58udOrn0jtzpdpXnsnbCyyo/daAHTy3qvb+TK0CPsEeBy4p067E3YCXYEOXr0N8DZwEtZryu2svMSrX0rtzsqxXn1vandWzsd2igV6TwBHUdOh6oSNwPZAu5z6v4HjXbnGOXa+Dezh1X/n2eeUjd5xxgAXuPZbCeIV2YlCMd72YH+Ojdv+NuRzPQ0sAyqxnslF2NjqJOAL4PWciyrAg55dHwNlOce5EJjnvXJvqjJgtveZB6jTGZWnjUdgHyNnAR95r6Gu2AnsB3zo2TcbuNFr7+v9EOZhRbSV197a257nvd8351i/9Wz4jJwshCDvCWqLuxM2enbM9F5z/M+7co1zjnEAMN271s9jhc81G7fHPmW1z2lzysZSXjr9gKIoSgpJcsxdURRFaQAVd0VRlBSi4q4oipJCVNwVRVFSiIq7oihKClFxVxRFSSEq7oqiKCnk/wEPJ7Wt/dH/OQAAAABJRU5ErkJggg==\n" + }, + "metadata": { + "needs_background": "light" + } + } + ], + "source": [ + "import matplotlib.pyplot as plt\n", + "\n", + "plt.plot(returns, 'r')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "👷 Provavelmente tem alguma coisa errado 👷" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ] +} \ No newline at end of file