From 118ee9225dc42162736e9a9fdcabddf08b613a36 Mon Sep 17 00:00:00 2001
From: Lionel Miller
Date: Thu, 5 Sep 2019 13:34:03 +0300
Subject: [PATCH] Replace image links from postimg.org to postimg.cc

---
 week1_intro/primer/recap_tensorflow.ipynb |  2 +-
 week4_approx/dqn_atari.ipynb              | 10 +++++-----
 week5_policy_based/practice_a3c.ipynb     |  4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/week1_intro/primer/recap_tensorflow.ipynb b/week1_intro/primer/recap_tensorflow.ipynb
index db8e1cad5..0f11de306 100644
--- a/week1_intro/primer/recap_tensorflow.ipynb
+++ b/week1_intro/primer/recap_tensorflow.ipynb
@@ -243,7 +243,7 @@
     "\n",
     "Here's what you should see:\n",
     "\n",
-    "\n",
+    "\n",
     "\n",
     "Tensorboard also allows you to draw graphs (e.g. learning curves), record images & audio ~~and play flash games~~. This is useful when monitoring learning progress and catching some training issues.\n",
     "\n",
diff --git a/week4_approx/dqn_atari.ipynb b/week4_approx/dqn_atari.ipynb
index 3d461c971..96ac2d2cb 100644
--- a/week4_approx/dqn_atari.ipynb
+++ b/week4_approx/dqn_atari.ipynb
@@ -47,7 +47,7 @@
    "metadata": {},
    "source": [
     "### Let's play some old videogames\n",
-    "![img](https://s17.postimg.org/y9xcab74f/nerd.png)\n",
+    "![img](https://s17.postimg.cc/y9xcab74f/nerd.png)\n",
     "\n",
     "This time we're gonna apply approximate q-learning to an atari game called Breakout. It's not the hardest thing out there, but it's definitely way more complex than anything we tried before.\n"
    ]
@@ -193,7 +193,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "![img](https://s17.postimg.org/ogg4xo51r/dqn_arch.png)"
+    "![img](https://s17.postimg.cc/ogg4xo51r/dqn_arch.png)"
    ]
   },
   {
@@ -311,7 +311,7 @@
     "### Experience replay\n",
     "For this assignment, we provide you with experience replay buffer. If you implemented experience replay buffer in last week's assignment, you can copy-paste it here __to get 2 bonus points__.\n",
     "\n",
-    "![img](https://s17.postimg.org/ms4zvqj4v/exp_replay.png)"
+    "![img](https://s17.postimg.cc/ms4zvqj4v/exp_replay.png)"
    ]
   },
   {
@@ -409,7 +409,7 @@
     "\n",
     "$$ Q_{reference}(s,a) = r + \\gamma \\cdot \\max _{a'} Q_{target}(s',a') $$\n",
     "\n",
-    "![img](https://s17.postimg.org/x3hcoi5q7/taget_net.png)\n",
+    "![img](https://s17.postimg.cc/x3hcoi5q7/taget_net.png)\n",
     "\n"
    ]
   },
@@ -673,7 +673,7 @@
     "\n",
     "But hey, look on the bright side of things:\n",
     "\n",
-    "![img](https://s17.postimg.org/hy2v7r8hr/my_bot_is_training.png)"
+    "![img](https://s17.postimg.cc/hy2v7r8hr/my_bot_is_training.png)"
    ]
   },
   {
diff --git a/week5_policy_based/practice_a3c.ipynb b/week5_policy_based/practice_a3c.ipynb
index bcecb82e3..0e2a6681b 100644
--- a/week5_policy_based/practice_a3c.ipynb
+++ b/week5_policy_based/practice_a3c.ipynb
@@ -103,7 +103,7 @@
     "Your assignment here is to build and apply a neural network - with any framework you want. \n",
     "\n",
     "For starters, we want you to implement this architecture:\n",
-    "![https://s17.postimg.org/orswlfzcv/nnet_arch.png](https://s17.postimg.org/orswlfzcv/nnet_arch.png)\n",
+    "![https://s17.postimg.cc/orswlfzcv/nnet_arch.png](https://s17.postimg.cc/orswlfzcv/nnet_arch.png)\n",
     "\n",
     "After your agent gets mean reward above 50, we encourage you to experiment with model architecture to score even better."
    ]
@@ -263,7 +263,7 @@
    "metadata": {},
    "source": [
     "### Training on parallel games\n",
-    "![img](https://s7.postimg.org/4y36s2b2z/env_pool.png)\n",
+    "![img](https://s7.postimg.cc/4y36s2b2z/env_pool.png)\n",
     "\n",
     "To make actor-critic training more stable, we shall play several games in parallel. This means ya'll have to initialize several parallel gym envs, send agent's actions there and .reset() each env if it becomes terminated. To minimize learner brain damage, we've taken care of them for ya - just make sure you read it before you use it.\n"
    ]
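
Reviewer note (not part of the patch): to reproduce or sanity-check this change locally, a minimal Python sketch along the lines below rewrites the s<N>.postimg.org hosts to s<N>.postimg.cc in the three notebooks this diff touches. The script name and the assumption that every affected URL sits on a numbered s<N> subdomain are mine (all hosts visible in the diff are s7 or s17); this is an illustrative sketch, not the tooling actually used for this commit.

# fix_postimg_links.py - illustrative sketch; see the assumptions above.
import re

# The three notebooks touched by this patch.
NOTEBOOKS = [
    "week1_intro/primer/recap_tensorflow.ipynb",
    "week4_approx/dqn_atari.ipynb",
    "week5_policy_based/practice_a3c.ipynb",
]

# Capture the numbered subdomain (s7, s17, ...) so only the domain changes
# and the rest of each URL is left untouched.
HOST = re.compile(r"(s\d+)\.postimg\.org")

for path in NOTEBOOKS:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    new_text, count = HOST.subn(r"\1.postimg.cc", text)
    if count:
        with open(path, "w", encoding="utf-8") as f:
            f.write(new_text)
    print(f"{path}: {count} occurrence(s) rewritten")

Working on the raw file text rather than round-tripping the notebook JSON keeps every untouched line byte-identical, which is consistent with the minimal diffstat above: 8 insertions and 8 deletions, i.e. 1 + 5 + 2 image links across the three notebooks.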