From e79139c90fffe1a5a1f85d771d150c8dd26998b1 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:22:46 +0800 Subject: [PATCH 001/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 149 ++++++++++++++++------------------ 1 file changed, 68 insertions(+), 81 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index d1be50e..d54f9f7 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -1,157 +1,144 @@ -# Recurrent Neural Networks +# 递归神经网络 -## Introduction +## 介绍 -Take a look at [this great article] -(http://colah.github.io/posts/2015-08-Understanding-LSTMs/) -for an introduction to recurrent neural networks and LSTMs in particular. +可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 看看递归神经网络特别是 LSTM 的介绍。 -## Language Modeling +## 语言模型 -In this tutorial we will show how to train a recurrent neural network on -a challenging task of language modeling. The goal of the problem is to fit a -probabilistic model which assigns probablities to sentences. It does so by -predicting next words in a text given a history of previous words. For this -purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular -benchmark for measuring quality of these models, whilst being small and -relatively fast to train. +此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是调整一个概率模型以将概率分配到句柄。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 -Language modeling is key to many interesting problems such as speech -recognition, machine translation, or image captioning. It is also fun, too -- -take a look [here] (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). +语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这真的很有意思-- +可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 -For the purpose of this tutorial, we will reproduce the results from -[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very -good results on the PTB dataset. +基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 -## Tutorial Files +## 教程文件 -This tutorial references the following files from `models/rnn/ptb`: +本教程使用的下面的文件引用自 `models/rnn/ptb`: -File | Purpose +文件 | 作用 --- | --- -`ptb_word_lm.py` | The code to train a language model on the PTB dataset. -`reader.py` | The code to read the dataset. +`ptb_word_lm.py` | 在 PTB 数据集上训练一个语言模型. +`reader.py` | 读取数据集. -## Download and Prepare the Data +## 下载及准备数据 -The data required for this tutorial is in the data/ directory of the -PTB dataset from Tomas Mikolov's webpage: -http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz +本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集 http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 -The dataset is already preprocessed and contains overall 10000 different words, -including the end-of-sentence marker and a special symbol (\) for rare -words. We convert all of them in the `reader.py` to unique integer identifiers -to make it easy for the neural network to process. +该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 -## The Model +## 模型 ### LSTM -The core of the model consists of an LSTM cell that processes one word at the +模型的核心由一个 LSTM 单元组成,The core of the model consists of an LSTM cell that processes one word at the time and computes probabilities of the possible continuations of the sentence. 
-The memory state of the network is initialized with a vector of zeros and gets +网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。The memory state of the network is initialized with a vector of zeros and gets updated after reading each word. Also, for computational reasons, we will process data in mini-batches of size `batch_size`. -The basic pseudocode looks as follows: +基础的伪代码就像下面这样:The basic pseudocode looks as follows: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态Initial state of the LSTM memory. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. output, state = lstm(current_batch_of_words, state) - # The LSTM output can be used to make next word predictions + # LSTM 输出可用于产生下一个词语的预测The LSTM output can be used to make next word predictions logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) ``` -### Truncated Backpropagation +### 截断反向传播 -In order to make the learning process tractable, it is a common practice to +为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。In order to make the learning process tractable, it is a common practice to truncate the gradients for backpropagation to a fixed number (`num_steps`) of unrolled steps. -This is easy to implement by feeding inputs of length `num_steps` at a time and +通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。This is easy to implement by feeding inputs of length `num_steps` at a time and doing backward pass after each iteration. -A simplifed version of the code for the graph creation for truncated +一个简化版的用于图形创建的截断反向传播代码:A simplifed version of the code for the graph creation for truncated backpropagation: ```python -# Placeholder for the inputs in a given iteration. +# 一次给定的迭代中的输入占位符Placeholder for the inputs in a given iteration. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态Initial state of the LSTM memory. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. output, state = lstm(words[:, i], state) - # The rest of the code. + # 其余的代码。。。The rest of the code. # ... final_state = state ``` -And this is how to implement an iteration over the whole dataset: +下面展现如何实现迭代整个数据集:And this is how to implement an iteration over the whole dataset: ```python -# A numpy array holding the state of LSTM after each batch of words. +# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态A numpy array holding the state of LSTM after each batch of words. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # Initialize the LSTM state from the previous iteration. + # 初始化来自上一次迭代的 LSTM 状态Initialize the LSTM state from the previous iteration. 
feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### Inputs +### 输入Inputs -The word IDs will be embedded into a dense representation (see the +在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。The word IDs will be embedded into a dense representation (see the [Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to -the LSTM. This allows the model to efficiently represent the knowledge about -particular words. It is also easy to write: +the LSTM. 这种方式允许模型高效地表现特定词语的知识。This allows the model to efficiently represent the knowledge about +particular words. 代码也很容易写:It is also easy to write: ```python -# embedding_matrix is a tensor of shape [vocabulary_size, embedding size] +# embedding_matrix 是形状的张量 is a tensor of shape [vocabulary_size, embedding size] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -The embedding matrix will be initialized randomly and the model will learn to +嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。The embedding matrix will be initialized randomly and the model will learn to differentiate the meaning of words just by looking at the data. ### Loss Fuction -We want to minimize the average negative log probability of the target words: - -$$ \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} $$ +我们想使目标词语的平均负对数概率最小 +```math +\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} +``` -It is not very difficult to implement but the function +实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。It is not very difficult to implement but the function `sequence_loss_by_example` is already available, so we can just use it here. -The typical measure reported in the papers is average per-word perplexity (often +文献报告中的典型方法是平均每个词语的困惑度,计算式为The typical measure reported in the papers is average per-word perplexity (often just called perplexity), which is equal to -$$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$ +```math +e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} +``` -and we will monitor its value throughout the training process. +同时我们会监视训练过程中的困惑度值。and we will monitor its value throughout the training process. -### Stacking multiple LSTMs +### 多 LSTM 堆叠Stacking multiple LSTMs -To give the model more expressive power, we can add multiple layers of LSTMs -to process the data. The output of the first layer will become the input of +要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。To give the model more expressive power, we can add multiple layers of LSTMs +to process the data. 第一层的输出作为第二层的输入,以此类推。The output of the first layer will become the input of the second and so on. -We have a class called `MultiRNNCell` that makes the implementation seamless: +类 `MultiRNNCell` 可以无缝的将其实现。We have a class called `MultiRNNCell` that makes the implementation seamless: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -159,50 +146,50 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. output, state = stacked_lstm(words[:, i], state) - # The rest of the code. + # 其余的代码。。。The rest of the code. # ... final_state = state ``` -## Compile and Run the Code +## 编译并运行代码 -First, the library needs to be built. 
To compile it on CPU: +首先需要构建库,在 CPU 上编译: ``` bazel build -c opt tensorflow/models/rnn/ptb:ptb_word_lm ``` -And if you have a fast GPU, run the following: +如果你有一个强大的 GPU,可以运行: ``` bazel build -c opt --config=cuda tensorflow/models/rnn/ptb:ptb_word_lm ``` -Now we can run the model: +运行模型: ``` bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ --data_path=/tmp/simple-examples/data/ --alsologtostderr --model small ``` -There are 3 supported model configurations in the tutorial code: "small", -"medium" and "large". The difference between them is in size of the LSTMs and +教程代码中有 3 个支持的模型配置:"small", +"medium" 和 "large"。There are 3 supported model configurations in the tutorial code: "small", +"medium" and "large". 它们的不同有 LSTM 的大小,以及用于训练的超参数集。The difference between them is in size of the LSTMs and the set of hyperparameters used for training. -The larger the model, the better results it should get. The `small` model should +模型越大,得到的结果应该更好。The larger the model, the better results it should get.在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。The `small` model should be able to reach perplexity below 120 on the test set and the `large` one below 80, though it might take several hours to train. -## What Next? +## 接下来是什么? -There are several tricks that we haven't mentioned that make the model better, -including: +还有几个优化模型的技巧没有提到,包括: -* decreasing learning rate schedule, +* 降低学习曲线decreasing learning rate schedule, * dropout between the LSTM layers. -Study the code and modify it to improve the model even further. +学习和更改代码以进一步改善模型。 From f645d66af3ecca351834013f08b5306e9dfb96d3 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:37:14 +0800 Subject: [PATCH 002/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 78 +++++++++++++---------------------- 1 file changed, 29 insertions(+), 49 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index d54f9f7..fa2f814 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -32,25 +32,21 @@ ### LSTM -模型的核心由一个 LSTM 单元组成,The core of the model consists of an LSTM cell that processes one word at the -time and computes probabilities of the possible continuations of the sentence. -网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。The memory state of the network is initialized with a vector of zeros and gets -updated after reading each word. Also, for computational reasons, we will -process data in mini-batches of size `batch_size`. +模型的核心由一个 LSTM 小单元组成,其可以在某时刻处理一个词语,以及计算语句可能的延续性的概率。网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。而且,由于计算上的原因,我们将以 `batch_size` 为最小批量来处理数据。 -基础的伪代码就像下面这样:The basic pseudocode looks as follows: +基础的伪代码就像下面这样: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. 
output, state = lstm(current_batch_of_words, state) - # LSTM 输出可用于产生下一个词语的预测The LSTM output can be used to make next word predictions + # LSTM 输出可用于产生下一个词语的预测 logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) @@ -58,60 +54,52 @@ for current_batch_of_words in words_in_dataset: ### 截断反向传播 -为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。In order to make the learning process tractable, it is a common practice to -truncate the gradients for backpropagation to a fixed number (`num_steps`) -of unrolled steps. -通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。This is easy to implement by feeding inputs of length `num_steps` at a time and -doing backward pass after each iteration. +为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。 +通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。 -一个简化版的用于图形创建的截断反向传播代码:A simplifed version of the code for the graph creation for truncated -backpropagation: +一个简化版的用于图形创建的截断反向传播代码: ```python -# 一次给定的迭代中的输入占位符Placeholder for the inputs in a given iteration. +# 一次给定的迭代中的输入占位符. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = lstm(words[:, i], state) - # 其余的代码。。。The rest of the code. + # 其余的代码. # ... final_state = state ``` -下面展现如何实现迭代整个数据集:And this is how to implement an iteration over the whole dataset: +下面展现如何实现迭代整个数据集: ```python -# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态A numpy array holding the state of LSTM after each batch of words. +# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # 初始化来自上一次迭代的 LSTM 状态Initialize the LSTM state from the previous iteration. + # 初始化来自上一次迭代的 LSTM 状态. feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### 输入Inputs +### 输入 -在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。The word IDs will be embedded into a dense representation (see the -[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to -the LSTM. 这种方式允许模型高效地表现特定词语的知识。This allows the model to efficiently represent the knowledge about -particular words. 代码也很容易写:It is also easy to write: +在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: ```python -# embedding_matrix 是形状的张量 is a tensor of shape [vocabulary_size, embedding size] +# embedding_matrix 为形状的张量 [vocabulary_size, embedding size] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。The embedding matrix will be initialized randomly and the model will learn to -differentiate the meaning of words just by looking at the data. +嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 ### Loss Fuction @@ -120,25 +108,21 @@ differentiate the meaning of words just by looking at the data. \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} ``` -实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。It is not very difficult to implement but the function -`sequence_loss_by_example` is already available, so we can just use it here. 
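A rough sketch of how `sequence_loss_by_example` might be wired in (illustrative only, not the exact code in `ptb_word_lm.py`; the `logits`, `target_words`, `batch_size` and `num_steps` names are taken from the pseudocode above, and the module path and exact signature can differ between TensorFlow versions):

```python
import tensorflow as tf
# Illustrative import; in some TensorFlow versions this lives under tf.nn.seq2seq instead.
from tensorflow.models.rnn import seq2seq

# logits has shape [batch_size * num_steps, vocab_size]; target_words has shape
# [batch_size, num_steps]. The exact signature may differ between versions.
loss = seq2seq.sequence_loss_by_example(
    [logits],                             # list with one 2-D logits tensor
    [tf.reshape(target_words, [-1])],     # matching list of flattened target word ids
    [tf.ones([batch_size * num_steps])])  # per-target weights, uniform here
cost = tf.reduce_sum(loss) / batch_size   # average cost per sequence in the batch
```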
+实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 -文献报告中的典型方法是平均每个词语的困惑度,计算式为The typical measure reported in the papers is average per-word perplexity (often -just called perplexity), which is equal to +文献报告中的典型方法是平均化每个词语的困惑度,计算式为 ```math e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} ``` -同时我们会监视训练过程中的困惑度值。and we will monitor its value throughout the training process. +同时我们会监视训练过程中的困惑度值。 -### 多 LSTM 堆叠Stacking multiple LSTMs +### 多 LSTM 堆叠 -要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。To give the model more expressive power, we can add multiple layers of LSTMs -to process the data. 第一层的输出作为第二层的输入,以此类推。The output of the first layer will become the input of -the second and so on. +要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。第一层的输出作为第二层的输入,以此类推。 -类 `MultiRNNCell` 可以无缝的将其实现。We have a class called `MultiRNNCell` that makes the implementation seamless: +类 `MultiRNNCell` 可以无缝的将其实现: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -146,10 +130,10 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = stacked_lstm(words[:, i], state) - # 其余的代码。。。The rest of the code. + # 其余的代码. # ... final_state = state @@ -177,13 +161,9 @@ bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ ``` 教程代码中有 3 个支持的模型配置:"small", -"medium" 和 "large"。There are 3 supported model configurations in the tutorial code: "small", -"medium" and "large". 它们的不同有 LSTM 的大小,以及用于训练的超参数集。The difference between them is in size of the LSTMs and -the set of hyperparameters used for training. +"medium" 和 "large"。它们的不同仅有 LSTM 的大小,以及用于训练的超参数集。 -模型越大,得到的结果应该更好。The larger the model, the better results it should get.在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。The `small` model should -be able to reach perplexity below 120 on the test set and the `large` one below -80, though it might take several hours to train. +模型越大,得到的结果应该更好。在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。 ## 接下来是什么? 
From 08d4441c1e314d2f6b5f5ade8ecc2aa9ee632aee Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:39:31 +0800 Subject: [PATCH 003/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index fa2f814..f7be7c9 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -6,10 +6,9 @@ ## 语言模型 -此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是调整一个概率模型以将概率分配到句柄。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 +此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是适应一个概率模型以将概率分配到语句。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 -语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这真的很有意思-- -可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 +语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这相当很有意思--可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 From d1b1581464cf6fcc5f07a8e49909e0303811deb6 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:40:18 +0800 Subject: [PATCH 004/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index f7be7c9..8123c2e 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -2,7 +2,7 @@ ## 介绍 -可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 看看递归神经网络特别是 LSTM 的介绍。 +可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络特别是 LSTM 的介绍。 ## 语言模型 From e379cc48c377738179307e335a2625dbde8fd09b Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:46:06 +0800 Subject: [PATCH 005/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 8123c2e..e82bb19 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -23,7 +23,7 @@ ## 下载及准备数据 -本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集 http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 +本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 From f499607095f0280b1cfafdec754fb1c03a1f8ba5 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:49:57 +0800 Subject: [PATCH 006/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index e82bb19..00ad60f 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -94,13 +94,13 @@ for current_batch_of_words in words_in_dataset: 在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: ```python -# embedding_matrix 为形状的张量 [vocabulary_size, embedding size] +# embedding_matrix 为形状的张量 [vocabulary_size, 嵌入的大小] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` 嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 -### Loss Fuction +### 损失函数 我们想使目标词语的平均负对数概率最小 ```math @@ -168,7 +168,7 @@ 
bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ 还有几个优化模型的技巧没有提到,包括: -* 降低学习曲线decreasing learning rate schedule, +* 降低学习曲线, * dropout between the LSTM layers. 学习和更改代码以进一步改善模型。 From e18c29a9ea51374411f688d637105596a90df8c6 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 21:50:39 +0800 Subject: [PATCH 007/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 00ad60f..92107b1 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -2,7 +2,7 @@ ## 介绍 -可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络特别是 LSTM 的介绍。 +可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络以及 LSTM 的介绍。 ## 语言模型 From ef9a8ef789fa291be1d785cd8b5c45091af31b64 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 22:00:57 +0800 Subject: [PATCH 008/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 92107b1..390ab94 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -168,7 +168,7 @@ bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ 还有几个优化模型的技巧没有提到,包括: -* 降低学习曲线, -* dropout between the LSTM layers. +* 降低学习率, +* 多 LSTM 层间 dropout. 学习和更改代码以进一步改善模型。 From de6952c84a361555ea7a2fe467ae47aca18c8f03 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 22:17:56 +0800 Subject: [PATCH 009/139] Update index.md --- SOURCE/tutorials/recurrent/index.md | 164 +++++++++++----------------- 1 file changed, 66 insertions(+), 98 deletions(-) diff --git a/SOURCE/tutorials/recurrent/index.md b/SOURCE/tutorials/recurrent/index.md index d1be50e..131e4c6 100755 --- a/SOURCE/tutorials/recurrent/index.md +++ b/SOURCE/tutorials/recurrent/index.md @@ -1,157 +1,127 @@ -# Recurrent Neural Networks +# 递归神经网络 -## Introduction +## 介绍 -Take a look at [this great article] -(http://colah.github.io/posts/2015-08-Understanding-LSTMs/) -for an introduction to recurrent neural networks and LSTMs in particular. +可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络以及 LSTM 的介绍。 -## Language Modeling +## 语言模型 -In this tutorial we will show how to train a recurrent neural network on -a challenging task of language modeling. The goal of the problem is to fit a -probabilistic model which assigns probablities to sentences. It does so by -predicting next words in a text given a history of previous words. For this -purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular -benchmark for measuring quality of these models, whilst being small and -relatively fast to train. +此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是适应一个概率模型以将概率分配到语句。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 -Language modeling is key to many interesting problems such as speech -recognition, machine translation, or image captioning. It is also fun, too -- -take a look [here] (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). +语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这相当很有意思--可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 -For the purpose of this tutorial, we will reproduce the results from -[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very -good results on the PTB dataset. 
+基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 -## Tutorial Files +## 教程文件 -This tutorial references the following files from `models/rnn/ptb`: +本教程使用的下面的文件引用自 `models/rnn/ptb`: -File | Purpose +文件 | 作用 --- | --- -`ptb_word_lm.py` | The code to train a language model on the PTB dataset. -`reader.py` | The code to read the dataset. +`ptb_word_lm.py` | 在 PTB 数据集上训练一个语言模型. +`reader.py` | 读取数据集. -## Download and Prepare the Data +## 下载及准备数据 -The data required for this tutorial is in the data/ directory of the -PTB dataset from Tomas Mikolov's webpage: -http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz +本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 -The dataset is already preprocessed and contains overall 10000 different words, -including the end-of-sentence marker and a special symbol (\) for rare -words. We convert all of them in the `reader.py` to unique integer identifiers -to make it easy for the neural network to process. +该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 -## The Model +## 模型 ### LSTM -The core of the model consists of an LSTM cell that processes one word at the -time and computes probabilities of the possible continuations of the sentence. -The memory state of the network is initialized with a vector of zeros and gets -updated after reading each word. Also, for computational reasons, we will -process data in mini-batches of size `batch_size`. +模型的核心由一个 LSTM 小单元组成,其可以在某时刻处理一个词语,以及计算语句可能的延续性的概率。网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。而且,由于计算上的原因,我们将以 `batch_size` 为最小批量来处理数据。 -The basic pseudocode looks as follows: +基础的伪代码就像下面这样: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = lstm(current_batch_of_words, state) - # The LSTM output can be used to make next word predictions + # LSTM 输出可用于产生下一个词语的预测 logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) ``` -### Truncated Backpropagation +### 截断反向传播 -In order to make the learning process tractable, it is a common practice to -truncate the gradients for backpropagation to a fixed number (`num_steps`) -of unrolled steps. -This is easy to implement by feeding inputs of length `num_steps` at a time and -doing backward pass after each iteration. +为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。 +通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。 -A simplifed version of the code for the graph creation for truncated -backpropagation: +一个简化版的用于图形创建的截断反向传播代码: ```python -# Placeholder for the inputs in a given iteration. +# 一次给定的迭代中的输入占位符. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = lstm(words[:, i], state) - # The rest of the code. + # 其余的代码. # ... 
final_state = state ``` -And this is how to implement an iteration over the whole dataset: +下面展现如何实现迭代整个数据集: ```python -# A numpy array holding the state of LSTM after each batch of words. +# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # Initialize the LSTM state from the previous iteration. + # 初始化来自上一次迭代的 LSTM 状态. feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### Inputs +### 输入 -The word IDs will be embedded into a dense representation (see the -[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to -the LSTM. This allows the model to efficiently represent the knowledge about -particular words. It is also easy to write: +在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: ```python -# embedding_matrix is a tensor of shape [vocabulary_size, embedding size] +# embedding_matrix 为形状的张量 [vocabulary_size, 嵌入的大小] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -The embedding matrix will be initialized randomly and the model will learn to -differentiate the meaning of words just by looking at the data. +嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 -### Loss Fuction +### 损失函数 -We want to minimize the average negative log probability of the target words: - -$$ \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} $$ +我们想使目标词语的平均负对数概率最小 +```math +\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} +``` -It is not very difficult to implement but the function -`sequence_loss_by_example` is already available, so we can just use it here. +实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 -The typical measure reported in the papers is average per-word perplexity (often -just called perplexity), which is equal to +文献报告中的典型方法是平均化每个词语的困惑度,计算式为 -$$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$ +```math +e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} +``` -and we will monitor its value throughout the training process. +同时我们会监视训练过程中的困惑度值。 -### Stacking multiple LSTMs +### 多 LSTM 堆叠 -To give the model more expressive power, we can add multiple layers of LSTMs -to process the data. The output of the first layer will become the input of -the second and so on. +要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。第一层的输出作为第二层的输入,以此类推。 -We have a class called `MultiRNNCell` that makes the implementation seamless: +类 `MultiRNNCell` 可以无缝的将其实现: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -159,50 +129,48 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = stacked_lstm(words[:, i], state) - # The rest of the code. + # 其余的代码. # ... final_state = state ``` -## Compile and Run the Code +## 编译并运行代码 -First, the library needs to be built. 
To compile it on CPU: +首先需要构建库,在 CPU 上编译: ``` bazel build -c opt tensorflow/models/rnn/ptb:ptb_word_lm ``` -And if you have a fast GPU, run the following: +如果你有一个强大的 GPU,可以运行: ``` bazel build -c opt --config=cuda tensorflow/models/rnn/ptb:ptb_word_lm ``` -Now we can run the model: +运行模型: ``` bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ --data_path=/tmp/simple-examples/data/ --alsologtostderr --model small ``` -There are 3 supported model configurations in the tutorial code: "small", -"medium" and "large". The difference between them is in size of the LSTMs and -the set of hyperparameters used for training. +教程代码中有 3 个支持的模型配置:"small", +"medium" 和 "large"。它们的不同仅有 LSTM 的大小,以及用于训练的超参数集。 + +模型越大,得到的结果应该更好。在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。 -The larger the model, the better results it should get. The `small` model should -be able to reach perplexity below 120 on the test set and the `large` one below -80, though it might take several hours to train. +## 接下来是什么? -## What Next? +还有几个优化模型的技巧没有提到,包括: -There are several tricks that we haven't mentioned that make the model better, -including: +* 降低学习率, +* 多 LSTM 层间 dropout. -* decreasing learning rate schedule, -* dropout between the LSTM layers. +学习和更改代码以进一步改善模型。 -Study the code and modify it to improve the model even further. +原文:[Recurrent Neural Networks](https://github.com/jikexueyuanwiki/tensorflow-zh/blob/master/SOURCE/tutorials/recurrent.md) 翻译:[Warln](https://github.com/Warln) From 6d534cfea1977769c01b489f21bc0778ba32efc5 Mon Sep 17 00:00:00 2001 From: Torthu Date: Thu, 12 Nov 2015 22:18:40 +0800 Subject: [PATCH 010/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 162 ++++++++++++++++++++-------------- 1 file changed, 98 insertions(+), 64 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 390ab94..d1be50e 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -1,127 +1,157 @@ -# 递归神经网络 +# Recurrent Neural Networks -## 介绍 +## Introduction -可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络以及 LSTM 的介绍。 +Take a look at [this great article] +(http://colah.github.io/posts/2015-08-Understanding-LSTMs/) +for an introduction to recurrent neural networks and LSTMs in particular. -## 语言模型 +## Language Modeling -此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是适应一个概率模型以将概率分配到语句。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 +In this tutorial we will show how to train a recurrent neural network on +a challenging task of language modeling. The goal of the problem is to fit a +probabilistic model which assigns probablities to sentences. It does so by +predicting next words in a text given a history of previous words. For this +purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular +benchmark for measuring quality of these models, whilst being small and +relatively fast to train. -语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这相当很有意思--可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 +Language modeling is key to many interesting problems such as speech +recognition, machine translation, or image captioning. It is also fun, too -- +take a look [here] (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). 
-基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 +For the purpose of this tutorial, we will reproduce the results from +[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very +good results on the PTB dataset. -## 教程文件 +## Tutorial Files -本教程使用的下面的文件引用自 `models/rnn/ptb`: +This tutorial references the following files from `models/rnn/ptb`: -文件 | 作用 +File | Purpose --- | --- -`ptb_word_lm.py` | 在 PTB 数据集上训练一个语言模型. -`reader.py` | 读取数据集. +`ptb_word_lm.py` | The code to train a language model on the PTB dataset. +`reader.py` | The code to read the dataset. -## 下载及准备数据 +## Download and Prepare the Data -本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 +The data required for this tutorial is in the data/ directory of the +PTB dataset from Tomas Mikolov's webpage: +http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz -该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 +The dataset is already preprocessed and contains overall 10000 different words, +including the end-of-sentence marker and a special symbol (\) for rare +words. We convert all of them in the `reader.py` to unique integer identifiers +to make it easy for the neural network to process. -## 模型 +## The Model ### LSTM -模型的核心由一个 LSTM 小单元组成,其可以在某时刻处理一个词语,以及计算语句可能的延续性的概率。网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。而且,由于计算上的原因,我们将以 `batch_size` 为最小批量来处理数据。 +The core of the model consists of an LSTM cell that processes one word at the +time and computes probabilities of the possible continuations of the sentence. +The memory state of the network is initialized with a vector of zeros and gets +updated after reading each word. Also, for computational reasons, we will +process data in mini-batches of size `batch_size`. -基础的伪代码就像下面这样: +The basic pseudocode looks as follows: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态. +# Initial state of the LSTM memory. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = lstm(current_batch_of_words, state) - # LSTM 输出可用于产生下一个词语的预测 + # The LSTM output can be used to make next word predictions logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) ``` -### 截断反向传播 +### Truncated Backpropagation -为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。 -通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。 +In order to make the learning process tractable, it is a common practice to +truncate the gradients for backpropagation to a fixed number (`num_steps`) +of unrolled steps. +This is easy to implement by feeding inputs of length `num_steps` at a time and +doing backward pass after each iteration. -一个简化版的用于图形创建的截断反向传播代码: +A simplifed version of the code for the graph creation for truncated +backpropagation: ```python -# 一次给定的迭代中的输入占位符. +# Placeholder for the inputs in a given iteration. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态. +# Initial state of the LSTM memory. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = lstm(words[:, i], state) - # 其余的代码. 
+ # The rest of the code. # ... final_state = state ``` -下面展现如何实现迭代整个数据集: +And this is how to implement an iteration over the whole dataset: ```python -# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态. +# A numpy array holding the state of LSTM after each batch of words. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # 初始化来自上一次迭代的 LSTM 状态. + # Initialize the LSTM state from the previous iteration. feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### 输入 +### Inputs -在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: +The word IDs will be embedded into a dense representation (see the +[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to +the LSTM. This allows the model to efficiently represent the knowledge about +particular words. It is also easy to write: ```python -# embedding_matrix 为形状的张量 [vocabulary_size, 嵌入的大小] +# embedding_matrix is a tensor of shape [vocabulary_size, embedding size] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 +The embedding matrix will be initialized randomly and the model will learn to +differentiate the meaning of words just by looking at the data. -### 损失函数 +### Loss Fuction -我们想使目标词语的平均负对数概率最小 -```math -\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} -``` +We want to minimize the average negative log probability of the target words: -实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 +$$ \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} $$ -文献报告中的典型方法是平均化每个词语的困惑度,计算式为 +It is not very difficult to implement but the function +`sequence_loss_by_example` is already available, so we can just use it here. -```math -e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} -``` +The typical measure reported in the papers is average per-word perplexity (often +just called perplexity), which is equal to + +$$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$ -同时我们会监视训练过程中的困惑度值。 +and we will monitor its value throughout the training process. -### 多 LSTM 堆叠 +### Stacking multiple LSTMs -要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。第一层的输出作为第二层的输入,以此类推。 +To give the model more expressive power, we can add multiple layers of LSTMs +to process the data. The output of the first layer will become the input of +the second and so on. -类 `MultiRNNCell` 可以无缝的将其实现: +We have a class called `MultiRNNCell` that makes the implementation seamless: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -129,46 +159,50 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = stacked_lstm(words[:, i], state) - # 其余的代码. + # The rest of the code. # ... final_state = state ``` -## 编译并运行代码 +## Compile and Run the Code -首先需要构建库,在 CPU 上编译: +First, the library needs to be built. 
To compile it on CPU: ``` bazel build -c opt tensorflow/models/rnn/ptb:ptb_word_lm ``` -如果你有一个强大的 GPU,可以运行: +And if you have a fast GPU, run the following: ``` bazel build -c opt --config=cuda tensorflow/models/rnn/ptb:ptb_word_lm ``` -运行模型: +Now we can run the model: ``` bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ --data_path=/tmp/simple-examples/data/ --alsologtostderr --model small ``` -教程代码中有 3 个支持的模型配置:"small", -"medium" 和 "large"。它们的不同仅有 LSTM 的大小,以及用于训练的超参数集。 +There are 3 supported model configurations in the tutorial code: "small", +"medium" and "large". The difference between them is in size of the LSTMs and +the set of hyperparameters used for training. -模型越大,得到的结果应该更好。在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。 +The larger the model, the better results it should get. The `small` model should +be able to reach perplexity below 120 on the test set and the `large` one below +80, though it might take several hours to train. -## 接下来是什么? +## What Next? -还有几个优化模型的技巧没有提到,包括: +There are several tricks that we haven't mentioned that make the model better, +including: -* 降低学习率, -* 多 LSTM 层间 dropout. +* decreasing learning rate schedule, +* dropout between the LSTM layers. -学习和更改代码以进一步改善模型。 +Study the code and modify it to improve the model even further. From e011a81c3264989ce9a9381b90e005c592c6b11e Mon Sep 17 00:00:00 2001 From: Torthu Date: Fri, 13 Nov 2015 12:02:28 +0800 Subject: [PATCH 011/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 166 ++++++++++++++-------------------- 1 file changed, 68 insertions(+), 98 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index d1be50e..2806cfe 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -1,157 +1,127 @@ -# Recurrent Neural Networks +# 递归神经网络 -## Introduction +## 介绍 -Take a look at [this great article] -(http://colah.github.io/posts/2015-08-Understanding-LSTMs/) -for an introduction to recurrent neural networks and LSTMs in particular. +可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络以及 LSTM 的介绍。 -## Language Modeling +## 语言模型 -In this tutorial we will show how to train a recurrent neural network on -a challenging task of language modeling. The goal of the problem is to fit a -probabilistic model which assigns probablities to sentences. It does so by -predicting next words in a text given a history of previous words. For this -purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular -benchmark for measuring quality of these models, whilst being small and -relatively fast to train. +此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是适应一个概率模型以将概率分配到语句。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 -Language modeling is key to many interesting problems such as speech -recognition, machine translation, or image captioning. It is also fun, too -- -take a look [here] (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). +语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这相当很有意思--可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 -For the purpose of this tutorial, we will reproduce the results from -[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very -good results on the PTB dataset. 
+基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 -## Tutorial Files +## 教程文件 -This tutorial references the following files from `models/rnn/ptb`: +本教程使用的下面的文件引用自 `models/rnn/ptb`: -File | Purpose +文件 | 作用 --- | --- -`ptb_word_lm.py` | The code to train a language model on the PTB dataset. -`reader.py` | The code to read the dataset. +`ptb_word_lm.py` | 在 PTB 数据集上训练一个语言模型. +`reader.py` | 读取数据集. -## Download and Prepare the Data +## 下载及准备数据 -The data required for this tutorial is in the data/ directory of the -PTB dataset from Tomas Mikolov's webpage: -http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz +本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 -The dataset is already preprocessed and contains overall 10000 different words, -including the end-of-sentence marker and a special symbol (\) for rare -words. We convert all of them in the `reader.py` to unique integer identifiers -to make it easy for the neural network to process. +该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 -## The Model +## 模型 ### LSTM -The core of the model consists of an LSTM cell that processes one word at the -time and computes probabilities of the possible continuations of the sentence. -The memory state of the network is initialized with a vector of zeros and gets -updated after reading each word. Also, for computational reasons, we will -process data in mini-batches of size `batch_size`. +模型的核心由一个 LSTM 小单元组成,其可以在某时刻处理一个词语,以及计算语句可能的延续性的概率。网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。而且,由于计算上的原因,我们将以 `batch_size` 为最小批量来处理数据。 -The basic pseudocode looks as follows: +基础的伪代码就像下面这样: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = lstm(current_batch_of_words, state) - # The LSTM output can be used to make next word predictions + # LSTM 输出可用于产生下一个词语的预测 logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) ``` -### Truncated Backpropagation +### 截断反向传播 -In order to make the learning process tractable, it is a common practice to -truncate the gradients for backpropagation to a fixed number (`num_steps`) -of unrolled steps. -This is easy to implement by feeding inputs of length `num_steps` at a time and -doing backward pass after each iteration. +为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。 +通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。 -A simplifed version of the code for the graph creation for truncated -backpropagation: +一个简化版的用于图形创建的截断反向传播代码: ```python -# Placeholder for the inputs in a given iteration. +# 一次给定的迭代中的输入占位符. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# Initial state of the LSTM memory. +# 初始化 LSTM 存储状态. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = lstm(words[:, i], state) - # The rest of the code. + # 其余的代码. # ... 
final_state = state ``` -And this is how to implement an iteration over the whole dataset: +下面展现如何实现迭代整个数据集: ```python -# A numpy array holding the state of LSTM after each batch of words. +# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # Initialize the LSTM state from the previous iteration. + # 初始化来自上一次迭代的 LSTM 状态. feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### Inputs +### 输入 -The word IDs will be embedded into a dense representation (see the -[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to -the LSTM. This allows the model to efficiently represent the knowledge about -particular words. It is also easy to write: +在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: ```python -# embedding_matrix is a tensor of shape [vocabulary_size, embedding size] +# embedding_matrix 为形状的张量 [vocabulary_size, 嵌入的大小] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -The embedding matrix will be initialized randomly and the model will learn to -differentiate the meaning of words just by looking at the data. +嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 -### Loss Fuction +### 损失函数 -We want to minimize the average negative log probability of the target words: - -$$ \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} $$ +我们想使目标词语的平均负对数概率最小 +```math +\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} +``` -It is not very difficult to implement but the function -`sequence_loss_by_example` is already available, so we can just use it here. +实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 -The typical measure reported in the papers is average per-word perplexity (often -just called perplexity), which is equal to +文献报告中的典型方法是平均化每个词语的困惑度,计算式为 -$$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$ +```math +e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} +``` -and we will monitor its value throughout the training process. +同时我们会监视训练过程中的困惑度值。 -### Stacking multiple LSTMs +### 多 LSTM 堆叠 -To give the model more expressive power, we can add multiple layers of LSTMs -to process the data. The output of the first layer will become the input of -the second and so on. +要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。第一层的输出作为第二层的输入,以此类推。 -We have a class called `MultiRNNCell` that makes the implementation seamless: +类 `MultiRNNCell` 可以无缝的将其实现: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -159,50 +129,50 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # The value of state is updated after processing each batch of words. + # 每次处理一批词语后更新状态值. output, state = stacked_lstm(words[:, i], state) - # The rest of the code. + # 其余的代码. # ... final_state = state ``` -## Compile and Run the Code +## 编译并运行代码 -First, the library needs to be built. 
To compile it on CPU: +首先需要构建库,在 CPU 上编译: ``` bazel build -c opt tensorflow/models/rnn/ptb:ptb_word_lm ``` -And if you have a fast GPU, run the following: +如果你有一个强大的 GPU,可以运行: ``` bazel build -c opt --config=cuda tensorflow/models/rnn/ptb:ptb_word_lm ``` -Now we can run the model: +运行模型: ``` bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ --data_path=/tmp/simple-examples/data/ --alsologtostderr --model small ``` -There are 3 supported model configurations in the tutorial code: "small", -"medium" and "large". The difference between them is in size of the LSTMs and -the set of hyperparameters used for training. +教程代码中有 3 个支持的模型配置:"small", +"medium" 和 "large"。它们的不同仅有 LSTM 的大小,以及用于训练的超参数集。 + +模型越大,得到的结果应该更好。在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。 + +## 接下来是什么? -The larger the model, the better results it should get. The `small` model should -be able to reach perplexity below 120 on the test set and the `large` one below -80, though it might take several hours to train. +还有几个优化模型的技巧没有提到,包括: -## What Next? +* 降低学习率, +* 多 LSTM 层间 dropout. -There are several tricks that we haven't mentioned that make the model better, -including: +学习和更改代码以进一步改善模型。 -* decreasing learning rate schedule, -* dropout between the LSTM layers. +原文:[Recurrent Neural Networks](http://tensorflow.org/tutorials/recurrent/index.md) -Study the code and modify it to improve the model even further. +翻译:[Warln](https://github.com/Warln) From b54e72a976148d6b83cedc1ea3d8bfdc37c20d28 Mon Sep 17 00:00:00 2001 From: Torthu Date: Fri, 13 Nov 2015 12:03:47 +0800 Subject: [PATCH 012/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 2806cfe..45fb55d 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -168,11 +168,11 @@ bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ 还有几个优化模型的技巧没有提到,包括: -* 降低学习率, +* 减少学习率, * 多 LSTM 层间 dropout. -学习和更改代码以进一步改善模型。 +继续学习和更改代码以进一步改善模型吧。 原文:[Recurrent Neural Networks](http://tensorflow.org/tutorials/recurrent/index.md) -翻译:[Warln](https://github.com/Warln) +翻译:[Warln](https://github.com/Warln) From 2e87c1f8f71fd84441b451c3822f961a85aecc1a Mon Sep 17 00:00:00 2001 From: Torthu Date: Fri, 13 Nov 2015 12:04:54 +0800 Subject: [PATCH 013/139] Update index.md --- SOURCE/tutorials/recurrent/index.md | 164 +++++++++++++++++----------- 1 file changed, 98 insertions(+), 66 deletions(-) diff --git a/SOURCE/tutorials/recurrent/index.md b/SOURCE/tutorials/recurrent/index.md index 131e4c6..d1be50e 100755 --- a/SOURCE/tutorials/recurrent/index.md +++ b/SOURCE/tutorials/recurrent/index.md @@ -1,127 +1,157 @@ -# 递归神经网络 +# Recurrent Neural Networks -## 介绍 +## Introduction -可以在 [this great article](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) 查看递归神经网络以及 LSTM 的介绍。 +Take a look at [this great article] +(http://colah.github.io/posts/2015-08-Understanding-LSTMs/) +for an introduction to recurrent neural networks and LSTMs in particular. -## 语言模型 +## Language Modeling -此教程将展示如何在高任务难度的语言模型中训练递归神经网络。该问题的目标是适应一个概率模型以将概率分配到语句。这实际上是通过预测给出了之前的词语历史记录的文本的接下来的词语来做到的。为此,我们将使用 PTB(Penn Tree Bank) 数据集,这是一种流行的用于测量这些模型的质量的基准,同时还具有小型化和相对的训练快速的的特点。 +In this tutorial we will show how to train a recurrent neural network on +a challenging task of language modeling. The goal of the problem is to fit a +probabilistic model which assigns probablities to sentences. 
It does so by +predicting next words in a text given a history of previous words. For this +purpose we will use the Penn Tree Bank (PTB) dataset, which is a popular +benchmark for measuring quality of these models, whilst being small and +relatively fast to train. -语言模型是许多诸如语音识别,机器翻译或图像字幕等有趣的难题的关键所在。没错,这相当很有意思--可以参看 [here](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)。 +Language modeling is key to many interesting problems such as speech +recognition, machine translation, or image captioning. It is also fun, too -- +take a look [here] (http://karpathy.github.io/2015/05/21/rnn-effectiveness/). -基于此目标,本教程将重现 [Zaremba et al., 2014](http://arxiv.org/abs/1409.2329) 的成果,该成果是应用 PTB 数据集得到的很棒的结果。 +For the purpose of this tutorial, we will reproduce the results from +[Zaremba et al., 2014] (http://arxiv.org/abs/1409.2329), which achieves very +good results on the PTB dataset. -## 教程文件 +## Tutorial Files -本教程使用的下面的文件引用自 `models/rnn/ptb`: +This tutorial references the following files from `models/rnn/ptb`: -文件 | 作用 +File | Purpose --- | --- -`ptb_word_lm.py` | 在 PTB 数据集上训练一个语言模型. -`reader.py` | 读取数据集. +`ptb_word_lm.py` | The code to train a language model on the PTB dataset. +`reader.py` | The code to read the dataset. -## 下载及准备数据 +## Download and Prepare the Data -本教程需要的数据在 data/ 路径下,其是 Tomas Mikolov 的网站上的的 PTB 数据集http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz。 +The data required for this tutorial is in the data/ directory of the +PTB dataset from Tomas Mikolov's webpage: +http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz -该数据集已经预先处理过并且包含了全部的 10000 个不同的词语,其中包括语句结束标记符以及针对稀有词语的特殊符号 (\) 。我们把所有 `reader.py` 中的词语转换成唯一整型标识符,使其易于神经网络处理。 +The dataset is already preprocessed and contains overall 10000 different words, +including the end-of-sentence marker and a special symbol (\) for rare +words. We convert all of them in the `reader.py` to unique integer identifiers +to make it easy for the neural network to process. -## 模型 +## The Model ### LSTM -模型的核心由一个 LSTM 小单元组成,其可以在某时刻处理一个词语,以及计算语句可能的延续性的概率。网络的存储状态由一个零矢量初始化并在读取每一个词语后更新。而且,由于计算上的原因,我们将以 `batch_size` 为最小批量来处理数据。 +The core of the model consists of an LSTM cell that processes one word at the +time and computes probabilities of the possible continuations of the sentence. +The memory state of the network is initialized with a vector of zeros and gets +updated after reading each word. Also, for computational reasons, we will +process data in mini-batches of size `batch_size`. -基础的伪代码就像下面这样: +The basic pseudocode looks as follows: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态. +# Initial state of the LSTM memory. state = tf.zeros([batch_size, lstm.state_size]) loss = 0.0 for current_batch_of_words in words_in_dataset: - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = lstm(current_batch_of_words, state) - # LSTM 输出可用于产生下一个词语的预测 + # The LSTM output can be used to make next word predictions logits = tf.matmul(output, softmax_w) + softmax_b probabilities = tf.nn.softmax(logits) loss += loss_function(probabilities, target_words) ``` -### 截断反向传播 +### Truncated Backpropagation -为使学习过程易于处理,通常的做法是将反向传播的梯度截断成展开步骤的一个固定数字(`num_steps`)。 -通过一次提供长度为 `num_steps` 的输入和每次迭代之后进行向后传递,这会很容易实现。 +In order to make the learning process tractable, it is a common practice to +truncate the gradients for backpropagation to a fixed number (`num_steps`) +of unrolled steps. 
+This is easy to implement by feeding inputs of length `num_steps` at a time and +doing backward pass after each iteration. -一个简化版的用于图形创建的截断反向传播代码: +A simplifed version of the code for the graph creation for truncated +backpropagation: ```python -# 一次给定的迭代中的输入占位符. +# Placeholder for the inputs in a given iteration. words = tf.placeholder(tf.int32, [batch_size, num_steps]) lstm = rnn_cell.BasicLSTMCell(lstm_size) -# 初始化 LSTM 存储状态. +# Initial state of the LSTM memory. initial_state = state = tf.zeros([batch_size, lstm.state_size]) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = lstm(words[:, i], state) - # 其余的代码. + # The rest of the code. # ... final_state = state ``` -下面展现如何实现迭代整个数据集: +And this is how to implement an iteration over the whole dataset: ```python -# 一个 numpy 数组,保存每一批词语之后的 LSTM 状态. +# A numpy array holding the state of LSTM after each batch of words. numpy_state = initial_state.eval() total_loss = 0.0 for current_batch_of_words in words_in_dataset: numpy_state, current_loss = session.run([final_state, loss], - # 初始化来自上一次迭代的 LSTM 状态. + # Initialize the LSTM state from the previous iteration. feed_dict={initial_state: numpy_state, words: current_batch_of_words}) total_loss += current_loss ``` -### 输入 +### Inputs -在提供给 LSTM 前,IDs 将被嵌入到一个密集的表示中(查看 [矢量表示教程](../../tutorials/word2vec/index.md))。这种方式允许模型高效地表现特定词语的知识,代码也很容易编写: +The word IDs will be embedded into a dense representation (see the +[Vector Representations Tutorial](../../tutorials/word2vec/index.md)) before feeding to +the LSTM. This allows the model to efficiently represent the knowledge about +particular words. It is also easy to write: ```python -# embedding_matrix 为形状的张量 [vocabulary_size, 嵌入的大小] +# embedding_matrix is a tensor of shape [vocabulary_size, embedding size] word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) ``` -嵌入的矩阵会被随机地初始化,模型将仅通过看一眼数据就学会区分词语的意思。 +The embedding matrix will be initialized randomly and the model will learn to +differentiate the meaning of words just by looking at the data. -### 损失函数 +### Loss Fuction -我们想使目标词语的平均负对数概率最小 -```math -\text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} -``` +We want to minimize the average negative log probability of the target words: -实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 +$$ \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} $$ -文献报告中的典型方法是平均化每个词语的困惑度,计算式为 +It is not very difficult to implement but the function +`sequence_loss_by_example` is already available, so we can just use it here. -```math -e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} -``` +The typical measure reported in the papers is average per-word perplexity (often +just called perplexity), which is equal to -同时我们会监视训练过程中的困惑度值。 +$$e^{-\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i}} = e^{\text{loss}} $$ -### 多 LSTM 堆叠 +and we will monitor its value throughout the training process. -要想给模型更多的表单能力,可以添加多层 LSTM 来处理数据。第一层的输出作为第二层的输入,以此类推。 +### Stacking multiple LSTMs -类 `MultiRNNCell` 可以无缝的将其实现: +To give the model more expressive power, we can add multiple layers of LSTMs +to process the data. The output of the first layer will become the input of +the second and so on. 
+ +We have a class called `MultiRNNCell` that makes the implementation seamless: ```python lstm = rnn_cell.BasicLSTMCell(lstm_size) @@ -129,48 +159,50 @@ stacked_lstm = rnn_cell.MultiRNNCell([lstm] * number_of_layers) initial_state = state = stacked_lstm.zero_state(batch_size, tf.float32) for i in range(len(num_steps)): - # 每次处理一批词语后更新状态值. + # The value of state is updated after processing each batch of words. output, state = stacked_lstm(words[:, i], state) - # 其余的代码. + # The rest of the code. # ... final_state = state ``` -## 编译并运行代码 +## Compile and Run the Code -首先需要构建库,在 CPU 上编译: +First, the library needs to be built. To compile it on CPU: ``` bazel build -c opt tensorflow/models/rnn/ptb:ptb_word_lm ``` -如果你有一个强大的 GPU,可以运行: +And if you have a fast GPU, run the following: ``` bazel build -c opt --config=cuda tensorflow/models/rnn/ptb:ptb_word_lm ``` -运行模型: +Now we can run the model: ``` bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ --data_path=/tmp/simple-examples/data/ --alsologtostderr --model small ``` -教程代码中有 3 个支持的模型配置:"small", -"medium" 和 "large"。它们的不同仅有 LSTM 的大小,以及用于训练的超参数集。 - -模型越大,得到的结果应该更好。在测试集中 `small` 模型应该可以达到低于 120 的困惑度,`large` 模型则是低于 80,考虑到它可能花费数小时来训练。 +There are 3 supported model configurations in the tutorial code: "small", +"medium" and "large". The difference between them is in size of the LSTMs and +the set of hyperparameters used for training. -## 接下来是什么? +The larger the model, the better results it should get. The `small` model should +be able to reach perplexity below 120 on the test set and the `large` one below +80, though it might take several hours to train. -还有几个优化模型的技巧没有提到,包括: +## What Next? -* 降低学习率, -* 多 LSTM 层间 dropout. +There are several tricks that we haven't mentioned that make the model better, +including: -学习和更改代码以进一步改善模型。 +* decreasing learning rate schedule, +* dropout between the LSTM layers. -原文:[Recurrent Neural Networks](https://github.com/jikexueyuanwiki/tensorflow-zh/blob/master/SOURCE/tutorials/recurrent.md) 翻译:[Warln](https://github.com/Warln) +Study the code and modify it to improve the model even further. 
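Neither of those tricks is spelled out in the tutorial code shown here, so here is a rough sketch, not the actual `ptb_word_lm.py` code, of how both could be wired into the same `rnn_cell` API used above; `keep_prob`, `number_of_layers` and the decay settings are illustrative placeholders rather than the tutorial's real values:

```python
lstm = rnn_cell.BasicLSTMCell(lstm_size)

# Dropout between the LSTM layers: wrap each cell so its output is randomly
# dropped while training (use keep_prob = 1.0 when evaluating).
dropped_lstm = rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob)
stacked_lstm = rnn_cell.MultiRNNCell([dropped_lstm] * number_of_layers)

# Decreasing learning rate schedule: shrink the rate as global_step grows.
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(
    1.0,          # starting learning rate
    global_step,  # incremented by the training op below
    10000,        # decay period, in steps
    0.5)          # decay factor
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
```

Dropout applied through the wrapper only touches the non-recurrent, layer-to-layer connections, which matches the regularization scheme described in the Zaremba et al., 2014 paper cited earlier.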
From 0c392e9e988f14b50d3b0cd9640ea66fe54bb05c Mon Sep 17 00:00:00 2001 From: Torthu Date: Fri, 13 Nov 2015 12:06:52 +0800 Subject: [PATCH 014/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 45fb55d..238c757 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -107,7 +107,7 @@ word_embeddings = tf.nn.embedding_lookup(embedding_matrix, word_ids) \text{loss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_{\text{target}_i} ``` -实现起来并非很难,但是这里已经有了可用的函数 `sequence_loss_by_example` ,可以直接在这里使用。 +实现起来并非很难,但是这里已经有个函数 `sequence_loss_by_example` ,可以直接使用。 文献报告中的典型方法是平均化每个词语的困惑度,计算式为 From a2ffecaebb23271abab4378b1b4bb4f8484bc3d2 Mon Sep 17 00:00:00 2001 From: Torthu Date: Fri, 13 Nov 2015 12:11:44 +0800 Subject: [PATCH 015/139] Update recurrent.md --- SOURCE/tutorials/recurrent.md | 1 - 1 file changed, 1 deletion(-) diff --git a/SOURCE/tutorials/recurrent.md b/SOURCE/tutorials/recurrent.md index 238c757..8d48433 100755 --- a/SOURCE/tutorials/recurrent.md +++ b/SOURCE/tutorials/recurrent.md @@ -174,5 +174,4 @@ bazel-bin/tensorflow/models/rnn/ptb/ptb_word_lm \ 继续学习和更改代码以进一步改善模型吧。 原文:[Recurrent Neural Networks](http://tensorflow.org/tutorials/recurrent/index.md) - 翻译:[Warln](https://github.com/Warln) From 822be0735cdb130552eb2bb50b38221d3e17369d Mon Sep 17 00:00:00 2001 From: litai wong Date: Fri, 13 Nov 2015 17:03:03 +0800 Subject: [PATCH 016/139] Update pdes.md --- SOURCE/tutorials/pdes.md | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/SOURCE/tutorials/pdes.md b/SOURCE/tutorials/pdes.md index 5dbb758..44f0f30 100755 --- a/SOURCE/tutorials/pdes.md +++ b/SOURCE/tutorials/pdes.md @@ -1,11 +1,9 @@ -# Partial Differential Equations +# 偏微分方程 -TensorFlow isn't just for machine learning. Here we give a (somewhat -pedestrian) example of using TensorFlow for simulating the behavior of a -partial differential equation. We'll simulate the surface of square pond as a -few raindrops land on it. -Note: This tutorial was originally prepared as an IPython notebook. + ***TensorFlow*** 不只仅仅为了机器学习。现在,我们将给出一个某人正在使用 ***TensorFlow*** 中的偏积分方程来模拟的例子。我们将要模拟几滴落入一块方形池塘水面的雨点。 + +注:本教程最初是准备做为一个IPython的手册。 ## Basic Setup From 9bb1ccf385319d1e7b75495262bbd2f6a58fd72a Mon Sep 17 00:00:00 2001 From: litai wong Date: Fri, 13 Nov 2015 17:15:52 +0800 Subject: [PATCH 017/139] Update pdes.md --- SOURCE/tutorials/pdes.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/SOURCE/tutorials/pdes.md b/SOURCE/tutorials/pdes.md index 44f0f30..e464d57 100755 --- a/SOURCE/tutorials/pdes.md +++ b/SOURCE/tutorials/pdes.md @@ -1,13 +1,13 @@ # 偏微分方程 - ***TensorFlow*** 不只仅仅为了机器学习。现在,我们将给出一个某人正在使用 ***TensorFlow*** 中的偏积分方程来模拟的例子。我们将要模拟几滴落入一块方形池塘水面的雨点。 + ***TensorFlow*** 不只仅仅为了机器学习。在这里,我们将给出一个某人正在使 ***TensorFlow*** 中的偏积分方程模拟的例子。我们将要模拟几滴落入方形池塘水面的雨点。 -注:本教程最初是准备做为一个IPython的手册。 +注:本教程最初是准备做为一个 **IPython** 的手册。 -## Basic Setup +## 基本设置 -A few imports we'll need. +我们必要的一些引用。 ```python #Import libraries for simulation @@ -20,7 +20,7 @@ from cStringIO import StringIO from IPython.display import clear_output, Image, display ``` -A function for displaying the state of the pond's surface as an image. 
+一个用于表示池塘表面状态的函数。 ```python def DisplayArray(a, fmt='jpeg', rng=[0,1]): From 38135d6a3f8ae3e4afbd7bbf613935aa4050f94f Mon Sep 17 00:00:00 2001 From: allenyang Date: Fri, 13 Nov 2015 19:59:53 +0800 Subject: [PATCH 018/139] =?UTF-8?q?=E7=BF=BB=E8=AF=91=E9=83=A8=E5=88=86?= =?UTF-8?q?=E4=B8=AD=E6=96=87=EF=BC=8Cgit=E6=8F=90=E4=BA=A4=E6=B5=8B?= =?UTF-8?q?=E8=AF=95?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- SOURCE/tutorials/deep_cnn.md | 128 ++++++++++++----------------------- 1 file changed, 44 insertions(+), 84 deletions(-) diff --git a/SOURCE/tutorials/deep_cnn.md b/SOURCE/tutorials/deep_cnn.md index 323ad54..d60b8a0 100755 --- a/SOURCE/tutorials/deep_cnn.md +++ b/SOURCE/tutorials/deep_cnn.md @@ -1,109 +1,69 @@ -# Convolutional Neural Networks +# 卷积神经网络 -> **NOTE:** This tutorial is intended for *advanced* users of TensorFlow -and assumes expertise and experience in machine learning. +> **注意:** 本教程适用于对Tensorflow有丰富经验的用户,并假定用户有机器学习相关领域的专业知识和经验。 -## Overview +## 概述 -CIFAR-10 classification is a common benchmark problem in machine learning. The -problem is to classify RGB 32x32 pixel images across 10 categories: -```airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.``` +对CIFAR-10 数据集的分类是机器学习中一个公开的基准测试问题,其任务是对一组32x32RGB的图像进行分类,这些图像涵盖了10个类别: +```飞机, 汽车, 鸟, 猫, 鹿, 狗, 青蛙, 马, 船以及卡车。``` ![CIFAR-10 Samples](./cifar_samples.png "CIFAR-10 Samples, from http://www.cs.toronto.edu/~kriz/cifar.html") -For more details refer to the [CIFAR-10 page](http://www.cs.toronto.edu/~kriz/cifar.html) -and a [Tech Report](http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) -by Alex Krizhevsky. - -### Goals - -The goal of this tutorial is to build a relatively small convolutional neural -network (CNN) for recognizing images. In the process, this tutorial: - -1. Highlights a canonical organization for network architecture, -training and evaluation. -2. Provides a template for constructing larger and more sophisticated models. - -The reason CIFAR-10 was selected was that it is complex enough to exercise -much of TensorFlow's ability to scale to large models. At the same time, -the model is small enough to train fast, which is ideal for trying out -new ideas and experimenting with new techniques. - -### Highlights of the Tutorial -The CIFAR-10 tutorial demonstrates several important constructs for -designing larger and more sophisticated models in TensorFlow: - -* Core mathematical components including [convolution]( -../../api_docs/python/nn.md#conv2d), [rectified linear activations]( -../../api_docs/python/nn.md#relu), [max pooling]( -../../api_docs/python/nn.md#max_pool) and [local response normalization]( -../../api_docs/python/nn.md#local_response_normalization). -* [Visualization](../../how_tos/summaries_and_tensorboard/index.md) -of network activities during training, including input images, -losses and distributions of activations and gradients. -* Routines for calculating the -[moving average](../../api_docs/python/train.md#ExponentialMovingAverage) -of learned parameters and using these averages -during evaluation to boost predictive performance. -* Implementation of a -[learning rate schedule](../../api_docs/python/train.md#exponential_decay) -that systematically decrements over time. -* Prefetching [queues](../../api_docs/python/io_ops.md#shuffle_batch) -for input -data to isolate the model from disk latency and expensive image pre-processing. 
+想了解更多信息请参考[CIFAR-10 page](http://www.cs.toronto.edu/~kriz/cifar.html),以及Alex Krizhevsky写的[技术报告](http://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) + +### 目标 + +本教程的目标是建立一个用于识别图像的相对较小的卷积神经网络,在这一过程中,本教程会: + +1. 着重于建立一个规范的网络组织结构,训练并进行评估; +2. 为建立更大规模更加复杂的模型提供一个范例 -We also provide a multi-GPU version of the model which demonstrates: +选择CIFAR-10是因为它的复杂程度足以用来检验TensorFlow中的大部分功能,并可将其扩展为更大的模型。与此同时由于模型较小所以训练速度很快,比较适合用来测试新的想法,检验新的技术。 -* Configuring a model to train across multiple GPU cards in parallel. -* Sharing and updating variables among multiple GPUs. +### 本教程的重点 +CIFAR-10 教程演示了在TensorFlow上构建更大更复杂模型的个种重要内容: +* 相关核心数学对象,如[卷积](../../api_docs/python/nn.md#conv2d)、[修正线性激活](../../api_docs/python/nn.md#relu)、[最大池化](../../api_docs/python/nn.md#max_pool)以及[局部响应归一化](../../api_docs/python/nn.md#local_response_normalization); +* 训练过程中一些网络行为的[可视化](../../how_tos/summaries_and_tensorboard/index.md),这些行为包括输入图像、损失情况、网络行为的分布情况以及梯度; +* 算法的学习参数的[移动平均值](../../api_docs/python/train.md#ExponentialMovingAverage)的常用计算方式,以及在评估阶段使用这些平均值提高预测性能; +* 实现了一种机制,使得[学习率](../../api_docs/python/train.md#exponential_decay)随着时间的推移而递减; +* 为输入数据设计预存取[队列](../../api_docs/python/io_ops.md#shuffle_batch),将磁盘延迟和高开销的图像预处理操作模型分离开来处理; -We hope that this tutorial provides a launch point for building larger CNNs for -vision tasks on TensorFlow. +我们也提供了模型的多GUP版本,用以表明: +* 可以配置模型使其在多个GPU上并行的训练 +* 可以在多个GPU之间共享和更新变量值 -### Model Architecture +我们希望本教程给大家开了个头,使得在Tensorflow上可以为视觉相关工作建立更大型的Cnns模型 -The model in this CIFAR-10 tutorial is a multi-layer architecture consisting of -alternating convolutions and nonlinearities. These layers are followed by fully -connected layers leading into a softmax classifier. The model follows the -architecture described by -[Alex Krizhevsky](https://code.google.com/p/cuda-convnet/), with a few -differences in the top few layers. -This model achieves a peak performance of about 86% accuracy within a few hours -of training time on a GPU. Please see [below](#evaluating-a-model) and the code -for details. It consists of 1,068,298 learnable parameters and requires about -19.5M multiply-add operations to compute inference on a single image. +### 模型架构 -## Code Organization +本教程中的模型是一个多层架构,由卷积层和nonlinearities交替多次排列后构成。这些层最终通过全连通层对接到softmax分类器上。这一模型除了最上层的几层外,基本跟[Alex Krizhevsky](https://code.google.com/p/cuda-convnet/)提出的模型一致。 -The code for this tutorial resides in -[`tensorflow/models/image/cifar10/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/). +在一个GPU上经过几个小时的训练后,该模型达到了最高86%的精度。细节请查看[下面](#evaluating-a-model)的描述以及代码。模型中包含了1,068,298个学习参数,分类一副图像需要大概19.5M的乘加操作。 + +## 代码组织 + +本教程的代码位于[`tensorflow/models/image/cifar10/`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/). File | Purpose --- | --- -[`cifar10_input.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py) | Reads the native CIFAR-10 binary file format. -[`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py) | Builds the CIFAR-10 model. -[`cifar10_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_train.py) | Trains a CIFAR-10 model on a CPU or GPU. -[`cifar10_multi_gpu_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py) | Trains a CIFAR-10 model on multiple GPUs. 
-[`cifar10_eval.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_eval.py) | Evaluates the predictive performance of a CIFAR-10 model. +[`cifar10_input.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_input.py) | 读取本地CIFAR-10的二进制文件格式的内容。 +[`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py) | 建立CIFAR-10的模型。 +[`cifar10_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_train.py) | 在CPU或GPU上训练CIFAR-10的模型。 +[`cifar10_multi_gpu_train.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py) | 在多GPU上训练CIFAR-10的模型。 +[`cifar10_eval.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10_eval.py) | 评估CIFAR-10模型的预测性能。 -## CIFAR-10 Model +## CIFAR-10 模型 -The CIFAR-10 network is largely contained in +CIFAR-10 网络模型部分的代码位于 [`cifar10.py`](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/models/image/cifar10/cifar10.py). -The complete training -graph contains roughly 765 operations. We find that we can make the code most -reusable by constructing the graph with the following modules: - -1. [**Model inputs:**](#model-inputs) `inputs()` and `distorted_inputs()` add -operations that read and preprocess CIFAR images for evaluation and training, -respectively. -1. [**Model prediction:**](#model-prediction) `inference()` +完整的训练图中包含约765个操作。但是我们发现通过下面的模块构造图可以最大限度的提高代码复用: + +1. [**模型输入:**](#model-inputs) 包括`inputs()` 、 `distorted_inputs()`等一些操作,用于各自独立的读取并对CIFAR的图像进行预处理,做为后续评估和训练的输入; +2. [**模型预测:**](#model-prediction) 包括`inference()`等一些操作,用于进行统计计算,比如在提供的图像进行分类; adds operations that perform inference, i.e. classification, on supplied images. -1. [**Model training:**](#model-training) `loss()` and `train()` -add operations that compute the loss, -gradients, variable updates and visualization summaries. +3. 
[**模型训练:**](#model-training) 包括`loss()` and `train()`等一些操作,用于计算损失、计算梯度、进行变量更新以及可视化。 ### Model Inputs From 0d93c9632a1dc485bb159890aed89624c61d3fcd Mon Sep 17 00:00:00 2001 From: litai wong Date: Fri, 13 Nov 2015 20:35:14 +0800 Subject: [PATCH 019/139] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 28d1302..55ed2c2 100644 --- a/README.md +++ b/README.md @@ -77,7 +77,7 @@ PS: 想探讨TensorFlow技术的可以加"TensorFlow技术交流群":495115006 - [Vector Representations of Words](SOURCE/tutorials/word2vec.md)翻译:([@xyang40](https://github.com/xyang40)) - [Recurrent Neural Networks](SOURCE/tutorials/recurrent.md) 翻译:([@Warln](https://github.com/Warln)) - [Mandelbrot集合](SOURCE/tutorials/mandelbrot.md) 翻译:([@ericxk](https://github.com/ericxk))√ - - [Partial Differential Equations](SOURCE/tutorials/pdes.md) + - [Partial Differential Equations](SOURCE/tutorials/pdes.md) 翻译: ([@wangaicc](https://github.com/wangaicc)) - [MNIST Data Download](SOURCE/tutorials/mnist_download.md) 翻译: ([@JoyLiu](https://github.com/fengsehng)) - 运作方式 - [总览](SOURCE/how_tos/overview.md) From 845846b463979ab985fe82bece929b74c8436b2d Mon Sep 17 00:00:00 2001 From: litai wong Date: Fri, 13 Nov 2015 21:09:37 +0800 Subject: [PATCH 020/139] Update pdes.md --- SOURCE/tutorials/pdes.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/SOURCE/tutorials/pdes.md b/SOURCE/tutorials/pdes.md index e464d57..7b4cf6c 100755 --- a/SOURCE/tutorials/pdes.md +++ b/SOURCE/tutorials/pdes.md @@ -1,7 +1,6 @@ # 偏微分方程 - - ***TensorFlow*** 不只仅仅为了机器学习。在这里,我们将给出一个某人正在使 ***TensorFlow*** 中的偏积分方程模拟的例子。我们将要模拟几滴落入方形池塘水面的雨点。 + ***TensorFlow*** 不只仅仅为了机器学习。在这里,我们将给出一个某人正在使用的 ***TensorFlow*** 中偏积分方程模拟的例子。接下来我们将要模拟的是:几滴落入方形池塘水面的雨点。 注:本教程最初是准备做为一个 **IPython** 的手册。 @@ -36,11 +35,13 @@ Here we start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if we were doing this in an executable .py file. +这样我们就可以很方便的打开一个交互的 ***TensorFlow*** 会话。如果我们需要以后方便调用就能将相关代码写到一个可以执行的python文件中 + ```python sess = tf.InteractiveSession() ``` -## Computational Convenience Functions +## 方便的计算功能 ```python @@ -64,7 +65,7 @@ def laplace(x): return simple_conv(x, laplace_k) ``` -## Define the PDE +## 定义偏积分方程 Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature. @@ -90,7 +91,7 @@ for n in range(40): DisplayArray(u_init, rng=[-0.1, 0.1]) ``` -![jpeg](pde_output_1.jpg) +![jpeg](https://github.com/wangaicc/tensorflow-zh/raw/master/SOURCE/tutorials/pdes/pde_output_1.jpg) Now let's specify the details of the differential equation. @@ -135,7 +136,7 @@ for i in range(1000): DisplayArray(U.eval(), rng=[-0.1, 0.1]) ``` -![jpeg](pde_output_2.jpg) +![jpeg](https://github.com/wangaicc/tensorflow-zh/raw/master/SOURCE/tutorials/pdes/pde_output_2.jpg) Look! Ripples! From 435e1059a59495248f08f46993a4a236c25786a0 Mon Sep 17 00:00:00 2001 From: litai wong Date: Fri, 13 Nov 2015 21:11:15 +0800 Subject: [PATCH 021/139] Update pdes.md --- SOURCE/tutorials/pdes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SOURCE/tutorials/pdes.md b/SOURCE/tutorials/pdes.md index 7b4cf6c..30110e5 100755 --- a/SOURCE/tutorials/pdes.md +++ b/SOURCE/tutorials/pdes.md @@ -138,5 +138,5 @@ for i in range(1000): ![jpeg](https://github.com/wangaicc/tensorflow-zh/raw/master/SOURCE/tutorials/pdes/pde_output_2.jpg) -Look! Ripples! +看!! 多么美丽的涟漪! 
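The pdes.md hunks above show the raindrop initial condition `u_init` and a loop that repeatedly displays `U.eval()`, but the code that defines `U` and advances the simulation sits in unchanged parts of pdes.md and never appears in these patches. For context, here is a rough sketch of that missing piece, built on the `laplace` helper defined earlier; `ut_init`, `eps` and `damping` are assumed names (the initial rate-of-change array and the time-step and damping inputs), not quotations from the file:

```python
# Sketch only -- not quoted from pdes.md.
# eps is the time resolution and damping the wave damping; both are fed in
# each time the step op is run.
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())

# Simulation state: U is the pond surface, Ut its rate of change.
U  = tf.Variable(u_init)
Ut = tf.Variable(ut_init)  # ut_init: zeros with the same 500x500 shape as u_init

# Discretized damped wave equation.
U_  = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)

# A single op that advances the state by one time step.
step = tf.group(
  U.assign(U_),
  Ut.assign(Ut_))
```

With `step` defined this way, each pass through the `for i in range(1000)` loop above advances the surface by one `eps`-sized time step, which is what produces the ripples in the final image.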
From a2f5b6440ea3446ec86cd99175da41604dd00518 Mon Sep 17 00:00:00 2001 From: frank-tancf Date: Fri, 13 Nov 2015 23:14:04 +0800 Subject: [PATCH 022/139] transfile --- SOURCE/how_tos/summaries_and_tensoreboard_trans.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 SOURCE/how_tos/summaries_and_tensoreboard_trans.md diff --git a/SOURCE/how_tos/summaries_and_tensoreboard_trans.md b/SOURCE/how_tos/summaries_and_tensoreboard_trans.md new file mode 100644 index 0000000..b732b3e --- /dev/null +++ b/SOURCE/how_tos/summaries_and_tensoreboard_trans.md @@ -0,0 +1 @@ +TensorBoard涉及到的运算通常是训练庞大的 \ No newline at end of file From 2d228e13eb92cae590099c7569f98ff6d2104016 Mon Sep 17 00:00:00 2001 From: frank-tancf Date: Fri, 13 Nov 2015 23:27:07 +0800 Subject: [PATCH 023/139] test1 --- SOURCE/how_tos/summaries_and_tensoreboard_trans.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/SOURCE/how_tos/summaries_and_tensoreboard_trans.md b/SOURCE/how_tos/summaries_and_tensoreboard_trans.md index b732b3e..4192a1e 100644 --- a/SOURCE/how_tos/summaries_and_tensoreboard_trans.md +++ b/SOURCE/how_tos/summaries_and_tensoreboard_trans.md @@ -1 +1,3 @@ -TensorBoard涉及到的运算通常是训练庞大的 \ No newline at end of file +# TensorBoard: + +TensorBoard涉及到的运算通常是训练庞大的深度神经网络这样复杂而又难以理解的运算。为了更方便TensorFlow程序的理解,调试与优化,我们发布了一套叫做TensorFlow的可视化工具。 \ No newline at end of file From 9c43a91fe635425a6bfb0efde58dfae260116303 Mon Sep 17 00:00:00 2001 From: TerenceCooper Date: Sat, 14 Nov 2015 09:34:46 +0800 Subject: [PATCH 024/139] applied for the approval of FAQ translation --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index b80b42a..e18e64a 100644 --- a/README.md +++ b/README.md @@ -94,7 +94,7 @@ PS: 想探讨TensorFlow技术的可以加"TensorFlow技术交流群":495115006 - [总览](SOURCE/resources/overview.md) - [BibTex 引用](SOURCE/resources/bib.md) - [示例使用](SOURCE/resources/uses.md) 翻译:([@andyiac](https://github.com/andyiac)) - - [FAQ](SOURCE/resources/faq.md) + - [FAQ](SOURCE/resources/faq.md)翻译:([@Terence Cooper](https://github.com/TerenceCooper)) - [术语表](SOURCE/resources/glossary.md) - [Tensor排名、形状和类型](SOURCE/resources/dims_types.md) From 60c7abb367467ad064507eddcb3d2aba57b4bb2c Mon Sep 17 00:00:00 2001 From: TerenceCooper Date: Sat, 14 Nov 2015 09:41:51 +0800 Subject: [PATCH 025/139] added a space before a word --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index e18e64a..fda8997 100644 --- a/README.md +++ b/README.md @@ -94,7 +94,7 @@ PS: 想探讨TensorFlow技术的可以加"TensorFlow技术交流群":495115006 - [总览](SOURCE/resources/overview.md) - [BibTex 引用](SOURCE/resources/bib.md) - [示例使用](SOURCE/resources/uses.md) 翻译:([@andyiac](https://github.com/andyiac)) - - [FAQ](SOURCE/resources/faq.md)翻译:([@Terence Cooper](https://github.com/TerenceCooper)) + - [FAQ](SOURCE/resources/faq.md) 翻译:([@Terence Cooper](https://github.com/TerenceCooper)) - [术语表](SOURCE/resources/glossary.md) - [Tensor排名、形状和类型](SOURCE/resources/dims_types.md) From 6d6dc641bb371d6adb88de6c4a306b6dd8b26195 Mon Sep 17 00:00:00 2001 From: bingjin <545915891@qq.com> Date: Sat, 14 Nov 2015 15:33:49 +0800 Subject: [PATCH 026/139] halfway through, but still the quality is not high, needs to double check terms and ask for clarifications --- SOURCE/images/mnist_digits.png | Bin 0 -> 11142 bytes SOURCE/images/mnist_subgraph.png | Bin 0 -> 54451 bytes SOURCE/tutorials/mnist_tf.md | 107 ++++++++++++++++++++++++++++--- 3 files changed, 99 insertions(+), 8 deletions(-) create mode 100644 
SOURCE/images/mnist_digits.png create mode 100644 SOURCE/images/mnist_subgraph.png diff --git a/SOURCE/images/mnist_digits.png b/SOURCE/images/mnist_digits.png new file mode 100644 index 0000000000000000000000000000000000000000..1c094f8a56ba07e87ad4881986bae568f0ff063f GIT binary patch literal 11142 zcmd6NbySpZ)GbPi2t$a(&>i3md@-Q6hy0@B?KNI7&PQqmwK}2$ z{6cq(^`#Q^!zXS{EL57|-AJx0&EdQmY;MZ_`NHx32)xtsFPV)#Mh6H2@ zS+nzy7AIhaKpyrWfR;aCJ%JX-l;MB#3%ez6%u&>vrWg^)=@S}H5%y&fALn~7^^w?P zD(CV(y()e2E(#EgYPzLHSmy!p8u}X78(wzWGi%Dwskpq1!*OBDd;QIfV9q!RN&&QC z(X@#F{}(fLI0*uHI;E5FVLSc|)xz3Y@N0MzHEUFO zmK1t^^n(uJrB|l&ylCBBScfEkV*9UWt55sV!5Ma-|Iz^g#{C?1mZqK!_{&VyP^1K0 z0m7Kd5jkm_Z}X|7q}8ZWr*vU)amuz{HgDXHpE6ojm^!xGG+miFQ?*d@nc+01#Inbt zedEF9_XnnDX&SI-?^lPLm9CoUqnoG;FN8tOU5kFXv*Fuv9RaOhd#4-la5H8jgf82m zY1U@BPU*0v5Vvt{x+7l}2HT)*{U>GS1^dPY`xhC*aDE~qi@(q|H$=sI7M5L zR$)U$=?f=pYG;>ee;w!a{=8qUk5Y%F4qCJ?y(W>-)-FiUHBgzz9k*sA3YVtO8<$V9 z&`WtNLyjL>QUW7=n6>GVb>=P{K#LJ5ag7%&>C^7}^sUpwDjxU7Id;;>ZqsTl*7~{k z3JHRlF}W;>{XuI2VZ~sHR0|_=f7;f+t~6p#9TcZsc5QLHVLDB|@Dc^9m-xW7*s^xp zK|r%ui#r`J6p7ugeaBK#@?GWUo@>iOMJ`KZcfgu};dI_O%M?2Vr$eP~!ztFoSE0FL zr#a?Jmu-FXneQgb0}LT`aL<)8q1gcD_$o#=ulC=?(>S-V=c<(dV{eb$Xx6^5y73KL ziaJY@YT?ATGjn3UIk##ds%nGC>QJ@t6&tCT*ZJWE%1T=sJFwH5_Xn@a1aHjRm%_w; zX^kY$*8LMttJ-v0oWjDwmRqckLtdgZ(X!xR=&~7s%h1q6I4j%tb{34t;NPOw>$`bJ zGfQeu%I0aXkPN(YaQf4nHM+Uf?mMdbc~0!eP8;TeK7ScD+!b?kuOPQ#hx0L?O2+8s znW!y)hBj+*Mn;BgUMIfL+L1eVdQnM<4aF!5ma3HSmE&!jfq?-DZZJ5B*1Wtb|DApI zc=;$<2E%D^8fkIBI~hvVRB0Yyh0XQs`7;_D8{f8hU7R1S6{=CHKbx^*S^c}@@TCWj2m2%Ih>iQN1uG!58#N^qo$<;Hvwpq zOJjdSxsm{6MG9jAyJHmRog*h19u|68YJ9h;|0W1U9u4DX+zE^b1DN6Tn{v{5r%Y8V zVQMF+RpT6(*u-|%Jz*$5cAx|pY+M@x#L$hbMv`W+vxi4`VpA>)>H-?s;|P_8h}~;! z3oM*?#+GNyx#b`L?s4u7((iYK!pLSdW~9Ms?FGv!3`>0%`kIeA_Uh)=S+E}LW99TF z3WPs5D|zXpEic8RP~?h(E^9Jw@b}3{?Ywa^{Ls?Lh)s`Oua>r<@;9lG-S-#mXJVTx z2&V8trM}iG7rv7JW6Wv&Z?skdP|@omaKfl#`BcmrSaVL^20GWpLYB5Q-0GJ+d4unX_Q;)?(OT=D5@G#A|a*Wf!IXCTJW0)$~ri zez|*RMf>b@C+Xb58JLG@b=v4Af5xNR&V>Oc{@Z@@@9PO8%d)hoV993fQWWz|tKGf5 z%S%}4iS6zzKVDRPb)tm#<0;G!>tecI>fm)CgwmBASa(l66Bzj@qC1VN`o%DWz8{p|?RON+v~L8DySnkwAR#0aB-=^A?3xL9_lchj&s?u!(ze_5XmvGLmkq>` z%EXKW)^RA~Jv)AIIu7rTTJn6g;-`3&JsQfAKDXD?VE(-@bJp?3ho*%a&iKvf;FKfe zMvyiYxdJ)j1)B^<#Kgo@?TlSRW7Yt|y9@8Twj?l4+SV)P!`-0jZcjNB+CaQbwNSS%<#8ZSic6#T(4HZaM zEZ2D+L~96pqQw_4Z(chqP4CDF?=KIASL<4T0(oSa<9X^}nA@|6T$^8})trKZ?pb1j zf&qc_5FA<+s(bSw@D9!*yK{#vcU)V*7RSp6Ax*vw1y#e(D<8|mTaXvR!8}ep8RGlw zD?v1`aAKSZ&L6#1xp1#uX5pcZU5PO(95sA9eP^Zwe;|aZ$CfSXZR&{ic{tj|eP}1B z>r20RO^(5_Fj(-!{`I-W5Ai>LI}3udNlh8qy{a2X5@gIt7X2^&X(4`(j^}p{=d0ES zX6-_#84klHg4GR4!mD8L0>zk7w|mkXltu)&lNf3 zEC3xnZEeI=3X=V<4nBYc_sNqd;4_+9TV+<(UY}8e(}ttVNc2)v)aUM22Rl zCuAef9E6%iz`J1L+4VbR1rcgB0qBT1cM6}Kgx)=ZDT-;c#yJ(r`g5wHl8E^q0FtbE z0SwmbvnMOou$lGV{u$}YvND|3*3MLB2I%liI!C?Fw*LEc_!l0{8^Yqbe}f8BDJK~q zJY6(jqLg9L1uRz>I((RiFGHOROG@-hCvk%%^Fi8Wjh2;4*#OuF4$EU7?KRre^YCcz zYTX$-{>C{f++cy2H4o}y!-;JYD=)LJeOg1Cs7_-vMAzasWTgLi&{w6t4P#C}b;;*+ zsT_zo;`N&8ofUC31i+KNAph9DZRY~bViXEBUHLXUD=U|E`rwSF%m5Cxs+|QaZEnvJ z{xW+s$1wxiqhS`Afk%nIh}rw{mAJqOPx=tbmyYK?;#*TwQ(GIDf4%;DUW)g}vhFg6 z4J6WtKI}o1>!nuxPIFUEv6$kGvZ$uTH0LDa zES)f|7Kftyi-Y*sM2%aMa7?*SmD8&q;4J_e#Bq3QX0&YI7&*kJmlE&Z*Ff+7lnM(x z%s^)CC@p=lPyP|c{altm%`x9{XYC4DB5v%fGb1E%-15f3%o7i(*>$oo`u!tEf2Dp z_Kko9P4CDgW32xO3NM|zC%a+ID>;Y#Ih0W$z@X-u^{egY%uI_~y*{P}k01v(xbsYw z!D_gSPVe|QA@`L7wZ!XRn#H{7%KGJ`LXxI0PCZ-4t-)yjj{4p@iDrk6YD~M3$(4nN zEZF<@!(Wo&MNKh{{M_Z!^^$gxG(I=7JyRI#?I3S^E}SSX`Qt7e!uLrrAK;B( zCVnW=r#&0dL%0BVDoDQxo^W{0T+sUa0Q7Uy>LM-H4LCxx*ucmrMUm<2)iNV~40 zMqnr4yrY|RNz}2Sn;uD9k~EzF-cc9AC6XF;pnkaVX&<(f%93kY2stTX!F?y%;Pm`~ 
zk)%0cv@FBA()?i0lBj%)#POINzZB)akw(iRSE{CW78o*aY$T*r6TY^zwAj{1L>Z^( zge5&LoCudp06bWWH8=%RXvw(Y@?otm8;%aZEO0M=s9YZ5jz_hA`SRGhN87C3h$T-b zk`OnTJ6#!;igdECLpr(#dFEz4CBdajyCO0+Bzc~Ud%9^`&#Lgln%9&)QR8m&?20(N z>*C@f`}Xfs4U+f;?o8FSqox8IP)w0VhUbz+cK<6T;t#KG%7}5l5>pSYID}O&$^X&P zp4@06%*ChFK)T<@~90-IUBusKj_ zxu#jTvD!8En-k$->9trey6-uY3yh2n6HhS-673nlj;_t7JClviOjW9k%Ko7<;K+9T zCi?iHQd8pzZz?qzi6o-+1T#-ByoAC7#mwj6{g<;}PVw0p{vc;q;}~-*uDqI|_b3{-)?$6Em-006LCXh?0lZYE%a>`khy5d` zFPJ}-orcu+n4CmfKMQJD%u(N>6*|9#^qi89l_}fD9b9?-! zvn{|9v#giapr1`FXtHo`9oG$Wov4t$4pE|0dhSjBoC;GyK3}u)ZM$l|CMevsO6zzK zUH7@h*QbJC#3@;CftVn85!+jf9C(ZJTU0H&4KX0a4fbrUV8IP|P}WAgf!1jp*KZ*J zmA#D^wsb-m^t_aB;9Ke&^}GCnNz)vbYho>!xwuc^72IvgMG^gLuYqmQ{d)i8_7^jo zP-Z_99HhoRaZpZR>u5AQ5TNYb#7C5Au^zsz(g>Vc14QlO;^q8SfquT>z((s+fRspl zx!Dh;lL5pfb9?kJ3ziN_O9p;%@BbNSB>1>HexRDNp<3t!kk!pClw2B+xqAH^14anW z7vIdmjfiejYt1L&(&~`lN7rYW_73D@%s+J&QE|JtW^3L*zd3~rU9-HCsh(90PfLBguC^PGn@3SRFy;{R zk1QKirjnlv*%Z|h>+X3~P9uwo08t$rRFct{dADq(#z+JTGH_Dh2zTL_4u<}XX~&j@ zYF#!6&Q^2G=d^9-Uk7KQNV-uJsA?AMJXEan6ytpre3 zYLM`xD+4UyP;!2}<1Co3xxKZ;LJ@7c`rOHtwsI1WoP88K7dve{?Aj;^6xtDJjY|h& zLPA0!|1On$P3u+T+F1z0Z;#vJ8ZXBxQ$&yuN0v!y&{uEhje8d01JtqD!98m43~N3N z=L$ysqpDa^uYRKty=xK?0W1;z;(jh+oXHSjnjvlaLGJ71N!!Q61_CisM05knKeqpF zZ$JB0#cw>3B-pp!P?*6rp| z*2Ae7YMJaJ=Pp?8jwdJC&<4D>x7W(b>cWfKjglYG^nvv?nQqeMhf+6JPS75`xF_De z)xLQWcs;FCwtVL8+s&Y4lhFY80a=bHykp4sWt&nN5yh%=i{a-dvvg?f1we#RX4XJ5 zM6BYsB>7M}^)vN4T2lccw$iY5;^{EjUsJZ@*|HELO&_O}J^B zUuPiJiC#bi9h(;RkK7H)b=Hmqep{K|-V1ow*CY#j*pmWOEa>Qwo$xN?TYJ?@51`Ra z+PbtDp3~Ff_Wgb-CZ@)msAg~NdsX&6B=P*Ue+vXgVLkEsZBI?{jIaMefCYlG^-&Ki zmGIuD0tptA1N5ZHvn%hEp`cE6ka#LCp8SZ!nR@Z5-%T(RRJ!Fl8pj~8HPuZv#96?} z<4z}$86WcCu>8hWVz#0r@m+S*}AF+dGD@moiIY@OcUx?)#l&>Bhfh z?le*cEeakFr9-6X~bkI)HEPOok*yl!u<&cJGA4Ch|Nq{m8Fl8^-Pz2|=eq%q4{P>Xb`@S^t^ zK1AJn>eglDkp;9>P(Pv64g-)?uyN;OPzs_pM^8MdUQKIcGvG+NtI$H++v1)>zc6n{ zpBkop7@x5}r7Pm>oW*>L8xR(usnZ^&23A1e9RU`)&S-DcD?A-XghAPlCl7<8l;i_J zO(69qbI-3qij2(V+N+MOMUnXptT zpGYuN%OoW%ZoDf0wGbfAm2Iq=b4%3L__V(*){3)2@0R`xo#i=W-r6U0|J}RB1y^0E zF%O&`rxGEJ#0>1*9Qkc=6b!c`LnFlo&NF3q!5jm z4;yLZj9HY8(H2t(pF}4tYr^pDts7^H!KIfdzj@Cl3KXKI)nN>3ATfwUXMF&YBzDoH zZScG?Z|BC;p71GtAJf40f5LL#8gislGQvSmbjPQoF5!(@ai9VH>+Or%h2d&4@kW2w zmR4Oh&sJ*YRUpWQ9+=LGDZ}&_L&=lv3@|8?x=gtm-q311Ff2WyXq*H8`D>3~v&n%= zmrEX*wLhN`m~c;JqItnSNB#V81-Y<5<+;N}oYkN2XfnK8W1glDBdm}(`+&y`0&m6P zOWVfQ7S5HJ>Xtxb&D#g-+S4w)q`dvFZluJ|;+INU6x)~NBKd_wfou!Gkao`+X#VyD zv!|TknMto2n;i>DfN}<1R*$WFT;M7kq5GUxMkDR!_%IrXj+|s)_P9_lYw+E-RMGN? ze#b5y$oW5mz7L>X&LY9^SE=SmR;)6gEu_PU8l1WOWk<=+Me_a;ka#sKrw>}OdPV=D z^^kW1J7V3|H)&yMC&O>^UPSuf2G|B#93YwLl+x`y3jE<<-;EwsJPVY^WI*Jeq3VXN z=yW;#E%K{^oa*xAO6eNsZnT7W?vvc`0g?hVNzrViTvR+7!eIaU8cN=68HdJ8>~VF< zZV+VO*fMEra$t_cYx6{3{t}(!(`BpiH4ObxtvA{+#JFg6)FD3ajEX+)Q$bEGR_=P= z*tskBgg;lha@>!OW3ORMDfQTbEnWQ(2gXV91nt8bK(hMXOPJ+rLXz zW*Jx;N4VZjORXP!QNsEG2bW6Moa(SOG<5Sc`bLrhrXDzL#fjG4s%^a{3FeJZMFrr} zDFrrcN%8&WBcq_oyx2scoOr9;uiLNW)u;Vjb+)vD_~D2qjf%f>84JvhBByk4lzhJu zkB1dllp~FgH6l0^VeVT$x7S>;mLkQ5^L{U#_yKJ234LlwiDE*~MQYma`C;VTp6igS z_u0N7F9pZ~LWrMBor`_>-JIo|6w&_)MG6c4k9 zneRb{yG>V)+-36!C4Do#5qe8vN_50SN5a>Ebh|@x1dx6MFn|f9N|AdhgBnXX>XZ)_{ z$?wGgO_~XsdG=8X0aCj)f0&?Qo&ner%Tv)mE$Nhh<>j7`)$ys5Y#5zclO!8kk|z${ z;3D_44NM(ZB!Cpo$zp6+KbpO_Wo5_QDLt4+e4Hhn%*a1)%*PUaH5zwTwM+<1()`GP|G0kk3HA%4b-1M zxb;>**=yt0K?k)`Ol&NOtJKj=V2&=@Fg41-!$V?mmj8N+Fw(77zIgFsi)pCbJC*GL zKLiK3fQ&>men6!rcmcA|6O0wE3#qbapG!)HEs1Y>74wO(o-QX*Mau#muN0Vnt)9vK z<`7ZY-1#hK%lE`->s4RzjbIjfAgXVt+i!zS<)WCo#b5mNqnC`tJp$%hWViC%&tuBn zXoJ-%hCw*cY^Qe;7(r~Q76yuXmg@|Xt;ldwH51Fw6Kar1<&5E1-W3C~FE9cWeQVy! 
zv`)1+rN4%4t*x!#5eS4tWshcY(2-`i^lD+Y&*jPV+q=u@_K4i`x}}X}Kn+$L1Ts~n zDDdZ~LEQuN_{*!MyO?#lH7z2oPqZXHK0dcQl`EG4#b{^`x&QS7wEsqVB(7^+vsivJ z)p3n@Pz?$=0Vw@EmK>ungMj4fW@~zL)m?ml^?`+z9i_3iBZ*8rEimNMTe+vG{o*NqA^#lgxqHMOE#9}U$zh4C&u1@KDx zJhJ50x@nS?z8*E|v;Q;%&Y^Anubq?SRdGK7GQ8iYi^dO2Kuji$kMJ(mU~mm~v>R_F z^#!N_333*UW*9emGLrEcHeO$*S^VWHkx?O8Q6|Mgxn-I++M6L^{59;#w8g))yUZ3R|4w6OJX=2+5;T8Am}e;_CDHl;Kb|5QD`sWOu%{Wa*n)D;vSM$bjVFt^52=}XJDpMO zMA4y)X*g}62TN9*X>{cHuJNv3|4AgKzAru&I@`07LSWoE2uLrAD(-6h8fum|=S~OK z3>;1r*_~B_f75+`Q;0S4@cG=yV-XSHlwqj=GHpR_q?D%v-zokqzx^?0g=NnC4Br2_ zE3os+tes+lR=G}PW#!-oHx}KBMrcXR51Uh0-oWo=?}wiGpnj_WfYHhIJ-HoV3Y;A6 zfOq}P0s?8v0&s{mi%CXNWFGF$&Oz~SShyufC0Z7Av>!{Iqj{>)&e3>NqWhyID&cW3 zA-wPeusIRJ@;gL48X!P05WM=SyL?67($U*(g*zWkenbgEBbFj5`8=8jQ8l0-7$ zD@ZW%G2?XKZGUo_e_uNSPT5xsOCaGfJGr`&>eo)UQ#u^1e+bZs{_u1`DOk{Hi!^~Ra{3J*lULi>^*A-Z&>44;2r{srDR3mfq{Cd=OR8j&qVIvql(IB)u+uKc?$1=1d9-&j+J6d zd-$R9tolRVj$lX&Ys8OjXULi)MF)`6g(=ClhRWQ@l80DBbwJ|MAQ>GUecS5k;^PzZ zg**eJYTB)VTtK*deF_u*Fpvi!o&T-+S?k2Mv5dP2W;f zTpVvG0l_C(G0jncI(BwEg(yr8bSXpw9??aT(OP*zg`DGtf-in zWjB7K!BnOMUp8(e4W5b)Q_{dnIYv2a&tDmWm9ARG`1WA5EC7 zN0tNUb~!9)A>-zWk4SW;H~t@2pa8NaOa6E~PjNMK(6L@cJ&9Zy6Yx7j>u?&@udBL2 zd&CD3lfHkjGhGQX_X>^K!V@>|i}vhIktigNWFxAyv~)CEY%LCes4SN8htM-yy zwlARZgiK-#bg=M*F4X$OXZjIGgOtkq$X-tZ!r1^}|Bhz=%B;nG$}cl|3mC0HE)BK@ z#ij@9_i%szgTdFq8*RuBo^f9SAFLA*HWvOaf-H!9XP-jjA%`5<3v{6O&-v!0I@T>jOhyvH|IhPJu)^2fvAgi6-g|$s6`Dg0R znq0>Tr|mBHT}zg!Le-d>A((*@&UbSsC!*wP%&ArMD@njKAsY^JA2H?>hm|ralyzO1 z6uz^B=6qu$!3FI>!1+)u)IdNZyD=m>#^-{{zhy&rq0JQ^g|k55IY!CXfu0MIp!Fx)y)s>Ej-mNUAcK-U!5iB>+`z!UhAY#)Z<-OA^lMQe{woK zcTm-=x($QGHGbBQ=Agm?&@(Z(@vny^gK|nFEE5`?X9)@Jn2Vo|z=i(w+=RcZBLo5- zaNn`BDgQ$FZIGtbUqWAgf-+opGk>C68a}T8Pg>(YKAqU+_1Xinr{_Zv>~j^moX6YT z^AZ^;b~V(NO+rxouVv&6aoGK@2J%ZvGF7p)-2kv@uj(Sp2<$vn=DP6_ zTWyb*2U7qpRU$)1?uk}vvzu|OL>Ds@A0-?cRAR?}wZ9<5=s?z4CBjHxtPr1kk;|X) zKA_C5=@%D4g3y3ka%Um=?$G#X(|(@=Y#C^m$qf3+2c+=okl=4eLk#K< zAwL0WB#i{sILQ2ZghfcH8oG?zbFJ4cZdLYDAc^jrMC7IoTY~KM`i)WUEndFRc|Z54 z8~g~H0J=}cH@@whR=K8rpex+nTmjU1!rrth$2egNb1(i3x+hfRNkgB_^+Z9Sw_x@E z$Ysf$iP^l+V#QR;r^Ru9BC_b0ua+CWS-}z71oR(&z{QVrN+08gVoCt}NI;TG{3x5y z((qU~R^m^%^nFY-fv;wzzy}I-`{Kcka7GyPoBsgSq2rW&Bk(R;r@wrJzzqPARF-${ zG*&t|a>qBO5ID*O98olzBVL}ilQUVd)#rvHv=9hj#d6qVqz}bmFJpj1maZ&2eWK3t zbLT|e$Fy2^c}$s+=)!Ap&-LKaGn?qP1>a^jV#zzbz$UFUkeEIJM$?A(QtvPled-6H zM*u7}60>c%f$l>SiP~{R;JO<}aQ`&q=8kYwTHmpK$;wb#gqlEAOHEU`8kI3N^BRxo zX&+Kf&i34EjEF3_!E){?bu9_l9uEJ^rQ)lRoPiDgoQ%VGVasRu1GVB1j3ci&10oCw zJB)rN2}qah|NVuX{-jw$)KP07>EY2Bg(#7Jhn(cpfA@H`mxO|@)8)SZ`Y<8?^FGY~ e$KK6zvAb~oNL{_>_dsJUnu3g~bcLjG!2bc#JK28# literal 0 HcmV?d00001 diff --git a/SOURCE/images/mnist_subgraph.png b/SOURCE/images/mnist_subgraph.png new file mode 100644 index 0000000000000000000000000000000000000000..958f5e70789c3c729f548959b76067811910c9dc GIT binary patch literal 54451 zcmZU)byU^S@;(fQ4w3FgN?N)Pozf-U-Q93dkOqU2Zj|os?vU;dL0UTA58`*d_uk)H z=O19P&+OT=XP$ZH*mHAIsdH%ER<<>|#nGW=T4xYF!7*AV^sW6B|<>L?KPL{27E9iu0kd z1%Zn+{DT_xA7_Ork5aek?`jIEm6sMrE+Y?zD-MtD1J`Feqhd0!;^6=OkW(P#NS{-i zHvdqfBnSQ1#}FfSS5*uh8son{5Lg=W@r1Ux!mO}B#L)lxh^Np0*UwLXe#HtM)&g6i zi}UuszZVSq>ShxD|FH&w1Q_GE^~3VNEO|0)JfG`->>~#u$%C_4APkxkSpQ=rbeK@} z|NaU?+7N{4hooEo`rH55jVOKo{l$N81u+y1!iwFMTOCjze>O5jg1pRvNuy;B!~*I| zsf2Rr$>U{sKM;lZiU<r1GLsyvoGnRBmAV|ZMmVR7>mkV-i+;mViOI4ODbXfx>mPb`f;LD22aqWfZgGDuB zQlXa-J5dtnjamJAw6FcLFcC9KjH!QBycz$%OV{mPB#c^>yA;h&=x?!P^P4HSF5sZ> z%I&A$Rq}=K`jjdGbD3QzGRGrIRMAOybPLE};o1Az8HHJD2{Hihcqc=C^r_U1Wk9$`W6(9ZSXO(rL z&ec|L_Dg$b%T2JVMb%1p0FOnu!0nzsA6~y0`Pg(2I0LTQp(SX(yzJGS((2XgY00q;UA_rz{?%9V-*;}_qW=NT7M9IG7zm=DEeL;^c=(G 
zT6&dWnzWL&6&6>rR{j8KX@$q*#P#V3AV8)V#sylihBt7H6!vA+Oh+WQZ(7^9i||o! zv2~kRU<&l=39S;AYTx4jyurPAH$xc09}Y9?l7Um5Tt;Wc9%EY6tQ^ zS-XRNJTR?#$|?&6%9+d`w)evYXutZr+3WgVaq{cwsk5a+WU;SG2A{KPQ-LsD9r+ke~FYJ}PJ@gdaWH9r(~mTkYSCyU86WWWrk@7)_R-ixDqqB7feF9Kb^GSYXt z5}@c!Oc;N88`d)GIIaGliQ#i%+eb2o?lzk#Z2y)iqZCdU~Z#08~2 zUz>D2FCK6Q3FVHj4v157 z`mC;SBIf2&?`pn_j-Sgd_zp$N!XWOA>7qH#f(|4;xqn#?f{$sK(_+#aompHwJP(3E zsi>+J6QdIIpeH3I)m_ZnI1O{obg^nSdYi6#?{R?Sz)tgQU$+`QTREj~Vk zLM%j!XlMB(ty!DT?}lU9|JKg;dMg3nY3Wa$xje;)uNUt9&7a=-GmGN%i%zJVa~SxS z^6gjq@Z^4m0Vy{8pw?1C8;##0A(vbNO*ycn0WV%pt7qp`2qT}VI1TE)6e=o`GLvE> zJ?At$Wv~2nc$rz@`$8dFws(_E_WZkgz*}&ZVzJ1-3~UTuEi7#jwX-XxKF7~lYuGQT zAFdd6p@9)!D-pe%}azS^(Q?7)^-$yS`&Spq-J_PAM|K4<{SF zo%_wc?g}0jX1ymr6@|MezpJvmyT4ijlEB#H5u;RDo*#SL?EH7t@C5??N=DKrFW~s1rOLQkl&uodkL2OKCt&Umr|1R z=mOrV1omWtSZ{k3v)ol*AUh0ncIg(+;JHieB4|?oiTa`ZV5 za<~NSw$yu_nQ!jwcCh&}J9YF zq92hom^mxX$e$%${=o3dizOf+m|h*Kvid2?%pJVG-0Y*QrDgfUI5FgC=K;}Ud#K~)X2oPhhOESHq{psSzB9*Zqj)FuGopBS zptwPf^e=&b=MJS+l0bZ2OWWX6t|~3kSKy}`nAEOKH(Ph|m?t(6O$XC_d=!R7tL*}r z{l00AsmFiK&)yr_>pYU72TMrNr%-gwXjDM+4H1@SWrTT^ZucM=!~Mc1$ZloPVbNvz z;Nkw3iMVQL+<62-xNL#_p~=GYD2dnaZXso6Xm8_AcX_nx@FXoaHyv6BzbUx)l$(3e z@jW#xuafG$6Xz+-&byULRASV1r=gdFoj!5z1+ddm$;Bf4rO({`aI{ohP`15G+Ryhi z8)9y4evB*g2wKj46t_frEye>tjE>A)6NzV#xB41(R(V5mymur;=HSZA!a^rNWwKng ze}seyhK7>77?XI;g7!;1v^&r5u$|mOTK;dUdGhtfLh0q!_z@v<$!%DRmD&Ah`Fg~J za+%l6llAh}h?MA7Alvlz_2DW5|ItTPt@2{kt|TrZ#@)!e?1Ep+PEgxz5SZyjtp>dX z1aUgty^?;!v?eJpJLEUvy!u~U{?H9s+U;08v^F%%WBSsZ_Jz!O$r*z7R&4kOb%EtUU_j5kbif|T(3KII(0!;J=U1baQeqpz`{TE_Y{<#2 zUAxgtv-_8Z+b|@ahr)cT?R1fs%|U0QY8i}*(7KdM=1ZxNV2Zn zq<7qRV?+i8#2;~RwdB2DoB)y#r#!2A^#&adM(RWIY!W9fW+OgF`ka+n+8mo1vOAes zZvx3L6q`u;Fz_zyfid%}Y_EHNO)rq>dPgM}35+bVmO}iu7-Dju3cr6ZFLI;_l`|OW zT{%s0?cL5m#eyS(qX?$hL3IY_mz0bkk2!O)S)ZO%@7joTS4a+}9rgZ*VUJx-iMpo# zqh-M~<;dQ$K@@pA;H!0Ug6~=D&`Hbxa&=nZB@MPu8I>Ee>}vzbh#dyhbc2w+317cr zbInKntkJ2d{Go`y7`*E<6o1LVBmoKuzjx$BZ#2~^7C5{IEc#m?qJuNOWqy|KV^*{-w_D+i+& zZz&vfIGk?w$5*G}dTXQh2g+4DDq>iAY~=84g3*>*_S|J>92z3Fx3@vN;QJrcctu|9 zq@qSHT-)B7nTy&#)}RHF&XDylZtXD!L;@#C0{;!vLBDSvUvrHIXDPMZAP#H743g5LzAMrZG! 
zB;hWFu^&+wS*RiUo4CE}Z*y#Y|Gcs6zKv#42J&>4Dl})Muu*&=n=9@wrOq#2|JYwk zYD7Id`y8AzKwt$}^@--k*>g8kmj=aef$hDR%qwt^L;p?RQr%|ToB+d8oU&N7s^Zwk zzXnC5(UBFmQ)yc$IG&0GM?UZ_Dw!HC^}Z4by=$e6dG>H`f{1bP+Jmknyk9c?Zb%W%u>4*^ec~szzO7n9l46?ue0%*Su8b8Zk*3m0RhvHHY%01QhAl_GucvEL*+Ja z8GgyYXAF`Hr5j1##3MS6Apub=2nP>()Ha^G-f#1VJdEWY6)6$>7-j&+cidGEPG6`6ZVtZuM#l8(*VuSn`V0~!0<+$eFuB+9w1zONWBV+AO%Z9U z!-H>(A(X@B?=R{#T*D+zog~N&vt^b)#-fUSK}D6~#fi*QY5Eak(40jXW9}xjYt9mG zCq<~iz2eXkx@r_Bx{K2_dK0KC2~BRLpP%9vtb&;_Wqm!~Zswa;eji0Ds4e1p)AoiS z;PrSyWOd1#zwQdcGSqU7j$}tiS;~^%J%eSb3CEKFRl&RCxc0Qn3Q7=iBlC`MZkqe| z!X3?_(Qnh84(Ye6WNBF;d0U6KihV?TH_BBylbn&OR!#|EMV1M1H=)0Q1*yX@v?!>r zpL2e{PQzVLr!JwQn`qzZ+Rs4*Tv( z{qvGZ_!ky`diRbBzB@cXS@7VTiG|}X$(9s=m!gVeb=5r!)d`tpen~1|SzmH?o&O57 z^tAlKzvVe0_qUfipHO=v38)6s3T-CdCFTE#?l=ShUHn+~*c;|=u0JEEMd#bkP<|g~ z-`PHpHI+4Egt`)K?WA?(S6VV~`%#aLjTs8U@LpxS0_i=i-S}^9YG_Hl#cG=b&ivr) zMOn%*O%Upr(Ws!9vlINixHHju-u?z$U^T&G)Fl~ujiiH)v6kJ2a^eBXVuM(^DUFrNqs@B~Iiw+R4I z6W^GHGZM;Z-hN_Y)q7t*o+X7sir2aAOlqJ0ascX$UVY4jN?(2GU3!uMJ|p7q!<$wN{Co`SVhU;Bn6nc5z*JX-WJaJLZev0zxBh)7cil;a3E7s!ytOO*7 z3XHOHxYYP5Jw4skOcO#S78vwVBw)8j2GY4D@YzWS`-r+Su=#q3*ZAV-H)zbaCPKV6 zVxzCB;3_>h>*%t+wQZ68G5A*$S@y44kK>ru1_Xm!@YQk8=}sAaYzSBfep^sIr05T( z-$`lx>=qDc?nQ3Y&w+-BUQk@z{g!(v$B+rbFv&LG5DGV>A-^nB?znnJztDYBjH(PF zZOx*hQU~2inZpL>o5XK=*x{hlyQ1r0JrVnMqR0EA<2(=e?L~vYh86WJK`cumE`$g& zB*VQ89Q-t4Vj%-~*>ol`_bC;|QrIRdAr4fX(133QV9bcrfG+fj+7*Q4)r^18x((tSGD5cM1uHBk+7CmKWuT-pDNO# zLPmM{_^+a$aEdap9TaW5B`JkcnNovkDyE3DFr!N7D){hHbX##l+DYaRpWCd9yA2jJfYq~P@{C^{Te%7 zMin_0^a_#I$x*83)puURzGn6}+z`1D%JH4g;VN8Dc#(V#iIoR#W;L0^N~I!O>AI5~ z3|--{A~fjtCE^`JcUqvIzfl=EhYJcl{`=qzjL;ry{M!_%Ga%2bRE}n4Tv!QsT_N3B zYY=G&)swQPGf8$&;4|fe@JI3gBtHR2=f{^iMw>ul z-MDIc{r-e@4B%|w?38`N0`z#85OimScxEFp!X?Sk@LAiC8O7u>%W<8L1JkdCY2nl6 zs%`KjV@RWhhUD`pH2sN1fka1vlWF_Ryt*m|e!|lfo7i3HJQPmraDVlL6yc(#<& z>88x)*4DfA*t2xSRKDQ5JKufwZpz@*;@ah#KjeMaE(RzYB0G4h zRQ>~=Br3J;q{nuVHy@_d_83OhJ}BV#ZD8;FBVV$sKS#*r)>F2gV$Ahus2_o*pQp;T z=r3{w>mrUdf1w%xj;Q_UM}+Ko{f~yvT$Q%@Q&Us%-H~C3-+X(<38iN?9d~xdDWwKU znf&E-(?18rSs3?+I-t=Iw}O(Qo)5QYV=01U!9@NMqYD#kLT4mR;i8_mI~f7i zq@*L?!CP0DleWX16%#2+%!MZ2BH*l2ys>kv=WI8NX^0S#iUGGB%H+YVAI0qE8r68c zoxGIjw>3u%YZ=Gxa$uQS8IYI3{>VMuxOzl)dgtFteJY3%j<8Q^G6Dz1wKA`$J}l`wHZ%J0bbU^xT4gV$z2j)kGdPqS}?M zUb(3~x;sf`UmdBeuURY(%+?y$j*kb#rat$_J_5&Xo0+d@IZpOIn`6z`%AR-Qv@-t< zHcE8lC#H*T4<8c%T>QIXP!cHJQ5ZJu>r7u;hbz#q#TSQG@QSSNncA`MQhBFg@w0qr zgUMnxzJ11`kuR%I@OX<{z0n(^^Ii{I-CTj}Y~+#sNGh7RJUWwsW@lwQ>dVY88mK~*WM&IAevnYRI=;(5YP6A!t5 zg(BCRDjd+y*Go;sGaAjDm8p$#t4(^pd=2nBDF#vlnNJb@N)Yf?jsu7SCUzHvKz$J{ z2VCAL4BsdV-za+eh^>dpYTTu#nH4q8l5`Nyea1U`gIn8S(8T&aZ%lM1!OZf;uc+{h)gQJ*`tIXvP#kXk6+z2ZRKS>q8SDy^Ca%pSa-V3FsWbTox6-h9GuYv(&E=2y=%XxKU%!a+TNe*;{#D9uZ z{^ry|U^-aeaAW-K+7!#LsvxP}hOzN_fqD1JPZJaaiWK47wV#{1t1g72{8Y`9H+)kO zLBNoH{W&P61At^Dk^l+*TkM%UI01|#Hjs$!&(M)b*ZZp6l{Cz}9{Fdug&UXK#A+k0 zB=)X_e?Pgri8pWJ>!&ir*<#=Aj};fe!L0Nz<9Fo-tTi+LmKi-@okewnlOek%iF70t z<02b(2rlFF?rST+_nD9M^FhUS?oJj#U%I~3mXPRxs!p9qgqRyfwg~ZazbFe5tW{=Y zkk=%QPw7ek)lhvZcj#%OU}xY4$c=a(z5Px_eKx*@WfXV*i z+O|Dt_%3GU=bOLhmr-YOjGNcfBWd#v-kunoQCRXB(NzlAvEBdfcy&rVy}XJ5qt~yD zdW89xWi@+86lmDDCcvV|$-OYe#E^~(=`DD)`$NfcD4s$V%eu6i=yM{H#!}j$nGna? 
z@HKBC3MPaj=4RHv@5VDAXJFX>oWZ^x-7W=CsGik+%(`WcUjpSKFFx(&j=8Xn+wfFu zec6p*R7`!I0+S=dPS%Hk)DY4?yL^p|Wsm0x=WS3Xgtnkm|JFu*0YxK}{u01*@yWSM ze6)D72$~HQxLgpw+lzN4soEtJwyrYQFhJ}Vx#MX6Mcz-o)B7`}XHWmc#b!l|Mh8W& zB-J}f7AcoGZGe`0WG(Q#TD|{{yut)_i zSR#qad~xMN%5BP?1Wvt!)wQ)cD+CyS2{65(KhzPJetwzxD-B_RX%Y56im!ImvsfP7 zGKim1%sDtPH5r3SvM-80nko2fhi;S3(Y#Fd|Fi(VBF#qb?BeM&Bt}AcMv~bz_|Ehg z7a8bp+z%JSh#Z>OqN`l3FhYf(nU~sjDD?#5#4~7tyC{e;@%YT?t}B44=F1B~$h;u| zJLa%yNs1Oh8A^-FlXI7@t*)Y0_TFNRy2u)#I9Fs(ugSVt$B`q*-Au&)monOtIfh9$nb z!7SHM&nrRPtuJuLKm&4}qvf8~b0F5C1A)kKpnIB;o7+7f*_T~tZf>4eZLas`RalE@ zfmz%q*mGSqe#HL;0Iy7TXp|bU6()wisUU^y4P(S_6KshFga(p$$Aa|6c%qkT}RdrVG zYA;=hZgkY#oTUSO2Ml*p@Cr#C+WV2H-p8Blqg5NL*`TPcxIU&YyDq*kn(z{<*0A$G z%9zYOV&wRG==p$NV2;|=c3mTiZ_GGXLI zz#ZJS=WliT@tv!XV$^Uj9W1l7m0y z=pVMEo;)A5X|x)h`S}StJ3BK=uAdv7 z-Mr;NhkD~I=cw3ztfcVCvDZ?;phNvT^5hF><~dei=#g5ctr;!^1q@=X46Gd}hVX3GwmfQOE4YwETep7gx>aKcnYCQc+oX zi?)<&5xeVR)lI*kN$q%d<#gpMM9lAwny;8%a~QCEc1Mf|TLei5=1n@|0<`jp7L=aR|RANUZ zZaSoonlDN7-X@#BgNhRm1R`V-?K!~^D?YUx$XbTD6z;wuRA1YHk+kv8ytj2{BmBkD z20VM&F+v@{Ecwglxe~obRM3ut71YHB_5P9<+pcbvnqxR4BM`*7u33*vm&#?)UHC@je#b-VM~aapG-HiUFN}(Ix;f z#q}#9cHhKLy9b70_=4UX@TmBPaL)J~mgu_!G6|A-EG<_Z-_F?G543F3i}orxDUzMWGvQi0iq$4(~%5BI&eMri=F zMPSRMO8td2Td#-38M6kkjex6y0srIY zGY>AF7RYy!z@95NgFxnMSntlpCcrOs6UU5h2o-sA<87nxMOR?zHRdNMn5L_H|3XVt zZSh(MYK%IYIELG1+3DA>_2i&QnMwjF4;GCQR9%ot!k4 zU{!zGqG6G5o;>mKT3gKL{}6c9eS_jZMpcPTPTv3J3=I6*HH2Ig-rbhu47&eZT#QUf zAwJ)qVZpp|L&FH=hi;4;HI9CtwCpjy}F1GDiZ#wJD=GQ9+mJ4VcMH`Y&nuob*?U8~E1k|9EDz zn#mn^$0+1Bbv>5cny=i}?>=Y<(Et+zp5SmJ8B3hZ+z;z9K<8A?cGvtFktekcAS{aA z_~>uGij%7Wagh0W!s&UTXmTY@&5wYV;??g(p$1o{a_`}l2zi&BLE0Tj6{s|Rj}h$y zZ$L27H#DSnoX|%;{f<&PFrwn;Ktg&tS$pL#wSF+*F9Pd#W={tJfu)Q+uusEo#`X=D zB)FD*yWbCt?b}va7J=EoEGA^p!qn7qL~ZlaexwolYDO23ovR;A@rPV@XP#0H;d9E- zl?u#Wdhy~#B9D{tglpG3^ARgXI=Zo-4L0j??107j`S~J2y$JdTIuLI{xQ{=8x5sI~ z5$8bqaeb5`as{Ou{QoLMh&WAKy_Rl|(GzWO3Ky0VO~P*uN1iE{!03BYXw@)XaW=fg zHk6j-FqR=QElFW{aWa@&l41#4M+~!&GEM)s>RXhZ7oZdz)X{uaHHNZ6ntd*{&u-;+ zPZ86F4)aZ}0SVHl)KrqoCOQ~ROhi~vf;hPZ1^-Uf0O3h*jSQRRg)to{6EnlD9w4N( zjbAY^FifTc`jOqw0NGC%@;5XB_mResd(b8dl(UOQ+$mp;&}XljjC~WYHKh^tn;UGa z*TfFMhAi@EjvYinq2j*RHenjq7FikSl}N+}20`uU z0UT#rctfAkHa$Ik2_xR1hzD~xsG;OFH7+z2)NkY1X;Ef8CqOe~GupuIbH-avSHQ7^ zvFgl-?R%~(eUWQ5DN88_9C3w!8`txd;K5oY)IfMi+O<2Ltv6+rg>TrxtNDpU2p&~g zmJ}2zJIH0og)HK~ou*TU&JmzW{5mo6b^)mwhwfsxMbZfj0_)J2Gt)`{he|6!l!+oRo z----fi9lS;2H6UeAq-b|aYSCYUuE7BD6)8XRF!5gzwz;xZ%t`eRb9v4_EEK=;8^y- zhOGFb_>Ml}HASR#H-%YIq^FIO#SrrvOF+)Oo13xoC=3)3Xjq}Ok)kBZ3I)4^F@jRg z1`x~KKlP2;yL~)Vt*)^Zzt%Q1UNG7eQVDDbe+`Qco~~{7Jd0uK8|H~ztF&mp+^;@< zV5v1cv!P|%hb%aGb|P5Hp%`8b%^f$FK`&=*TeNOwF)itc9q z+bRmwi{xKp^J-yvNs31CM{jL;QB4zuM76uUQFXM!l{i+utL8J7_aep&Mf7g$u`%$0 zVS$yiMngqv_)X|@7|O5Hn6r9REMX}wQCUqM4&_Kwt{0##z6wY0oIWD8;_E3iXfl}?sO+^jr1svdb?S%Txz`6LP>g*!XikuL{}PUHG|{w7@ygV^JpmkrWAfL z4_Ws*e7a1ZnOdlC)1^H(HEYsm(A}37Yf<6wp2`R-MQ=7Wn<3LxTva%NLF+$ul(=n9 z)S3KJxVIHu9v~r zHyrY|HsA{wiGte=^Bk|{zEn_C{bNl;))B^{YBk8MRzc!@03R-1ZPt45Yx3e3R^63r zI`^+b@Nzr2i?GeQUuZQ{y->VcPm?QIb{*b23o(wl6w({s>L3Y*FH*e#HdiVo4!Ha2 z49Uk4%cy!3@N1u>eV|xnVX9P1natVju^fq*8JVxt)Re=rJ1#=djjUBVNW+(ls^{Z; zmW$HAv&J6gu5({de1r&UHD}@EMnuGhL0uFXL=6;4m1WTVbY94twXw6SUL4>Z>)hf4 zDug}|IqMSoGdO=st0OG5`@rr+zuT1MwaHFVX_yNlJ)IQ^+m70Y?iQ7)qa|?Q848e$ zk@qC6g@{nHFhbp-`>sQpgoEI+h-R2`#x}q&&qg3=(gKQ$N=F0PX8_#hZH@ja_MA=d z(KB|lm9h~{4dF&=*~8*Rka>POagwMt)QziHVoF`sYmhIkSxymg*=nIWm&k<%-U8%L z2o3t%V~@LqLrLpxd=({S@^4$Ovv5*5CU4VRXb6}RgJLKZas4iKhQYp8Jfl_RHUAFNRmJqk3d5VU0BHb2GKkyjIwcv5xF+{YGz-VT3Pk{X0G&O_RTD zh7%1#8zN_xXC%@ydb@0ev*pEmB|f9&KWY*7|ES-P!~#hw0C0{6h+3YLEhE7>vjBD20U&FKrQT01 
zM+ALgqbS;RQM)Fj^oNPG;X=rA@LuZ;sdIC4)8&FR25|9{i$V2iHmrU(aYkbIoKw4W zDbdWJ@C#3ClGvyGh?&onL=kbXUo1FCNlHp;wf<}e zAjCMWVHXA5bbh+O^}C;RWuC1n$KyW~BkmK~OgaWok8T0gsvd4mn1*OWxT?V6Lha}& zdd31zDUR$1Rc-9+y!q`3jjWykl8weH0UWX2j8X>9zX8-hM!PKMzMYg2rD!4C;Hi%Jw+iBTuBAaCX+I+OcV zUQ$6CN5Z|q-_l-JI`q+uEC%9n%jH{f24z2dW++;Yz^}5Cm2+xV<=N74d~KGh?FZp>e#Co;2*|lt2s@ z8f{BWm&YrT;8drR9O{u*+EYBj4*S!^6f2i^_W+(^LeW=M{b|P8cKv?ks)w67IYBDn zZMY~h_?B2m`;iWN&wijD;!3~c1oX;Kd`hj>{ctwRksp<2Q=;mZ)#*wRd>;^wO-C3! zmyV?FVIPjLC0%Vhog64y_?t0Gujp^4GnaH?mg--Uv+$$rwYYln`yPMA?)b84M>7}B zGM-?|S#tVs-X;$;j!%8HhE#cBAX?&GKsm>+idetDXf8K;lvbbMhYOCZaMANkI;CO+NlkC+T@=eT_0xx$kRnAWwXbiRp)zoG>**yrhcY=G~N`--e z$1g(FNPqnejj;=XlG@_M9IXetxw*wN%m8qP#q9tVxbfryAaj3`?i9VYq}I2rxQ&U~ zu~|fU?|Rxvzq4Z}$oQZEjji#!JD|_8Myf3NTz?d`x>5|l+@g#8QkD|^EdapBa|&IaOV%rFXcH2{{G$>6Bl&`g@2r&Q z^q_VPp2jlv!vujIt*szD3vOKAMl)s=uMIDrKrxJ-Kj&GWu2M2Swvl)&j&JR~a}O7; zR7O=HCWN*GzCagER0ft3m{_r3VgpUnQx6RJK-(5Q5A=DC{eBS;eaqjfm=IG+Lj&&7 zmPEeb)FdgX!sK^_6D@Uo5~Rh}=AZtEZ@ZuGDi;=|U=dqqU9nGt&yq^g&3`MtnIg={ zoS1ZKf5MnF&%i>Q92V3wV1c$p+h%Kx-rre}l z2ZzFU=0PWi$y0KSHjG|R1Z{bN?Vo~Y8UJmyQd$Pq8+cRnMA{fUSGzqVfa5SZm^t#icjQg$PJ^#3T4 zaa59`2%u|dk-^%aJxp0{J#FY=n%bpAb%{mhSL1lx=M81gyCwn*tuLQWNwR1#`Bru> zxk&(#!m#Kfx!iVmXPNRY!F~{Dfq*G+kq%^azhQie4M9fTsF_D)Y6ZGPrwQff?NJ(H zSXx?^;aXVi04F6%-KJ18XKBaQt~T4gEjd{J?n)>1Q0e=xI%y16`;6k^;^{?2WU&3I zWhnOARl3xF67qGSfsB64KZFtCP~TSC5JcvqzXMA!nN zBX4MYJnk5%&np1v7QZe6Q;v1|N05+=hs|q3NzE8A}a){S|7(5(K-Vpi+$`7 z=CouuaXbWwWy55`{)_XH(Zp8u_3UqoKJx>NMSXVKSGniY`G_8y3rHm}vTb1znljo< z#!#q^t7`NjUyCL=vn8~5-1woj@4qD{GbJdN82s4O1v14*G0~R%-z(7u0j?df7ClBE z=FYa$GUg78Qy45(hyEuHDk#l-X-|+)`l$B9+SBwOgiFsG12mUszr=Z;-?urkLQ?rX zjDcqf@pMYg!qDz&K;Hs_FER|hG3pGVm(9E_RHqJwfnFB#xi}Ef(1^!5QSE;Rd|`?@ z8Stp?Z%yq4xYwOyWMnoUR4K(m(j{(!uf_m1K=GjR5g?UxquPMS0ZeQ8oDV5e3jgzu z0OV^5^6n$u*(U)gPCyP=%UI*Or3Tm*$_N2CP_0W`D&N<U;2|9&`O1RN;~sumjWSGV{tU$P5=(7wMkqw+!| z7WBq8@cV<)r0;NwJ)cP(x9eirO{eqk9A5$FIOGki)zbm+AxCNQ4EdC?U|83eeEDy_ z=%1Ify-{Nq<6@sI(ot>ozc0d&zj-_8F<)F=;0pv}S1NeHoH2j`D*|_$>bJ4EHdkqH za@@XhoOTc|yM>YMypfH9mZNC{*p64j7+Thk#n8{39v7Rz)T?R#%i2c0ed6?dgvZ<6 z8+)o%pKM|rgeE{T^tWMl&?vB4cpv4^5++={^0D7zxGSZ#1= z>1bz%k10~5JCTg_c(2m%)8iCMPba}o0KY4l>@hhaY}f{6$RFZN?|I#q9TpCz%4^mM3o6 zimYJ>-$2%CSD<~5*+7Wog{~ClXbv6U9yhRjHv*C&Htu&AI4`WE#e`(#AOWR)9a(Vt zlku8{tA6^vW#umrP1hITl24N7ZB45|E@^b&EtG1oHcj{;3%HZ4&~^u{RKDNebs}8I zJyH026AK#~J1~g;6Xtu=oZMWQD{7Toea69hm!0E>+asNald6M?w51d+(vmodh}+th zKTN?29xi4yvddAsD87NfVCCukWPQX=yN$SkJyY8C78~OYnOA|gKf;?bvQ9i^@cgeS z%hqO`)?0GCIsA}`3ZD$8{VT5UQQ=2oK*u8?D=X{Eo>dqTh=NpAT?M@C3>>Id$N#z# zhE*TIn(s#a`puD;eAl5^0BU>_YhBwmMBJD;q zje0tz7u3RF`b5XO_=f5SV` zq`w(IB+`CNa~B}I64D`s;`lpSv%JQQXY6y-gcL~Y#Fgpv?C02LL4=r1`*xH#XwT+3 zI{X@>12(kQhB@~`$HBP8UfSH<>=@x}q^+&3`qqol^hULi zGbk06n|A>&(UhH=BWZ#Q_hxEHj36@qtIVyClme90is8fgY7~IhO|aI5C4L(S(FMA$ zk3i3#QRB;5=176ev3gLzx6v^s7J>@t-EdNC`AUG{2m|fC)^vN=I9-*wxTw3C;z_b! 
ziJR$ad_?Nn8$(t*!U=S>+}OfKXNh04V$QC13Aw$`Ej=`gFnqDScj&Jf=cKkQ;tGHK z0OVL<+bzEiW%uzyRwc%E4ERgcD&Z zp~9IOTH?snztE7R{QY+*=8;l{EmmDnlvbrF&M(2>#Eqpu;j6RQ!)xa44G%5V706Kzjg#7ihL;qLcs- z;$0Lj4LOLeimQVxt*}e61Y7lhd{BGS2dH@Q{BM|KWk)E>>=wB8z54s6WhrB|VDAd_ zE8(X|g6S@7+fL@pj#&!ZnW%+)wN#@MUB!Byv`B~ zxoC-Q8I)Uo5Yo|$&7BWYlchVcwBEfT^*t6rS;!Ro$Vk-z#CNfUzO}V=9}oSxr%S>>8$T&yIyW4h7&*cVoaASbH!(9dCvb@IUk7r>0tb z`ovZs6@)~6!~E`I#DMjjO|R-)=KRkrc9wfv{n?JwRUca}o09SYtXkdhV%-WiSw})4wGB;=e}@^0_?h2I9YXRi=uN%@Z@h+Hg}BS>$2Z zvO(7~@yw2$-3M!CQ3OsQu*g!c47NZVC({Q+@<&Ux1x+X&K|;0gzwAW29_c>bBvEWUg*(;(oq{$Wc{@?<8}F`I!Fp$CmjC=!5tkTh9e z1(gG*G^X@iY={7Y1bITMa`+1fVnhp?8gn6dMed)rJ?>Pp2_9!LKey z4`NromqzFN+^Z_E%tB+TPY;kwTPg_06Ch~7o66{sHmk*V97t)IngM%~V&~Vw{-j$0 z*5VdW_<|m6{wl=asZ!1L>g-fRSA~wdfJBr4FWz}FANWdvg_C^??zO|#M_l(JU(#en zn7(UhTw!x0;BQh@t8R9eiFWwB*XiDA_ zGM*(E%Z!C;X=?IJxf32aEC)xo@tT^Wb??=1p^o&m?Xl+BQUfLouW-220A3_8c(R3nwm*cQ%zVe$HzTS$X0@A5qQFVjyqGhj{{oc zAY_a!L<=*--C~MyZ7z-^@h$41-NKf=HZ1Dvh8b8^!7tZzqHK=yv%e$e;I5aaV;f(d zjheXdkMW-BGp3+4x$VvJ;@?j{MvzcRQ94e(Cu2+r_egkzM?gt$TmNVeTb(0rU}>iz zC6NxrPZMxGXi#LRl_KZEX+z4-iAvmID1LxnzZ~)E#xvU+ZklB0a0O5lD>p|gFoRBL z#>@h87v$YC{Po~o`vunKV7q)jY>*)yGj0=Gmcuqr(KgOnRjE_HbBZQK=%9O;mC7{p zTWRr&pUtjCgBj5q{wB573(X$hwcyaIr*VzXkrx^&6!7Pbfa>1Il}kN=-KF4%Wq8{Q zuhpdM7n6@?49DA^Z1-V%1=maP9P0~pD8qH*h+->zjP#`l&fDurgI#OE6Gsk0(W$rE zV%Dgs5U2_PdP`xiF6&TYx9a?!o*udEO5Gw(XO?aENk%{9ac}RSfqIUg97mH-=z6n>S*!YaC-{5VWcI+L1GdfslJivB44a^ z%ElizCI)q=cu?P1vTe;vl%$X1Xek(p+eV+LL06am%zKf$~Uti)?RVqB&X-U*^?Nw|Z!f~1Ga9M>PE7gX4Aibq z!=kd$s9gRUavq(8KfQdx()axGi=r=3gXJp%cxc9J_+_OBlRTa9r+pLO47`noso%ql zoY5GQx*03qnuOi6TUlLELuRZDG-4-<)osT3l`h28BQ6#d{a%$?efkJr3}cWLQ6xE-Aj{u<9r-i)Qc2EI3@ z5}RlIJ!bsrf1v!x7>vmPI{lrPwD4)npWM61NL|&wUG#B8g*PfYADo!Wn>CpeCo!+_ z%*XT%6*xA!5iJ>BwB$aH*)`>;?)1ZRUl}@9eSnOdHf*n~M&6I6qvo6ueP8?2mp)!{ z#adj=z9aZ}TQf3;TOolkZ==@)NAUtrf7 zEZfi{7KEPjHj%AgN@cF)fg{80Yx!2+GdNBP#FV?-}0~#|h zt#AmwX+&RR1(sDb!jBUC?euh9L!V-g2fmIxl;otN#ovaHYZ`G5{c!edCsNY}A`tk} za+5oA<}?Nm&P03LNn{SW249{zh40QRz<~Z&pST@)_f9`*)>XlC@Ar`7|1+Lik&3bh zci@+A-HoMXlhC~8XIL|3DW18#6O~J!!Jki0#bZxDhK$#q!kUqb@l>$~fp$MSGdz}y zTci?2A4isHI%nsZi9s^Ho);FWE4lDIz71GbUIU-a(%1M(4LXp6?|9eY-+f-BkN7L( zZ2t&8zYo)YT!Q-4i=xlwPygM&&BMSfwpS0*ed(A!-HSkHC#q{2(c*8v=*}6i1}(li zcNYCq`hoNPm^eo@<@`BJ&KZG;<40YB$hzXF856}ht+MPmUOuPKkB`^<8gCuVKn6N7 z^4`Z#T=P7(-@O>$>wFW-pWh5*oW{Y_2l4ZJcVqQyN0HIF3yo9Cu#lF%!Y5F&YcKH`p9V1;*Ts$L{XL%a@LwgGrk|x+wZqpr&LoX6#;$m5p~} z@%+Ne+&gI{;Q$2(E7W0j;r*2WS>`kX3OFv!z(JR*4V1CN!sjdpEqFW!Dj4$IBu+!1 z)rp;_t~yW&fXh)upy=aVOu2VDs#=tpC=uwWT!M<6Wte}_Zqit>42`pwpd@&{P)7~Q zR{^CZ9YEpqD+a9X+2b?qKmYSTt$}Ag`)19Wb;;w70a@sl`T6IcTi;Rmv5&%_R`jrA z;5p;o$?v*LJO{V-_I7JfOey*VA|m?GxB;uy)xe)KA4|-{sH^5ZbiZPr=ETW1q^1tI z5V$3MAm~1qnK2kAPqpEip+nH>KZy~;ufg$_6Bv1I&lar6i$35;RXJ9yI|{DEJM!*D zC@XBlua~}r#xx4KgJ}QOax9z^^cmuN&!tZH03Es^22_0FfqaGD&HlWd8h{u%B`s$7dP`wCLW+=b7I4$QzcQ?V`hcW>)ON(0n!>Z$1S0o%x%e%|zb_ z7Q?ltB3C}_g4vS0VQ)U$1r0yb)zfr7O{6S7NE6*kOP4HmggKH(_c$hYh-p0`N5hSu zX^9EuW*3EKFvARRxV9lF<1ryX%MdMerW0c>%BXdmqr&@{Bhu!c<_I_27PsyqcJiFu z%sK30(g;BNK3E9yUbxywc}&2#@FIM6u+yn1z%HyjWjS;L{z@#{kb}p)tMGDb2j2N8 z2ZbIl?(jBZ!0g#*+)$00%5=;rsmIw%qAv%t(m%wR`DN%XE*;;$ue|aK9)9>?D=(ZH z^!D3tTai2|;`qGdK$dawSZ2CFooxOGIR_nb!apFX4Fia_S1d>wpv7z!s z%6WZ>oaiEW@4fe0!i+a9*FfWq7OKBBZ_uuUpx%6iz-cW$bK;Igy)c4221 zbiw5|9YnhrF`^|{EVJ%tWuQAQU6gn(hE3)=If(?z*Db z9Bs4Pp19ySUZ3Fz(uf7lm#)7TiDgh#tzCSpL}vj`0@T^aHYP?C8dk<_U+rP z=Wb8uu?s-+**qJ9t%4GR-Pa|>`4?Y&A@|PYy_J)s0gJ|Z+KDhADmLAMS_cVr{Jb-?l z<7gf_1^L&cTdwL%^e*E3=38#L#foHXPZWamZU}c>T_YBGF7`=tb8{vBV$@=jQuHyD zGb30sw05E?1WP+ac{RE27m5KFh+W@UFN|q 
z7=kTI{67U4zh?)`2sOKK=<2)P=93-I+S*!7ojUchdSVn$&UH)Fkui3TX_dAMTe2|o zq$q%Un}P+Rsbki=UC8rq9&6l7dlHHgh`p^5Xa0^Bc5bw)GqIBsi8Z9y-YDXF8@ZD% zQM6w19DEjWoQ0E}Sqpv772R#=K1L4#B@qyT3k3N5;Kt{P?~|AD z%E!l$nKK&?&KheyM<~F!XFrF7{v3qmTxC87gI%TRy2;d=_p0w;l7)@W$_`&p!LivSfxk0uzrMn~t~Xh_e%JL}FK# zM&^;7?ZJZwTOD-!oJ6}2<3o%8=ic7QH|~~3=<$5~9oq_zWe2J&YaG9eMSG$6;e)R$ zs{jK_Ff4MSExExm^F_e5fVwI-UogFSx%8$zcP9`{8!~el*nW(=O7MI``@BQbo+J=^ zQya1Q4LdemXNm%k11pu4l~=fA#wI7BU&RbYnR$rZvoptJ<_Pi`X#77CMh z3qrd!k)3t^-zhI~H`X29I5~L2cF0bVJy8fo!mZOuT#S3D5rN#ZP1;3ZqAV#v_jtYo zWnFht3shbrMSy)3yU61^#0WczCw}f}{&1x7(%bVeZUN^@v;WO?`V@4jw+T|$aJ zHX?S4`9kNO>wxe(rmmt4wY0QY;?Nuu9>UO$W}{}*qY)!Ug!Y3lk20e=>_oG}@1(>B zlCi|uJG^#*$Nbx);Tch97kBLF#;TJWY^5=OvyITY!w!@^l4MsdrgqC>aHP{n)()*?z-d0kB`(T zbsiCj8v)*r?91|b&pqD*eC``JO*0oSZW-+5h3VpHyerrT=Y7j1A(Wy^AYnuwUse=& zY&d*@QQYxnpa|lqA3MhEWX8kg(fBXZ!blgkL`n7tiFQ$EM4k~v97*m<(Aizw+&&^} z5B_2gT4R=N2#)$0Y$;?1o$Z61g69)IHG+qmQE*dL&@IXrN--+t#XD7>NF$Zio-`VMI+`VfNk@=OG z3&i%r&L;Q4BG9QqkglD4|0jd@KxArLXK@k%KD+szY|5z^xCq$z1at4%p6%G7L9ehVkMYMNkkIb)_$3uq>Ss~z+M8FLM_&%qxg6~5! zNM{##P90o{G+Pb=GXP~WGdUo?F2bT?|tQ4)5D<266S z9;JwtL#r44nB$#{Yo%SpvEyfVTjaxrZUiRSiE$S*9YQC$55}8kG_P+ghivz>?6Ix0 z-J7{ztV{lnVv^Rv;Q5Vq%VF>`6Vc3fSLbFFPKrN!R9SY^DE@e_(%nN7x?^1{UtiB| zDYUzmvJIM`N#>bgALud91oNzLQXm>W(-;|+?*ON{zQ?bo`+j9UZ;TK$(E<6LaXX7g z@CXJmqg$sN1Mu{{Msx-@5a6@ah&&2C{%u4ajTXtg{wBSZD14m~kH8J>Dcn4|LiADK zu`%!kV?-X^H=NY5=nlNG^09+%<_yNd+Gi|tM#vd)XF77suRU_5{WBjr5_I9-O3^!> zF9?oavrg!u+||e*MIQ%2?8{RGuQzC)gJ{-B3|?QbyWk;_bSI(c>uN0?b{ttw6XZqi zchPH8eL1t4-)bK);%6q-A=?@2FjxrM|BhAN@|{6wJ&uU>E9YJ1bdJPd-Ond zZlmQ!%)mTsS6tFYNd%lkfbVd=_qg%-$oqirKoeclNqLQTUN~^WxqW7>+<50F(XQ`U z(Z^9`j{Z~Z@y1{f6t{(6wp%so>+7v~B)s7n8LqD;Y!mI0qu6F+$IDFmrN|6klED1U zy&D8`qTOuamWLfDM!sa~PV9wyPW$m!TW-J}tj0_n5004H)q?bq=*)AF=Zi1Cu~u2H zB`tc1V0CI}Xb5{P5;Nb%h&pzpIC9OuO{d8&BlE*Hfu zN2JYWIzh($&UAV>N5S3!ip1z>IQiGHu^~Xq9`6NOh23V^CLv=z3>JlUcVmh~yEwE* zaSe_r$zvH_UjaB`)Bc=tm0(W>BM2!N_}O?e@R=C=jN~yCdz!2Y^vhca1t`jYnCcXsQFrP_mvlM%pQ`)zh=Cgo(p@|bG_U%Oa^%Vr- zD*D(Fq|oD?oYJtC5yN* z;xKp}y5QwL*a~=W@FhSYz~@9)OAjPhYf))A7+@mZ8BYikXohWt?-I5fBOurbi`HB+ z)kb6qupTM??5=S}0NU%2)&N=p6pb~q{B^dg6QIyzKtXQo({XT?d!@({;M&%F-^fXpZmJ0a?bM7A-z^=u4Y!cGn?6K)NM zA1(IAt(0PrFIkT4^LY_$IW&QY%$ktFK|^F2k5&yi>%4rgaBzSk!3?h12iVvy*omOK zF-4*Apm0_xlp8w%vn&%uAfJyM7^eF#BLK^FPHiO?1ng1c`D~HUi#~NDt~IwWvaDk+ z`Z!;Yi!joCZRE(2ku6P(&&JM#S+j_pP#VO(UK=6Ez1oAb+63#0eFwgG`Cer`QuHz4Q=&}=aTfXH z%njQa`>Wg-U&74MOHLZ`DK)KqQT&FPC^UFuXWBESx4ZEdaPv#x=F@GFkO@8+9t4*7@VZCpp~d-Itucy{tl z^hh&LjDNF(Y1TRn`GPty<^;@We7GaS9DFeXknO-knq=Km{Bh&+o%gAt9>=^|t^1Z8 zZ(97gjwdZGOdQBz;5TmkY$ALo^Ezj-ubsrZA}vg8Z}i%t*h}PTM^3LjdT!9VPme&Q z$PkE$=%X7aZz764H-xJ-Bf%7S+&D*sFA|O_8le~NNH9k%IQO8d^~oYrl{$-C2$;p9 zwRTwFYV6ZbKgG0Z)B1K2+pq1*3>XWci7d%IQ+LY@@OxQ?B32c>IuWBd^8LcjeR zq%(8ihf@c*@xG=Png(OzU&A!6Fp!)2N|CFJsEa`};3?1dY}nDt`T?PJHC z?ZFL$&B;{9PAMzJ9_Q3EVm7VF99Ymmj#FyqLOyox+-bRZDk5rR>Lse8k1t_%e4G}j zg`GFPv^fWln|-m^-mDW8N>%h`oxV`ct7F_npsudYa_LlHk;v2=y$C42_`;5NqUbY` zB+b$*Mi|n)lmkfir3TD=Uv)AfQ*)8F^o5cu3KJ7(grm~*24N&WMe_@V-sQ)bHF|kJ zm^FICKg;F6pobAV^b~!x-e~S$Ux_Bp?`hRD^Y!FD=-hgxj^Mk}?J+={LO5!okM4}T zk(?4;i=&4WdVE2%bH;fAd;vsD=r_MWJwM>ad3>!BeoB-%z)-5h1X7cJEt$at8{-ftp<7+o*C~0 zGyQ|CDE6Y^R>wM_MThqngJ#L(OWvsf03ZNKL_t*Y7pYe7zVpysm~+K=UoyC!lCuf$ znaX*Ve8(&Goj_DXAFWz+sW20+`<~6-oq=v1>m<<^YvRtj9tRrs z?c0aRlP5bXV+oKO-sgOYyVYgZtU1RvLTiy(Vu02{`#N*RGH8~iNPwz~Xc1^lt*NQ$ zHhO17FV@4!m=|_bC}=pn!OSs>WgeV*EyISEJ+^U<;8X1NbqYl&z?Ck7N zexhR%hyd5ZXXW$$aCRzESQ;QVTiMV!>S@V<6HNN~ffJI?wqKev=h2iH|beyA8gONFF zEavrvn^V>o-N-mNLCZq22J@oe;=7pxu@u44vg(CWKW6DDcK#z3_$QR>>X`Tt;NZfZ 
zJ$tNvX*B2j#;0_Nf7R{Vw_Ag{-9^Vwd;*axVMIkAUuqmliB(_)I|jZ;_#o%a&6}F8 zB2H=`5`B?tv~D~T>z~1l;$dZWBd>7tY)0_8U~@7gV+}N-kRj{ozHVPe#;g~bOm3od zvE*_dg@uKVYX+<}?OO$`py^z5%j8)J?oJ>@!Am0gem32aJ zpRVo`4od*XYU^XJ_%0Kl&w&Usfz;}YycyoWva~>`&woYF9 z%`KE8Ci-IKbITi<^EUWWq3U-_Cq8z zE$`&6S{;QSUBtB}Saj~mSe=#6GDd5q^;XIK1n4GC;m7B>Qr`$fMD+2YPrv{Rd}z|Imos~lOT3!76DUqJLpm!&WIHMDi>R!2l{ zjDAU@a|OE)G=h-NTF%E}N0|4To93m~*4EuLR$0qnF5bn zqOD1|Yu7HTk0Id~oo7re&PpT1)TvWraVcFR1_b!L*}Hcyrc9XB_bmFIAR&i z`E=}ja2W_bh~vEsL~olzQj5NhMpQO>P?G2AEk4lwh8yR@u#?1r2F??qn=eCtb5yT4 zuEl0yWwtQ}frv)pyaLV}iDpDwx+Jk<&TBYX07rwb{hVAM^#5n?UBIKN(sbb$Au%KY zia z_WP~9YQ5i;v0R~OJ?U<2^j@Ee6@KYb(ZcscDOxz+=n_h-jp`D|cbcyCRJa)JyYh!BQ_FJ#^+J)T)!Ao!|z{O^};z!!APY)j5sfhJ|>1zP!SPs$_S(5-ksy)-j6Z+ z%FNmm4u09; zP5J;=bs=&~b5WL8hPmYySWSIkarT5YA1FVV0B7Sk%tZ-olXFo0r3D4n3cS9z1y=?? zkKF1KoVX_jt5!IWyRZmtIG>gvh?@{<#5*E=$_mKyxB*o)+d4X*Y27)$!mU_3ai9|B z-h1zj$of-M5OL!gu9*=7^~K=reUw|MB7sz(P}5GLK&y}uTtex#M{fX6l|tTP;LZ7} zbVIOfhH<^%ceu*9Ndig>E)Sz-!uAGdU(WsJn{PA>_+26`G!UTI8Q-Z$t%VsHQqng{ zTKc$O3$@TLeNWqcy?iEpBz)Xt)5Xr8jbB1^*$bCVIo-~fWi&k9Vw|WfDupA)1=IJf zs9t4*DQybwtlER&`9SHB6yPss@wBxHXL1VQDk+5>F&Irt-*OksF=;4RXhu#*&*#ZXmf?;Rc0-P!T(|&~wvI_fJ{}>5jxNZCW5yqpL8B zrH6A?uXXFzAvZTS;B~{WeLiVhk~|d zzBU);7ct$n7$-+%1Ak^bQAZi$Nd9=x?U7bvet~b^yjdfBA+HU*b`1azCb4P#Y)xpx z%?jVka8a6gI=^VW<)%8E3B9LS80kE|e=0tVq=Wya6_D>B6=9xykyDA|53)up8x=)* zVt{?cl-46YYj?)?wjCqi2}Sf&A}nIANVu=h6wdvP9(Nhe*n9hIbnB!tn96l&;fDbC zG%Rsr1W%1nM%QQ3$G8M$?ewR=zmeQ<{3To=ef~mH-*u?b{_w*Owap&q>izfMukS)e z>UFI(@l21~0Qa8(%}UOV-lR#Bj5OkI9}BHE)&jna&Z07;*+Hj$K2(MNYsWeIFA zF$_r~m4Q^;_!&}8%VeIk>d{q;ADA><)a_XyNo^J{WB15jz}O?v*u)WQ?SaU+yZl zOv&=|&p+3GKXKwjpNSvu$BiO4^xc_whHDPzB;C%sW5ePemq%jE?`3zFO&1m4iuAdvQBqtDmoo;rc}|p7IDye);as^MFvY<-^lOwS{%_=0 zzKZG}{s`E$AI#zTu(=$|oN0(LWgyMg0&8Y9?3sngHA7Pbu+b_&ZMmw`r!vgZZHk3E zyV7EZ73rf(r{>Pt15mD>wE7{kNgo5gsfd!zt``fG!A?~!i7Lqm6+I+OES}BApK0;) zbpt?x#I>0kywHZg_Y!AH