diff --git a/docs/en/week09/09-1.md b/docs/en/week09/09-1.md
index 2c42e6b56..337dd7797 100644
--- a/docs/en/week09/09-1.md
+++ b/docs/en/week09/09-1.md
@@ -62,7 +62,7 @@ The answer about whether it helps is not clear. People interested in this are ei
 **Fig 3:** Structure of Convolutional RELU with Group Sparsity
 
-As can be seen above, you are start with an image, you have an encoder which is basically Convolution RELU and some kind of scaling layer after this. You train with group sparsity. You have a linear decoder and a criterion which is group by 1. You take the group sparsity as a regulariser. This is like L2 pooling with an architecture similar to group sparsity.
+As can be seen above, you start with an image and an encoder, which is basically a convolution and ReLU with some kind of scaling layer after it. You train with group sparsity: you have a linear decoder and a criterion based on the group L1 norm, so the group sparsity acts as a regulariser. This is like L2 pooling with an architecture similar to group sparsity.
 
 You can also train another instance of this network. This time, you can add more layers and have a decoder with the L2 pooling and sparsity criterion, train it to reconstruct its input with pooling on top. This will create a pretrained 2-layer convolutional net. This procedure is also called Stacked Autoencoder. The main characteristic here is that it is trained to produce invariant features with group sparsity.
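The first stage described above (convolutional encoder with ReLU and scaling, linear decoder, group-sparsity regulariser) could be sketched roughly as follows. This is a minimal PyTorch sketch, not the notes' actual implementation: the class name `GroupSparseAutoencoder`, the layer sizes, the group size, and the regularisation weight are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupSparseAutoencoder(nn.Module):
    """Sketch of one stage: Conv + ReLU + scaling encoder, linear decoder,
    trained with reconstruction loss plus a group-sparsity regulariser."""

    def __init__(self, channels=16, group_size=4):
        super().__init__()
        assert channels % group_size == 0
        self.group_size = group_size
        # Encoder: convolution, ReLU, then a learned per-channel scaling layer
        self.conv = nn.Conv2d(1, channels, kernel_size=5, padding=2)
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        # Linear decoder: reconstructs the input from the code (no nonlinearity)
        self.decoder = nn.ConvTranspose2d(channels, 1, kernel_size=5, padding=2)

    def forward(self, x):
        z = torch.relu(self.conv(x)) * self.scale
        return z, self.decoder(z)

    def group_sparsity(self, z):
        # Group sparsity: L1 norm over the L2 norms of groups of feature maps
        # (the inner L2 norm is the "L2 pooling" over each group).
        b, c, h, w = z.shape
        groups = z.view(b, c // self.group_size, self.group_size, h, w)
        return groups.pow(2).sum(dim=2).sqrt().sum()

model = GroupSparseAutoencoder()
x = torch.randn(8, 1, 28, 28)          # a batch of toy single-channel images
z, recon = model(x)
# Criterion: reconstruction error + group-sparsity regulariser (weight assumed)
loss = F.mse_loss(recon, x) + 1e-3 * model.group_sparsity(z)
```

To build the stacked autoencoder the notes describe, one would train a second such stage on the pooled codes `z` of the first, giving a pretrained 2-layer convolutional net.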