Commit: Fix to #161
- significantly change the documentation file
- link to it from index.md
- remove the image resizing script, since (a) it does not work, and (b) it is obviated by using ImagesLayer
- add sample prototxt that uses ImagesLayer.
sergeyk committed Mar 20, 2014
1 parent c10ba54 commit 3b51aab
Showing 4 changed files with 286 additions and 57 deletions.
70 changes: 38 additions & 32 deletions docs/feature_extraction.md
@@ -3,59 +3,65 @@ layout: default
title: Caffe
---

-Extracting Features Using Pre-trained Model
-===========================================
+Extracting Features
+===================

-CAFFE represents Convolution Architecture For Feature Extraction. Extracting features using pre-trained model is one of the strongest requirements users ask for.
+In this tutorial, we will extract features using a pre-trained model.
+Follow the instructions for [setting up caffe](installation.html) and for [getting](getting_pretrained_models.html) the pre-trained ImageNet model.
+If you need detailed information about the tools below, please consult their source code, in which additional documentation is usually provided.

-Because of the record-breaking image classification accuracy and the flexible domain adaptability of [the network architecture proposed by Krizhevsky, Sutskever, and Hinton](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf), Caffe provides a pre-trained reference image model to save you from days of training.
+Select data to run on
+---------------------

-If you need detailed usage help information of the involved tools, please read the source code of them which provide everything you need to know about.
+We'll make a temporary folder to store things in.

-Get the Reference Model
------------------------
+    mkdir examples/_temp

-Assume you are in the root directory of Caffe.
+Generate a list of the files to process.
+We're going to use the images that ship with caffe.

-    cd models
-    ./get_caffe_reference_imagenet_model.sh
+    find `pwd`/examples/images -type f -exec echo {} \; > examples/_temp/file_list.txt

-After the downloading is finished, you will have models/caffe_reference_imagenet_model.
+The `ImagesLayer` we'll use expects a label after each filename, so let's append a 0 to the end of each line, editing the file in place:

-Preprocess the Data
--------------------
+    sed -i "s/$/ 0/" examples/_temp/file_list.txt

-Generate a list of the files to process.
+Define the Feature Extraction Network Architecture
+--------------------------------------------------

-    examples/feature_extraction/generate_file_list.py /your/images/dir /your/images.txt
+In practice, subtracting the mean image from a dataset significantly improves classification accuracy.
+Download the mean image of the ILSVRC dataset:

-The network definition of the reference model only accepts 256*256 pixel images stored in the leveldb format. First, resize your images if they do not match the required size.
+    data/ilsvrc12/get_ilsvrc_aux.sh

-    build/tools/resize_and_crop_images.py --num_clients=8 --image_lib=opencv --output_side_length=256 --input=/your/images.txt --input_folder=/your/images/dir --output_folder=/your/resized/images/dir_256_256
+We will use `data/ilsvrc12/imagenet_mean.binaryproto` in the network definition prototxt.

-Set the num_clients to be the number of CPU cores on your machine. Run "nproc" or "cat /proc/cpuinfo | grep processor | wc -l" to get the number on Linux.
+Let's copy and modify the network definition.
+We'll be using the `ImagesLayer`, which will load and resize images for us.

-    build/tools/generate_file_list.py /your/resized/images/dir_256_256 /your/resized/images_256_256.txt
-    build/tools/convert_imageset /your/resized/images/dir_256_256 /your/resized/images_256_256.txt /your/resized/images_256_256_leveldb 1
+    cp examples/feature_extraction/imagenet_val.prototxt examples/_temp

-In practice, subtracting the mean image from a dataset significantly improves classification accuracies. Download the mean image of the ILSVRC dataset.
+Edit `examples/_temp/imagenet_val.prototxt` to use the correct paths for your setup (replace `$CAFFE_DIR` with the absolute path to your Caffe root).
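
That substitution can be scripted; a hypothetical one-liner, assuming GNU `sed` and that you run it from the Caffe root:

    sed -i "s|\$CAFFE_DIR|$(pwd)|g" examples/_temp/imagenet_val.prototxt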

-    data/ilsvrc12/get_ilsvrc_aux.sh
+Extract Features
+----------------

-You can directly use the imagenet_mean.binaryproto in the network definition proto. If you have a large number of images, you can also compute the mean of all the images.
+Now everything necessary is in place.

-    build/tools/compute_image_mean.bin /your/resized/images_256_256_leveldb /your/resized/images_256_256_mean.binaryproto
+    build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10

-Define the Feature Extraction Network Architecture
---------------------------------------------------
+The name of the feature blob that you extract is `fc7`, which represents the highest-level feature of the reference model.
+We can use any other layer as well, such as `conv5` or `pool5`.

-If you do not want to change the reference model network architecture , simply copy examples/imagenet into examples/your_own_dir. Then point the source and meanfile field of the data layer in imagenet_val.prototxt to /your/resized/images_256_256_leveldb and /your/resized/images_256_256_mean.binaryproto respectively.
+The last parameter above is the number of data mini-batches; the size of each batch is set by `batchsize` in the prototxt.
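
Reading that invocation left to right, the positional arguments are (placeholder names here are for illustration only):

    extract_features.bin <pretrained_model> <network_prototxt> <blob_name> <output_leveldb_dir> <num_mini_batches>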

-Extract Features
-----------------
+The features are stored to LevelDB `examples/_temp/features`, ready for access by some other code.
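
As a minimal sketch of reading the features back (assuming the py-leveldb Python package and Caffe's generated `caffe_pb2` protobuf module are importable, and that each LevelDB value is a serialized `Datum`, which is how `extract_features.bin` stores them):

    import leveldb
    from caffe.proto import caffe_pb2

    # open the LevelDB written by extract_features.bin
    db = leveldb.LevelDB('examples/_temp/features')
    datum = caffe_pb2.Datum()
    for key, value in db.RangeIter():
        # each value holds one image's fc7 activation vector
        datum.ParseFromString(value)
        feature = list(datum.float_data)
        print('%s: %d values' % (key, len(feature)))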

-Now everything necessary is in place.
+If you'd like to use the Python wrapper for extracting features, check out the [layer visualization notebook](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb).

+Clean Up
+--------

-    build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/feature_extraction/imagenet_val.prototxt fc7 examples/feature_extraction/features 10
+Let's remove the temporary directory now.

-The name of feature blob that you extract is fc7 which represents the highest level feature of the reference model. Any other blob is also applicable. The last parameter above is the number of data mini-batches.
+    rm -r examples/_temp
1 change: 1 addition & 0 deletions docs/index.md
@@ -32,6 +32,7 @@ Even in CPU mode, computing predictions on an image takes only 20 ms when images
* [LeNet / MNIST Demo](/mnist.html): end-to-end training and testing of LeNet on MNIST.
* [CIFAR-10 Demo](/cifar10.html): training and testing on the CIFAR-10 data.
* [Training ImageNet](/imagenet_training.html): end-to-end training of an ImageNet classifier.
+* [Feature extraction with C++](/feature_extraction.html): feature extraction using a pre-trained model.
* [Running Pretrained ImageNet \[notebook\]][pretrained_imagenet]: run classification with the pretrained ImageNet model using the Python interface.
* [Running Detection \[notebook\]][imagenet_detection]: run a pretrained model as a detector.
* [Visualizing Features and Filters \[notebook\]][visualizing_filters]: trained filters and an example image, viewed layer-by-layer.
25 changes: 0 additions & 25 deletions examples/feature_extraction/generate_file_list.py

This file was deleted.

247 changes: 247 additions & 0 deletions examples/feature_extraction/imagenet_val.prototxt
@@ -0,0 +1,247 @@
name: "CaffeNet"
layers {
layer {
name: "data"
type: "images"
source: "$CAFFE_DIR/examples/_temp/file_list.txt"
meanfile: "$CAFFE_DIR/data/ilsvrc12/imagenet_mean.binaryproto"
batchsize: 50
new_height: 256
new_width: 256
mirror: false
cropsize: 227
}
top: "data"
top: "label"
}
layers {
layer {
name: "conv1"
type: "conv"
num_output: 96
kernelsize: 11
stride: 4
}
bottom: "data"
top: "conv1"
}
layers {
layer {
name: "relu1"
type: "relu"
}
bottom: "conv1"
top: "conv1"
}
layers {
layer {
name: "pool1"
type: "pool"
pool: MAX
kernelsize: 3
stride: 2
}
bottom: "conv1"
top: "pool1"
}
layers {
layer {
name: "norm1"
type: "lrn"
local_size: 5
alpha: 0.0001
beta: 0.75
}
bottom: "pool1"
top: "norm1"
}
layers {
layer {
name: "conv2"
type: "conv"
num_output: 256
group: 2
kernelsize: 5
pad: 2
}
bottom: "norm1"
top: "conv2"
}
layers {
layer {
name: "relu2"
type: "relu"
}
bottom: "conv2"
top: "conv2"
}
layers {
layer {
name: "pool2"
type: "pool"
pool: MAX
kernelsize: 3
stride: 2
}
bottom: "conv2"
top: "pool2"
}
layers {
layer {
name: "norm2"
type: "lrn"
local_size: 5
alpha: 0.0001
beta: 0.75
}
bottom: "pool2"
top: "norm2"
}
layers {
layer {
name: "conv3"
type: "conv"
num_output: 384
kernelsize: 3
pad: 1
}
bottom: "norm2"
top: "conv3"
}
layers {
layer {
name: "relu3"
type: "relu"
}
bottom: "conv3"
top: "conv3"
}
layers {
layer {
name: "conv4"
type: "conv"
num_output: 384
group: 2
kernelsize: 3
pad: 1
}
bottom: "conv3"
top: "conv4"
}
layers {
layer {
name: "relu4"
type: "relu"
}
bottom: "conv4"
top: "conv4"
}
layers {
layer {
name: "conv5"
type: "conv"
num_output: 256
group: 2
kernelsize: 3
pad: 1
}
bottom: "conv4"
top: "conv5"
}
layers {
layer {
name: "relu5"
type: "relu"
}
bottom: "conv5"
top: "conv5"
}
layers {
layer {
name: "pool5"
type: "pool"
kernelsize: 3
pool: MAX
stride: 2
}
bottom: "conv5"
top: "pool5"
}
layers {
layer {
name: "fc6"
type: "innerproduct"
num_output: 4096
}
bottom: "pool5"
top: "fc6"
}
layers {
layer {
name: "relu6"
type: "relu"
}
bottom: "fc6"
top: "fc6"
}
layers {
layer {
name: "drop6"
type: "dropout"
dropout_ratio: 0.5
}
bottom: "fc6"
top: "fc6"
}
layers {
layer {
name: "fc7"
type: "innerproduct"
num_output: 4096
}
bottom: "fc6"
top: "fc7"
}
layers {
layer {
name: "relu7"
type: "relu"
}
bottom: "fc7"
top: "fc7"
}
layers {
layer {
name: "drop7"
type: "dropout"
dropout_ratio: 0.5
}
bottom: "fc7"
top: "fc7"
}
layers {
layer {
name: "fc8"
type: "innerproduct"
num_output: 1000
}
bottom: "fc7"
top: "fc8"
}
layers {
layer {
name: "prob"
type: "softmax"
}
bottom: "fc8"
top: "prob"
}
layers {
layer {
name: "accuracy"
type: "accuracy"
}
bottom: "prob"
bottom: "label"
top: "accuracy"
}
