Merge pull request BVLC#161 from kloudkl/simplify_feature_extraction
Feature extraction, feature binarization and image retrieval examples
sergeyk committed Mar 20, 2014
2 parents 12c0220 + 40fbbc2 commit d074b29
Showing 9 changed files with 562 additions and 0 deletions.
61 changes: 61 additions & 0 deletions docs/feature_extraction.md
@@ -0,0 +1,61 @@
---
layout: default
title: Caffe
---

Extracting Features Using Pre-trained Model
===========================================

Caffe stands for Convolution Architecture For Feature Extraction. Extracting features with a pre-trained model is one of the most frequently requested capabilities.

Because of the record-breaking image classification accuracy and the flexible domain adaptability of [the network architecture proposed by Krizhevsky, Sutskever, and Hinton](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf), Caffe provides a pre-trained reference image model to save you from days of training.

If you need detailed usage information for the tools involved, please read their source code, which documents everything you need to know.

Get the Reference Model
-----------------------

Assume you are in the root directory of Caffe.

cd models
./get_caffe_reference_imagenet_model.sh

Once the download completes, you will have models/caffe_reference_imagenet_model.

Preprocess the Data
-------------------

Generate a list of the files to process.

examples/feature_extraction/generate_file_list.py /your/images/dir /your/images.txt

The network definition of the reference model only accepts 256*256 pixel images stored in the leveldb format. First, resize your images if they do not match the required size.

build/tools/resize_and_crop_images.py --num_clients=8 --image_lib=opencv --output_side_length=256 --input=/your/images.txt --input_folder=/your/images/dir --output_folder=/your/resized/images/dir_256_256

Set `num_clients` to the number of CPU cores on your machine. On Linux, run `nproc` or `cat /proc/cpuinfo | grep processor | wc -l` to find out.
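The core count can also be queried programmatically; a minimal Python sketch:

```python
import multiprocessing

# Number of available CPU cores; a reasonable value for --num_clients.
num_clients = multiprocessing.cpu_count()
print(num_clients)
```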

build/tools/generate_file_list.py /your/resized/images/dir_256_256 /your/resized/images_256_256.txt
build/tools/convert_imageset /your/resized/images/dir_256_256 /your/resized/images_256_256.txt /your/resized/images_256_256_leveldb 1

In practice, subtracting the mean image from a dataset significantly improves classification accuracies. Download the mean image of the ILSVRC dataset.

data/ilsvrc12/get_ilsvrc_aux.sh

You can use imagenet_mean.binaryproto directly in the network definition proto. If you have a large number of images, you can also compute the mean of all your own images.

build/tools/compute_image_mean.bin /your/resized/images_256_256_leveldb /your/resized/images_256_256_mean.binaryproto
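The mean-image computation averages each pixel position across every image in the dataset. The toy sketch below illustrates the idea in pure Python with 2×2 single-channel "images" standing in for the 256×256 leveldb entries (this is an illustration of the concept, not the compute_image_mean.bin implementation):

```python
# Toy mean-image computation: average each pixel position across all images.
images = [
    [[10, 20], [30, 40]],
    [[20, 40], [60, 80]],
]

n = len(images)
rows, cols = len(images[0]), len(images[0][0])
mean_image = [[sum(img[r][c] for img in images) / n for c in range(cols)]
              for r in range(rows)]
print(mean_image)  # [[15.0, 30.0], [45.0, 60.0]]

# Mean subtraction then centers each input image around zero:
centered = [[images[0][r][c] - mean_image[r][c] for c in range(cols)]
            for r in range(rows)]
```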

Define the Feature Extraction Network Architecture
--------------------------------------------------

If you do not want to change the reference model network architecture, simply copy examples/imagenet into examples/your_own_dir. Then point the `source` and `meanfile` fields of the data layer in imagenet_val.prototxt to /your/resized/images_256_256_leveldb and /your/resized/images_256_256_mean.binaryproto respectively.
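The edited part of the data layer then looks roughly like the fragment below. Only the `source` and `meanfile` fields come from this guide; the surrounding structure and the other field names are assumptions based on the prototxt format of this Caffe release and may differ in later versions:

```protobuf
layers {
  layer {
    name: "data"
    type: "data"
    source: "/your/resized/images_256_256_leveldb"
    meanfile: "/your/resized/images_256_256_mean.binaryproto"
  }
  top: "data"
  top: "label"
}
```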

Extract Features
----------------

Now everything necessary is in place.

build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/feature_extraction/imagenet_val.prototxt fc7 examples/feature_extraction/features 10

The feature blob extracted here is fc7, which represents the highest-level features of the reference model; any other blob name can be used as well. The final argument is the number of data mini-batches to process.
25 changes: 25 additions & 0 deletions examples/feature_extraction/generate_file_list.py
@@ -0,0 +1,25 @@
#!/usr/bin/env python
import os
import sys

def help():
    print 'Usage: ./generate_file_list.py file_dir file_list.txt'
    exit(1)

def main():
    if len(sys.argv) < 3:
        help()
    file_dir = sys.argv[1]
    file_list_txt = sys.argv[2]
    if not os.path.exists(file_dir):
        print 'Error: file dir does not exist ', file_dir
        exit(1)
    file_dir = os.path.abspath(file_dir) + '/'
    with open(file_list_txt, 'w') as output:
        for root, dirs, files in os.walk(file_dir):
            for name in files:
                # Strip the root prefix so listed paths are relative to file_dir.
                file_path = os.path.join(root, name).replace(file_dir, '')
                output.write(file_path + '\n')

if __name__ == '__main__':
    main()
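The intended behavior of the script above — walking a directory tree and listing every file path relative to the root — can be exercised in isolation. A minimal sketch using a temporary directory:

```python
import os
import tempfile

# Build a small directory tree and list every file relative to its root,
# mirroring what generate_file_list.py writes to file_list.txt.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'sub'))
for rel in ('a.jpg', os.path.join('sub', 'b.jpg')):
    open(os.path.join(root, rel), 'w').close()

prefix = os.path.abspath(root) + '/'
listed = []
for dirpath, dirs, files in os.walk(prefix):
    for name in files:
        listed.append(os.path.join(dirpath, name).replace(prefix, ''))

print(sorted(listed))  # e.g. ['a.jpg', 'sub/b.jpg'] on POSIX systems
```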
9 changes: 9 additions & 0 deletions include/caffe/net.hpp
@@ -82,6 +82,13 @@ class Net {
  inline int num_outputs() { return net_output_blobs_.size(); }
  inline vector<Blob<Dtype>*>& input_blobs() { return net_input_blobs_; }
  inline vector<Blob<Dtype>*>& output_blobs() { return net_output_blobs_; }
  // has_blob and blob_by_name are inspired by
  // https://github.com/kencoken/caffe/commit/f36e71569455c9fbb4bf8a63c2d53224e32a4e7b
  // Access intermediary computation layers, testing with centre image only
  bool has_blob(const string& blob_name);
  const shared_ptr<Blob<Dtype> > blob_by_name(const string& blob_name);
  bool has_layer(const string& layer_name);
  const shared_ptr<Layer<Dtype> > layer_by_name(const string& layer_name);

 protected:
  // Function to get misc parameters, e.g. the learning rate multiplier and
@@ -91,11 +98,13 @@
  // Individual layers in the net
  vector<shared_ptr<Layer<Dtype> > > layers_;
  vector<string> layer_names_;
  map<string, int> layer_names_index_;
  vector<bool> layer_need_backward_;
  // blobs stores the blobs that store intermediate results between the
  // layers.
  vector<shared_ptr<Blob<Dtype> > > blobs_;
  vector<string> blob_names_;
  map<string, int> blob_names_index_;
  vector<bool> blob_need_backward_;
  // bottom_vecs stores the vectors containing the input for each layer.
  // They don't actually host the blobs (blobs_ does), so we simply store
4 changes: 4 additions & 0 deletions include/caffe/util/math_functions.hpp
@@ -1,4 +1,5 @@
// Copyright 2013 Yangqing Jia
// Copyright 2014 kloudkl@github

#ifndef CAFFE_UTIL_MATH_FUNCTIONS_H_
#define CAFFE_UTIL_MATH_FUNCTIONS_H_
@@ -100,6 +101,9 @@ Dtype caffe_cpu_dot(const int n, const Dtype* x, const Dtype* y);
template <typename Dtype>
void caffe_gpu_dot(const int n, const Dtype* x, const Dtype* y, Dtype* out);

template <typename Dtype>
int caffe_hamming_distance(const int n, const Dtype* x, const Dtype* y);

} // namespace caffe


42 changes: 42 additions & 0 deletions src/caffe/net.cpp
@@ -162,6 +162,12 @@ void Net<Dtype>::Init(const NetParameter& in_param) {
    LOG(INFO) << "This network produces output " << *it;
    net_output_blobs_.push_back(blobs_[blob_name_to_idx[*it]].get());
  }
  for (size_t i = 0; i < blob_names_.size(); ++i) {
    blob_names_index_[blob_names_[i]] = i;
  }
  for (size_t i = 0; i < layer_names_.size(); ++i) {
    layer_names_index_[layer_names_[i]] = i;
  }
  GetLearningRateAndWeightDecay();
  LOG(INFO) << "Network initialization done.";
  LOG(INFO) << "Memory required for Data " << memory_used*sizeof(Dtype);
@@ -327,6 +333,42 @@ void Net<Dtype>::Update() {
}
}

template <typename Dtype>
bool Net<Dtype>::has_blob(const string& blob_name) {
  return blob_names_index_.find(blob_name) != blob_names_index_.end();
}

template <typename Dtype>
const shared_ptr<Blob<Dtype> > Net<Dtype>::blob_by_name(
    const string& blob_name) {
  shared_ptr<Blob<Dtype> > blob_ptr;
  if (has_blob(blob_name)) {
    blob_ptr = blobs_[blob_names_index_[blob_name]];
  } else {
    blob_ptr.reset((Blob<Dtype>*)(NULL));
    LOG(WARNING) << "Unknown blob name " << blob_name;
  }
  return blob_ptr;
}

template <typename Dtype>
bool Net<Dtype>::has_layer(const string& layer_name) {
  return layer_names_index_.find(layer_name) != layer_names_index_.end();
}

template <typename Dtype>
const shared_ptr<Layer<Dtype> > Net<Dtype>::layer_by_name(
    const string& layer_name) {
  shared_ptr<Layer<Dtype> > layer_ptr;
  if (has_layer(layer_name)) {
    layer_ptr = layers_[layer_names_index_[layer_name]];
  } else {
    layer_ptr.reset((Layer<Dtype>*)(NULL));
    LOG(WARNING) << "Unknown layer name " << layer_name;
  }
  return layer_ptr;
}

INSTANTIATE_CLASS(Net);

} // namespace caffe
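The name-to-index maps populated in Init() turn blob and layer lookup into constant-time operations. In Python terms the pattern behind has_blob()/blob_by_name() is roughly the following sketch (an analogy, not the Caffe API); unknown names yield None where the C++ code returns an empty shared_ptr and logs a warning:

```python
# A dict maps each name to its index in the parallel list of objects.
blob_names = ['data', 'conv1', 'fc7']
blobs = ['<data blob>', '<conv1 blob>', '<fc7 blob>']
blob_names_index = {name: i for i, name in enumerate(blob_names)}

def has_blob(name):
    return name in blob_names_index

def blob_by_name(name):
    if has_blob(name):
        return blobs[blob_names_index[name]]
    return None  # C++: empty shared_ptr plus LOG(WARNING)

print(blob_by_name('fc7'))  # <fc7 blob>
print(blob_by_name('fc8'))  # None
```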
77 changes: 77 additions & 0 deletions src/caffe/test/test_math_functions.cpp
@@ -0,0 +1,77 @@
// Copyright 2014 kloudkl@github

#include <stdint.h> // for uint32_t & uint64_t

#include "gtest/gtest.h"
#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/filler.hpp"
#include "caffe/util/math_functions.hpp"

#include "caffe/test/test_caffe_main.hpp"

namespace caffe {

template<typename Dtype>
class MathFunctionsTest : public ::testing::Test {
 protected:
  MathFunctionsTest()
      : blob_bottom_(new Blob<Dtype>()),
        blob_top_(new Blob<Dtype>()) {
  }

  virtual void SetUp() {
    Caffe::set_random_seed(1701);
    this->blob_bottom_->Reshape(100, 70, 50, 30);
    this->blob_top_->Reshape(100, 70, 50, 30);
    // fill the values
    FillerParameter filler_param;
    GaussianFiller<Dtype> filler(filler_param);
    filler.Fill(this->blob_bottom_);
    filler.Fill(this->blob_top_);
  }

  virtual ~MathFunctionsTest() {
    delete blob_bottom_;
    delete blob_top_;
  }

  // http://en.wikipedia.org/wiki/Hamming_distance
  int ReferenceHammingDistance(const int n, const Dtype* x, const Dtype* y);

  Blob<Dtype>* const blob_bottom_;
  Blob<Dtype>* const blob_top_;
};

#define REF_HAMMING_DIST(float_type, int_type) \
template<> \
int MathFunctionsTest<float_type>::ReferenceHammingDistance(const int n, \
                                                            const float_type* x, \
                                                            const float_type* y) { \
  int dist = 0; \
  int_type val; \
  for (int i = 0; i < n; ++i) { \
    val = static_cast<int_type>(x[i]) ^ static_cast<int_type>(y[i]); \
    /* Count the number of set bits */ \
    while (val) { \
      ++dist; \
      val &= val - 1; \
    } \
  } \
  return dist; \
}

REF_HAMMING_DIST(float, uint32_t);
REF_HAMMING_DIST(double, uint64_t);

typedef ::testing::Types<float, double> Dtypes;
TYPED_TEST_CASE(MathFunctionsTest, Dtypes);

TYPED_TEST(MathFunctionsTest, TestHammingDistance) {
  int n = this->blob_bottom_->count();
  const TypeParam* x = this->blob_bottom_->cpu_data();
  const TypeParam* y = this->blob_top_->cpu_data();
  CHECK_EQ(this->ReferenceHammingDistance(n, x, y),
           caffe_hamming_distance<TypeParam>(n, x, y));
}

}  // namespace caffe
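The reference implementation casts each value to an integer, XORs the pair, and counts set bits with Kernighan's `val &= val - 1` trick, which clears the lowest set bit on each iteration. The same computation in Python (a sketch of the reference logic, not of caffe_hamming_distance itself):

```python
def hamming_distance(xs, ys):
    """Sum of differing bit positions between integer-cast value pairs."""
    dist = 0
    for x, y in zip(xs, ys):
        val = int(x) ^ int(y)
        # Kernighan's trick: each iteration clears the lowest set bit.
        while val:
            dist += 1
            val &= val - 1
    return dist

print(hamming_distance([0b1010, 0b1111], [0b0110, 0b0000]))  # 2 + 4 = 6
```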