
Commit

Merge pull request #5 from erikbern/master
upmerge
yurymalkov authored Apr 5, 2018
2 parents 9ebae83 + 4e4d783 commit e5db6ac
Showing 36 changed files with 749 additions and 567 deletions.
4 changes: 3 additions & 1 deletion .travis.yml
@@ -12,14 +12,14 @@ env:
- LIBRARY=datasketch
- LIBRARY=dolphinn
- LIBRARY=faiss
- LIBRARY=falconn
- LIBRARY=flann
- LIBRARY=kgraph
- LIBRARY=nearpy
- LIBRARY=nmslib
- LIBRARY=panns
- LIBRARY=rpforest
- LIBRARY=sklearn
- LIBRARY=pynndescent

before_install:
- pip install -r requirements.txt
@@ -28,3 +28,5 @@ before_install:
script:
- python run.py --docker-tag ann-benchmarks-${LIBRARY} --max-n-algorithms 5 --dataset random-xs-20-angular
- python plot.py --dataset random-xs-20-angular --output plot.png
- python -m unittest test/test-metrics.py
- python create_website.py --outputdir . --scatter --latex
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2018 Erik Bernhardsson

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
49 changes: 41 additions & 8 deletions README.md
@@ -7,8 +7,6 @@ Doing fast searching of nearest neighbors in high dimensional spaces is an incre

This project contains some tools to benchmark various implementations of approximate nearest neighbor (ANN) search for different metrics. We have pregenerated datasets (in HDF5 format) and we also have Docker containers for each algorithm. There's a [test suite](https://travis-ci.org/erikbern/ann-benchmarks) that makes sure every algorithm works.

See [the results of this benchmark](http://sss.projects.itu.dk/ann-benchmarks).

Evaluated
=========

@@ -20,10 +18,10 @@ Evaluated
* [KGraph](https://github.com/aaalgo/kgraph)
* [NMSLIB (Non-Metric Space Library)](https://github.com/searchivarius/nmslib): SWGraph, HNSW, BallTree, MPLSH
* [RPForest](https://github.com/lyst/rpforest)
* [FALCONN](http://falconn-lib.org/)
* [FAISS](https://github.com/facebookresearch/faiss.git)
* [DolphinnPy](https://github.com/ipsarros/DolphinnPy)
* [Datasketch](https://github.com/ekzhu/datasketch)
* [PyNNDescent](https://github.com/lmcinnes/pynndescent)

Data sets
=========
@@ -33,15 +31,45 @@ We have a number of precomputed data sets for this. All data sets are pre-split
| Dataset | Dimensions | Train size | Test size | Neighbors | Distance | Download |
| ----------------------------------------------------------------- | ---------: | ---------: | --------: | --------: | --------- | ---------------------------------------------------------------------------- |
| [Fashion-MNIST](https://github.com/zalandoresearch/fashion-mnist) | 784 | 60,000 | 10,000 | 100 | Euclidean | [HDF5](http://vectors.erikbern.com/fashion-mnist-784-euclidean.hdf5) (217MB) |
| [GIST](http://corpus-texmex.irisa.fr/) | 960 | 1,000,000 | 1,000 | 100 | Euclidean | [HDF5](http://vectors.erikbern.com/gist-960-euclidean.hdf5) (121MB) |
| [GloVe](http://nlp.stanford.edu/projects/glove/) | 25 | 1,133,628 | 59,886 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-25-angular.hdf5) (121MB) |
| GloVe | 50 | 1,133,628 | 59,886 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-50-angular.hdf5) (235MB) |
| GloVe | 100 | 1,133,628 | 59,886 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-100-angular.hdf5) (463MB) |
| GloVe | 200 | 1,133,628 | 59,886 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-200-angular.hdf5) (918MB) |
| [GIST](http://corpus-texmex.irisa.fr/) | 960 | 1,000,000 | 1,000 | 100 | Euclidean | [HDF5](http://vectors.erikbern.com/gist-960-euclidean.hdf5) (3.6GB) |
| [GloVe](http://nlp.stanford.edu/projects/glove/) | 25 | 1,183,514 | 10,000 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-25-angular.hdf5) (121MB) |
| GloVe | 50 | 1,183,514 | 10,000 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-50-angular.hdf5) (235MB) |
| GloVe | 100 | 1,183,514 | 10,000 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-100-angular.hdf5) (463MB) |
| GloVe | 200 | 1,183,514 | 10,000 | 100 | Angular | [HDF5](http://vectors.erikbern.com/glove-200-angular.hdf5) (918MB) |
| [MNIST](http://yann.lecun.com/exdb/mnist/) | 784 | 60,000 | 10,000 | 100 | Euclidean | [HDF5](http://vectors.erikbern.com/mnist-784-euclidean.hdf5) (217MB) |
| [NYTimes](https://archive.ics.uci.edu/ml/datasets/bag+of+words) | 256 | 290,000 | 10,000 | 100 | Angular | [HDF5](http://vectors.erikbern.com/nytimes-256-angular.hdf5) (301MB) |
| [SIFT](https://corpus-texmex.irisa.fr/) | 128 | 1,000,000 | 10,000 | 100 | Euclidean | [HDF5](http://vectors.erikbern.com/sift-128-euclidean.hdf5) (501MB) |
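The table above uses two distance functions, Euclidean and Angular. As a point of reference (not from this repository), the two are commonly defined as the L2 norm of the difference and the cosine distance respectively — a minimal numpy sketch:

```python
import numpy as np

def euclidean(a, b):
    # straight-line (L2) distance
    return np.linalg.norm(a - b)

def angular(a, b):
    # one common convention for "angular": cosine distance, 1 - cos(a, b)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(euclidean(a, b))  # 1.4142...
print(angular(a, b))    # 1.0 (orthogonal vectors)
```

The exact angular convention the benchmark uses may differ (e.g. distances on pre-normalized vectors); check `ann_benchmarks/distance.py` in the repository for the authoritative definition.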

Results
=======


glove-100-angular

![glove-100-angular](https://raw.github.com/erikbern/ann-benchmarks/master/results/glove-100-angular.png)

sift-128-euclidean

![sift-128-euclidean](https://raw.github.com/erikbern/ann-benchmarks/master/results/sift-128-euclidean.png)

fashion-mnist-784-euclidean

![fashion-mnist-784-euclidean](https://raw.github.com/erikbern/ann-benchmarks/master/results/fashion-mnist-784-euclidean.png)

gist-960-euclidean

![gist-960-euclidean](https://raw.github.com/erikbern/ann-benchmarks/master/results/gist-960-euclidean.png)

nytimes-256-angular

![nytimes-256-angular](https://raw.github.com/erikbern/ann-benchmarks/master/results/nytimes-256-angular.png)

glove-25-angular

![glove-25-angular](https://raw.github.com/erikbern/ann-benchmarks/master/results/glove-25-angular.png)

Results as of 2018-02-05, running all benchmarks on a c5.4xlarge machine on AWS.
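The plots above trade recall against queries per second. Recall here is the fraction of the true k nearest neighbors (stored as ground truth in each HDF5 file) that an algorithm actually returns — a small illustrative sketch, not code from the repository:

```python
import numpy as np

def recall(approx_ids, true_ids):
    """Fraction of the true k nearest neighbors the algorithm returned."""
    k = true_ids.shape[1]
    hits = sum(len(np.intersect1d(a[:k], t))
               for a, t in zip(approx_ids, true_ids))
    return hits / float(approx_ids.shape[0] * k)

# two queries, k=3: first query misses one true neighbor, second gets all
true = np.array([[0, 1, 2], [3, 4, 5]])
approx = np.array([[0, 2, 9], [3, 4, 5]])
print(recall(approx, true))  # 5 of 6 hits -> 0.8333...
```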

Install
=======

@@ -85,3 +113,8 @@ Principles
* We currently support CPU-based ANN algorithms. GPU support is planned as future work.
* Use a proper train/test split of index data and query points.
* Note that Hamming distance and set similarity were supported in the past; support may be added back in the future.

Authors
=======

Built by [Erik Bernhardsson](https://erikbern.com) with significant contributions from [Martin Aumüller](http://itu.dk/people/maau/) and [Alexander Faithfull](https://github.com/ale-f).
35 changes: 16 additions & 19 deletions algos.yaml
@@ -135,6 +135,14 @@ float:
# {"tuneK": 10, "desiredRecall": 0.97}), and so on up to
# NmslibNewIndex("angular", "vptree", {"tuneK": 10, "desiredRecall":
# 0.1}).
pynndescent:
docker-tag: ann-benchmarks-pynndescent
module: ann_benchmarks.algorithms.pynndescent
constructor: PyNNDescent
base-args: ["@metric"]
run-groups:
pynndescent:
args: [[5, 10, 15, 20], [4, 8, 16], [40], [1.0, 2.0, 4.0, 8.0]]
euclidean:
kgraph:
docker-tag: ann-benchmarks-kgraph
@@ -279,28 +287,17 @@ float:
base:
args: [[3, 5, 10, 20, 40, 100, 200, 400],
[3, 5, 10, 20, 40, 100, 200, 400]]
falconn:
docker-tag: ann-benchmarks-falconn
module: ann_benchmarks.algorithms.falconn
constructor: FALCONN
base-args: ["@metric", 16]
pynndescent:
docker-tag: ann-benchmarks-pynndescent
module: ann_benchmarks.algorithms.pynndescent
constructor: PyNNDescent
base-args: ["@metric"]
run-groups:
base:
L: &l [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 21, 24, 27,
30, 33, 37, 41, 46, 51, 57, 63, 70, 77, 85, 94, 104, 115, 127,
140, 154, 170, 188, 207, 228, 251, 277, 305, 336, 370, 408, 449,
494, 544, 599, 659, 725, 798, 878, 966, 1063, 1170, 1287, 1416]
args: [*l]
pynndescent:
args: [[10, 20, 30, 40], [8, 16, 32], [40], [2.0, 4.0, 8.0, 16.0, 32.0]]

bit:
hamming:
falconn:
docker-tag: ann-benchmarks-falconn
module: ann_benchmarks.algorithms.falconn
constructor: FALCONN
base-args: ["@metric", 16]
run-groups:
base:
args: [*l]
kgraph:
docker-tag: ann-benchmarks-kgraph
module: ann_benchmarks.algorithms.kgraph
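In `algos.yaml`, each run-group's `args` entry is a list of per-parameter value lists; the benchmark expands them into one algorithm instance per combination (`definitions.py` below imports `itertools.product` for exactly this kind of expansion). A sketch using the pynndescent run-group added above:

```python
from itertools import product

# args lists from the pynndescent run-group above
args = [[5, 10, 15, 20], [4, 8, 16], [40], [1.0, 2.0, 4.0, 8.0]]

# each combination becomes one constructor call,
# e.g. PyNNDescent(metric, 5, 4, 40, 1.0)
combos = list(product(*args))
print(len(combos))  # 48 parameter settings
print(combos[0])    # (5, 4, 40, 1.0)
```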
15 changes: 15 additions & 0 deletions ann_benchmarks/algorithms/definitions.py
Expand Up @@ -8,6 +8,7 @@
import sys
import traceback
import yaml
from enum import Enum
from itertools import product


@@ -20,6 +21,20 @@ def instantiate_algorithm(definition):
constructor = getattr(module, definition.constructor)
return constructor(*definition.arguments)

class InstantiationStatus(Enum):
AVAILABLE = 0
NO_CONSTRUCTOR = 1
NO_MODULE = 2

def algorithm_status(definition):
try:
module = importlib.import_module(definition.module)
if hasattr(module, definition.constructor):
return InstantiationStatus.AVAILABLE
else:
return InstantiationStatus.NO_CONSTRUCTOR
except ImportError:
return InstantiationStatus.NO_MODULE

def get_result_filename(dataset, count, definition):
d = ['results',
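The new `algorithm_status` helper lets the driver probe whether an algorithm's module and constructor can be loaded before trying to run it. A standalone sketch of the same pattern (taking plain strings instead of a definition object):

```python
import importlib
from enum import Enum

class InstantiationStatus(Enum):
    AVAILABLE = 0
    NO_CONSTRUCTOR = 1
    NO_MODULE = 2

def status(module_name, constructor_name):
    # mirrors algorithm_status() above: import the module, then check
    # that the constructor attribute exists on it
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return InstantiationStatus.NO_MODULE
    if hasattr(module, constructor_name):
        return InstantiationStatus.AVAILABLE
    return InstantiationStatus.NO_CONSTRUCTOR

print(status("math", "sqrt"))           # InstantiationStatus.AVAILABLE
print(status("math", "no_such_name"))   # InstantiationStatus.NO_CONSTRUCTOR
print(status("no_such_module_xyz", "x"))  # InstantiationStatus.NO_MODULE
```

A caller can use this to skip benchmarks whose dependencies aren't installed instead of crashing mid-run.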
40 changes: 17 additions & 23 deletions ann_benchmarks/algorithms/nmslib.py
@@ -4,10 +4,12 @@
from ann_benchmarks.constants import INDEX_DIR
from ann_benchmarks.algorithms.base import BaseANN


class NmslibReuseIndex(BaseANN):
@staticmethod
def encode(d):
return ["%s=%s" % (a, b) for (a, b) in d.iteritems()]

def __init__(self, metric, method_name, index_param, save_index, query_param):
self._nmslib_metric = {'angular': 'cosinesimil', 'euclidean': 'l2'}[metric]
self._method_name = method_name
@@ -19,7 +21,7 @@ def __init__(self, metric, method_name, index_param, save_index, query_param):

d = os.path.dirname(self._index_name)
if not os.path.exists(d):
os.makedirs(d)
os.makedirs(d)

def fit(self, X):
if self._method_name == 'vptree':
@@ -28,28 +30,24 @@ def fit(self, X):
# what(): The data size is too small or the bucket size is too big. Select the parameters so that <total # of records> is NOT less than <bucket size> * 1000
# Aborted (core dumped)
self._index_param.append('bucketSize=%d' % min(int(X.shape[0] * 0.0005), 1000))

self._index = nmslib.init(space=self._nmslib_metric,
method=self._method_name)

for i, x in enumerate(X):
nmslib.addDataPoint(self._index, i, x.tolist())
self._index = nmslib.init(space=self._nmslib_metric, method=self._method_name)
self._index.addDataPointBatch(X)

if os.path.exists(self._index_name):
print('Loading index from file')
nmslib.loadIndex(self._index, self._index_name)
self._index.loadIndex(self._index_name)
else:
nmslib.createIndex(self._index, self._index_param)
if self._save_index:
nmslib.saveIndex(self._index, self._index_name)
self._index.createIndex(self._index_param)
if self._save_index:
self._index.saveIndex(self._index_name)

nmslib.setQueryTimeParams(self._index, self._query_param)
self._index.setQueryTimeParams(self._query_param)

def query(self, v, n):
return nmslib.knnQuery(self._index, n, v.tolist())
ids, distances = self._index.knnQuery(v, n)
return ids

def freeIndex(self):
nmslib.freeIndex(self._index)

class NmslibNewIndex(BaseANN):
def __init__(self, metric, method_name, method_param):
Expand All @@ -65,16 +63,12 @@ def fit(self, X):
# what(): The data size is too small or the bucket size is too big. Select the parameters so that <total # of records> is NOT less than <bucket size> * 1000
# Aborted (core dumped)
self._method_param.append('bucketSize=%d' % min(int(X.shape[0] * 0.0005), 1000))

self._index = nmslib.init(self._nmslib_metric, [], self._method_name, nmslib.DataType.DENSE_VECTOR, nmslib.DistType.FLOAT)

for i, x in enumerate(X):
nmslib.addDataPoint(self._index, i, x.tolist())

self._index = nmslib.init(space=self._nmslib_metric, method=self._method_name)
self._index.addDataPointBatch(X)

nmslib.createIndex(self._index, self._method_param)

def query(self, v, n):
return nmslib.knnQuery(self._index, n, v.tolist())

def freeIndex(self):
nmslib.freeIndex(self._index)
ids, distances = self._index.knnQuery(v, n)
return ids
28 changes: 28 additions & 0 deletions ann_benchmarks/algorithms/pynndescent.py
@@ -0,0 +1,28 @@
from __future__ import absolute_import
import pynndescent
import numpy as np
from ann_benchmarks.algorithms.base import BaseANN

class PyNNDescent(BaseANN):
def __init__(self, metric, n_neighbors=10, n_trees=8, leaf_size=40, queue_size=2.0):
self._n_neighbors = int(n_neighbors)
self._n_trees = int(n_trees)
self._leaf_size = int(leaf_size)
self._queue_size = float(queue_size)
self._pynnd_metric = {'angular': 'cosine', 'euclidean': 'euclidean'}[metric]
self.name = 'PyNNDescent(n_neighbors=%d,n_trees=%d,leaf_size=%d,queue_size=%.2f)' % \
(self._n_neighbors, self._n_trees, self._leaf_size, self._queue_size)

def fit(self, X):
self._index = pynndescent.NNDescent(X,
n_neighbors=self._n_neighbors,
n_trees=self._n_trees,
leaf_size=self._leaf_size,
metric=self._pynnd_metric)

def query(self, v, n):
ind, dist = self._index.query(np.array([v]), k=n, queue_size=self._queue_size)
return ind[0]

def use_threads(self):
return False
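The PyNNDescent wrapper above follows the benchmark's plugin contract: construct with a metric, `fit(X)` on the train set, `query(v, n)` returning the ids of the n candidates. A toy exact-search implementation of the same assumed interface (BaseANN itself is not shown in this diff, so the exact contract is inferred from the wrappers):

```python
import numpy as np

class BruteForce:
    """Toy exact-search 'algorithm' implementing the fit/query
    contract the wrappers above expose (assumed interface)."""
    def __init__(self, metric):
        assert metric in ("angular", "euclidean")
        self._metric = metric
        self.name = "BruteForce(%s)" % metric

    def fit(self, X):
        X = np.asarray(X, dtype=np.float32)
        if self._metric == "angular":
            # pre-normalize so angular distance reduces to a dot product
            X = X / np.linalg.norm(X, axis=1, keepdims=True)
        self._data = X

    def query(self, v, n):
        v = np.asarray(v, dtype=np.float32)
        if self._metric == "angular":
            v = v / np.linalg.norm(v)
            dists = 1.0 - self._data @ v
        else:
            dists = np.linalg.norm(self._data - v, axis=1)
        return np.argsort(dists)[:n]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 16)).astype(np.float32)
algo = BruteForce("euclidean")
algo.fit(X)
ids = algo.query(X[7], 5)
print(ids[0])  # 7 -- a stored point is its own nearest neighbor
```

An exact scan like this is what the approximate algorithms are benchmarked against; it always has recall 1.0 but scales linearly with the train set size.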