Omikuji

An efficient implementation of Partitioned Label Trees (Prabhu et al., 2018) and its variations for extreme multi-label classification, written in RustšŸ¦€ with lovešŸ’–.

Features & Performance

Omikuji has been tested on datasets from the Extreme Classification Repository. All tests below were run on a quad-core IntelĀ® Coreā„¢ i7-6700 CPU, allowing as many cores as possible to be utilized. We measured training time and calculated precision at 1, 3, and 5 (P@1, P@3, P@5). (Note that, due to randomness, results might vary from run to run, especially for smaller datasets.)

Parabel, better parallelized

Omikuji provides a more parallelized implementation of Parabel (Prabhu et al., 2018) that trains faster when more CPU cores are available. Compared to the original implementation written in C++, which can only utilize the same number of CPU cores as the number of trees (3 by default), Omikuji maintains the same level of precision but trains 1.3x to 1.7x faster on our quad-core machine. Further speed-up is possible if more CPU cores are available.

Dataset          Metric      Parabel   Omikuji (balanced, cluster.k=2)
EURLex-4K        P@1         82.2      82.1
                 P@3         68.8      68.8
                 P@5         57.6      57.7
                 Train Time  18s       14s
Amazon-670K      P@1         44.9      44.8
                 P@3         39.8      39.8
                 P@5         36.0      36.0
                 Train Time  404s      234s
WikiLSHTC-325K   P@1         65.0      64.8
                 P@3         43.2      43.1
                 P@5         32.0      32.1
                 Train Time  959s      659s
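
The Omikuji settings above are simply the CLI defaults (3 trees, balanced 2-means clustering), so no extra flags are strictly needed; spelled out as a minimal sketch (eurlex_train.txt is a placeholder dataset file in the repository format):

omikuji train eurlex_train.txt --n_trees 3 --cluster.k 2 --model_path ./model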

Regular k-means for shallow trees

Following Bonsai (Khandagale et al., 2019), Omikuji supports using regular k-means instead of balanced 2-means clustering for tree construction, which results in wider, shallower, and unbalanced trees that train slower but have better precision. Compared to the original Bonsai implementation, Omikuji achieves the same precision while training 2.6x to 4.6x faster on our quad-core machine. (Similarly, further speed-up is possible if more CPU cores are available.)

Dataset          Metric      Bonsai     Omikuji (unbalanced, cluster.k=100, max_depth=3)
EURLex-4K        P@1         82.8       83.0
                 P@3         69.4       69.5
                 P@5         58.1       58.3
                 Train Time  87s        19s
Amazon-670K      P@1         45.5*      45.6
                 P@3         40.3*      40.4
                 P@5         36.5*      36.6
                 Train Time  5,759s     1,753s
WikiLSHTC-325K   P@1         66.6*      66.6
                 P@3         44.5*      44.4
                 P@5         33.0*      33.0
                 Train Time  11,156s    4,259s

*Precision numbers as reported in the paper; our machine doesn't have enough memory to run the full prediction with their implementation.
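
The unbalanced configuration above maps directly onto CLI flags documented in the Usage section below; a minimal sketch (the dataset file name is a placeholder):

omikuji train eurlex_train.txt --cluster.unbalanced --cluster.k 100 --max_depth 3 --model_path ./model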

Balanced k-means for balanced shallow trees

Sometimes it's desirable to have shallow and wide trees that are also balanced, in which case Omikuji also supports the balanced k-means algorithm used by HOMER (Tsoumakas et al., 2008) for clustering.

Dataset          Metric      Omikuji (balanced, cluster.k=100)
EURLex-4K        P@1         82.1
                 P@3         69.4
                 P@5         58.1
                 Train Time  19s
Amazon-670K      P@1         45.4
                 P@3         40.3
                 P@5         36.5
                 Train Time  1,153s
WikiLSHTC-325K   P@1         65.6
                 P@3         43.6
                 P@5         32.5
                 Train Time  3,028s
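
This configuration keeps the default balanced clustering and only raises the cluster count; a minimal sketch with the documented CLI flags (dataset file name is a placeholder):

omikuji train eurlex_train.txt --cluster.k 100 --model_path ./model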

Layer collapsing for balanced shallow trees

An alternative way to build balanced, shallow, and wide trees is to collapse adjacent layers, similar to the tree compression step used in AttentionXML (You et al., 2019): intermediate layers are removed, and their children replace them as the children of their parents. For example, with balanced 2-means clustering, collapsing 5 layers after each layer increases the tree arity from 2 to 2āµāŗĀ¹ = 64.

Dataset          Metric      Omikuji (balanced, cluster.k=2, collapse 5 layers)
EURLex-4K        P@1         82.4
                 P@3         69.3
                 P@5         58.0
                 Train Time  16s
Amazon-670K      P@1         45.3
                 P@3         40.2
                 P@5         36.4
                 Train Time  460s
WikiLSHTC-325K   P@1         64.9
                 P@3         43.3
                 P@5         32.3
                 Train Time  1,649s
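
Layer collapsing is controlled by the --collapse_every_n_layers flag documented in the Usage section; a minimal sketch of the configuration above (dataset file name is a placeholder):

omikuji train eurlex_train.txt --cluster.k 2 --collapse_every_n_layers 5 --model_path ./model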

Build & Install

Omikuji can be easily built & installed with Cargo as a CLI app:

cargo install omikuji --features cli --locked

Or install from the latest source:

cargo install --git https://github.com/tomtung/omikuji.git --features cli --locked

The CLI app will be available as omikuji. For example, to reproduce the results on the EURLex-4K dataset:

omikuji train eurlex_train.txt --model_path ./model
omikuji test ./model eurlex_test.txt --out_path predictions.txt

Python Binding

A simple Python binding is also available for training and prediction. It can be installed via pip:

pip install omikuji

Note that you might still need to install Cargo should compilation become necessary.

You can also install from the latest source:

pip install git+https://github.com/tomtung/omikuji.git -v

The following script demonstrates how to use the Python binding to train a model and make predictions:

import omikuji

# Train
hyper_param = omikuji.Model.default_hyper_param()
# Adjust hyper-parameters as needed
hyper_param.n_trees = 5
model = omikuji.Model.train_on_data("./eurlex_train.txt", hyper_param)

# Serialize & de-serialize
model.save("./model")
model = omikuji.Model.load("./model")
# Optionally densify model weights to trade off between prediction speed and memory usage
model.densify_weights(0.05)

# Predict
feature_value_pairs = [
    (0, 0.101468),
    (1, 0.554374),
    (2, 0.235760),
    (3, 0.065255),
    (8, 0.152305),
    (10, 0.155051),
    # ...
]
label_score_pairs = model.predict(feature_value_pairs)
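
The prediction result is a sequence of label-score pairs, mirroring the feature-value pairs in the input; a minimal sketch of inspecting the returned pairs:

for label, score in label_score_pairs:
    # Each pair gives a predicted label and its score
    print(f"label {label}: score {score:.4f}")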

Usage

$ omikuji train --help
Train a new omikuji model

USAGE:
    omikuji train [OPTIONS] <TRAINING_DATA_PATH>

ARGS:
    <TRAINING_DATA_PATH>
            Path to training dataset file

            The dataset file is expected to be in the format of the Extreme Classification
            Repository.

OPTIONS:
        --centroid_threshold <THRESHOLD>
            Threshold for pruning label centroid vectors

            [default: 0]

        --cluster.eps <CLUSTER_EPS>
            Epsilon value for determining clustering convergence

            [default: 0.0001]

        --cluster.k <K>
            Number of clusters

            [default: 2]

        --cluster.min_size <MIN_SIZE>
            Labels in clusters with sizes smaller than this threshold are reassigned to other
            clusters instead

            [default: 2]

        --cluster.unbalanced
            Perform regular k-means clustering instead of balanced k-means clustering

        --collapse_every_n_layers <N_LAYERS>
            Number of adjacent layers to collapse

            This increases tree arity and decreases tree depth.

            [default: 0]

    -h, --help
            Print help information

        --linear.c <C>
            Cost coefficient for regularizing linear classifiers

            [default: 1]

        --linear.eps <LINEAR_EPS>
            Epsilon value for determining linear classifier convergence

            [default: 0.1]

        --linear.loss <LOSS>
            Loss function used by linear classifiers

            [default: hinge]
            [possible values: hinge, log]

        --linear.max_iter <M>
            Max number of iterations for training each linear classifier

            [default: 20]

        --linear.weight_threshold <MIN_WEIGHT>
            Threshold for pruning weight vectors of linear classifiers

            [default: 0.1]

        --max_depth <DEPTH>
            Maximum tree depth

            [default: 20]

        --min_branch_size <SIZE>
            Number of labels below which no further clustering & branching is done

            [default: 100]

        --model_path <MODEL_PATH>
            Optional path of the directory where the trained model will be saved, if provided

            If a model with compatible settings is already saved in the given directory, the newly
            trained trees will be added to the existing model

        --n_threads <N_THREADS>
            Number of worker threads

            If 0, the number is selected automatically.

            [default: 0]

        --n_trees <N_TREES>
            Number of trees

            [default: 3]

        --train_trees_1_by_1
            Finish training each tree before starting to train the next

            This limits initial parallelization but saves memory.

        --tree_structure_only
            Build the trees without training classifiers

            Might be useful when a downstream user needs the tree structures only.
$ omikuji test --help
Test an existing omikuji model

USAGE:
    omikuji test [OPTIONS] <MODEL_PATH> <TEST_DATA_PATH>

ARGS:
    <MODEL_PATH>
            Path of the directory where the trained model is saved

    <TEST_DATA_PATH>
            Path to test dataset file

            The dataset file is expected to be in the format of the Extreme Classification
            Repository.

OPTIONS:
        --beam_size <BEAM_SIZE>
            Beam size for beam search

            [default: 10]

    -h, --help
            Print help information

        --k_top <K>
            Number of top predictions to write out for each test example

            [default: 5]

        --max_sparse_density <DENSITY>
            Density threshold above which sparse weight vectors are converted to dense format

            Lower values speed up prediction at the cost of more memory usage.

            [default: 0.1]

        --n_threads <N_THREADS>
            Number of worker threads

            If 0, the number is selected automatically.

            [default: 0]

        --out_path <OUT_PATH>
            Path of the file to which predictions will be written, if provided

Data format

Our implementation takes dataset files formatted like those provided in the Extreme Classification Repository. A data file starts with a header line containing three space-separated integers: the total number of examples, the number of features, and the number of labels. After the header line, there is one line per example, starting with comma-separated labels, followed by space-separated feature:value pairs:

label1,label2,...labelk ft1:ft1_val ft2:ft2_val ft3:ft3_val .. ftd:ftd_val
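
For illustration, a hypothetical file with 3 examples, 10 features, and 5 labels could look like the following (all values made up):

3 10 5
0,3 1:0.4 2:0.7 9:0.1
2 0:0.3 5:0.9
1,2,4 3:0.2 6:0.5 8:0.8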

Trivia

The project name comes from o-mikuji (å¾”ē„žē±¤), which are predictions about one's future written on strips of paper (labels?) at Shinto shrines and temples in Japan, often tied to branches of pine trees after they are read.

References

  • Y. Prabhu, A. Kag, S. Harsola, R. Agrawal, and M. Varma, ā€œParabel: Partitioned Label Trees for Extreme Classification with Application to Dynamic Search Advertising,ā€ in Proceedings of the 2018 World Wide Web Conference, 2018, pp. 993ā€“1002.
  • S. Khandagale, H. Xiao, and R. Babbar, ā€œBonsai - Diverse and Shallow Trees for Extreme Multi-label Classification,ā€ Apr. 2019.
  • G. Tsoumakas, I. Katakis, and I. Vlahavas, ā€œEffective and efficient multilabel classification in domains with large number of labels,ā€ ECML, 2008.
  • R. You, S. Dai, Z. Zhang, H. Mamitsuka, and S. Zhu, ā€œAttentionXML: Extreme Multi-Label Text Classification with Multi-Label Attention Based Recurrent Neural Networks,ā€ Jun. 2019.

License

Omikuji is licensed under the MIT License.