Models: various bag-of-words approaches
This page describes regressions for the Web Tracks from TREC 2013 and 2014 using the (full) ClueWeb12 collection. The exact configurations for these regressions are stored in this YAML file. Note that this page is automatically generated from this template as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
From one of our Waterloo servers (e.g., orca), the following command will perform the complete regression, end to end:
python src/main/python/run_regression.py --index --verify --search --regression cw12
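If you have already built the index, you can skip the indexing step by dropping the --index flag. This is a minimal variation of the command above, assuming the step flags can be passed independently (as their names suggest):
python src/main/python/run_regression.py --verify --search --regression cw12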
Typical indexing command:
bin/run.sh io.anserini.index.IndexCollection \
-threads 44 \
-collection ClueWeb12Collection \
-input /path/to/cw12 \
-generator DefaultLuceneDocumentGenerator \
-index indexes/lucene-index.cw12/ \
-storeRaw \
>& logs/log.cw12 &
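Indexing the full ClueWeb12 collection takes a while. Since the command above runs in the background and redirects its output to logs/log.cw12, one simple way to monitor progress is to follow that log:
tail -f logs/log.cw12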
The directory /path/to/cw12/ should be the root directory of the (full) ClueWeb12 collection, i.e., /path/to/cw12/ should contain Disk1, Disk2, Disk3, and Disk4.
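As a quick sanity check before indexing, verify that the four disk directories are present (assuming the layout described above):
ls /path/to/cw12/
# Expected output: Disk1  Disk2  Disk3  Disk4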
For additional details, see explanation of common indexing options.
Topics and qrels are stored in tools/topics-and-qrels/, which is linked to the Anserini repo as a submodule. They are downloaded from NIST:
- topics.web.201-250.txt: topics for the TREC 2013 Web Track (Topics 201-250)
- topics.web.251-300.txt: topics for the TREC 2014 Web Track (Topics 251-300)
- qrels.web.201-250.txt: one aspect per topic qrels for the TREC 2013 Web Track (Topics 201-250)
- qrels.web.251-300.txt: one aspect per topic qrels for the TREC 2014 Web Track (Topics 251-300)
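The topic files are in the TREC web-track XML format, which is why the retrieval commands below use the Webxml topic reader. A quick look at the first few lines confirms the files are in place:
head tools/topics-and-qrels/topics.web.201-250.txt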
After indexing has completed, you should be able to perform retrieval as follows:
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.201-250.txt \
-topicReader Webxml \
-output runs/run.cw12.bm25.topics.web.201-250.txt \
-bm25 &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.251-300.txt \
-topicReader Webxml \
-output runs/run.cw12.bm25.topics.web.251-300.txt \
-bm25 &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.201-250.txt \
-topicReader Webxml \
-output runs/run.cw12.bm25+rm3.topics.web.201-250.txt \
-parallelism 16 -bm25 -rm3 -collection ClueWeb12Collection &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.251-300.txt \
-topicReader Webxml \
-output runs/run.cw12.bm25+rm3.topics.web.251-300.txt \
-parallelism 16 -bm25 -rm3 -collection ClueWeb12Collection &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.201-250.txt \
-topicReader Webxml \
-output runs/run.cw12.ql.topics.web.201-250.txt \
-qld &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.251-300.txt \
-topicReader Webxml \
-output runs/run.cw12.ql.topics.web.251-300.txt \
-qld &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.201-250.txt \
-topicReader Webxml \
-output runs/run.cw12.ql+rm3.topics.web.201-250.txt \
-parallelism 16 -qld -rm3 -collection ClueWeb12Collection &
bin/run.sh io.anserini.search.SearchCollection \
-index indexes/lucene-index.cw12/ \
-topics tools/topics-and-qrels/topics.web.251-300.txt \
-topicReader Webxml \
-output runs/run.cw12.ql+rm3.topics.web.251-300.txt \
-parallelism 16 -qld -rm3 -collection ClueWeb12Collection &
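Each SearchCollection invocation above is launched in the background with &, so make sure all runs have finished before evaluating. A minimal check, assuming all eight commands were launched from the same shell:
wait                    # block until all background jobs finish
ls -lh runs/run.cw12.*  # each run file should be non-empty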
Evaluation can be performed using trec_eval and gdeval.pl:
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.bm25.topics.web.201-250.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.bm25.topics.web.201-250.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.bm25.topics.web.251-300.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.bm25.topics.web.251-300.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.bm25+rm3.topics.web.201-250.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.bm25+rm3.topics.web.201-250.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.bm25+rm3.topics.web.251-300.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.bm25+rm3.topics.web.251-300.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.ql.topics.web.201-250.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.ql.topics.web.201-250.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.ql.topics.web.251-300.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.ql.topics.web.251-300.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.ql+rm3.topics.web.201-250.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.201-250.txt runs/run.cw12.ql+rm3.topics.web.201-250.txt
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.ql+rm3.topics.web.251-300.txt
bin/trec_eval -m map -m P.30 tools/topics-and-qrels/qrels.web.251-300.txt runs/run.cw12.ql+rm3.topics.web.251-300.txt
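trec_eval reports MAP and P30 directly. gdeval.pl prints per-topic nDCG@20 and ERR@20 followed by a summary row, so one way to pull out just the averages is to grab the final line; this sketch assumes gdeval.pl's usual amean summary row:
tools/eval/gdeval.pl tools/topics-and-qrels/qrels.web.201-250.txt \
  runs/run.cw12.bm25.topics.web.201-250.txt | tail -1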
With the above commands, you should be able to reproduce the following results (in each table, a +RM3 column applies to the model immediately to its left):
MAP | BM25 | +RM3 | QL | +RM3 |
---|---|---|---|---|
TREC 2013 Web Track (Topics 201-250) | 0.1695 | 0.1477 | 0.1494 | 0.1284 |
TREC 2014 Web Track (Topics 251-300) | 0.2469 | 0.2342 | 0.2467 | 0.2185 |

P30 | BM25 | +RM3 | QL | +RM3 |
---|---|---|---|---|
TREC 2013 Web Track (Topics 201-250) | 0.2767 | 0.2400 | 0.2607 | 0.2373 |
TREC 2014 Web Track (Topics 251-300) | 0.4547 | 0.4140 | 0.4380 | 0.3800 |

nDCG@20 | BM25 | +RM3 | QL | +RM3 |
---|---|---|---|---|
TREC 2013 Web Track (Topics 201-250) | 0.2083 | 0.2058 | 0.1993 | 0.1701 |
TREC 2014 Web Track (Topics 251-300) | 0.2572 | 0.2548 | 0.2220 | 0.2076 |

ERR@20 | BM25 | +RM3 | QL | +RM3 |
---|---|---|---|---|
TREC 2013 Web Track (Topics 201-250) | 0.1283 | 0.1304 | 0.1232 | 0.0995 |
TREC 2014 Web Track (Topics 251-300) | 0.1616 | 0.1655 | 0.1323 | 0.1242 |