
# GloVe: Global Vectors for Word Representation

## This fork: adds a GloVe wrapper script

Examples:

    ./scripts/run_glove.py -texj 3 --corpus-fpath ../word2vec_data/data_no_unk_tag.txt
    ./scripts/run_glove.py -feac toy --corpus-fpath ./data_toy/data_toy.txt
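
In these examples the stacked short flags expand to `--train --eval --export-embeds --num-jobs 3` and `--full-train --eval --analysis --corpus-type toy` respectively; see the option list below.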

Optional arguments:

    -h, --help            Show this help message and exit.
    -c {big,toy}, --corpus-type {big,toy}
                          Training dataset name.
    --corpus-fpath CORPUS_FPATH
                          Training dataset filepath.
    -j NUM_JOBS, --num-jobs NUM_JOBS
                          Set the number of successive jobs.
    -i DATA_INFO, --data-info DATA_INFO
                          Extra info used to describe and sort the current
                          model.
    -e, --eval            Perform the evaluation test.
    -x, --export-embeds   Export embeddings and vocabulary to file.
    -l NUM_THREADS, --num-threads NUM_THREADS
                          Limit the number of CPU threads.
    -a, --analysis        Start the interactive analysis mode.
    -p, --pre-process     Pre-process the corpus.
    -t, --train           Train the models.
    -f, --full-train      Pre-process and train the models.

## Original readme

*(Images from the original README, omitted here: nearest neighbors of "frog" (Litoria, Leptodactylidae, Rana, Eleutherodactylus), the comparison visualizations man -> woman, city -> zip, comparative -> superlative, and a GloVe geometry figure.)*

We provide an implementation of the GloVe model for learning word representations, and describe how to download web-dataset vectors or train your own. See the project page or the paper for more information on GloVe vectors.

### Download pre-trained word vectors

The links below contain word vectors obtained from the respective corpora. If you want word vectors trained on massive web datasets, you need only download one of these text files! Pre-trained word vectors are made available under the Public Domain Dedication and License.

- Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB download): glove.42B.300d.zip
- Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip
- Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased, 300d vectors, 822 MB download): glove.6B.zip
- Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 200d vectors, 1.42 GB download): glove.twitter.27B.zip
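
These files are plain text, one token per line followed by its space-separated vector components, so they can be read without special tooling. Here is a minimal loading sketch in Python (the file name is an example taken from glove.6B.zip; adjust the path to whichever file you downloaded):

```python
import numpy as np

def load_glove(path):
    """Load a GloVe text file into a dict mapping token -> numpy vector."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            # Each line: token followed by space-separated float components.
            # (Simple split; assumes the token itself contains no spaces.)
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings

# Example file name after unzipping glove.6B.zip:
vectors = load_glove("glove.6B.300d.txt")
print(vectors["frog"].shape)  # (300,)
```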

### Train word vectors on a new corpus

If the web datasets above don't match the semantics of your end use case, you can train word vectors on your own corpus.

    $ git clone http://github.com/stanfordnlp/glove
    $ cd glove && make
    $ ./demo.sh

The demo.sh script downloads a small corpus, consisting of the first 100M characters of Wikipedia. It collects unigram counts, constructs and shuffles cooccurrence data, and trains a simple version of the GloVe model. It also runs a word analogy evaluation script in Python to verify word vector quality. More details about training on your own corpus can be found by reading demo.sh or src/README.md.
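
Beyond the analogy script, a quick sanity check is to inspect nearest neighbors by cosine similarity. Below is a minimal sketch, assuming the run saved text-format vectors to a file such as `vectors.txt` (the file name is an assumption; the format is one token per line followed by its components, as with the pre-trained downloads):

```python
import numpy as np

# Assumed output file name; point this at wherever your run saved text-format vectors.
embeddings = {}
with open("vectors.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

words = list(embeddings)
matrix = np.stack([embeddings[w] for w in words])
norms = np.linalg.norm(matrix, axis=1, keepdims=True)
matrix = matrix / norms  # unit-normalize rows so a dot product equals cosine similarity

def nearest(word, k=5):
    """Return the k words most cosine-similar to `word`."""
    scores = matrix @ matrix[words.index(word)]
    best = np.argsort(-scores)[1:k + 1]  # index 0 is the query word itself
    return [(words[i], float(scores[i])) for i in best]

print(nearest("frog"))
```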

### License

All work contained in this package is licensed under the Apache License, Version 2.0. See the included LICENSE file.