Add docstrings for Wordrank #1378

Merged
merged 6 commits · Jun 6, 2017
Changes from 5 commits
2 changes: 1 addition & 1 deletion docs/notebooks/Wordrank_comparisons.ipynb
@@ -173,7 +173,7 @@
" \n",
" # Train using wordrank\n",
" output_file = '{:s}_wr'.format(output_name)\n",
" output_dir = 'wordrank_model' # directory to save embeddings and metadata to\n",
" output_dir = 'model' # directory to save embeddings and metadata to\n",
" if not os.path.isfile(os.path.join(MODELS_DIR, '{:s}.vec'.format(output_file))):\n",
" print('\\nTraining wordrank on {:s} corpus..'.format(corpus_file))\n",
" %time wr_model = Wordrank.train(WR_HOME, corpus_file, output_dir, **wr_params); wr_model\n",
8 changes: 6 additions & 2 deletions gensim/models/wrappers/wordrank.py
@@ -8,7 +8,7 @@
`Word2Vec` for that.

Example:
-    >>> model = gensim.models.wrappers.Wordrank('/Users/dummy/wordrank', corpus_file='text8', out_name='wr_model')
+    >>> model = gensim.models.wrappers.Wordrank.train('/Users/dummy/wordrank', corpus_file='text8', out_name='wr_model')
>>> print model[word] # prints vector for given words

.. [1] https://bitbucket.org/shihaoji/wordrank/
@@ -47,8 +47,12 @@ class Wordrank(KeyedVectors):
@classmethod
def train(cls, wr_path, corpus_file, out_name, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,
sgd_num=100, lrate=0.001, period=10, iter=90, epsilon=0.75, dump_period=10, reg=0, alpha=100,
-                  beta=99, loss='hinge', memory=4.0, cleanup_files=True, sorted_vocab=1, ensemble=0):
+                  beta=99, loss='hinge', memory=4.0, cleanup_files=False, sorted_vocab=1, ensemble=0):
@menshikh-iv (Contributor) commented on Jun 3, 2017:

What is the reason for changing cleanup_files to False?

Contributor Author replied:

cleanup_files=False keeps the (word/context) embedding files and the vocab file that wordrank generates during training, which are saved inside wordrank's directory. Although the train() method loads the final required embedding file before deleting everything generated during training, it could confuse users who expect to find those files after training is finished.
So, making the default behavior to not delete them avoids that confusion.
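A minimal sketch of the behaviour under discussion, based on the docstring example above (the wordrank path and query word are placeholders, not values from this PR):

```python
from gensim.models.wrappers import Wordrank

# With cleanup_files=False (the proposed default), the intermediate vocab,
# cooccurrence and word/context embedding files produced during training are
# left inside the wordrank directory instead of being deleted once the final
# embeddings have been loaded.
model = Wordrank.train('/path/to/wordrank', corpus_file='text8', out_name='wr_model',
                       cleanup_files=False)

print(model['king'])  # the trained vectors are available either way
```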

Contributor:

Please enumerate the output files (filename and what each file contains) in the docstring.

Contributor Author:

I've added the output filenames and content info in the out_name param description, since that is the directory which contains these files.

"""
    The word and context embedding files are generated by the wordrank binary and are saved in the "out_name" directory,
    which is created inside the wordrank directory. The vocab and cooccurrence files are generated using the glove code
    available inside the wordrank directory. These files are used by the wordrank binary for training.

`wr_path` is the path to the Wordrank directory.
`corpus_file` is the filename of the text file to be used for training the Wordrank model.
Expects file to contain space-separated tokens in a single line
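For illustration only, a hedged sketch of how a user might inspect the files described in the docstring after training, assuming the layout it describes (the paths and names below are placeholders, not part of the wrapper's API):

```python
import os

wr_path = '/path/to/wordrank'   # placeholder: the Wordrank installation directory
out_name = 'wr_model'           # placeholder: the out_name passed to Wordrank.train

# Per the docstring, the word/context embedding files are written to the
# "out_name" directory created inside the wordrank directory; the vocab and
# cooccurrence files are generated alongside them by the bundled glove code.
output_dir = os.path.join(wr_path, out_name)
for fname in sorted(os.listdir(output_dir)):
    print(fname)
```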