
Loading ensemble embeddings takes the wordrank .word and context files; where are these files located? Are these embeddings generated by the demo script provided with wordrank, or by the train function in the wordrank wrapper? #1357

Closed
gauravsaxenaiiit opened this issue May 23, 2017 · 3 comments

Comments

@gauravsaxenaiiit


@tmylk
Contributor

tmylk commented May 23, 2017

CC @parulsethi. Let's add more comments in the docstring.

@parulsethi
Contributor

@gauravsaxenaiiit You can set cleanup_files=False to keep the .word and .context files generated by wordrank; they will then be located in the wordrank directory. And the embeddings are generated by the gensim wrapper's train(), not by the demo script.

@tmylk Sure
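To make the answer above concrete, here is a minimal sketch of calling the gensim Wordrank wrapper with cleanup_files=False. The paths, corpus file, and output name are placeholders, and the exact keyword names of Wordrank.train() may differ between gensim versions; this assumes the gensim.models.wrappers.wordrank module available around gensim 2.x/3.x.

```python
# Hedged sketch: wr_home and corpus_file are placeholder paths; a compiled
# WordRank installation is assumed to exist at wr_home.

def train_wordrank_keep_files(wr_home, corpus_file):
    """Train WordRank via the gensim wrapper, keeping the intermediate
    .word and .context embedding files in the WordRank directory."""
    from gensim.models.wrappers import Wordrank  # gensim 2.x/3.x location
    # cleanup_files=False stops the wrapper from deleting the .word and
    # .context files that WordRank writes during training, so they can be
    # inspected afterwards in the WordRank directory.
    return Wordrank.train(wr_home, corpus_file, out_name="wr_model",
                          cleanup_files=False)

# Usage (requires a WordRank build at wr_home and a plain-text corpus):
# model = train_wordrank_keep_files("wordrank/", "text8")
# print(model.most_similar("king"))
```

The import is kept inside the function so the sketch can be read (and syntax-checked) without gensim installed; in real code a top-level import is more idiomatic.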

@menshikh-iv
Contributor

Implemented in #1066
