Biomedical pre-trained word embeddings #28
Thanks @gbrokos! That's definitely useful. What we also need is a clear description of the preprocessing (especially since this is the biomedical domain, where good tokenization / phrase detection is important). How can users of your dataset match this preprocessing, in order to look up words? The license also seems a bit limiting. What is the reason not to allow commercial use?
For text preprocessing we used the "bioclean" lambda function defined in the code below. Originally, this was included in the toolkit.py script that accompanies the word embeddings of the BioASQ challenge. I just removed the surrounding ' '.join() to avoid joining tokens by spaces after splitting. Here is a Python code example:
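(A sketch of that example: the lambda below is the bioclean definition from the BioASQ toolkit.py with the outer ' '.join() removed; the sample sentence and printed output are illustrative.)

```python
import re

# "bioclean" from the BioASQ toolkit.py, with the surrounding ' '.join()
# removed so it returns a list of tokens instead of a single string.
bioclean = lambda t: re.sub(r'[.,?;*!%^&_+():-\[\]{}]', '',
                            t.replace('"', '').replace('/', '')
                             .replace('\\', '').replace("'", '')
                             .strip().lower()).split()

print(bioclean('Myocardial infarction (heart attack): a review.'))
# Output: ['myocardial', 'infarction', 'heart', 'attack', 'a', 'review']
```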
Other than that, we followed the exact workflow described in the readme file's preprocessing section. Regarding the license, the MEDLINE/PubMed Terms and Conditions state that "some PubMed/MEDLINE abstracts may be protected by copyright." We are still not sure whether this affects word embeddings produced using this dataset. MEDLINE/PubMed Terms and Conditions: https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
Download failure. Please upload your file to another host.
CC @mpenkov
The links in the original comment have been updated. They should work now.
Thank you for the pre-trained file. Why do simple disorders like "breast cancer" and "heart attack" show as out of vocabulary? PubMed must have references to such common disorders!
Hi, the preprocessing and tokenization of the text was done as described above. Word2vec was trained on the words resulting from this process, not on bigrams like "breast cancer" or "heart attack". However, word embeddings for the unigrams "breast", "cancer", "heart" and "attack" do exist.
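A minimal sketch of this lookup, assuming the extracted 200D file is named pubmed2018_w2v_200D.bin (one common workaround for phrases is to average their unigram vectors):

```python
import numpy as np
from gensim.models import KeyedVectors

# Assumed local path after extracting pubmed2018_w2v_200D.tar.gz.
wv = KeyedVectors.load_word2vec_format('pubmed2018_w2v_200D.bin', binary=True)

print('breast cancer' in wv)  # False: bigrams were never in the vocabulary
print(all(w in wv for w in ('breast', 'cancer', 'heart', 'attack')))  # True

# Approximate a phrase vector by averaging its unigram vectors.
phrase_vec = np.mean([wv['breast'], wv['cancer']], axis=0)
```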
Hi, at the rate I am getting OOV words after processing a small chunk of my data, I would expect roughly 100x the OOV of this list. I guess one needs a text file, not the .bin file, to add to the vocabulary, right?
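For reference, a sketch of how one might measure OOV against the released vocabulary, assuming the bioclean tokenizer above and the same file name; note that gensim can also re-export the binary model as plain text:

```python
import re
from gensim.models import KeyedVectors

# Same bioclean tokenizer as above, to match the training preprocessing.
bioclean = lambda t: re.sub(r'[.,?;*!%^&_+():-\[\]{}]', '',
                            t.replace('"', '').replace('/', '')
                             .replace('\\', '').replace("'", '')
                             .strip().lower()).split()

wv = KeyedVectors.load_word2vec_format('pubmed2018_w2v_200D.bin', binary=True)

tokens = bioclean('Patients with type 2 diabetes mellitus were randomized.')
oov = [t for t in tokens if t not in wv]
print(f'OOV rate: {len(oov) / len(tokens):.1%}', oov)

# If a plain-text file is needed, the binary model can be re-saved:
# wv.save_word2vec_format('pubmed2018_w2v_200D.txt', binary=False)
```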
Hi, Regards, Prabhat
Do you include the full model, with both input and output layers (center and context word embeddings)?
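As a general note (not specific to this release): the Word2Vec C binary format stores a single matrix, the input (center-word) vectors; output (context) vectors are only available from a full model object. A sketch with gensim, assuming negative sampling:

```python
from gensim.models import Word2Vec

# A toy corpus, just to materialize both weight matrices.
model = Word2Vec(sentences=[['heart', 'attack'], ['breast', 'cancer']],
                 vector_size=200, negative=5, min_count=1)

center = model.wv.vectors   # input / center-word embeddings
context = model.syn1neg     # output / context embeddings (negative sampling)

# save_word2vec_format() serializes only model.wv, so context vectors
# cannot be recovered from a .bin file alone.
```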
FYI
Thanks a lot for creating this model!
Hi @mariask2, glad you found it useful! |
We (AUEB's NLP group: http://nlp.cs.aueb.gr/) recently released word embeddings pre-trained on text from 27 million biomedical articles from the MEDLINE/PubMed Baseline 2018.
Two versions of word embeddings are provided, both in Word2Vec's C binary format:
200-dimensional: https://archive.org/download/pubmed2018_w2v_200D.tar/pubmed2018_w2v_200D.tar.gz
400-dimensional: https://archive.org/download/pubmed2018_w2v_400D.tar/pubmed2018_w2v_400D.tar.gz
Each .tar.gz file contains a folder with the pre-trained model and a readme file which you can also find here:
https://archive.org/download/pubmed2018_w2v_200D.tar/README.txt
The readme file contains details, statistics and license information for this dataset.
We would be happy to contribute this dataset to the gensim-data project. Let me know if you need any additional information or changes to the files' format.
Code example: Load and use the 200D pre-trained model.
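A minimal sketch, assuming the file extracted from the 200D archive is named pubmed2018_w2v_200D.bin:

```python
from gensim.models import KeyedVectors

# Load the 200-dimensional vectors (Word2Vec C binary format).
wv = KeyedVectors.load_word2vec_format('pubmed2018_w2v_200D.bin', binary=True)

print(wv.vector_size)                     # 200
print(wv['cancer'][:5])                   # first components of a word vector
print(wv.most_similar('cancer', topn=5))  # nearest neighbours
```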