Position-Independent Embeddings

Position-independent word embeddings (PInE) are word embeddings produced by shallow log-bilinear language models (e.g. word2vec, fastText, or GloVe) using positional weighting. Positional weighting allows the models to distinguish between words at different positions in a sentence and to produce better position-independent representations of words. See our paper for details:

  • Novotný, V., Štefánik, M., Ayetiran, E. F., Sojka, P., & Řehůřek, R. (2022). When FastText Pays Attention: Efficient Estimation of Word Representations using Constrained Positional Weighting. Journal of Universal Computer Science, 28(2), 181–201. https://doi.org/10.3897/jucs.69619

This Python package allows you to train, use, and evaluate position-independent word embeddings.

https://github.com/MIR-MU/pine/raw/main/images/pine.png

Recent deep neural language models based on the Transformer architecture are Turing-complete universal approximators that outperform humans on a number of natural language processing tasks.

In contrast, log-bilinear language models such as word2vec, fastText, and GloVe are shallow and use a simplifying bag-of-words representation of text, which severely limits their predictive ability. However, they are fast and cheap to train on large corpora, and their internal word embeddings can be used for transfer learning to improve the performance of other models.

Our constrained positional model improves the bag-of-words representation of text by allowing the model to react to the positions of words in a sentence and to produce position-independent word embeddings without sacrificing the simplicity and speed that are pivotal to the success of log-bilinear language models. Unlike the positional model of Mikolov et al. (2018), our model constrains the capacity dedicated to modeling the positions of words, which improves both the speed of the model and its accuracy on a number of natural language processing tasks.
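
To make positional weighting concrete, here is a rough sketch of the unconstrained positional model of Mikolov et al. (2018) in the continuous-bag-of-words setting; the notation is ours and only approximates the cited papers. Given a window P of relative positions around the target word, the context representation is

    h = \frac{1}{|P|} \sum_{p \in P} \mathbf{d}_p \odot \mathbf{u}_{c_p}

where \mathbf{u}_{c_p} is the input vector of the context word at relative position p, \mathbf{d}_p is a learned positional vector, and \odot denotes element-wise multiplication. Our constrained positional model restricts the capacity of the positional vectors \mathbf{d}_p, so that fewer parameters are spent on modeling positions; see the paper above for the exact formulation.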

You can start with our Colab tutorial. In this tutorial, we produce our position-independent word embeddings and compare them with the word embeddings of the subword model (fastText) of Bojanowski et al. (2017) and the positional model of Mikolov et al. (2018) on a number of natural language processing tasks. We also visualize the embeddings of positions, which are a byproduct of the position-independent word embeddings, and discuss their properties and possible applications for transfer learning.

Name | Link
---- | ----
Training + Masked Word Prediction + Language Modeling + Importance of Positions | Open in Colab

At the command line:

$ pip install git+https://github.com/MIR-MU/pine.git

Or, if you have virtualenvwrapper installed:

$ mkvirtualenv -p `which python3` pine
(pine) $ pip install git+https://github.com/MIR-MU/pine.git
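
The training and evaluation interface is documented in the package and demonstrated step by step in the Colab tutorial above. The snippet below is only a hypothetical sketch of the typical workflow (train on a plain-text corpus, then query the word vectors); the class name, parameter, and attribute used here are illustrative assumptions rather than the package's confirmed API.

# Hypothetical sketch only: LanguageModel, positions, and .words are
# illustrative assumptions; follow the Colab tutorial above for the
# actual PInE interface.
from pine import LanguageModel  # assumed entry point of the pine package

# Train position-independent embeddings on a plain-text corpus
# (one preprocessed sentence per line is the assumed input format).
model = LanguageModel('corpus.txt', positions='constrained')

# Query the learned word vectors, e.g. the nearest neighbours of a word.
print(model.words.most_similar('king'))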

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

Remember that this is a research tool. 😉
