CasText

😫 Working on an NLP project and tired of always looking for the same silly preprocessing functions on the web?

πŸ˜₯ Need to efficiently extract email addresses from a document? Hashtags from tweets? Remove accents from a French post?

CasText got you covered! πŸš€

CasText packages into a single library all the text preprocessing functions you need to ease your NLP project.

πŸ” Quickly explore below our preprocessing pipelines and individual functions referential.

Can't find what you're looking for? Feel free to open an issue.

Installation

This package has been tested on Python 3.7.

To install this library you should first clone the repository:

git clone git@github.com:artefactory/castext.git && cd castext/

We strongly advise you to do the remaining steps in a virtual environment.

First, install the required dependencies:

pip install -r requirements.txt

Then install the library with pip:

pip install -e .

This library uses spaCy as its tokenizer. The currently supported models are en_core_web_sm and fr_core_news_sm.
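If the models are not already present on your machine, they can be fetched with spaCy's standard download command (a setup sketch, assuming spaCy is installed in your environment):

```shell
# Download the spaCy models used by the tokenizer
python -m spacy download en_core_web_sm   # English model
python -m spacy download fr_core_news_sm  # French model
```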

Preprocessing pipeline

Default pipeline

Need to preprocess your text data but have no clue about which functions to use and in which order? The default preprocessing pipeline got you covered:

from castext import Preprocessor
text = "I just got the best dinner in my life @latourdargent !!! I  recommend πŸ˜€ #food #paris \n"
preprocessor = Preprocessor()
text = preprocessor.run(text)
print(text)
# "I just got the best dinner in my life !!! I recommend"

Create your custom pipeline

Alternatively, you can create a custom pipeline if you know exactly which functions to apply to your data. Here's an example:

from castext import Preprocessor
from castext.basic.preprocess import (normalize_whitespace, remove_punct, remove_eol_characters,
                                      remove_stopwords, lower_text)
from castext.social.preprocess import remove_mentions, remove_hashtag, remove_emoji
text = "I just got the best dinner in my life @latourdargent !!! I  recommend πŸ˜€ #food #paris \n"
preprocessor = Preprocessor()
preprocessor.pipe(lower_text)
preprocessor.pipe(remove_mentions)
preprocessor.pipe(remove_hashtag)
preprocessor.pipe(remove_emoji)
preprocessor.pipe(remove_eol_characters)
preprocessor.pipe(remove_stopwords, args={'lang': 'en'})
preprocessor.pipe(remove_punct)
preprocessor.pipe(normalize_whitespace)
text = preprocessor.run(text)
print(text)
# "dinner life recommend"

Take a look at all the available functions in the preprocess.py scripts of the different folders: basic, social, token.

Individual Functions

Replacing emails

from castext.basic.preprocess import replace_emails
example = "I have forwarded this email to obama@whitehouse.gov"
example = replace_emails(example, replace_with="*EMAIL*")
print(example)
# "I have forwarded this email to *EMAIL*"

Replacing phone numbers

from castext.basic.preprocess import replace_phone_numbers
example = "My phone number is 0606060606"
example = replace_phone_numbers(example, country_to_detect=["FR"], replace_with="*PHONE*")
print(example)
# "My phone number is *PHONE*"

Removing Hashtags

from castext.social.preprocess import remove_hashtag
example = "This restaurant was amazing #food #foodie #foodstagram #dinner"
example = remove_hashtag(example)
print(example)
# "This restaurant was amazing"

Extracting emojis

from castext.social.preprocess import extract_emojis
example = "I take care of my skin πŸ˜€"
example = extract_emojis(example)
print(example)
# [':grinning_face:']

Make HTML documentation

To build the HTML Sphinx documentation, run the following at the castext root path:

sphinx-apidoc -f castext -o docs/

This will generate the .rst files. You can then build the documentation with:

cd docs && make html

You can now open the file index.html located in the docs/_build/html folder.

Project Organization


β”œβ”€β”€ LICENSE
β”œβ”€β”€ Makefile            <- Makefile with commands like `make data` or `make train`
β”œβ”€β”€ README.md           <- The top-level README for developers using this project.
β”œβ”€β”€ config              <- Where the configuration and constants live
β”œβ”€β”€ datasets/external   <- Bash scripts to download external datasets
β”œβ”€β”€ docker              <- Where to build a docker image using this lib
β”œβ”€β”€ docs                <- Sphinx HTML documentation
β”‚   β”œβ”€β”€ _build
β”‚   β”‚   └── html
β”‚   └── source
β”œβ”€β”€ castext             <- Main Package. This is where the code lives
β”‚   β”œβ”€β”€ preprocessor.py <- Main preprocessing script
β”‚   β”œβ”€β”€ augmentation    <- Text augmentation script
β”‚   β”œβ”€β”€ basic           <- Basic text preprocessing 
β”‚   β”œβ”€β”€ social          <- Social text preprocessing
β”‚   └── token           <- Token preprocessing
β”œβ”€β”€ utils               <- Where the preprocessing utils scripts live
β”œβ”€β”€ tests               <- Where the tests live
β”œβ”€β”€ setup.py            <- Makes the project pip-installable (`pip install -e .`) so the package can be imported
β”œβ”€β”€ requirements.txt    <- The requirements file for reproducing the analysis environment, e.g.
                          generated with `pip freeze > requirements.txt`