# Vltava


Opinionated Czech language processing.

The processor takes in raw documents and applies basic preprocessing (such as tag removal and accent stripping) and lemmatization using either Majka or MorphoDiTa.

```python
from vltava import DocumentProcessor

doc = "v televizi říkali, že zítra pršet nebude"
document_processor = DocumentProcessor()
result = document_processor.process(doc)
# result is ['televize', 'rikat', 'zitra', 'prset', 'byt']
```

`DocumentProcessor` supports multiprocessing when dealing with large collections of documents.

```python
from vltava import DocumentProcessor

docs = ["Ahoj, jak se máš?"] * 100

result = DocumentProcessor().process_from_iterable(docs, n_jobs=2)
```

## Installation

```bash
pip install vltava
```

## Backend

The package supports two different backends for finding Czech lemmas: Majka and MorphoDiTa. Check out their project pages for more information. The required binary files are bundled directly in the package.
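
The backend is chosen when constructing a `DocumentProcessor`. A minimal sketch, assuming the MorphoDiTa backend is selected by passing `"morphodita"` (only the default `"majka"` is confirmed by the constructor signature below):

```python
from vltava import DocumentProcessor

# "morphodita" is an assumed backend identifier; the documented default is "majka".
processor = DocumentProcessor(backend="morphodita")
tokens = processor.process("v televizi říkali, že zítra pršet nebude")
```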

## Public API

### vltava.DocumentProcessor

```python
vltava.DocumentProcessor(backend: str = "majka")
```

Initializes a `DocumentProcessor` with the selected backend.

Methods:

```python
DocumentProcessor.process(
    self, doc: str, tokenize: bool = True
) -> Union[str, List[str]]
```

Processes the input `doc` and returns it as a list of processed tokens if `tokenize` is True, or as a single processed string otherwise.
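
A brief sketch of the two return shapes; the exact string produced with `tokenize=False` is an assumption (space-joined lemmas) and may vary by backend:

```python
from vltava import DocumentProcessor

processor = DocumentProcessor()
doc = "v televizi říkali, že zítra pršet nebude"

tokens = processor.process(doc)                # list of lemmas: ['televize', 'rikat', 'zitra', 'prset', 'byt']
text = processor.process(doc, tokenize=False)  # single processed string (assumed to be the lemmas joined by spaces)
```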

```python
DocumentProcessor.process_from_iterable(
    self, docs: Iterable[str], tokenize: bool = True, n_jobs: int = 1
) -> Union[Iterable[str], Iterable[List[str]]]
```

Processes the input `docs` collection of documents. The result is an iterable of lists of processed tokens if `tokenize` is True, or an iterable of processed strings otherwise.

If `n_jobs` is greater than one, multiple worker processes are launched to process the documents.
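
A short sketch of consuming the parallel result, assuming the returned iterable can simply be materialized with `list()`:

```python
from vltava import DocumentProcessor

docs = ["v televizi říkali, že zítra pršet nebude", "Ahoj, jak se máš?"] * 50

processor = DocumentProcessor()
# tokenize=False yields processed strings; n_jobs=2 launches two worker processes.
results = list(processor.process_from_iterable(docs, tokenize=False, n_jobs=2))
```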