Commit message:

* Add the OpenNMT tokenizer as a third party
* Add detokenization logic
* Add learn_bpe.py script
* Replace spaces in CharacterTokenizer
* Fix encoding issue when redirecting tokenization output
* Move stream tokenization to the library
* Build tokenizer plugin in Travis
* Add BLEU evaluator variant which applies a light tokenization
* Cleanup
* Add documentation
* Fix link
* Complete the README
* Fix tokenization configuration loading
* Add missing instruction
1 parent 009a592 · commit 298f117 · 26 changed files with 760 additions and 51 deletions.
New file `.gitmodules`:

```ini
[submodule "third_party/OpenNMTTokenizer"]
	path = third_party/OpenNMTTokenizer
	url = https://github.com/OpenNMT/Tokenizer.git
```
New file `CMakeLists.txt`:

```cmake
cmake_minimum_required(VERSION 3.1)

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_BUILD_TYPE Release)
set(LIB_ONLY ON)
set(WITH_PYTHON_BINDINGS ON)

add_subdirectory(third_party/OpenNMTTokenizer)
```
New standalone detokenization script:

```python
"""Standalone script to detokenize a corpus."""

from __future__ import print_function

import argparse

from opennmt import tokenizers


def main():
  parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
  parser.add_argument(
      "--delimiter", default=" ",
      help="Token delimiter used in text serialization.")
  tokenizers.add_command_line_arguments(parser)
  args = parser.parse_args()

  tokenizer = tokenizers.build_tokenizer(args)
  tokenizer.detokenize_stream(delimiter=args.delimiter)


if __name__ == "__main__":
  main()
```
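The script streams detokenization through the Tokenizer plugin. As a rough illustration of what joiner-based detokenization does (a hypothetical sketch, not the plugin's actual implementation), tokens carrying the joiner marker `■` are glued to their neighbor instead of being separated by the delimiter:

```python
# Hypothetical sketch of joiner-based detokenization; the real logic lives
# in the C++ Tokenizer plugin, and all names here are illustrative only.
JOINER = "■"

def detokenize(tokens, delimiter=" "):
    pieces = []
    glue_next = False  # True when the previous token ended with the joiner
    for token in tokens:
        attach_left = token.startswith(JOINER)
        attach_right = token.endswith(JOINER)
        core = token.strip(JOINER)
        if pieces and not attach_left and not glue_next:
            pieces.append(delimiter)  # normal word boundary: restore a space
        pieces.append(core)
        glue_next = attach_right
    return "".join(pieces)
```

For example, `detokenize(["Hello", "world", "■!"])` restores `"Hello world!"`.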
New tokenization configuration (referenced below as `config/tokenization/aggressive.yml`):

```yaml
mode: aggressive
joiner_annotate: true
```
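With `joiner_annotate: true`, tokens that were attached to their left neighbor in the raw text are prefixed with the joiner marker so the segmentation is reversible. A toy Python sketch of the idea (the real aggressive mode is implemented in the C++ plugin and handles many more cases):

```python
import re

JOINER = "■"

def aggressive_tokenize_sketch(text):
    """Toy approximation of aggressive tokenization with joiner annotation:
    split letters, digits and punctuation apart, marking tokens that were
    glued to their left neighbor with the joiner."""
    tokens = []
    prev_end = None
    for match in re.finditer(r"[^\W\d_]+|\d+|[^\w\s]", text):
        token = match.group(0)
        if prev_end is not None and match.start() == prev_end:
            token = JOINER + token  # no space before this token in the input
        tokens.append(token)
        prev_end = match.end()
    return tokens
```

For instance, `aggressive_tokenize_sketch("Hello world!")` yields `['Hello', 'world', '■!']`, which the joiner-aware detokenization can turn back into the original string.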
Sample tokenization configuration (`config/tokenization/sample.yml`):

```yaml
# This is a sample tokenization configuration with all values set to their default.

mode: conservative
bpe_model_path: ""
joiner: ■
joiner_annotate: false
joiner_new: false
case_feature: false
segment_case: false
segment_numbers: false
segment_alphabet_change: false
segment_alphabet: []
```
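All options are flat key/value pairs, so the configuration maps directly onto keyword arguments. A minimal, dependency-free sketch of reading such a flat file (a hypothetical helper; the project presumably uses a real YAML loader):

```python
def load_flat_options(text):
    """Parse flat "key: value" lines into a dict (toy subset of YAML)."""
    options = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        if value in ("true", "false"):
            value = (value == "true")      # booleans
        elif value == "[]":
            value = []                     # empty list
        elif len(value) >= 2 and value[0] == value[-1] == '"':
            value = value[1:-1]            # quoted string
        options[key.strip()] = value
    return options
```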
The documentation index's overview `toctree` gains the new page:

```diff
@@ -7,6 +7,7 @@ Overview
    :maxdepth: 1
 
    data.md
+   tokenization.md
    configuration.md
    training.md
    serving.md
```
New API documentation page for the module:

```rst
opennmt\.tokenizers\.opennmt\_tokenizer module
==============================================

.. automodule:: opennmt.tokenizers.opennmt_tokenizer
    :members:
    :undoc-members:
    :show-inheritance:
```
The submodules `toctree` is updated accordingly:

```diff
@@ -11,5 +11,6 @@ Submodules
 
 .. toctree::
 
+   opennmt.tokenizers.opennmt_tokenizer
    opennmt.tokenizers.tokenizer
```
New documentation page `tokenization.md`:

# Tokenization

OpenNMT-tf can use the OpenNMT [Tokenizer](https://github.com/OpenNMT/Tokenizer) as a plugin to provide advanced tokenization behaviors.

## Installation

The following tools and packages are required:

* a C++11 compiler
* CMake
* Boost.Python

On Ubuntu, these packages can be installed with `apt-get`:

```bash
sudo apt-get install build-essential gcc cmake libboost-python-dev
```

1\. Fetch the Tokenizer plugin under the OpenNMT-tf repository:

```bash
git submodule update --init
```

2\. Compile the tokenizer plugin:

```bash
mkdir build && cd build
cmake .. && make
cd ..
```

3\. Configure your environment so that Python can find the newly generated package:

```bash
export PYTHONPATH="$PYTHONPATH:$HOME/OpenNMT-tf/build/third_party/OpenNMTTokenizer/bindings/python/"
```

4\. Test the plugin:

```bash
$ echo "Hello world!" | python -m bin.tokenize_text --tokenizer OpenNMTTokenizer
Hello world !
```

## Usage

YAML files are used to set the tokenizer options so that consistent tokenization is applied during data preparation and training. See the sample file `config/tokenization/sample.yml`.

Here is an example workflow:

1\. Build the vocabularies with the custom tokenizer, e.g.:

```bash
python -m bin.build_vocab --tokenizer OpenNMTTokenizer --tokenizer_config config/tokenization/aggressive.yml --size 50000 --save_vocab data/enfr/en-vocab.txt data/enfr/en-train.txt
python -m bin.build_vocab --tokenizer OpenNMTTokenizer --tokenizer_config config/tokenization/aggressive.yml --size 50000 --save_vocab data/enfr/fr-vocab.txt data/enfr/fr-train.txt
```

*The text files are only given as examples and are not part of the repository.*

2\. Update your model's `TextInputter`s to use the custom tokenizer, e.g.:

```python
return onmt.models.SequenceToSequence(
    source_inputter=onmt.inputters.WordEmbedder(
        vocabulary_file_key="source_words_vocabulary",
        embedding_size=512,
        tokenizer=onmt.tokenizers.OpenNMTTokenizer(
            configuration_file_or_key="source_tokenizer_config")),
    target_inputter=onmt.inputters.WordEmbedder(
        vocabulary_file_key="target_words_vocabulary",
        embedding_size=512,
        tokenizer=onmt.tokenizers.OpenNMTTokenizer(
            configuration_file_or_key="target_tokenizer_config")),
    ...)
```

3\. Reference the tokenizer configurations in the data configuration, e.g.:

```yaml
data:
  source_tokenizer_config: config/tokenization/aggressive.yml
  target_tokenizer_config: config/tokenization/aggressive.yml
```

## Notes

* As of now, tokenizers are not part of the exported graph.
* Predictions saved during inference or evaluation are detokenized. Consider using the "BLEU-detok" external evaluator, which applies a simple word-level tokenization before computing the BLEU score.
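The "BLEU-detok" evaluator mentioned above applies a light, word-level tokenization to the detokenized predictions before scoring. A hypothetical sketch of such a light tokenization (the evaluator's actual rules may differ):

```python
import re

def light_tokenize(text):
    # Separate punctuation from words so BLEU is insensitive to how
    # punctuation was spaced in the detokenized output.
    return re.findall(r"\w+|[^\w\s]", text)
```

For example, `light_tokenize("Hello, world!")` gives `['Hello', ',', 'world', '!']`, the same tokens regardless of whether the hypothesis contained `"Hello, world!"` or `"Hello , world !"`.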