From 9ba3da5d01ff4ed72d42db93bb78e858eff1cfec Mon Sep 17 00:00:00 2001
From: Ben Brandt
Date: Mon, 25 Mar 2024 23:34:49 +0100
Subject: [PATCH] Reference markdown splitter in features table

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index b5803e7..ad014db 100644
--- a/README.md
+++ b/README.md
@@ -188,10 +188,10 @@ There are lots of methods of determining sentence breaks, all to varying degrees
 
 ### Tokenizer Support
 
-| Dependency Feature | Version Supported | Description |
-| ------------------ | ----------------- | ----------- |
-| `tiktoken-rs`      | `0.5.8`           | Enables the `TextSplitter::new` to take a `tiktoken_rs::CoreBPE` as an argument. This is useful for splitting text for OpenAI models. |
-| `tokenizers`       | `0.15.2`          | Enables the `TextSplitter::new` to take a `tokenizers::Tokenizer` as an argument. This is useful for splitting text models that have a Hugging Face-compatible tokenizer. |
+| Dependency Feature | Version Supported | Description |
+| ------------------ | ----------------- | ----------- |
+| `tiktoken-rs`      | `0.5.8`           | Enables `[Text \| Markdown]Splitter::new` to take a `tiktoken_rs::CoreBPE` as an argument. This is useful for splitting text for OpenAI models. |
+| `tokenizers`       | `0.15.2`          | Enables `[Text \| Markdown]Splitter::new` to take a `tokenizers::Tokenizer` as an argument. This is useful for splitting text for models that have a Hugging Face-compatible tokenizer. |
 
 ## Inspiration
 