add standard tokenizer docs #8490

Merged
`_analyzers/tokenizers/index.md` — 2 changes: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@
 layout: default
 title: Tokenizers
 nav_order: 60
-has_children: false
+has_children: true
 has_toc: false
 ---
```
`_analyzers/tokenizers/standard.md` — new file, 111 additions

---
layout: default
title: Standard
parent: Tokenizers
nav_order: 130
---

# Standard tokenizer

The `standard` tokenizer is the default tokenizer in OpenSearch. It splits text on word boundaries using a grammar-based approach that recognizes letters, digits, and punctuation, discarding most punctuation from the output. Because it follows the Unicode text segmentation rules ([UAX#29](https://unicode.org/reports/tr29/)), it is highly versatile and suitable for most languages.

## Example usage

The following example request creates a new index named `my_index` and configures an analyzer with a `standard` tokenizer:

```json
PUT /my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_standard_analyzer": {
"type": "standard"
}
}
}
},
"mappings": {
"properties": {
"content": {
"type": "text",
"analyzer": "my_standard_analyzer"
}
}
}
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated by the analyzer:

```json
POST /my_index/_analyze
{
"analyzer": "my_standard_analyzer",
"text": "OpenSearch is powerful, fast, and scalable."
}
```
{% include copy-curl.html %}

The response contains the generated tokens. The punctuation is discarded, and the tokens are lowercased because an analyzer of type `standard` applies a lowercase token filter after tokenization:

```json
{
"tokens": [
{
"token": "opensearch",
"start_offset": 0,
"end_offset": 10,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "is",
"start_offset": 11,
"end_offset": 13,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "powerful",
"start_offset": 14,
"end_offset": 22,
"type": "<ALPHANUM>",
"position": 2
},
{
"token": "fast",
"start_offset": 24,
"end_offset": 28,
"type": "<ALPHANUM>",
"position": 3
},
{
"token": "and",
"start_offset": 30,
"end_offset": 33,
"type": "<ALPHANUM>",
"position": 4
},
{
"token": "scalable",
"start_offset": 34,
"end_offset": 42,
"type": "<ALPHANUM>",
"position": 5
}
]
}
```

## Configuration

The `standard` tokenizer can be configured with the following parameter.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`max_token_length` | Optional | Integer | Sets the maximum token length. If a token exceeds this length, it is split into multiple tokens at intervals of `max_token_length` characters. Default is `255`.
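
The following is a minimal sketch of how this parameter can be applied. It defines a custom tokenizer of type `standard` with `max_token_length` set to `5` and references it from a custom analyzer (the index name `my_index2` and the tokenizer and analyzer names are illustrative):

```json
PUT /my_index2
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_short_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      },
      "analyzer": {
        "my_short_analyzer": {
          "type": "custom",
          "tokenizer": "my_short_tokenizer"
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

Analyzing a word longer than 5 characters with this analyzer should split it into 5-character chunks:

```json
POST /my_index2/_analyze
{
  "analyzer": "my_short_analyzer",
  "text": "OpenSearch"
}
```
{% include copy-curl.html %}

Because `OpenSearch` exceeds 5 characters, it should be emitted as the tokens `OpenS` and `earch`. Note that the tokens are not lowercased in this case because the custom analyzer applies no token filters.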
