Add decimal digit token filter docs opensearch-project#7923 (opensearch-project#7977)

* Adding decimal digit token filter docs opensearch-project#7923
  Signed-off-by: Anton Rubin <[email protected]>
* Adding -- around the number range
  Signed-off-by: Anton Rubin <[email protected]>
* Update _analyzers/token-filters/decimal-digit.md
  Signed-off-by: kolchfa-aws <[email protected]>
* Update decimal-digit.md
  Signed-off-by: AntonEliatra <[email protected]>
* Apply suggestions from code review
  Co-authored-by: kolchfa-aws <[email protected]>
  Signed-off-by: AntonEliatra <[email protected]>
* Update _analyzers/token-filters/decimal-digit.md
  Co-authored-by: Nathan Bower <[email protected]>
  Signed-off-by: AntonEliatra <[email protected]>

---------

Signed-off-by: Anton Rubin <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>
Signed-off-by: AntonEliatra <[email protected]>
Co-authored-by: kolchfa-aws <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Eric Pugh <[email protected]>
Showing 2 changed files with 89 additions and 1 deletion.
@@ -0,0 +1,88 @@
---
layout: default
title: Decimal digit
parent: Token filters
nav_order: 80
---

# Decimal digit token filter

The `decimal_digit` token filter normalizes decimal digit characters (0--9) from various scripts to their ASCII equivalents. This is useful when you want to ensure that all digits are treated uniformly in text analysis, regardless of the script in which they are written.

## Example

The following example request creates a new index named `my_index` and configures an analyzer with a `decimal_digit` filter:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_decimal_digit_filter": {
          "type": "decimal_digit"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["my_decimal_digit_filter"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}
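
To use the analyzer at index time, you can reference it in a field mapping. The following sketch is illustrative rather than part of the original example: the field name `text_field` is hypothetical, while the `analyzer` mapping parameter is the standard way to assign a custom analyzer to a `text` field:

```json
PUT /my_index/_mapping
{
  "properties": {
    "text_field": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}
```
{% include copy-curl.html %}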

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "123 ١٢٣ १२३"
}
```
{% include copy-curl.html %}

`text` breakdown:

- "123" (ASCII digits)
- "١٢٣" (Arabic-Indic digits)
- "१२३" (Devanagari digits)

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "123",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<NUM>",
      "position": 0
    },
    {
      "token": "123",
      "start_offset": 4,
      "end_offset": 7,
      "type": "<NUM>",
      "position": 1
    },
    {
      "token": "123",
      "start_offset": 8,
      "end_offset": 11,
      "type": "<NUM>",
      "position": 2
    }
  ]
}
```
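
Because all three scripts are normalized to the same ASCII tokens, a query written in one digit script can match text indexed in another, provided the field uses this analyzer. The following search is a sketch that assumes the hypothetical `text_field` mapping shown earlier; with that mapping, the query digits `١٢٣` are analyzed to `123` before matching:

```json
GET /my_index/_search
{
  "query": {
    "match": {
      "text_field": "١٢٣"
    }
  }
}
```
{% include copy-curl.html %}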