add trim token filter docs #8449 (#8461)
* add trim token filter docs #8449

Signed-off-by: Anton Rubin <[email protected]>

* updating the nav_order

Signed-off-by: Anton Rubin <[email protected]>

* Doc review

Signed-off-by: Fanit Kolchina <[email protected]>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>

---------

Signed-off-by: Anton Rubin <[email protected]>
Signed-off-by: Fanit Kolchina <[email protected]>
Signed-off-by: kolchfa-aws <[email protected]>
Co-authored-by: Fanit Kolchina <[email protected]>
Co-authored-by: kolchfa-aws <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
(cherry picked from commit deac802)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
4 people committed Dec 2, 2024
1 parent d812a72 commit c7977a9
Showing 2 changed files with 94 additions and 1 deletion.
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -59,7 +59,7 @@ Token filter | Underlying Lucene token filter | Description
`stop` | [StopFilter](https://lucene.apache.org/core/8_7_0/core/org/apache/lucene/analysis/StopFilter.html) | Removes stop words from a token stream.
[`synonym`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/synonym/) | N/A | Supplies a synonym list for the analysis process. The synonym list is provided using a configuration file.
[`synonym_graph`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/synonym-graph/) | N/A | Supplies a synonym list, including multiword synonyms, for the analysis process.
- `trim` | [TrimFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html) | Trims leading and trailing white space from each token in a stream.
+ [`trim`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/trim/) | [TrimFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html) | Trims leading and trailing white space characters from each token in a stream.
`truncate` | [TruncateTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html) | Truncates tokens whose length exceeds the specified character limit.
`unique` | N/A | Ensures each token is unique by removing duplicate tokens from a stream.
`uppercase` | [UpperCaseFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/UpperCaseFilter.html) | Converts tokens to uppercase.
93 changes: 93 additions & 0 deletions _analyzers/token-filters/trim.md
@@ -0,0 +1,93 @@
---
layout: default
title: Trim
parent: Token filters
nav_order: 430
---

# Trim token filter

The `trim` token filter removes leading and trailing white space characters from tokens.

Many popular tokenizers, such as the `standard` and `whitespace` tokenizers, automatically strip leading and trailing white space characters during tokenization. When using these tokenizers, there is no need to configure an additional `trim` token filter.
{: .note}
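
As a quick check (an illustrative request, not part of the examples below), you can use the `_analyze` API to confirm that the `standard` tokenizer discards the surrounding white space on its own, returning the single token `OpenSearch`:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "text": "  OpenSearch  "
}
```
{% include copy-curl.html %}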


## Example

The following example request creates a new index named `my_pattern_trim_index` and configures an analyzer with a `trim` filter and a `pattern` tokenizer, which does not remove leading and trailing white space characters:

```json
PUT /my_pattern_trim_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_trim_filter": {
          "type": "trim"
        }
      },
      "tokenizer": {
        "my_pattern_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      },
      "analyzer": {
        "my_pattern_trim_analyzer": {
          "type": "custom",
          "tokenizer": "my_pattern_tokenizer",
          "filter": [
            "lowercase",
            "my_trim_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}
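
Because the analyzer also applies the `lowercase` filter, the tokens returned in the following example appear in lowercase.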

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
GET /my_pattern_trim_index/_analyze
{
  "analyzer": "my_pattern_trim_analyzer",
  "text": " OpenSearch , is , powerful "
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "opensearch",
      "start_offset": 0,
      "end_offset": 12,
      "type": "word",
      "position": 0
    },
    {
      "token": "is",
      "start_offset": 13,
      "end_offset": 18,
      "type": "word",
      "position": 1
    },
    {
      "token": "powerful",
      "start_offset": 19,
      "end_offset": 32,
      "type": "word",
      "position": 2
    }
  ]
}
```
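
Note that the `start_offset` and `end_offset` values still span the original, untrimmed tokens in the input text (for example, `opensearch` occupies offsets 0–12, which include the surrounding spaces). The `trim` filter changes each token's text but does not adjust its offsets.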
