Add condition token filter docs opensearch-project#7923 (opensearch-project#7950)

* adding condition token filter docs opensearch-project#7923

Signed-off-by: AntonEliatra <[email protected]>

* Update condition.md

Signed-off-by: AntonEliatra <[email protected]>

* updating parameter table

Signed-off-by: Anton Rubin <[email protected]>

* addressing PR comments

Signed-off-by: Anton Rubin <[email protected]>

* addressing PR comments

Signed-off-by: Anton Rubin <[email protected]>

* Apply suggestions from code review

Co-authored-by: kolchfa-aws <[email protected]>
Signed-off-by: AntonEliatra <[email protected]>

* Update condition.md

Signed-off-by: AntonEliatra <[email protected]>

* Apply suggestions from code review

Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: AntonEliatra <[email protected]>

---------

Signed-off-by: AntonEliatra <[email protected]>
Signed-off-by: Anton Rubin <[email protected]>
Co-authored-by: Melissa Vagi <[email protected]>
Co-authored-by: kolchfa-aws <[email protected]>
Co-authored-by: Nathan Bower <[email protected]>
Signed-off-by: Eric Pugh <[email protected]>
4 people authored and epugh committed Nov 23, 2024
1 parent 53b2d6c commit ea402b1
Showing 2 changed files with 136 additions and 1 deletion.
135 changes: 135 additions & 0 deletions _analyzers/token-filters/condition.md
@@ -0,0 +1,135 @@
---
layout: default
title: condition
parent: Token filters
nav_order: 70
---

# Condition token filter

The `condition` token filter is a special type of filter that applies other token filters conditionally, based on criteria you define. This gives you finer control over when specific token filters run during text analysis.
You can configure multiple filters, which are applied only to tokens that meet the defined condition.
This token filter is useful for language-specific processing and for handling special characters.


## Parameters

The `condition` token filter requires the following two parameters.

Parameter | Required/Optional | Data type | Description
:--- | :--- | :--- | :---
`filter` | Required | Array | Specifies which token filters should be applied to the tokens when the specified condition (defined by the `script` parameter) is met.
`script` | Required | Object | Configures an [inline script]({{site.url}}{{site.baseurl}}/api-reference/script-apis/exec-script/) that defines the condition that must be met in order for the filters specified in the `filter` parameter to be applied. Only inline scripts are supported.


## Example

The following example request creates a new index named `my_conditional_index` and configures an analyzer with a `condition` filter. This filter applies the `lowercase` filter to any token containing the character sequence `um`, regardless of case:

```json
PUT /my_conditional_index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_conditional_filter": {
          "type": "condition",
          "filter": ["lowercase"],
          "script": {
            "source": "token.getTerm().toString().toLowerCase().contains('um')"
          }
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "my_conditional_filter"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
GET /my_conditional_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "THE BLACK CAT JUMPS OVER A LAZY DOG"
}
```
{% include copy-curl.html %}

The response contains the generated tokens. Note that only `JUMPS` meets the condition, so it is the only token to which the `lowercase` filter is applied:

```json
{
  "tokens": [
    {
      "token": "THE",
      "start_offset": 0,
      "end_offset": 3,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "BLACK",
      "start_offset": 4,
      "end_offset": 9,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "CAT",
      "start_offset": 10,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "jumps",
      "start_offset": 14,
      "end_offset": 19,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "OVER",
      "start_offset": 20,
      "end_offset": 24,
      "type": "<ALPHANUM>",
      "position": 4
    },
    {
      "token": "A",
      "start_offset": 25,
      "end_offset": 26,
      "type": "<ALPHANUM>",
      "position": 5
    },
    {
      "token": "LAZY",
      "start_offset": 27,
      "end_offset": 31,
      "type": "<ALPHANUM>",
      "position": 6
    },
    {
      "token": "DOG",
      "start_offset": 32,
      "end_offset": 35,
      "type": "<ALPHANUM>",
      "position": 7
    }
  ]
}
```
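
The condition script can test token attributes other than the term text. The following sketch, which uses a hypothetical index named `my_position_index`, applies the `lowercase` filter only to the first token in the stream; it assumes that the script context exposes the token position through `token.getPosition()`:

```json
PUT /my_position_index
{
  "settings": {
    "analysis": {
      "filter": {
        "first_token_lowercase": {
          "type": "condition",
          "filter": ["lowercase"],
          "script": {
            "source": "token.getPosition() == 0"
          }
        }
      },
      "analyzer": {
        "first_token_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["first_token_lowercase"]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

With this analyzer, analyzing the text `FOO BAR` would be expected to produce the tokens `foo` and `BAR`.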

2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -21,7 +21,7 @@ Token filter | Underlying Lucene token filter | Description
[`cjk_width`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/cjk-width/) | [CJKWidthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/cjk/CJKWidthFilter.html) | Normalizes Chinese, Japanese, and Korean (CJK) tokens according to the following rules: <br> - Folds full-width ASCII character variants into their equivalent basic Latin characters. <br> - Folds half-width katakana character variants into their equivalent kana characters.
[`classic`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/classic) | [ClassicFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/classic/ClassicFilter.html) | Performs optional post-processing on the tokens generated by the classic tokenizer. Removes possessives (`'s`) and removes `.` from acronyms.
[`common_grams`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/common_gram/) | [CommonGramsFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/commongrams/CommonGramsFilter.html) | Generates bigrams for a list of frequently occurring terms. The output contains both single terms and bigrams.
`conditional` | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
[`conditional`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/condition/) | [ConditionalTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/ConditionalTokenFilter.html) | Applies an ordered list of token filters to tokens that match the conditions provided in a script.
[`decimal_digit`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/decimal-digit/) | [DecimalDigitFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/core/DecimalDigitFilter.html) | Converts all digits in the Unicode decimal number general category to basic Latin digits (0--9).
[`delimited_payload`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/delimited-payload/) | [DelimitedPayloadTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/payloads/DelimitedPayloadTokenFilter.html) | Separates a token stream into tokens with corresponding payloads, based on a provided delimiter. A token consists of all characters preceding the delimiter, and a payload consists of all characters following the delimiter. For example, if the delimiter is `|`, then for the string `foo|bar`, `foo` is the token and `bar` is the payload.
[`delimited_term_freq`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/delimited-term-frequency/) | [DelimitedTermFrequencyTokenFilter](https://lucene.apache.org/core/9_7_0/analysis/common/org/apache/lucene/analysis/miscellaneous/DelimitedTermFrequencyTokenFilter.html) | Separates a token stream into tokens with corresponding term frequencies, based on a provided delimiter. A token consists of all characters before the delimiter, and a term frequency is the integer after the delimiter. For example, if the delimiter is `|`, then for the string `foo|5`, `foo` is the token and `5` is the term frequency.
