add kstem token filter docs opensearch-project#8150
Signed-off-by: Anton Rubin <[email protected]>
AntonEliatra committed Oct 7, 2024
1 parent 76486a4 commit 3b71e8a
Showing 2 changed files with 89 additions and 1 deletion.
2 changes: 1 addition & 1 deletion _analyzers/token-filters/index.md
@@ -34,7 +34,7 @@ Token filter | Underlying Lucene token filter | Description
`keep_word` | [KeepWordFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeepWordFilter.html) | Checks the tokens against the specified word list and keeps only those that are in the list.
`keyword_marker` | [KeywordMarkerFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeywordMarkerFilter.html) | Marks specified tokens as keywords, preventing them from being stemmed.
`keyword_repeat` | [KeywordRepeatFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/KeywordRepeatFilter.html) | Emits each incoming token twice: once as a keyword and once as a non-keyword.
- `kstem` | [KStemFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html) | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary.
+ [`kstem`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/kstem/) | [KStemFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/en/KStemFilter.html) | Provides kstem-based stemming for the English language. Combines algorithmic stemming with a built-in dictionary.
`kuromoji_completion` | [JapaneseCompletionFilter](https://lucene.apache.org/core/9_10_0/analysis/kuromoji/org/apache/lucene/analysis/ja/JapaneseCompletionFilter.html) | Adds Japanese romanized terms to the token stream (in addition to the original tokens). Usually used to support autocomplete on Japanese search terms. Note that the filter has a `mode` parameter, which should be set to `index` when used in an index analyzer and `query` when used in a search analyzer. Requires the `analysis-kuromoji` plugin. For information about installing the plugin, see [Additional plugins]({{site.url}}{{site.baseurl}}/install-and-configure/plugins/#additional-plugins).
`length` | [LengthFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LengthFilter.html) | Removes tokens whose lengths are shorter or longer than the length range specified by `min` and `max`.
`limit` | [LimitTokenCountFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/LimitTokenCountFilter.html) | Limits the number of output tokens. A common use case is to limit the size of document field values based on token count.
88 changes: 88 additions & 0 deletions _analyzers/token-filters/kstem.md
@@ -0,0 +1,88 @@
---
layout: default
title: KStem
parent: Token filters
nav_order: 220
---

# KStem token filter

The `kstem` token filter is a stemming filter used to reduce words to their root forms. It is a lightweight, algorithmic stemmer designed for the English language that performs the following stemming operations (see the quick sketch after this list):

- Reduces plurals to their singular form.
- Converts different verb tenses to their base form.
- Removes common derivational endings, such as "-ing" and "-ed".
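
For instance, you can preview this behavior without creating an index by passing the filter directly to the `_analyze` API. The following request is a minimal sketch (the sample text is arbitrary and not part of the original example); both `running` and `stopped` should be reduced to their base forms:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "kstem"],
  "text": "running stopped"
}
```
{% include copy-curl.html %}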

## Example

The following example request creates a new index named `my_kstem_index` and configures an analyzer with the `kstem` filter:

```json
PUT /my_kstem_index
{
  "settings": {
    "analysis": {
      "filter": {
        "kstem_filter": {
          "type": "kstem"
        }
      },
      "analyzer": {
        "my_kstem_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "kstem_filter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "my_kstem_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the created analyzer:

```json
POST /my_kstem_index/_analyze
{
  "analyzer": "my_kstem_analyzer",
  "text": "stops stopped"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "stop",
      "start_offset": 0,
      "end_offset": 5,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "stop",
      "start_offset": 6,
      "end_offset": 13,
      "type": "<ALPHANUM>",
      "position": 1
    }
  ]
}
```
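
Both `stops` and `stopped` are reduced to the same token, `stop`, so a search for either form matches documents containing the other. As a quick, hypothetical check (the document content and ID below are illustrative placeholders, not part of the original example), you can index a document and then search for a different form of the same word:

```json
PUT /my_kstem_index/_doc/1
{
  "content": "The train stopped at the station"
}
```
{% include copy-curl.html %}

```json
GET /my_kstem_index/_search
{
  "query": {
    "match": {
      "content": "stops"
    }
  }
}
```
{% include copy-curl.html %}

Because both the indexed text and the query term are analyzed with `my_kstem_analyzer`, `stopped` and `stops` are reduced to `stop`, and the document is returned.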
