diff --git a/_analyzers/token-filters/index.md b/_analyzers/token-filters/index.md
index 003b275782..39dcdbdc93 100644
--- a/_analyzers/token-filters/index.md
+++ b/_analyzers/token-filters/index.md
@@ -57,7 +57,7 @@ Normalization | `arabic_normalization`: [ArabicNormalizer](https://lucene.apache
`stemmer` | N/A | Provides algorithmic stemming for the following languages in the `language` field: `arabic`, `armenian`, `basque`, `bengali`, `brazilian`, `bulgarian`, `catalan`, `czech`, `danish`, `dutch`, `dutch_kp`, `english`, `light_english`, `lovins`, `minimal_english`, `porter2`, `possessive_english`, `estonian`, `finnish`, `light_finnish`, `french`, `light_french`, `minimal_french`, `galician`, `minimal_galician`, `german`, `german2`, `light_german`, `minimal_german`, `greek`, `hindi`, `hungarian`, `light_hungarian`, `indonesian`, `irish`, `italian`, `light_italian`, `latvian`, `Lithuanian`, `norwegian`, `light_norwegian`, `minimal_norwegian`, `light_nynorsk`, `minimal_nynorsk`, `portuguese`, `light_portuguese`, `minimal_portuguese`, `portuguese_rslp`, `romanian`, `russian`, `light_russian`, `sorani`, `spanish`, `light_spanish`, `swedish`, `light_swedish`, `turkish`.
`stemmer_override` | N/A | Overrides stemming algorithms by applying a custom mapping so that the provided terms are not stemmed.
`stop` | [StopFilter](https://lucene.apache.org/core/8_7_0/core/org/apache/lucene/analysis/StopFilter.html) | Removes stop words from a token stream.
-`synonym` | N/A | Supplies a synonym list for the analysis process. The synonym list is provided using a configuration file.
+[`synonym`]({{site.url}}{{site.baseurl}}/analyzers/token-filters/synonym/) | N/A | Supplies a synonym list for the analysis process. The synonym list is provided using a configuration file.
`synonym_graph` | N/A | Supplies a synonym list, including multiword synonyms, for the analysis process.
`trim` | [TrimFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html) | Trims leading and trailing white space from each token in a stream.
`truncate` | [TruncateTokenFilter](https://lucene.apache.org/core/9_10_0/analysis/common/org/apache/lucene/analysis/miscellaneous/TruncateTokenFilter.html) | Truncates tokens whose length exceeds the specified character limit.
diff --git a/_analyzers/token-filters/synonym.md b/_analyzers/token-filters/synonym.md
new file mode 100644
index 0000000000..a6865b14d7
--- /dev/null
+++ b/_analyzers/token-filters/synonym.md
@@ -0,0 +1,277 @@
+---
+layout: default
+title: Synonym
+parent: Token filters
+nav_order: 420
+---
+
+# Synonym token filter
+
+The `synonym` token filter allows you to map multiple terms to a single term or create equivalence groups between words, improving search flexibility.
+
+## Parameters
+
+The `synonym` token filter can be configured with the following parameters.
+
+Parameter | Required/Optional | Data type | Description
+:--- | :--- | :--- | :---
+`synonyms` | Either `synonyms` or `synonyms_path` must be specified | String | A list of synonym rules defined directly in the configuration.
+`synonyms_path` | Either `synonyms` or `synonyms_path` must be specified | String | The path to a file containing synonym rules (either an absolute path or a path relative to the config directory). See the file-based example following this table.
+`lenient` | Optional | Boolean | Whether to ignore exceptions when loading the rule configurations. Default is `false`.
+`format` | Optional | String | Specifies the format used to determine how OpenSearch defines and interprets synonyms. Valid values are:<br>- `solr`<br>- [`wordnet`](https://wordnet.princeton.edu/)<br>Default is `solr`.
+`expand` | Optional | Boolean | Whether to expand equivalent synonym rules. Default is `true`.<br><br>For example:<br>If `synonyms` are defined as `"quick, fast"` and `expand` is set to `true`, then the synonym rules are configured as follows:<br>- `quick => quick`<br>- `quick => fast`<br>- `fast => quick`<br>- `fast => fast`<br><br>If `expand` is set to `false`, the synonym rules are configured as follows:<br>- `quick => quick`<br>- `fast => quick`
+
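+If a synonym list is long, you can store it in a file and reference it using `synonyms_path` instead of listing the rules inline. The following example is a minimal sketch: it assumes a hypothetical file named `analysis/synonyms.txt`, containing one rule per line, exists in the config directory of every cluster node. Setting `lenient` to `true` skips any rules that cannot be parsed instead of failing to build the analyzer:
+
+```json
+PUT /my-file-synonym-index
+{
+  "settings": {
+    "analysis": {
+      "filter": {
+        "my_file_synonym_filter": {
+          "type": "synonym",
+          "synonyms_path": "analysis/synonyms.txt",
+          "lenient": true
+        }
+      },
+      "analyzer": {
+        "my_file_synonym_analyzer": {
+          "type": "custom",
+          "tokenizer": "standard",
+          "filter": [
+            "lowercase",
+            "my_file_synonym_filter"
+          ]
+        }
+      }
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+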
+## Example: Solr format
+
+The following example request creates a new index named `my-synonym-index` and configures an analyzer with a `synonym` filter. The filter is configured with the default `solr` rule format:
+
+```json
+PUT /my-synonym-index
+{
+ "settings": {
+ "analysis": {
+ "filter": {
+ "my_synonym_filter": {
+ "type": "synonym",
+ "synonyms": [
+ "car, automobile",
+ "quick, fast, speedy",
+ "laptop => computer"
+ ]
+ }
+ },
+ "analyzer": {
+ "my_synonym_analyzer": {
+ "type": "custom",
+ "tokenizer": "standard",
+ "filter": [
+ "lowercase",
+ "my_synonym_filter"
+ ]
+ }
+ }
+ }
+ }
+}
+```
+{% include copy-curl.html %}
+
+## Generated tokens
+
+Use the following request to examine the tokens generated using the analyzer:
+
+```json
+GET /my-synonym-index/_analyze
+{
+ "analyzer": "my_synonym_analyzer",
+ "text": "The quick dog jumps into the car with a laptop"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+ "tokens": [
+ {
+ "token": "the",
+ "start_offset": 0,
+ "end_offset": 3,
+      "type": "<ALPHANUM>",
+ "position": 0
+ },
+ {
+ "token": "quick",
+ "start_offset": 4,
+ "end_offset": 9,
+      "type": "<ALPHANUM>",
+ "position": 1
+ },
+ {
+ "token": "fast",
+ "start_offset": 4,
+ "end_offset": 9,
+ "type": "SYNONYM",
+ "position": 1
+ },
+ {
+ "token": "speedy",
+ "start_offset": 4,
+ "end_offset": 9,
+ "type": "SYNONYM",
+ "position": 1
+ },
+ {
+ "token": "dog",
+ "start_offset": 10,
+ "end_offset": 13,
+      "type": "<ALPHANUM>",
+ "position": 2
+ },
+ {
+ "token": "jumps",
+ "start_offset": 14,
+ "end_offset": 19,
+      "type": "<ALPHANUM>",
+ "position": 3
+ },
+ {
+ "token": "into",
+ "start_offset": 20,
+ "end_offset": 24,
+      "type": "<ALPHANUM>",
+ "position": 4
+ },
+ {
+ "token": "the",
+ "start_offset": 25,
+ "end_offset": 28,
+      "type": "<ALPHANUM>",
+ "position": 5
+ },
+ {
+ "token": "car",
+ "start_offset": 29,
+ "end_offset": 32,
+      "type": "<ALPHANUM>",
+ "position": 6
+ },
+ {
+ "token": "automobile",
+ "start_offset": 29,
+ "end_offset": 32,
+ "type": "SYNONYM",
+ "position": 6
+ },
+ {
+ "token": "with",
+ "start_offset": 33,
+ "end_offset": 37,
+      "type": "<ALPHANUM>",
+ "position": 7
+ },
+ {
+ "token": "a",
+ "start_offset": 38,
+ "end_offset": 39,
+      "type": "<ALPHANUM>",
+ "position": 8
+ },
+ {
+ "token": "computer",
+ "start_offset": 40,
+ "end_offset": 46,
+ "type": "SYNONYM",
+ "position": 9
+ }
+ ]
+}
+```
+
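+To apply the analyzer to documents at index time, reference it in a field mapping. The following request is a minimal sketch that adds a hypothetical `description` field to `my-synonym-index` and assigns it the `my_synonym_analyzer` defined previously:
+
+```json
+PUT /my-synonym-index/_mapping
+{
+  "properties": {
+    "description": {
+      "type": "text",
+      "analyzer": "my_synonym_analyzer"
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+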
+## Example: WordNet format
+
+The following example request creates a new index named `my-wordnet-index` and configures an analyzer with a `synonym` filter. The filter is configured with the [`wordnet`](https://wordnet.princeton.edu/) rule format:
+
+```json
+PUT /my-wordnet-index
+{
+ "settings": {
+ "analysis": {
+ "filter": {
+ "my_wordnet_synonym_filter": {
+ "type": "synonym",
+ "format": "wordnet",
+ "synonyms": [
+ "s(100000001,1,'fast',v,1,0).",
+ "s(100000001,2,'quick',v,1,0).",
+ "s(100000001,3,'swift',v,1,0)."
+ ]
+ }
+ },
+ "analyzer": {
+ "my_wordnet_analyzer": {
+ "type": "custom",
+ "tokenizer": "standard",
+ "filter": [
+ "lowercase",
+ "my_wordnet_synonym_filter"
+ ]
+ }
+ }
+ }
+ }
+}
+```
+{% include copy-curl.html %}
+
+## Generated tokens
+
+Use the following request to examine the tokens generated using the analyzer:
+
+```json
+GET /my-wordnet-index/_analyze
+{
+ "analyzer": "my_wordnet_analyzer",
+ "text": "I have a fast car"
+}
+```
+{% include copy-curl.html %}
+
+The response contains the generated tokens:
+
+```json
+{
+ "tokens": [
+ {
+ "token": "i",
+ "start_offset": 0,
+ "end_offset": 1,
+      "type": "<ALPHANUM>",
+ "position": 0
+ },
+ {
+ "token": "have",
+ "start_offset": 2,
+ "end_offset": 6,
+      "type": "<ALPHANUM>",
+ "position": 1
+ },
+ {
+ "token": "a",
+ "start_offset": 7,
+ "end_offset": 8,
+      "type": "<ALPHANUM>",
+ "position": 2
+ },
+ {
+ "token": "fast",
+ "start_offset": 9,
+ "end_offset": 13,
+      "type": "<ALPHANUM>",
+ "position": 3
+ },
+ {
+ "token": "quick",
+ "start_offset": 9,
+ "end_offset": 13,
+ "type": "SYNONYM",
+ "position": 3
+ },
+ {
+ "token": "swift",
+ "start_offset": 9,
+ "end_offset": 13,
+ "type": "SYNONYM",
+ "position": 3
+ },
+ {
+ "token": "car",
+ "start_offset": 14,
+ "end_offset": 17,
+      "type": "<ALPHANUM>",
+ "position": 4
+ }
+ ]
+}
+```
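+
+## Searching with synonyms
+
+At search time, a query term matches any term in its synonym group. The following requests are a minimal sketch that assumes the hypothetical `description` field mapped in the earlier example: index a document containing `car`, then search for `automobile`:
+
+```json
+PUT /my-synonym-index/_doc/1
+{
+  "description": "I bought a new car"
+}
+```
+{% include copy-curl.html %}
+
+```json
+GET /my-synonym-index/_search
+{
+  "query": {
+    "match": {
+      "description": "automobile"
+    }
+  }
+}
+```
+{% include copy-curl.html %}
+
+Because `car` and `automobile` belong to the same synonym rule, the document is returned even though it does not contain the literal query term.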