Commit

Fix attributes case in examples for search_index (#725)
* Fix attributes case in examples for search_index

* Fix snake case in JSON fields in search_index docs

* Fix snake case in JSON test data of search_index_test
daniel-orlov authored Jun 23, 2022
1 parent 80d231a commit 16c9ea8
Showing 2 changed files with 65 additions and 65 deletions.
4 changes: 2 additions & 2 deletions mongodbatlas/resource_mongodbatlas_search_index_test.go
@@ -270,7 +270,7 @@ func testAccMongoDBAtlasSearchIndexConfigAdvanced(projectID, clusterName string)
analyzers = <<-EOF
[{
"name": "index_analyzer_test_name",
"char_filters": {
"charFilters": {
"type": "mapping",
"mappings": {"\\" : "/"}
},
Expand All @@ -279,7 +279,7 @@ func testAccMongoDBAtlasSearchIndexConfigAdvanced(projectID, clusterName string)
"min_gram": 2,
"max_gram": 5
},
"token_filters": {
"tokenFilters": {
"type": "length",
"min": 20,
"max": 33
126 changes: 63 additions & 63 deletions website/docs/r/search_index.html.markdown
@@ -20,11 +20,11 @@ resource "mongodbatlas_search_index" "test" {
cluster_name = "<CLUSTER_NAME>"
analyzer = "lucene.standard"
- collectionName = "collection_test"
+ collection_name = "collection_test"
database = "database_test"
mappings_dynamic = true
- searchAnalyzer = "lucene.standard"
+ search_analyzer = "lucene.standard"
}
```

@@ -34,7 +34,7 @@ resource "mongodbatlas_search_index" "test" {
project_id = "%[1]s"
cluster_name = "%[2]s"
analyzer = "lucene.standard"
- collectionName = "collection_test"
+ collection_name = "collection_test"
database = "database_test"
mappings_dynamic = false
mappings_fields = <<-EOF
@@ -70,20 +70,20 @@ resource "mongodbatlas_search_index" "test" {
}
EOF
name = "name_test"
- searchAnalyzer = "lucene.standard"
+ search_analyzer = "lucene.standard"
analyzers = <<-EOF
[{
"name": "index_analyzer_test_name",
"char_filters": {
"charFilters": {
"type": "mapping",
"mappings": {"\\" : "/"}
},
"tokenizer": {
"type": "nGram",
"min_gram": 2,
"max_gram": 5
"minGram": 2,
"maxGram": 5
},
"token_filters": {
"tokenFilters": {
"type": "length",
"min": 20,
"max": 33
@@ -112,16 +112,16 @@ EOF
analyzers = <<-EOF
[{
"name": "index_analyzer_test_name",
"char_filters": {
"charFilters": {
"type": "mapping",
"mappings": {"\\" : "/"}
},
"tokenizer": {
"type": "nGram",
"min_gram": 2,
"max_gram": 5
"minGram": 2,
"maxGram": 5
},
"token_filters": {
"tokenFilters": {
"type": "length",
"min": 20,
"max": 33
@@ -141,33 +141,33 @@ EOF
mappings_fields = <<-EOF
{
"address": {
"type": "document",
"fields": {
"city": {
"type": "string",
"analyzer": "lucene.simple",
"ignoreAbove": 255
},
"state": {
"type": "string",
"analyzer": "lucene.english"
}
}
"type": "document",
"fields": {
"city": {
"type": "string",
"analyzer": "lucene.simple",
"ignoreAbove": 255
},
"state": {
"type": "string",
"analyzer": "lucene.english"
}
}
},
"company": {
"type": "string",
"analyzer": "lucene.whitespace",
"multi": {
"mySecondaryAnalyzer": {
"type": "string",
"analyzer": "lucene.french"
}
}
"type": "string",
"analyzer": "lucene.whitespace",
"multi": {
"mySecondaryAnalyzer": {
"type": "string",
"analyzer": "lucene.french"
}
}
},
"employees": {
"type": "string",
"analyzer": "lucene.standard"
}
"type": "string",
"analyzer": "lucene.standard"
}
}
```

@@ -182,23 +182,23 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
* `lucene`
* `builtin`
* `mongodb`
- * `char_filters` - Array containing zero or more character filters. Always require a `type` field, and some take additional options as well
+ * `charFilters` - Array containing zero or more character filters. Always require a `type` field, and some take additional options as well
```terraform
"char_filters":{
"charFilters":{
"type": "<FILTER_TYPE>",
"ADDITIONAL_OPTION": VALUE
}
```
Atlas search supports four `types` of character filters:
* [htmlStrip](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-htmlStrip-ref) - Strips out HTML constructs
* `type` - (Required) Must be `htmlStrip`
- * `ignored_tags`- a list of HTML tags to exclude from filtering
+ * `ignoredTags`- a list of HTML tags to exclude from filtering
```terraform
analyzers = <<-EOF [{
"name": "analyzer_test",
"char_filters":{
"charFilters":{
"type": "htmlStrip",
"ignored_tags": ["a"]
"ignoredTags": ["a"]
}
}]
```
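To show how these pieces fit together, here is an illustrative sketch that is not part of this changeset: a complete resource defining a custom analyzer with an `htmlStrip` character filter and referencing it by name. The resource name, index name, and analyzer name are hypothetical, and `charFilters` is written as a JSON array per the "Array containing zero or more character filters" description above.
```terraform
resource "mongodbatlas_search_index" "html_example" {
  project_id       = "<PROJECT_ID>"
  cluster_name     = "<CLUSTER_NAME>"
  collection_name  = "collection_test"
  database         = "database_test"
  name             = "html_example_index"
  mappings_dynamic = true

  # The custom analyzer defined in `analyzers` below is referenced here by its "name" field.
  analyzer        = "html_strip_test"
  search_analyzer = "lucene.standard"

  analyzers = <<-EOF
  [{
    "name": "html_strip_test",
    "charFilters": [{
      "type": "htmlStrip",
      "ignoredTags": ["a"]
    }],
    "tokenizer": {
      "type": "standard"
    }
  }]
  EOF
}
```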
@@ -230,19 +230,19 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
Atlas Search supports the following tokenizer options:
* [standard](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-standard-tokenizer-ref) - Tokenize based on word break rules from the [Unicode Text Segmentation algorithm](http://www.unicode.org/L2/L2019/19034-uax29-34-draft.pdf):
* `type` - Must be `standard`
- * `max_token_length` - Maximum length for a single token. Tokens greater than this length are split at `maxTokenLength` into multiple tokens.
+ * `maxTokenLength` - Maximum length for a single token. Tokens greater than this length are split at `maxTokenLength` into multiple tokens.
* [keyword](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-keyword-tokenizer-ref) - Tokenize the entire input as a single token.
* [whitespace](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-whitespace-tokenizer-ref) - Tokenize based on occurrences of whitespace between words.
* `type` - Must be `whitespace`
- * `max_token_length` - Maximum length for a single token. Tokens greater than this length are split at `maxTokenLength` into multiple tokens.
+ * `maxTokenLength` - Maximum length for a single token. Tokens greater than this length are split at `maxTokenLength` into multiple tokens.
* [nGram](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-ngram-tokenizer-ref) - Tokenize into text chunks, or "n-grams", of given sizes.
* `type` - Must be `nGram`
- * `min_gram` - (Required) Number of characters to include in the shortest token created.
- * `max_gram` - (Required) Number of characters to include in the longest token created.
+ * `minGram` - (Required) Number of characters to include in the shortest token created.
+ * `maxGram` - (Required) Number of characters to include in the longest token created.
* [edgeGram](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-edgegram-tokenizer-ref) - Tokenize input from the beginning, or "edge", of a text input into n-grams of given sizes.
* `type` - Must be `edgeGram`
- * `min_gram` - (Required) Number of characters to include in the shortest token created.
- * `max_gram` - (Required) Number of characters to include in the longest token created.
+ * `minGram` - (Required) Number of characters to include in the shortest token created.
+ * `maxGram` - (Required) Number of characters to include in the longest token created.
* [regexCaptureGroup](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-regexcapturegroup-tokenizer-ref) - Match a regular expression pattern to extract tokens.
* `type` - Must be `regexCaptureGroup`
* `pattern` - (Required) A regular expression to match against.
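As a further illustration that is not from this commit, an autocomplete-style analyzer built on the `edgeGram` tokenizer described above might be sketched as follows; the analyzer name and gram sizes are hypothetical, the field names use the camelCase form this change adopts, and the snippet would sit inside a `mongodbatlas_search_index` resource like the examples earlier on this page.
```terraform
analyzers = <<-EOF
[{
  "name": "edge_gram_autocomplete",
  "tokenizer": {
    "type": "edgeGram",
    "minGram": 2,
    "maxGram": 15
  }
}]
EOF
```
Keeping `maxGram` close to the longest prefix users are expected to type limits how many tokens each field value generates.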
@@ -252,19 +252,19 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
* `pattern` - (Required) A regular expression to match against.
* [uaxUrlEmail](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-uaxUrlEmail-tokenizer-ref) - Tokenize URLs and email addresses. Although [uaxUrlEmail](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-uaxUrlEmail-tokenizer-ref) tokenizer tokenizes based on word break rules from the [Unicode Text Segmentation algorithm](http://www.unicode.org/L2/L2019/19034-uax29-34-draft.pdf), we recommend using uaxUrlEmail tokenizer only when the indexed field value includes URLs and email addresses. For fields that do not include URLs or email addresses, use the [standard](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-standard-tokenizer-ref) tokenizer to create tokens based on word break rules.
* `type` - Must be `uaxUrlEmail`
- * `max_token_length` - The maximum number of characters in one token.
+ * `maxTokenLength` - The maximum number of characters in one token.
- * `token_filters` - Array containing zero or more token filters. Always require a type field, and some take additional options as well:
+ * `tokenFilters` - Array containing zero or more token filters. Always require a type field, and some take additional options as well:
```terraform
"token_filters":{
"tokenFilters":{
"type": "<FILTER_TYPE>",
"ADDITIONAL-OPTIONS": VALUE
}
```
Atlas Search supports the following token filters:
* [daitchMokotoffSoundex](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-daitchmokotoffsoundex-tf-ref) - Creates tokens for words that sound the same based on [Daitch-Mokotoff Soundex](https://en.wikipedia.org/wiki/Daitch%E2%80%93Mokotoff_Soundex) phonetic algorithm. This filter can generate multiple encodings for each input, where each encoded token is a 6 digit number:
* `type` - Must be `daitchMokotoffSoundex`
- * `original_tokens` - Specifies whether to include or omit the original tokens in the output of the token filter. Value can be one of the following:
+ * `originalTokens` - Specifies whether to include or omit the original tokens in the output of the token filter. Value can be one of the following:
* `include` - to include the original tokens with the encoded tokens in the output of the token filter. We recommend this value if you want queries on both the original tokens as well as the encoded forms.
* `omit` - to omit the original tokens and include only the encoded tokens in the output of the token filter. Use this value if you want to only query on the encoded forms of the original tokens.
* [lowercase](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-lowercase-tf-ref) - Normalizes token text to lowercase.
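For illustration only, and not part of this diff, a phonetic-matching analyzer combining the `standard` tokenizer with the `daitchMokotoffSoundex` token filter described above could look like the sketch below; the analyzer name is hypothetical and `tokenFilters` is written as a JSON array per the "Array containing zero or more token filters" description.
```terraform
analyzers = <<-EOF
[{
  "name": "phonetic_names",
  "tokenizer": {
    "type": "standard"
  },
  "tokenFilters": [{
    "type": "daitchMokotoffSoundex",
    "originalTokens": "include"
  }]
}]
EOF
```
Keeping `originalTokens` set to `include` lets queries match both the literal spelling and its phonetic encoding.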
@@ -275,7 +275,7 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
* [icuFolding](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-icufolding-tf-ref) - Applies character folding from [Unicode Technical Report #30](http://www.unicode.org/reports/tr30/tr30-4.html).
* [icuNormalizer](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-icunormalizer-tf-ref) - Normalizes tokens using a standard [Unicode Normalization Mode](https://unicode.org/reports/tr15/):
* `type` - Must be 'icuNormalizer'.
- * `normalization_form` - Normalization form to apply. Accepted values are:
+ * `normalizationForm` - Normalization form to apply. Accepted values are:
* `nfd` (Canonical Decomposition)
* `nfc` (Canonical Decomposition, followed by Canonical Composition)
* `nfkd` (Compatibility Decomposition)
Expand All @@ -284,26 +284,26 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
For more information about the supported normalization forms, see [Section 1.2: Normalization Forms, UTR#15](https://unicode.org/reports/tr15/#Norm_Forms).
* [nGram](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-ngram-tf-ref) - Tokenizes input into n-grams of configured sizes.
* `type` - Must be `nGram`
- * `min_gram` - (Required) The minimum length of generated n-grams. Must be less than or equal to `maxGram`.
- * `max_gram` - (Required) The maximum length of generated n-grams. Must be greater than or equal to `minGram`.
- * `terms_not_in_bounds` - Accepted values are:
+ * `minGram` - (Required) The minimum length of generated n-grams. Must be less than or equal to `maxGram`.
+ * `maxGram` - (Required) The maximum length of generated n-grams. Must be greater than or equal to `minGram`.
+ * `termNotInBounds` - Accepted values are:
* `include`
* `omit`

- If `include` is specified, tokens shorter than `min_gram` or longer than `max_gram` are indexed as-is. If `omit` is specified, those tokens are not indexed.
+ If `include` is specified, tokens shorter than `minGram` or longer than `maxGram` are indexed as-is. If `omit` is specified, those tokens are not indexed.
* [edgeGram](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-edgegram-tf-ref) - Tokenizes input into edge n-grams of configured sizes:
* `type` - Must be `edgeGram`
- * `min_gram` - (Required) The minimum length of generated n-grams. Must be less than or equal to `max_gram`.
- * `max_gram` - (Required) The maximum length of generated n-grams. Must be greater than or equal to `min_gram`.
- * `terms_not_in_bounds` - Accepted values are:
+ * `minGram` - (Required) The minimum length of generated n-grams. Must be less than or equal to `max_gram`.
+ * `maxGram` - (Required) The maximum length of generated n-grams. Must be greater than or equal to `min_gram`.
+ * `termsNotInBounds` - Accepted values are:
* `include`
* `omit`

- If `include` is specified, tokens shorter than `min_gram` or longer than `max_gram` are indexed as-is. If `omit` is specified, those tokens are not indexed.
+ If `include` is specified, tokens shorter than `minGram` or longer than `maxGram` are indexed as-is. If `omit` is specified, those tokens are not indexed.
* [shingle](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-shingle-tf-ref) - Constructs shingles (token n-grams) from a series of tokens.
* `type` - Must be `shingle`
- * `min_shingle_size` - (Required) Minimum number of tokens per shingle. Must be less than or equal to `max_shingle_size`.
- * `max_shingle_size` - (Required) Maximum number of tokens per shingle. Must be greater than or equal to `min_shingle_size`.
+ * `minShingleSize` - (Required) Minimum number of tokens per shingle. Must be less than or equal to `maxShingleSize`.
+ * `maxShingleSize` - (Required) Maximum number of tokens per shingle. Must be greater than or equal to `minShingleSize`.
* [regex](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-regex-tf-ref) - Applies a regular expression to each token, replacing matches with a specified string.
* `type` - Must be `regex`
* `pattern` - (Required) Regular expression pattern to apply to each token.
@@ -315,7 +315,7 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
If `matches` is set to `all`, replace all matching patterns. Otherwise, replace only the first matching pattern.
* [snowballStemming](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-snowballstemming-tf-ref) - Stems tokens using a [Snowball-generated stemmer](https://snowballstem.org/).
* `type` - Must be `snowballstemming`
- * `stemmer_name` - (Required) The following values are valid:
+ * `stemmerName` - (Required) The following values are valid:
* `arabic`
* `armenian`
* `basque`
@@ -344,7 +344,7 @@ An [Atlas Search analyzer](https://docs.atlas.mongodb.com/reference/atlas-search
* [stopword](https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/#std-label-stopword-tf-ref) - Removes tokens that correspond to the specified stop words. This token filter doesn't analyze the specified stop word:
* `type` - Must be `stopword`
* `token` - (Required) The list of stop words that correspond to the tokens to remove. Value must be one or more stop words.
- * `ignore_case` - The flag that indicates whether or not to ignore case of stop words when filtering the tokens to remove. The value can be one of the following:
+ * `ignoreCase` - The flag that indicates whether or not to ignore case of stop words when filtering the tokens to remove. The value can be one of the following:
* `true` - to ignore case and remove all tokens that match the specified stop words
* `false` - to be case-sensitive and remove only tokens that exactly match the specified case
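Closing with one more hedged sketch that is not part of this commit: an analyzer that lowercases tokens and then removes a custom stop-word list. The analyzer name and word list are hypothetical, the stop-word list field is assumed to be `tokens` (plural, where the bullet above shows `token`), and `ignoreCase` follows the camelCase form adopted in this change.
```terraform
analyzers = <<-EOF
[{
  "name": "english_minimal",
  "tokenizer": {
    "type": "standard"
  },
  "tokenFilters": [{
    "type": "lowercase"
  }, {
    "type": "stopword",
    "tokens": ["the", "a", "an"],
    "ignoreCase": true
  }]
}]
EOF
```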

