
[DOCS] Use keyword tokenizer in word delimiter graph examples #53384

Merged: 2 commits merged into elastic:master from jrodewig:patch__update-wdg-tokenizer on Mar 11, 2020
Conversation

jrodewig
Contributor

In a tip admonition, we recommend using the `keyword` tokenizer with the
`word_delimiter_graph` token filter. However, we only use the
`whitespace` tokenizer in the example snippets. This updates those
snippets to use the `keyword` tokenizer instead.

Also corrects several spacing issues for arrays in these docs.
@jrodewig jrodewig added >docs General docs changes :Search Relevance/Analysis How text is split into tokens v8.0.0 v7.7.0 v7.6.2 labels Mar 11, 2020
@elasticmachine
Collaborator

Pinging @elastic/es-docs (>docs)

@elasticmachine
Collaborator

Pinging @elastic/es-search (:Search/Analysis)

@jrodewig jrodewig merged commit 377539e into elastic:master Mar 11, 2020
@jrodewig jrodewig deleted the patch__update-wdg-tokenizer branch March 11, 2020 08:45
jrodewig added two commits that referenced this pull request on Mar 11, 2020 (the 7.x and 7.6 backports; both reuse the commit message above).
@jrodewig
Contributor Author

Backport commits

master 377539e
7.x a9dd777
7.6 096fb5f
