The standard analyzer is used by default for any full-text analyzed string field. If we were to reimplement the standard analyzer as a custom analyzer, it would be defined as follows:
{
    "type":      "custom",
    "tokenizer": "standard",
    "filter":    [ "lowercase", "stop" ]
}
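To try this out, the same definition could be registered under an index's analysis settings and exercised with the analyze API. The following is a minimal sketch; the index name my_index and the analyzer name rebuilt_standard are illustrative, not part of the original example:

PUT /my_index
{
    "settings": {
        "analysis": {
            "analyzer": {
                "rebuilt_standard": {
                    "type":      "custom",
                    "tokenizer": "standard",
                    "filter":    [ "lowercase", "stop" ]
                }
            }
        }
    }
}

GET /my_index/_analyze?analyzer=rebuilt_standard&text=The QUICK Brown Foxes

Depending on the Elasticsearch version, the analyze request may instead take the analyzer and text in a JSON request body; either way, the output should be the lowercased tokens quick, brown, and foxes, with the stopword the removed.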
In [token-normalization] and [stopwords], we talk about the lowercase and stop token filters, but for the moment, let’s focus on the standard tokenizer.