diff --git a/docs/reference/ml/anomaly-detection/configuring.asciidoc b/docs/reference/ml/anomaly-detection/configuring.asciidoc
deleted file mode 100644
index 759c0e2153562..0000000000000
--- a/docs/reference/ml/anomaly-detection/configuring.asciidoc
+++ /dev/null
@@ -1,52 +0,0 @@
-[role="xpack"]
-[[ml-configuring]]
-== Configuring machine learning
-
-If you want to use {ml-features}, there must be at least one {ml} node in
-your cluster and all master-eligible nodes must have {ml} enabled. By default,
-all nodes are {ml} nodes. For more information about these settings, see
-{ref}/modules-node.html#ml-node[{ml} nodes].
-
-To use the {ml-features} to analyze your data, you can create an {anomaly-job}
-and send your data to that job.
-
-* If your data is stored in {es}:
-
-** You can create a {dfeed}, which retrieves data from {es} for analysis.
-** You can use {kib} to expedite the creation of jobs and {dfeeds}.
-
-* If your data is not stored in {es}, you can
-{ref}/ml-post-data.html[POST data] from any source directly to an API.
-
-The results of {ml} analysis are stored in {es} and you can use {kib} to help
-you visualize and explore the results.
-
-//For a tutorial that walks you through these configuration steps,
-//see <>.
-
-Though it is quite simple to analyze your data and provide quick {ml} results,
-gaining deep insights might require some additional planning and configuration.
-The scenarios in this section describe some best practices for generating useful
-{ml} results and insights from your data.
-
-* <<ml-configuring-url>>
-* <<ml-configuring-aggregation>>
-* <<ml-configuring-detector-custom-rules>>
-* <<ml-configuring-categories>>
-* <<ml-configuring-pop>>
-* <<ml-configuring-transform>>
-* <<ml-delayed-data-detection>>
-
-include::customurl.asciidoc[]
-
-include::aggregations.asciidoc[]
-
-include::detector-custom-rules.asciidoc[]
-
-include::categories.asciidoc[]
-
-include::populations.asciidoc[]
-
-include::transforms.asciidoc[]
-
-include::delayed-data-detection.asciidoc[]
\ No newline at end of file
diff --git a/docs/reference/ml/anomaly-detection/functions/count.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/count.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-count-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/ml-functions.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/ml-functions.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/geo.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/geo.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-geo-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/info.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/info.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-info-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/metric.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/metric.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-metric-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/rare.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/rare.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-rare-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/sum.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/sum.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-sum-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/functions/time.asciidoc b/docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc
similarity index 100%
rename from docs/reference/ml/anomaly-detection/functions/time.asciidoc
rename to docs/reference/ml/anomaly-detection/functions/ml-time-functions.asciidoc
diff --git a/docs/reference/ml/anomaly-detection/aggregations.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc
similarity index 97%
rename from docs/reference/ml/anomaly-detection/aggregations.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc
index a12f50a4702a5..b6ee3e4866134 100644
--- a/docs/reference/ml/anomaly-detection/aggregations.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-aggregations.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
 [[ml-configuring-aggregation]]
-=== Aggregating data for faster performance
+= Aggregating data for faster performance
 
 By default, {dfeeds} fetch data from {es} using search and scroll requests.
 It can be significantly more efficient, however, to aggregate data in {es}
@@ -17,7 +17,7 @@ search and scroll behavior.
 
 [discrete]
 [[aggs-limits-dfeeds]]
-==== Requirements and limitations
+== Requirements and limitations
 
 There are some limitations to using aggregations in {dfeeds}. Your aggregation
 must include a `date_histogram` aggregation, which in turn must contain a `max`
@@ -48,7 +48,7 @@ functions, set the interval to the same value as the bucket span.
 
 [discrete]
 [[aggs-include-jobs]]
-==== Including aggregations in {anomaly-jobs}
+== Including aggregations in {anomaly-jobs}
 
 When you create or update an {anomaly-job}, you can include the names of
 aggregations, for example:
@@ -134,7 +134,7 @@ that match values in the job configuration are fed to the job.
 
 [discrete]
 [[aggs-dfeeds]]
-==== Nested aggregations in {dfeeds}
+== Nested aggregations in {dfeeds}
 
 {dfeeds-cap} support complex nested aggregations. This example uses the
 `derivative` pipeline aggregation to find the first order derivative of the
@@ -180,7 +180,7 @@ counter `system.network.out.bytes` for each value of the field `beat.name`.
 
 [discrete]
 [[aggs-single-dfeeds]]
-==== Single bucket aggregations in {dfeeds}
+== Single bucket aggregations in {dfeeds}
 
 {dfeeds-cap} support not only multi-bucket aggregations but also single bucket
 aggregations. The following shows two `filter` aggregations, each gathering the
@@ -232,7 +232,7 @@ number of unique entries for the `error` field.
 
 [discrete]
 [[aggs-define-dfeeds]]
-==== Defining aggregations in {dfeeds}
+== Defining aggregations in {dfeeds}
 
 When you define an aggregation in a {dfeed}, it must have the following form:
 
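To make the requirements above concrete, the following minimal sketch shows the general shape of a {dfeed} whose `date_histogram` aggregation nests a `max` aggregation on the time field; the job ID, index, and the `airline` and `responsetime` fields are illustrative placeholders, not a prescribed configuration:

[source,console]
--------------------------------------------------
PUT _ml/datafeeds/datafeed-farequote
{
  "job_id": "farequote",
  "indices": ["farequote"],
  "aggregations": {
    "buckets": {
      "date_histogram": {
        "field": "time",
        "fixed_interval": "360s"
      },
      "aggregations": {
        "time": {
          "max": { "field": "time" }
        },
        "airline": {
          "terms": { "field": "airline", "size": 100 },
          "aggregations": {
            "responsetime": {
              "avg": { "field": "responsetime" }
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:needs-licence]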
diff --git a/docs/reference/ml/anomaly-detection/categories.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc
similarity index 99%
rename from docs/reference/ml/anomaly-detection/categories.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc
index c2f55f975cc46..afb9fc3936612 100644
--- a/docs/reference/ml/anomaly-detection/categories.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc
@@ -1,7 +1,7 @@
 [role="xpack"]
 [testenv="platinum"]
 [[ml-configuring-categories]]
-=== Detecting anomalous categories of data
+= Detecting anomalous categories of data
 
 Categorization is a {ml} process that tokenizes a text field, clusters similar
 data together, and classifies it into categories. It works best on
@@ -100,7 +100,7 @@ SQL statement from the categorization algorithm.
 
 [discrete]
 [[ml-configuring-analyzer]]
-==== Customizing the categorization analyzer
+== Customizing the categorization analyzer
 
 Categorization uses English dictionary words to identify log message categories.
 By default, it also uses English tokenization rules. For this reason, if you use
diff --git a/docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc
similarity index 97%
rename from docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc
index a757c9036a1bf..c8f1e3b54791a 100644
--- a/docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-detector-custom-rules.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
 [[ml-configuring-detector-custom-rules]]
-=== Customizing detectors with custom rules
+= Customizing detectors with custom rules
 
 <> enable you to change the behavior of anomaly
 detectors based on domain-specific knowledge.
@@ -15,7 +15,7 @@ scope and conditions. For the full list of specification details, see the
 {anomaly-jobs} API.
 
 [[ml-custom-rules-scope]]
-==== Specifying custom rule scope
+== Specifying custom rule scope
 
 Let us assume we are configuring an {anomaly-job} in order to detect DNS data
 exfiltration. Our data contain fields "subdomain" and "highest_registered_domain".
@@ -131,7 +131,7 @@ Such a detector will skip results when the values of all 3 scoped fields are
 included in the referenced filters.
 
 [[ml-custom-rules-conditions]]
-==== Specifying custom rule conditions
+== Specifying custom rule conditions
 
 Imagine a detector that looks for anomalies in CPU utilization.
 Given a machine that is idle for long enough, small movement in CPU could
@@ -211,7 +211,7 @@ PUT _ml/anomaly_detectors/rule_with_range
 // TEST[skip:needs-licence]
 
 [[ml-custom-rules-lifecycle]]
-==== Custom rules in the lifecycle of a job
+== Custom rules in the lifecycle of a job
 
 Custom rules only affect results created after the rules were applied.
 Let us imagine that we have configured an {anomaly-job} and it has been running
@@ -222,7 +222,7 @@ rule we added will only be in effect for any results created from the moment we
 added the rule onwards. Past results will remain unaffected.
 
 [[ml-custom-rules-filtering]]
-==== Using custom rules vs. filtering data
+== Using custom rules vs. filtering data
 
 It might appear that using rules is just another way of filtering the data
 that feeds into an {anomaly-job}. For example, a rule that skips results when
diff --git a/docs/reference/ml/anomaly-detection/populations.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc
similarity index 98%
rename from docs/reference/ml/anomaly-detection/populations.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc
index 7df0d2ffbc258..907d1ca6bf7f0 100644
--- a/docs/reference/ml/anomaly-detection/populations.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-populations.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
-[[ml-configuring-pop]]
-=== Performing population analysis
+[[ml-configuring-populations]]
+= Performing population analysis
 
 Entities or events in your data can be considered anomalous when:
 
diff --git a/docs/reference/ml/anomaly-detection/transforms.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc
similarity index 99%
rename from docs/reference/ml/anomaly-detection/transforms.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc
index 6cda51caaa514..c60677b49691b 100644
--- a/docs/reference/ml/anomaly-detection/transforms.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-transform.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
 [[ml-configuring-transform]]
-=== Transforming data with script fields
+= Transforming data with script fields
 
 If you use {dfeeds}, you can add scripts to transform your data before
 it is analyzed. {dfeeds-cap} contain an optional `script_fields` property, where
@@ -190,7 +190,7 @@ the **Edit JSON** tab. For example:
 image::images/ml-scriptfields.jpg[Adding script fields to a {dfeed} in {kib}]
 
 [[ml-configuring-transform-examples]]
-==== Common script field examples
+== Common script field examples
 
 While the possibilities are limitless, there are a number of common scenarios
 where you might use script fields in your {dfeeds}.
diff --git a/docs/reference/ml/anomaly-detection/customurl.asciidoc b/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc
similarity index 98%
rename from docs/reference/ml/anomaly-detection/customurl.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc
index 1c0c463e9e8f5..abd8ba80498ca 100644
--- a/docs/reference/ml/anomaly-detection/customurl.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-configuring-url.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
 [[ml-configuring-url]]
-=== Adding custom URLs to machine learning results
+= Adding custom URLs to machine learning results
 
 When you create an advanced {anomaly-job} or edit any {anomaly-jobs} in {kib},
 you can optionally attach one or more custom URLs.
@@ -49,7 +49,7 @@ You can also specify these custom URL settings when you create or update
 
 [float]
 [[ml-configuring-url-strings]]
-==== String substitution in custom URLs
+== String substitution in custom URLs
 
 You can use dollar sign ($) delimited tokens in a custom URL. These tokens are
 substituted for the values of the corresponding fields in the anomaly records.
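As a concrete illustration of that token substitution, a custom URL can be attached to a job through the update API roughly as follows. This is a minimal sketch: the job name, domain, and `user_name` field are placeholders, while `$earliest$` and `$latest$` are the time-range tokens:

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/sample_job/_update
{
  "custom_settings": {
    "custom_urls": [
      {
        "url_name": "User details",
        "url_value": "https://exampledomain.com/userdetails?user=$user_name$&from=$earliest$&to=$latest$"
      }
    ]
  }
}
--------------------------------------------------
// TEST[skip:needs-licence]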
diff --git a/docs/reference/ml/anomaly-detection/delayed-data-detection.asciidoc b/docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc
similarity index 95%
rename from docs/reference/ml/anomaly-detection/delayed-data-detection.asciidoc
rename to docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc
index 53f1756a4ec92..372df97a1f627 100644
--- a/docs/reference/ml/anomaly-detection/delayed-data-detection.asciidoc
+++ b/docs/reference/ml/anomaly-detection/ml-delayed-data-detection.asciidoc
@@ -1,6 +1,6 @@
 [role="xpack"]
 [[ml-delayed-data-detection]]
-=== Handling delayed data
+= Handling delayed data
 
 Delayed data are documents that are indexed late. That is to say, it is data
 related to a time that the {dfeed} has already processed.
@@ -15,7 +15,7 @@ if it is set too high, analysis drifts farther away from real-time. The balance
 that is struck depends upon each use case and the environmental factors of the
 cluster.
 
-==== Why worry about delayed data?
+== Why worry about delayed data?
 
 This is a particularly pertinent question. If data are delayed randomly (and
 consequently are missing from analysis), the results of certain types of
@@ -27,7 +27,7 @@ however, {anomaly-jobs} with a `low_count` function may provide false positives.
 In this situation, it would be useful to see if data comes in after an anomaly
 is recorded so that you can determine a next course of action.
 
-==== How do we detect delayed data?
+== How do we detect delayed data?
 
 In addition to the `query_delay` field, there is a delayed data check config,
 which enables you to configure the datafeed to look in the past for delayed data.
@@ -41,7 +41,7 @@ arrived since the analysis. If there is indeed missing data due to their ingest
 delay, the end user is notified. For example, you can see annotations in {kib}
 for the periods where these delays occur.
 
-==== What to do about delayed data?
+== What to do about delayed data?
 
 The most common course of action is simply to do nothing. For many functions
 and situations, ignoring the data is acceptable. However, if the amount of
diff --git a/docs/reference/ml/anomaly-detection/stopping-ml.asciidoc b/docs/reference/ml/anomaly-detection/stopping-ml.asciidoc
deleted file mode 100644
index 9902ef59857f8..0000000000000
--- a/docs/reference/ml/anomaly-detection/stopping-ml.asciidoc
+++ /dev/null
@@ -1,88 +0,0 @@
-[role="xpack"]
-[[stopping-ml]]
-== Stopping {ml} {anomaly-detect}
-
-An orderly shutdown ensures that:
-
-* {dfeeds-cap} are stopped
-* Buffers are flushed
-* Model history is pruned
-* Final results are calculated
-* Model snapshots are saved
-* {anomaly-jobs-cap} are closed
-
-This process ensures that jobs are in a consistent state in case you want to
-subsequently re-open them.
-
-[float]
-[[stopping-ml-datafeeds]]
-=== Stopping {dfeeds}
-
-When you stop a {dfeed}, it ceases to retrieve data from {es}. You can stop a
-{dfeed} by using {kib} or the
-{ref}/ml-stop-datafeed.html[stop {dfeeds} API]. For example, the following
-request stops the `feed1` {dfeed}:
-
-[source,console]
---------------------------------------------------
-POST _ml/datafeeds/feed1/_stop
---------------------------------------------------
-// TEST[skip:setup:server_metrics_startdf]
-
-NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
-For more information, see {ref}/security-privileges.html[Security privileges]
-
-A {dfeed} can be started and stopped multiple times throughout its lifecycle.
-
-//For examples of stopping {dfeeds} in {kib}, see <>.
-
-[float]
-[[stopping-all-ml-datafeeds]]
-==== Stopping all {dfeeds}
-
-If you are upgrading your cluster, you can use the following request to stop all
-{dfeeds}:
-
-[source,console]
-----------------------------------
-POST _ml/datafeeds/_all/_stop
-----------------------------------
-// TEST[skip:needs-licence]
-
-[float]
-[[closing-ml-jobs]]
-=== Closing {anomaly-jobs}
-
-When you close an {anomaly-job}, it cannot receive data or perform analysis
-operations. If a job is associated with a {dfeed}, you must stop the {dfeed}
-before you can close the job. If the {dfeed} has an end date, the job closes
-automatically on that end date.
-
-You can close a job by using the
-{ref}/ml-close-job.html[close {anomaly-job} API]. For
-example, the following request closes the `job1` job:
-
-[source,console]
---------------------------------------------------
-POST _ml/anomaly_detectors/job1/_close
---------------------------------------------------
-// TEST[skip:setup:server_metrics_openjob]
-
-NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
-For more information, see {ref}/security-privileges.html[Security privileges]
-
-{anomaly-jobs-cap} can be opened and closed multiple times throughout their
-lifecycle.
-
-[float]
-[[closing-all-ml-datafeeds]]
-==== Closing all {anomaly-jobs}
-
-If you are upgrading your cluster, you can use the following request to close
-all open {anomaly-jobs} on the cluster:
-
-[source,console]
-----------------------------------
-POST _ml/anomaly_detectors/_all/_close
-----------------------------------
-// TEST[skip:needs-licence]
diff --git a/docs/reference/ml/ml-shared.asciidoc b/docs/reference/ml/ml-shared.asciidoc
index a60ccc55dbdcf..31005cfbf1844 100644
--- a/docs/reference/ml/ml-shared.asciidoc
+++ b/docs/reference/ml/ml-shared.asciidoc
@@ -1120,7 +1120,7 @@ tag::over-field-name[]
 The field used to split the data. In particular, this property is used for
 analyzing the splits with respect to the history of all splits. It is used for
 finding unusual values in the population of all splits. For more information,
-see {ml-docs}/ml-configuring-pop.html[Performing population analysis].
+see {ml-docs}/ml-configuring-populations.html[Performing population analysis].
 end::over-field-name[]
 
 tag::partition-field-name[]
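For reference, the `over_field_name` property described above is what turns a detector into a population analysis: each entity is modeled against the behavior of the whole population. A minimal sketch of such a job follows; the job name and the `clientip` and `bytes` fields are illustrative placeholders:

[source,console]
--------------------------------------------------
PUT _ml/anomaly_detectors/population_example
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "bytes",
        "over_field_name": "clientip",
        "detector_description": "Mean bytes over client IPs"
      }
    ]
  },
  "data_description": {
    "time_field": "timestamp"
  }
}
--------------------------------------------------
// TEST[skip:needs-licence]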